Patent classifications
G01S13/86
SENSOR ASSEMBLY WITH LIDAR FOR AUTONOMOUS VEHICLES
A sensor assembly for autonomous vehicles includes a side mirror assembly configured to mount to a vehicle. The side mirror assembly includes a first camera having a field of view in a direction opposite a direction of forward travel of the vehicle; a second camera having a field of view in the direction of forward travel of the vehicle; and a third camera having a field of view in a direction substantially perpendicular to the direction of forward travel of the vehicle. The first camera, the second camera, and the third camera are oriented to provide, in combination with a fourth camera configured to be mounted on a roof of the vehicle, an uninterrupted camera field of view from the direction of forward travel of the vehicle to a direction opposite the direction of forward travel of the vehicle.
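The claimed arrangement amounts to a set of horizontal camera fields of view that must jointly cover the half-circle from forward travel (0°) to the opposite direction (180°) without gaps. A minimal coverage-check sketch; all FOV angles below are hypothetical illustrations, not values from the patent:

```python
def intervals_cover(intervals, start, end):
    """Check whether angular intervals (degrees) jointly cover
    [start, end] with no gaps, via a sorted sweep."""
    reach = start
    for lo, hi in sorted(intervals):
        if lo > reach:          # gap before this interval begins
            return False
        reach = max(reach, hi)
        if reach >= end:
            return True
    return reach >= end

# Hypothetical horizontal FOVs (0 deg = forward, 180 deg = rearward)
# for the three mirror cameras plus the roof-mounted fourth camera.
fovs = [
    (0, 70),     # second camera: forward-facing
    (60, 130),   # third camera: roughly perpendicular
    (120, 180),  # first camera: rear-facing
    (40, 140),   # fourth camera: roof-mounted
]
print(intervals_cover(fovs, 0, 180))  # True: uninterrupted coverage
```

Removing any one camera's interval can open a gap, which is the failure mode the combined orientation is meant to rule out.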
SENSOR ASSEMBLY WITH RADAR FOR AUTONOMOUS VEHICLES
A sensor assembly for autonomous vehicles includes a side mirror assembly configured to mount to a vehicle. The side mirror assembly includes a first camera having a field of view in a direction opposite a direction of forward travel of the vehicle; a second camera having a field of view in the direction of forward travel of the vehicle; and a third camera having a field of view in a direction substantially perpendicular to the direction of forward travel of the vehicle. The first camera, the second camera, and the third camera are oriented to provide, in combination with a fourth camera configured to be mounted on a roof of the vehicle, an uninterrupted camera field of view from the direction of forward travel of the vehicle to a direction opposite the direction of forward travel of the vehicle.
RADAR INSTALLATION AND CALIBRATION SYSTEMS AND METHODS
Radar installation and calibration systems and methods are provided. In one example, a controller of a radar system receives installation parameters associated with an installation of the radar system. A present orientation of a radar device of the radar system is determined and compared to the installation parameters to determine a deviation of the present orientation from the installation parameters. The deviation is sent to a coordinating device associated with the radar device to cause the deviation to be outputted as installation feedback through the coordinating device. Related systems and methods are also provided.
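The deviation reported as installation feedback is, in essence, the per-axis difference between the measured orientation and the specified installation parameters. A minimal sketch, assuming angles in degrees and a hypothetical 1.0° tolerance; none of the axis names or values come from the abstract:

```python
def orientation_deviation(present, installed, tol_deg=1.0):
    """Compare a measured radar orientation to the installation
    parameters; report per-axis deviations beyond a tolerance."""
    deviations = {}
    for axis in ("yaw", "pitch", "roll"):
        delta = present[axis] - installed[axis]
        if abs(delta) > tol_deg:
            deviations[axis] = delta
    return deviations

# Hypothetical installed spec vs. measured orientation (degrees).
installed = {"yaw": 0.0, "pitch": -2.0, "roll": 0.0}
present = {"yaw": 3.5, "pitch": -2.2, "roll": 0.1}
feedback = orientation_deviation(present, installed)
print(feedback)  # {'yaw': 3.5}: only yaw exceeds the tolerance
```

In the described system, `feedback` would be pushed to the coordinating device so an installer can correct the mount.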
Methods and Systems for Predicting Properties of a Plurality of Objects in a Vicinity of a Vehicle
A computer-implemented method for predicting properties of a plurality of objects in a vicinity of a vehicle includes multiple steps that can be carried out by computer hardware components. The method includes determining a grid map representation of road-users perception data, with the road-users perception data including tracked perception results and/or untracked sensor intermediate detections. The method also includes determining a grid map representation of static environment data based on data obtained from a perception system and/or a pre-determined map. The method further includes determining the properties of the plurality of objects based on the grid map representation of road-users perception data and the grid map representation of static environment data.
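The two grid-map representations are naturally combined as channels of one bird's-eye-view tensor before property prediction. A toy NumPy sketch; the grid size, cell values, and layer semantics below are all hypothetical:

```python
import numpy as np

# Two grid-map layers over the same bird's-eye-view extent:
# road-user occupancy (from tracked/untracked perception) and
# static environment (from a perception system or a map).
H, W = 4, 4
road_users = np.zeros((H, W))
road_users[1, 2] = 0.9          # e.g. a tracked vehicle
static_env = np.zeros((H, W))
static_env[:, 0] = 1.0          # e.g. a non-drivable boundary

# Stack the layers into a multi-channel grid, the usual input form
# for a network predicting per-object properties from both maps.
grid = np.stack([road_users, static_env], axis=0)
print(grid.shape)  # (2, 4, 4)
```

A property-prediction network would then consume `grid` jointly, so object behavior can be conditioned on the surrounding static structure.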
MULTIMODAL SPEECH RECOGNITION METHOD AND SYSTEM, AND COMPUTER-READABLE STORAGE MEDIUM
The disclosure provides a multimodal speech recognition method and system, and a computer-readable storage medium. The method includes calculating a first logarithmic mel-frequency spectral coefficient and a second logarithmic mel-frequency spectral coefficient when a target millimeter-wave signal and a target audio signal both contain speech information corresponding to a target user; inputting the first and second logarithmic mel-frequency spectral coefficients into a fusion network to determine a target fusion feature, where the fusion network includes at least a calibration module and a mapping module, the calibration module is configured to perform mutual feature calibration on the target audio and millimeter-wave signals, and the mapping module is configured to fuse a calibrated millimeter-wave feature and a calibrated audio feature; and inputting the target fusion feature into a semantic feature network to determine a speech recognition result corresponding to the target user. The disclosure can implement high-accuracy speech recognition.
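The first step, computing logarithmic mel-frequency spectral coefficients, can be sketched for a single frame as follows. The FFT size, sample rate, and filter count are arbitrary illustrative choices, not the patent's parameters:

```python
import numpy as np

def log_mel(signal, sr=16000, n_fft=256, n_mels=8):
    """Minimal log mel-frequency spectral coefficients, one frame."""
    spectrum = np.abs(np.fft.rfft(signal[:n_fft], n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    # Triangular mel filter bank between 0 Hz and Nyquist.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(0.0, mel(sr / 2), n_mels + 2))
    bank = np.zeros((n_mels, len(freqs)))
    for i in range(n_mels):
        lo, mid, hi = pts[i], pts[i + 1], pts[i + 2]
        rising = (freqs - lo) / (mid - lo)
        falling = (hi - freqs) / (hi - mid)
        bank[i] = np.clip(np.minimum(rising, falling), 0.0, None)
    return np.log(bank @ spectrum + 1e-10)

t = np.arange(256) / 16000
coeffs = log_mel(np.sin(2 * np.pi * 440 * t))
print(coeffs.shape)  # (8,)
```

In the described system, one such coefficient vector per modality (audio and millimeter-wave) would feed the calibration and mapping modules of the fusion network.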
Occlusion Constraints for Resolving Tracks from Multiple Types of Sensors
This document describes techniques for using occlusion constraints for resolving tracks from multiple types of sensors. In aspects, an occlusion constraint is applied to an association between a radar track and a vision track to indicate a probability of occlusion. In other aspects, described are techniques for a vehicle to refrain from evaluating occluded radar tracks and vision tracks collected by a perception system. The probability of occlusion is utilized for deemphasizing pairs of radar tracks and vision tracks that have a high likelihood of occlusion and are therefore not useful for tracking. The disclosed techniques may provide improved perception data more closely representing multiple complex data sets for a vehicle for preventing a collision with an occluded object as the vehicle operates in an environment.
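The deemphasis step can be read as weighting each radar/vision association score by the probability of not being occluded, and refraining from evaluating pairs above an occlusion threshold. A toy sketch; the threshold and score values are hypothetical:

```python
def gated_association_score(match_score, p_occlusion, threshold=0.8):
    """Deemphasize a radar/vision track pair by its occlusion
    probability; near-certainly occluded pairs are skipped."""
    if p_occlusion >= threshold:
        return None  # refrain from evaluating this pair
    return match_score * (1.0 - p_occlusion)

print(gated_association_score(0.9, 0.5))   # 0.45: discounted
print(gated_association_score(0.9, 0.95))  # None: treated as occluded
```

Downstream track resolution would then rank candidate pairings by the gated score rather than the raw match score.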
IDENTIFICATION OF SPURIOUS RADAR DETECTIONS IN AUTONOMOUS VEHICLE APPLICATIONS
The described aspects and implementations enable fast and accurate verification of radar detection of objects in autonomous vehicle (AV) applications using combined processing of radar data and camera images. In one implementation, disclosed is a method and a system to perform the method that includes obtaining radar data characterizing intensity of radar reflections from an environment of the AV, identifying, based on the radar data, a candidate object, obtaining a camera image depicting a region where the candidate object is located, and processing the radar data and the camera image using one or more machine-learning models to obtain a classification measure representing a likelihood that the candidate object is a real object.
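The pipeline (locate a radar candidate in the image, crop that region, and classify the radar/image pair) can be sketched with a stand-in for the learned model. The heuristic "classifier" below is purely illustrative and is not the trained network the abstract describes:

```python
import numpy as np

def crop_patch(image, cx, cy, size=8):
    """Crop the camera region where a radar candidate is located
    (pixel coordinates; assumes the crop stays inside the frame)."""
    h = size // 2
    return image[cy - h:cy + h, cx - h:cx + h]

def classify(radar_intensity, patch):
    """Toy stand-in for the ML models: a pseudo-likelihood that the
    candidate is a real object, combining both modalities."""
    return 1.0 / (1.0 + np.exp(-(radar_intensity + patch.mean() - 1.0)))

# Hypothetical camera frame and radar-candidate pixel location.
image = np.random.rand(64, 64)
patch = crop_patch(image, 32, 20)
print(0.0 <= classify(0.7, patch) <= 1.0)  # True: a valid likelihood
```

A low classification measure would mark the radar candidate as spurious (e.g. multipath or clutter) before it reaches planning.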
DEEP NEURAL NETWORK FOR DETECTING OBSTACLE INSTANCES USING RADAR SENSORS IN AUTONOMOUS MACHINE APPLICATIONS
In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
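The label-filtering step (omitting LIDAR-derived labels that contain fewer than a threshold number of RADAR detections) can be sketched in a 2D bird's-eye view; the boxes, points, and threshold below are hypothetical:

```python
def filter_labels(labels, radar_points, min_detections=3):
    """Keep only LIDAR-derived boxes containing enough radar
    detections to serve as ground truth for the radar network."""
    kept = []
    for (x0, y0, x1, y1) in labels:
        hits = sum(1 for (px, py) in radar_points
                   if x0 <= px <= x1 and y0 <= py <= y1)
        if hits >= min_detections:
            kept.append((x0, y0, x1, y1))
    return kept

# Hypothetical bird's-eye-view label boxes and radar detections (m).
labels = [(0, 0, 4, 4), (10, 10, 12, 12)]
radar_points = [(1, 1), (2, 3), (3, 2), (11, 11)]
print(filter_labels(labels, radar_points))  # [(0, 0, 4, 4)]
```

Boxes that survive the filter become the ground truth for training the radar detector; sparsely-hit boxes are dropped because the radar signal there is too weak to supervise against.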