Patent classifications
G01S13/89
Method and Device for Making Sensor Data More Robust Against Adversarial Perturbations
The disclosure relates to a method for making sensor data more robust to adversarial perturbations. Sensor data are obtained from at least two sensors, and the data from each sensor are replaced piecewise by means of quilting. The piecewise replacement is carried out in such a way that the replaced sensor data from the different sensors remain plausible relative to one another, and the piecewise-replaced sensor data are then output.
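The abstract does not give implementation details, but quilting in the adversarial-robustness literature means replacing each patch of an input with its nearest neighbour from a bank of known-clean patches, so that pixel-level perturbations are discarded. A minimal single-sensor sketch of that step (the cross-sensor plausibility constraint would add a joint selection over the banks of both sensors); `quilt` and `patch_bank` are illustrative names, not from the patent:

```python
import numpy as np

def quilt(image, patch_bank, patch=4):
    """Replace each patch of `image` with its nearest neighbour
    from a bank of known-clean patches (the 'quilting' step)."""
    out = image.copy()
    h, w = image.shape
    flat_bank = patch_bank.reshape(len(patch_bank), -1)
    for y in range(0, h - h % patch, patch):
        for x in range(0, w - w % patch, patch):
            p = image[y:y+patch, x:x+patch].reshape(-1)
            # nearest clean patch by squared Euclidean distance
            idx = np.argmin(((flat_bank - p) ** 2).sum(axis=1))
            out[y:y+patch, x:x+patch] = patch_bank[idx]
    return out
```

Because every output patch comes from the clean bank, an adversarial perturbation can only change *which* clean patch is chosen, not inject arbitrary pixel values.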
BIN SENSOR
A method comprises emitting detection radiation into a container; receiving a reflection of the emitted radiation from contents of the container; interpreting the received reflection to determine the contents of the container.
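The claimed steps (emit, receive, interpret) map naturally onto a time-of-flight fill-level estimate: the round-trip delay of the echo gives the distance to the contents' surface, and subtracting that from the container depth gives the fill level. A sketch under that assumption; the function name and the acoustic default for `c` are illustrative, not from the patent:

```python
def fill_level(echo_delay_s, container_depth_m, c=343.0):
    """Estimate the fractional fill level of a container from the
    round-trip echo delay of a pulse (c=343 m/s for ultrasound in air;
    use c=3e8 for an electromagnetic pulse)."""
    distance_to_surface = c * echo_delay_s / 2.0  # one-way distance
    level = container_depth_m - distance_to_surface
    # clamp to [0, depth] to tolerate noisy delays
    return max(0.0, min(level, container_depth_m)) / container_depth_m
```

For a 2 m deep container, an echo returning after `2 * 0.5 / 343` seconds places the surface 0.5 m below the sensor, i.e. a 75% fill level.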
GROUND HEIGHT-MAP BASED ELEVATION DE-NOISING
The disclosed technology provides solutions for improving sensor data accuracy and, in particular, for improving radar data by de-noising radar elevation measurements using a height-map. In some aspects, a process of the disclosed technology can include steps for receiving camera data corresponding with a first location, receiving radar data comprising a plurality of radar points, and processing the radar data to generate height-corrected radar data. In some aspects, the process can further include steps for projecting the height-corrected radar data into an image space to generate radar-image data. Systems and machine-readable media are also provided.
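The two steps the abstract names, elevation correction against a ground height-map and projection into image space, can be sketched as follows. This is a minimal interpretation, not the patented method: the correction simply snaps each radar point's noisy z to the height-map value at its (x, y) cell, and the projection assumes points already expressed in a pinhole camera frame (z forward):

```python
import numpy as np

def height_correct(radar_xyz, height_map, cell=1.0):
    """Replace each radar point's noisy elevation with the ground
    height-map value at its (x, y) cell."""
    out = radar_xyz.copy()
    ix = (out[:, 0] / cell).astype(int)
    iy = (out[:, 1] / cell).astype(int)
    out[:, 2] = height_map[ix, iy]
    return out

def project_to_image(pts_cam, fx, fy, cx, cy):
    """Pinhole projection of camera-frame points (z = depth) to pixels."""
    u = fx * pts_cam[:, 0] / pts_cam[:, 2] + cx
    v = fy * pts_cam[:, 1] / pts_cam[:, 2] + cy
    return np.stack([u, v], axis=1)
```

A real pipeline would interpolate the height-map rather than sample the nearest cell, and would apply the radar-to-camera extrinsic transform before projecting.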
DEEP NEURAL NETWORK FOR DETECTING OBSTACLE INSTANCES USING RADAR SENSORS IN AUTONOMOUS MACHINE APPLICATIONS
In various examples, a deep neural network(s) (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used for input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing less than some threshold number of RADAR detections may be omitted. The (remaining) LIDAR labels may be used to generate ground truth data.
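The label-propagation step described above (keep a LIDAR-derived label only if it contains at least a threshold number of radar detections) can be sketched with axis-aligned 2-D boxes; the function name, box encoding, and default threshold are assumptions for illustration:

```python
import numpy as np

def propagate_labels(lidar_boxes, radar_points, min_hits=3):
    """Keep only LIDAR-derived ground-truth boxes containing at least
    `min_hits` radar detections; the survivors become radar ground truth.
    Boxes are axis-aligned (xmin, ymin, xmax, ymax); points are (x, y)."""
    kept = []
    for (xmin, ymin, xmax, ymax) in lidar_boxes:
        inside = ((radar_points[:, 0] >= xmin) & (radar_points[:, 0] <= xmax) &
                  (radar_points[:, 1] >= ymin) & (radar_points[:, 1] <= ymax))
        if inside.sum() >= min_hits:
            kept.append((xmin, ymin, xmax, ymax))
    return kept
```

Dropping sparsely hit boxes avoids training the network to "detect" objects the radar never actually observed, which would otherwise teach it to hallucinate.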
Intelligent roadside unit
The present disclosure provides an intelligent roadside unit. The intelligent roadside unit includes: a radar configured to detect an obstacle within a first preset range of the intelligent roadside unit; a camera configured to capture an image of a second preset range of the intelligent roadside unit; a master processor coupled to the radar and the camera, and configured to generate a point cloud image according to information on the obstacle detected by the radar and the image captured by the camera; and a slave processor coupled to the radar and the camera, and configured to generate a point cloud image according to the same information, in which the slave processor monitors the master processor and, when the master processor breaks down, operation is switched from the master processor to the slave processor.
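The master/slave switchover described above is a classic heartbeat watchdog: the slave considers the master failed once its heartbeat goes stale beyond a timeout, then takes over. A minimal sketch, assuming a heartbeat-timeout design (the class name and timeout are illustrative; the patent does not specify the failure-detection mechanism):

```python
import time

class FailoverMonitor:
    """Slave-side watchdog: if the master's heartbeat goes stale for
    longer than `timeout_s`, switch point-cloud generation to the slave."""

    def __init__(self, timeout_s=1.0):
        self.timeout_s = timeout_s
        self.last_heartbeat = time.monotonic()
        self.active = "master"

    def heartbeat(self):
        """Called whenever the master signals it is alive."""
        self.last_heartbeat = time.monotonic()

    def check(self, now=None):
        """Return which processor should currently generate the point cloud."""
        now = time.monotonic() if now is None else now
        if self.active == "master" and now - self.last_heartbeat > self.timeout_s:
            self.active = "slave"  # master considered failed; take over
        return self.active
```

Using `time.monotonic()` rather than wall-clock time makes the timeout immune to system clock adjustments; the switchover is one-way here, matching the abstract's "when the original master processor breaks down".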
All-direction high-resolution subsurface imaging using distributed moving transceivers
A subsurface imaging technique using distributed sensors is introduced. Instead of the monostatic transceivers employed in conventional ground-penetrating radars, the proposed technique utilizes bi-static transceivers to sample the reflected signals from the ground at different positions and create a large two-dimensional aperture for high-resolution subsurface imaging. The coherent processing of the samples in the proposed imaging method eliminates the need for large antenna arrays for obtaining high lateral-resolution images. In addition, it eliminates the need for sampling on a grid, which is a time-consuming task in imaging with ground-penetrating radar. Imaging results show that the method can provide high-resolution images of buried targets using only samples of the reflected signals on a circle centered at the transmitter location.
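The coherent processing of bistatic samples can be illustrated with a delay-and-sum backprojection: for each subsurface pixel, each receiver's signal is sampled at the bistatic delay (transmitter-to-pixel plus pixel-to-receiver, divided by the propagation speed in soil) and the samples are summed, so energy accumulates only at true scatterer positions. This is a generic backprojection sketch, not the paper's specific algorithm; the propagation speed of 1.5e8 m/s is an assumed value for dry soil:

```python
import numpy as np

def backproject(rx_positions, tx, signals, t, grid_xy, depth, v=1.5e8):
    """Coherent delay-and-sum imaging: for each subsurface pixel, sum each
    receiver's signal at the bistatic delay (tx->pixel + pixel->rx) / v."""
    image = np.zeros(len(grid_xy))
    dt = t[1] - t[0]
    for i, (gx, gy) in enumerate(grid_xy):
        p = np.array([gx, gy, -depth])  # pixel below the surface
        for rx, s in zip(rx_positions, signals):
            delay = (np.linalg.norm(p - tx) + np.linalg.norm(p - rx)) / v
            k = int(round((delay - t[0]) / dt))
            if 0 <= k < len(s):
                image[i] += s[k]
    return image
```

Because receivers sample at arbitrary positions (e.g. on a circle around the transmitter), no regular grid of measurement points is required, which is the property the abstract highlights.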