SENSOR ASSEMBLY WITH LIDAR FOR AUTONOMOUS VEHICLES
A sensor assembly for autonomous vehicles includes a side mirror assembly configured to mount to a vehicle. The side mirror assembly includes a first camera having a field of view in a direction opposite a direction of forward travel of the vehicle; a second camera having a field of view in the direction of forward travel of the vehicle; and a third camera having a field of view in a direction substantially perpendicular to the direction of forward travel of the vehicle. The first camera, the second camera, and the third camera are oriented to provide, in combination with a fourth camera configured to be mounted on a roof of the vehicle, an uninterrupted camera field of view from the direction of forward travel of the vehicle to a direction opposite the direction of forward travel of the vehicle.
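The claimed "uninterrupted camera field of view" amounts to requiring that the four cameras' angular fields of view, taken as intervals on one side of the vehicle, cover the span from forward (0°) to rearward (180°) without gaps. A minimal sketch of that coverage check follows; the mounting angles and FOV widths are illustrative assumptions, not values from the patent.

```python
def covers(intervals, start, end):
    """Check that a set of angular intervals (degrees) jointly covers [start, end]."""
    reach = start
    for lo, hi in sorted(intervals):
        if lo > reach:
            return False          # gap before this interval begins
        reach = max(reach, hi)
        if reach >= end:
            return True
    return reach >= end

# Hypothetical mounting: 0 deg = forward travel, 180 deg = rearward, one side of the vehicle.
camera_fovs = [
    (-10, 70),    # second camera: forward-facing
    (60, 120),    # third camera: roughly perpendicular to travel
    (110, 190),   # first camera: rear-facing
    (-30, 80),    # fourth camera: roof-mounted, overlapping forward/side
]
print(covers(camera_fovs, 0, 180))  # True -> no angular gap from forward to rearward
```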
SENSOR ASSEMBLY WITH RADAR FOR AUTONOMOUS VEHICLES
A sensor assembly for autonomous vehicles includes a side mirror assembly configured to mount to a vehicle. The side mirror assembly includes a first camera having a field of view in a direction opposite a direction of forward travel of the vehicle; a second camera having a field of view in the direction of forward travel of the vehicle; and a third camera having a field of view in a direction substantially perpendicular to the direction of forward travel of the vehicle. The first camera, the second camera, and the third camera are oriented to provide, in combination with a fourth camera configured to be mounted on a roof of the vehicle, an uninterrupted camera field of view from the direction of forward travel of the vehicle to a direction opposite the direction of forward travel of the vehicle.
Method and Device for Making Sensor Data More Robust Against Adversarial Perturbations
The disclosure relates to a method for making sensor data more robust against adversarial perturbations, wherein sensor data are obtained from at least two sensors, wherein the sensor data obtained from each of the at least two sensors are replaced piecewise by means of quilting, wherein the piecewise replacement is carried out in such a way that the replaced sensor data from the different sensors remain plausible relative to one another, and wherein the piecewise-replaced sensor data are output.
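Quilting replaces each patch of an incoming signal with its nearest neighbor from a database of known-clean patches, which strips fine-grained adversarial detail while preserving coarse content. The sketch below shows single-sensor quilting on an image-like array; the patch size, database, and distance metric are illustrative assumptions, and the claimed cross-sensor plausibility constraint is only indicated in a comment.

```python
import numpy as np

def quilt(image, patch_db, patch=8):
    """Replace each non-overlapping patch with its nearest neighbor from patch_db.

    image:    (H, W) array, with H and W multiples of `patch`
    patch_db: (N, patch*patch) array of clean reference patches
    """
    out = image.copy()
    for y in range(0, image.shape[0], patch):
        for x in range(0, image.shape[1], patch):
            p = image[y:y+patch, x:x+patch].reshape(-1)
            # Nearest clean patch by Euclidean distance.
            idx = np.argmin(((patch_db - p) ** 2).sum(axis=1))
            out[y:y+patch, x:x+patch] = patch_db[idx].reshape(patch, patch)
            # In the claimed method, the replacement chosen for sensor A would
            # additionally be constrained to stay plausible relative to sensor B
            # (e.g., consistent depth for the same region).
    return out

rng = np.random.default_rng(0)
db = rng.random((500, 64))      # hypothetical database of clean 8x8 patches
noisy = rng.random((32, 32))    # stand-in for a perturbed camera image
print(quilt(noisy, db).shape)   # (32, 32)
```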
EXTERNAL ENVIRONMENT SENSOR DATA PRIORITIZATION FOR AUTONOMOUS VEHICLE
An autonomous vehicle includes an array of sensors, a processor, and a switch. The array of sensors generates sensor data related to one or more objects in an external environment of the autonomous vehicle, and the processor determines an environmental context. The switch transfers the sensor data from the array of sensors to the processor, where the switch is configured to: (a) receive first sensor data from a first sensor group of the array of sensors; (b) receive second sensor data from a second sensor group of the array of sensors; (c) determine an order of transmission that prioritizes the first sensor data over the second sensor data in response to the environmental context; and (d) transmit the first sensor data to the processor prior to transmitting the second sensor data based on the order of transmission.
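The switch acts as a context-dependent scheduler: it buffers payloads from the sensor groups and reorders them according to the current environmental context before forwarding them to the processor. A minimal sketch, in which the context names, sensor groups, and priority table are all illustrative assumptions:

```python
# Hypothetical priority tables: lower number = transmitted first.
PRIORITY = {
    "highway": {"radar": 0, "lidar": 1, "camera": 2},
    "urban":   {"camera": 0, "lidar": 1, "radar": 2},
}

class SensorSwitch:
    def __init__(self):
        self.queue = []

    def receive(self, sensor_group, payload):
        self.queue.append((sensor_group, payload))

    def transmit(self, context):
        """Yield payloads in the order dictated by the environmental context."""
        order = PRIORITY[context]
        for group, payload in sorted(self.queue, key=lambda t: order[t[0]]):
            yield group, payload
        self.queue.clear()

switch = SensorSwitch()
switch.receive("camera", "frame-17")
switch.receive("radar", "sweep-9")
# On a highway, the radar sweep goes out before the camera frame.
print(list(switch.transmit("highway")))
```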
DEEP NEURAL NETWORK FOR DETECTING OBSTACLE INSTANCES USING RADAR SENSORS IN AUTONOMOUS MACHINE APPLICATIONS
In various examples, one or more deep neural networks (e.g., a convolutional neural network) may be trained to detect moving and stationary obstacles from RADAR data of a three-dimensional (3D) space. In some embodiments, ground truth training data for the neural network(s) may be generated from LIDAR data. More specifically, a scene may be observed with RADAR and LIDAR sensors to collect RADAR data and LIDAR data for a particular time slice. The RADAR data may be used as input training data, and the LIDAR data associated with the same or closest time slice as the RADAR data may be annotated with ground truth labels identifying the objects to be detected. The LIDAR labels may be propagated to the RADAR data, and LIDAR labels containing fewer than a threshold number of RADAR detections may be omitted. The remaining LIDAR labels may be used to generate the ground truth data.
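The central data-generation step is a support filter: a LIDAR-derived ground-truth label survives only if enough RADAR detections fall inside it. A minimal bird's-eye-view sketch, assuming axis-aligned 2D boxes and an illustrative detection threshold:

```python
import numpy as np

def propagate_labels(lidar_boxes, radar_points, min_detections=3):
    """Keep LIDAR ground-truth boxes supported by enough RADAR detections.

    lidar_boxes:  list of (x_min, y_min, x_max, y_max) in a shared BEV frame
    radar_points: (N, 2) array of RADAR detections in the same frame
    """
    kept = []
    for x0, y0, x1, y1 in lidar_boxes:
        inside = ((radar_points[:, 0] >= x0) & (radar_points[:, 0] <= x1) &
                  (radar_points[:, 1] >= y0) & (radar_points[:, 1] <= y1))
        if inside.sum() >= min_detections:   # drop labels RADAR barely sees
            kept.append((x0, y0, x1, y1))
    return kept

boxes = [(0, 0, 4, 2), (10, 10, 11, 11)]
points = np.array([[1, 1], [2, 1], [3, 0.5], [10.5, 10.5]])
print(propagate_labels(boxes, points))  # only the first box survives
```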
Methods and systems for tracking a mover's lane over time
Systems and methods for monitoring the lane of an object in an environment of an autonomous vehicle are disclosed. The methods include receiving sensor data corresponding to the object, and assigning an instantaneous probability to each of a plurality of lanes based on the sensor data as a measure of the likelihood that the object is in that lane at the current time. The methods also include generating a transition matrix for each of the plurality of lanes that encodes one or more probabilities that the object transitioned to that lane from another lane in the environment, or from that lane to another lane in the environment, at the current time. The methods then include determining an assigned probability for each of the plurality of lanes, based on the instantaneous probability and the transition matrix, as a measure of the likelihood of the object occupying that lane at the current time.
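Combining an instantaneous per-lane probability with a lane-transition matrix is the forward step of a discrete Bayes (HMM-style) filter. A minimal sketch, with a hypothetical two-lane transition matrix and observation likelihoods:

```python
import numpy as np

def assigned_lane_probability(prior, transition, instantaneous):
    """One forward-filter step over lanes.

    prior:         (L,) probability the object was in each lane at t-1
    transition:    (L, L) matrix; transition[i, j] = P(lane j at t | lane i at t-1)
    instantaneous: (L,) per-lane likelihood from the current sensor data
    """
    predicted = prior @ transition            # where the mover likely went
    posterior = predicted * instantaneous     # weight by current evidence
    return posterior / posterior.sum()        # normalize to a distribution

prior = np.array([0.9, 0.1])                  # mostly in lane 0 so far
T = np.array([[0.8, 0.2],                     # hypothetical lane-change dynamics
              [0.2, 0.8]])
obs = np.array([0.3, 0.7])                    # current sensors favor lane 1
print(assigned_lane_probability(prior, T, obs))  # ~[0.55, 0.45]
```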
Intelligent roadside unit
The present disclosure provides an intelligent roadside unit. The intelligent roadside unit includes: a radar configured to detect an obstacle within a first preset range of the intelligent roadside unit; a camera configured to capture an image of a second preset range of the intelligent roadside unit; a master processor coupled to the radar and the camera, and configured to generate a point cloud image according to information on the obstacle detected by the radar and the image captured by the camera; and a slave processor coupled to the radar and the camera, and configured to generate a point cloud image according to the information on the obstacle detected by the radar and the image captured by the camera, in which the slave processor monitors the master processor and, when the master processor breaks down, operation is switched from the master processor to the slave processor.
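The master/slave arrangement is a hot-standby pattern: both processors can produce the fused point cloud image, the slave monitors the master, and output switches over on failure. A minimal sketch, in which the heartbeat timeout and processor interface are illustrative assumptions:

```python
import time

class Processor:
    """Stand-in for a fusion processor that builds a point cloud image."""
    def __init__(self, name):
        self.name = name
        self.last_heartbeat = time.monotonic()
        self.alive = True

    def fuse(self, radar_obstacles, camera_image):
        self.last_heartbeat = time.monotonic()
        return f"{self.name}: point cloud from {len(radar_obstacles)} obstacles"

def roadside_output(master, slave, radar_obstacles, camera_image, timeout=0.5):
    """Use the master's result; fail over to the slave if the master is stale."""
    if master.alive and time.monotonic() - master.last_heartbeat < timeout:
        return master.fuse(radar_obstacles, camera_image)
    return slave.fuse(radar_obstacles, camera_image)   # slave takes over

master, slave = Processor("master"), Processor("slave")
print(roadside_output(master, slave, [1, 2, 3], None))  # served by master
master.alive = False                                    # simulate master failure
print(roadside_output(master, slave, [1, 2, 3], None))  # served by slave
```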
High-definition city mapping
A vehicle generates a city-scale map. The vehicle includes one or more Lidar sensors configured to obtain point clouds at different positions, orientations, and times; one or more processors; and a memory storing instructions that, when executed by the one or more processors, cause the system to: register, in pairs, a subset of the point clouds based on the respective surface normals of each of the point clouds; determine loop closures based on the registered subset of point clouds; determine a position and an orientation of each point cloud in the subset based on constraints associated with the determined loop closures; and generate a map based on the determined position and orientation of each point cloud in the subset.
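Registration "based on surface normals" typically means minimizing point-to-plane distances, which converges well on the planar structure that dominates urban scans. Below is a minimal sketch of one linearized point-to-plane alignment step; correspondences are assumed given, and the small-angle approximation is a standard simplification rather than the patent's specific method.

```python
import numpy as np

def point_to_plane_step(src, dst, normals):
    """One least-squares step aligning src to dst using dst's surface normals.

    src, dst: (N, 3) corresponding points; normals: (N, 3) unit normals at dst.
    Solves for a small rotation vector and translation minimizing
    sum(((R @ s + t - d) . n)^2) under a small-angle approximation.
    """
    A = np.hstack([np.cross(src, normals), normals])   # (N, 6) Jacobian
    b = np.einsum("ij,ij->i", dst - src, normals)      # residuals along normals
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[:3], x[3:]                                # (rotation vector, translation)

rng = np.random.default_rng(1)
dst = rng.random((100, 3))
normals = np.tile([0.0, 0.0, 1.0], (100, 1))           # flat-ground normals
src = dst + [0.0, 0.0, 0.05]                           # scan shifted 5 cm in z
rot, trans = point_to_plane_step(src, dst, normals)
print(trans)                                           # close to [0, 0, -0.05]
```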
Method and system for generating and updating digital maps
A method and control system for generating and updating digital maps using a plurality of passages along a road portion by at least one road vehicle are provided. The method comprises obtaining positioning data and sensor data for each passage from the at least one road vehicle. Further, the method comprises forming a sub-map representation of the surrounding environment at each obtained longitudinal position based on the obtained sensor data, and estimating a longitudinal error for each obtained longitudinal position within each segment. Furthermore, the method comprises determining a new plurality of longitudinal positions of each road vehicle for each passage by applying the estimated longitudinal error to each corresponding obtained longitudinal position, and applying the determined new plurality of longitudinal positions to the associated sensor data in order to generate a first layer of a map representation of the surrounding environment along the road portion.
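Applying the estimated longitudinal error is a per-position correction of drifting odometry before the sensor observations are indexed into the map layer. A minimal sketch, assuming 1-D longitudinal coordinates and illustrative error estimates:

```python
def correct_positions(longitudinal_positions, estimated_errors):
    """Apply an estimated longitudinal error to each obtained position."""
    return [s - e for s, e in zip(longitudinal_positions, estimated_errors)]

def build_map_layer(positions, sensor_data):
    """Index each sensor observation by its corrected longitudinal position."""
    return {round(s, 1): obs for s, obs in zip(positions, sensor_data)}

raw = [100.0, 110.4, 120.9]            # odometry drifts as the passage progresses
errors = [0.0, 0.4, 0.9]               # estimated drift at each obtained position
observations = ["guardrail", "sign", "exit-ramp"]

corrected = correct_positions(raw, errors)
print(build_map_layer(corrected, observations))
# {100.0: 'guardrail', 110.0: 'sign', 120.0: 'exit-ramp'}
```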