Assessing perception of sensor using known mapped objects
Aspects of the disclosure relate to determining perceptive range of a vehicle in real time. For instance, a static object defined in pre-stored map information may be identified. Sensor data generated by a sensor of the vehicle may be received. The sensor data may be processed to determine when the static object is first detected in an environment of the vehicle. A distance between the object and a location of the vehicle when the static object was first detected may be determined. This distance may correspond to a perceptive range of the vehicle with respect to the sensor. The vehicle may be controlled in an autonomous driving mode based on the distance.
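The core computation here is simple: record where the vehicle was when a known mapped object first entered the sensor's output, then measure the distance to that object. A minimal sketch, assuming 2-D map-frame coordinates; the Pose type and perceptive_range function are illustrative names, not from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float  # metres, map frame
    y: float

def perceptive_range(vehicle_pose_at_first_detection: Pose,
                     mapped_object_pose: Pose) -> float:
    """Distance between the vehicle and a known static map object
    at the moment the object first appears in the sensor data."""
    return math.hypot(mapped_object_pose.x - vehicle_pose_at_first_detection.x,
                      mapped_object_pose.y - vehicle_pose_at_first_detection.y)

# Example: a stop sign stored in the map at (120, 35) is first detected
# while the vehicle is at (20, 30); the perceptive range is ~100 m.
print(f"{perceptive_range(Pose(20.0, 30.0), Pose(120.0, 35.0)):.1f} m")
```

A range that shrinks for the same class of mapped object (say, in fog) could then feed back into how the vehicle is controlled in the autonomous driving mode.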
Systems and methods for utilizing models to detect dangerous tracks for vehicles
A device may receive accelerometer data and video data for a vehicle and may identify bounding boxes and object classes for objects near the vehicle. The device may identify tracks for the objects and may filter out tracks that are not associated with vehicles or vulnerable road users to generate one or more tracks or an indication of no tracks. The device may generate a collision cone identifying a drivable area of the vehicle to identify objects more likely to be involved in a collision and may filter out tracks from the one or more tracks, based on the bounding boxes, to generate a subset of tracks or another indication of no tracks. The device may determine scores for the subset of tracks and may identify the track of the subset of tracks with the highest score. The device may perform actions based on the identified track.
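Sketched below is the described filter-then-score flow, with a hypothetical scoring rule (inverse time-to-collision) and a crude stand-in for the collision cone; none of these specifics come from the abstract itself:

```python
from dataclasses import dataclass

RELEVANT_CLASSES = {"car", "truck", "pedestrian", "cyclist"}  # vehicles + VRUs

@dataclass
class Track:
    object_class: str
    bbox_center_x: float   # normalised image x in [0, 1]
    ttc_seconds: float     # hypothetical time-to-collision estimate

def in_collision_cone(track: Track, cone=(0.3, 0.7)) -> bool:
    """Crude stand-in for the drivable-area collision cone: keep tracks
    whose bounding-box centre falls in the central band of the image."""
    return cone[0] <= track.bbox_center_x <= cone[1]

def most_dangerous_track(tracks: list[Track]) -> Track | None:
    candidates = [t for t in tracks
                  if t.object_class in RELEVANT_CLASSES and in_collision_cone(t)]
    if not candidates:
        return None  # the claimed "indication of no tracks"
    # Hypothetical score: a shorter time-to-collision means a higher score.
    return max(candidates, key=lambda t: 1.0 / max(t.ttc_seconds, 1e-3))

tracks = [Track("car", 0.5, 1.2), Track("tree", 0.5, 0.5), Track("cyclist", 0.9, 0.8)]
print(most_dangerous_track(tracks))  # the car: relevant class, inside the cone
```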
Traffic light occlusion detection for autonomous vehicle
An occlusion detection system for an autonomous vehicle is described herein, where a signal conversion system receives a three-dimensional sensor signal from a sensor system and projects the three-dimensional sensor signal into a two-dimensional range image having a plurality of pixel values that include distance information to objects captured in the range image. A localization system detects a first object in the range image, such as a traffic light, having first distance information and a second object in the range image, such as a foreground object, having second distance information. An occlusion polygon is defined around the second object and the range image is provided to an object perception system that excludes information within the occlusion polygon to determine a configuration of the first object. A directive is output by the object perception system to control the autonomous vehicle based upon occlusion detection.
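The two key operations are the projection into a range image and the masking of pixels inside the occlusion polygon. A minimal numpy sketch; the resolution, the spherical projection, and the use of a boolean mask in place of an explicit polygon are all assumptions:

```python
import numpy as np

def points_to_range_image(points: np.ndarray, h: int = 64, w: int = 512) -> np.ndarray:
    """Project 3-D lidar points (N x 3) into a 2-D range image whose pixel
    values carry distance information (0 where there is no return)."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                     # azimuth
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-6), -1, 1)) # elevation
    col = ((yaw + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    row = ((pitch + np.pi / 2) / np.pi * (h - 1)).astype(int)
    img = np.zeros((h, w))
    img[row, col] = r
    return img

def exclude_occlusion(range_img: np.ndarray, occlusion_mask: np.ndarray) -> np.ndarray:
    """Zero out pixels inside the occlusion polygon so the downstream object
    perception system ignores the foreground object when determining the
    traffic light's configuration."""
    out = range_img.copy()
    out[occlusion_mask] = 0.0
    return out
```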
Traffic Light Detection Device and Traffic Light Detection Method
A traffic light detection device includes: an image capture unit capturing an image of the surroundings of a vehicle; a traffic light location estimation unit estimating a location of a traffic light around the vehicle and setting a traffic light search area in which the traffic light is estimated to be present; a traffic light detection unit detecting the traffic light by searching the traffic light search area on the image; and an obstruction estimation unit. When the obstruction estimation unit estimates that a continuous obstruction state, in which a view of the traffic light is continuously obstructed, occurs in the traffic light search area, the traffic light location estimation unit selects the traffic light search area based on the continuous obstruction state.
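How the fallback might look in code: count consecutive obstructed frames, and once the count crosses a threshold, treat the obstruction as continuous and select a different (here, simply widened) search area. The threshold and the widening rule are assumptions, not taken from the claims:

```python
from dataclasses import dataclass

@dataclass
class SearchArea:
    x: int   # top-left corner on the captured image, pixels
    y: int
    w: int
    h: int

def select_search_area(estimated: SearchArea,
                       obstructed_frame_count: int,
                       threshold: int = 10) -> SearchArea:
    """Below the threshold, keep the search area from the location estimate;
    once obstruction is judged continuous, widen the area so the detector
    can still find the traffic light around the obstruction."""
    if obstructed_frame_count < threshold:
        return estimated
    return SearchArea(estimated.x - estimated.w // 2,
                      max(estimated.y - estimated.h // 2, 0),
                      estimated.w * 2,
                      estimated.h * 2)
```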
METHOD, DEVICE, AND SYSTEM FOR PROCESSING VEHICLE DIAGNOSIS AND INFORMATION
A method, a device, and a system for processing information of a vehicle are disclosed. In the embodiments, information about a connected vehicle is acquired and a display image of a virtual vehicle is generated; driving information of the connected vehicle is acquired; whether the connected vehicle is currently in a driving state is determined according to the driving information; when the connected vehicle is in the driving state, the orientation of the head of the virtual vehicle is adjusted to match the orientation of the head of the connected vehicle; and when the connected vehicle is in a non-driving state, the orientation of the head of the virtual vehicle is adjusted to point towards the location of the connected vehicle. The present disclosure can feed back the vehicle condition dynamically and accurately in real time, and can position the vehicle more precisely.
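The head-orientation rule is a two-branch decision. A minimal sketch, assuming 2-D positions and headings in degrees; all names are illustrative:

```python
import math

def virtual_head_orientation(is_driving: bool,
                             connected_heading_deg: float,
                             virtual_pos: tuple[float, float],
                             connected_pos: tuple[float, float]) -> float:
    """Driving state: the virtual vehicle mirrors the connected vehicle's
    heading. Non-driving state: the virtual vehicle's head is turned
    towards the connected vehicle's location."""
    if is_driving:
        return connected_heading_deg % 360.0
    dx = connected_pos[0] - virtual_pos[0]
    dy = connected_pos[1] - virtual_pos[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0
```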
End-to-end vehicle perception system training
Techniques for a perception system of a vehicle that can detect and track objects in an environment are described herein. The perception system may include a machine-learned model that includes one or more different portions, such as different components, subprocesses, or the like. In some instances, the techniques may include training the machine-learned model end-to-end such that outputs of a first portion of the machine-learned model are tailored for use as inputs to another portion of the machine-learned model. Additionally, or alternatively, the perception system described herein may utilize temporal data to track objects in the environment of the vehicle and associate tracking data with specific objects in the environment detected by the machine-learned model. That is, the architecture of the machine-learned model may include both a detection portion and a tracking portion in the same loop.
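What makes the training end-to-end is that a single loss is backpropagated through both portions, so the detector learns to produce outputs the tracker can consume. A toy PyTorch sketch under that reading; the tiny linear detector, GRU tracker, and MSE losses are placeholders for the real architecture:

```python
import torch
import torch.nn as nn

class Detector(nn.Module):
    """Stand-in detection portion: per-frame features -> detection embeddings."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.net = nn.Linear(dim, dim)

    def forward(self, frames):            # (time, detections, dim)
        return self.net(frames)

class Tracker(nn.Module):
    """Stand-in tracking portion: consumes the detector's embeddings across
    time, i.e. detection and tracking sit in the same loop."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.gru = nn.GRU(dim, dim)

    def forward(self, detections):
        out, _ = self.gru(detections)
        return out

detector, tracker = Detector(), Tracker()
opt = torch.optim.Adam(list(detector.parameters()) + list(tracker.parameters()))

frames = torch.randn(8, 4, 32)            # 8 time steps, 4 detections per step
det_target, trk_target = torch.randn(8, 4, 32), torch.randn(8, 4, 32)

detections = detector(frames)
tracks = tracker(detections)
# One joint loss: gradients reach the detector THROUGH the tracker, which is
# what tailors the first portion's outputs to the second portion's inputs.
loss = nn.functional.mse_loss(detections, det_target) \
     + nn.functional.mse_loss(tracks, trk_target)
opt.zero_grad(); loss.backward(); opt.step()
```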
DETERMINING ROAD LOCATION OF A TARGET VEHICLE BASED ON TRACKED TRAJECTORY
Systems and methods are provided for navigating a host vehicle. In an embodiment, a processing device may be configured to receive images captured over a time period; analyze the images to identify a target vehicle; receive map information including a plurality of target trajectories; determine, based on analysis of the images, first and second estimated positions of the target vehicle within the time period; determine, based on the first and second estimated positions, a trajectory of the target vehicle over the time period; compare the determined trajectory to the plurality of target trajectories to identify a target trajectory traversed by the target vehicle; determine, based on the identified target trajectory, a position of the target vehicle; and determine a navigational action for the host vehicle based on the determined position.
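The trajectory comparison reduces to a nearest-trajectory search. A sketch assuming the observed positions and the map's target trajectories are already resampled to aligned points; mean point-to-point distance is an illustrative metric, not the one claimed:

```python
import numpy as np

def match_target_trajectory(observed, candidates) -> int:
    """Return the index of the mapped target trajectory closest to the
    trajectory built from the target vehicle's estimated positions."""
    observed = np.asarray(observed)                             # (k, 2)
    errors = [np.mean(np.linalg.norm(np.asarray(c)[:len(observed)] - observed,
                                     axis=1))
              for c in candidates]
    return int(np.argmin(errors))

observed = [(0.0, 0.0), (1.0, 0.1)]                 # two estimated positions
lane_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]       # target trajectory, lane A
lane_b = [(0.0, 3.5), (1.0, 3.5), (2.0, 3.5)]       # target trajectory, lane B
print(match_target_trajectory(observed, [lane_a, lane_b]))  # 0: lane A
```

Once the traversed trajectory is identified, the target vehicle's position can be refined by placing it on that trajectory, which in turn feeds the host vehicle's navigational action.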
Inferring State of Traffic Signal and Other Aspects of a Vehicle's Environment Based on Surrogate Data
A vehicle configured to operate in an autonomous mode can obtain sensor data from one or more sensors observing one or more aspects of an environment of the vehicle. At least one aspect of the environment of the vehicle that is not observed by the one or more sensors could be inferred based on the sensor data. The vehicle could be controlled in the autonomous mode based on the at least one inferred aspect of the environment of the vehicle.
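A concrete example of such an inference: a traffic signal hidden behind a truck. Its state, which no sensor observes directly, can be inferred from surrogate sensor data such as the behaviour of observable vehicles. The two cues and the decision rule below are illustrative assumptions:

```python
def infer_signal_state(cross_traffic_moving: bool,
                       lead_vehicle_proceeding: bool) -> str:
    """Infer the state of an occluded traffic signal from what the sensors
    DO observe about the surrounding traffic."""
    if cross_traffic_moving:
        return "red"      # cross traffic has right of way, so our light is red
    if lead_vehicle_proceeding:
        return "green"    # a lead vehicle entering the intersection suggests green
    return "unknown"      # no surrogate evidence: behave conservatively
```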
Multiple Stage Image Based Object Detection and Recognition
Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
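A sketch of the cascade: a cheap first-stage model (the kind that maps well onto dedicated hardware components) screens every portion of sensor data, and only promising portions reach the costlier second-stage model. The models, threshold, and output format are all placeholders:

```python
from typing import Callable, Iterable

def multi_stage_classify(patches: Iterable[float],
                         stage_one: Callable[[float], float],
                         stage_two: Callable[[float], float],
                         keep: float = 0.5) -> list[dict]:
    """Run the two-stage classification and return per-patch object outputs."""
    outputs = []
    for patch in patches:
        s1 = stage_one(patch)                 # first-stage characteristics
        if s1 < keep:                         # screened out early, cheaply
            outputs.append({"patch": patch, "object": False, "score": s1})
            continue
        s2 = stage_two(patch)                 # second-stage characteristics
        outputs.append({"patch": patch, "object": s2 > 0.5, "score": s2})
    return outputs

# Toy usage: patches are plain scores and both "models" are identity functions.
print(multi_stage_classify([0.2, 0.9], stage_one=lambda p: p, stage_two=lambda p: p))
```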
METHOD AND SYSTEM FOR ANNOTATING SENSOR DATA
A computer-implemented method for annotating driving scenario sensor data, including the steps of receiving raw sensor data, the raw sensor data comprising a plurality of successive LIDAR point clouds and/or a plurality of successive camera images, recognizing objects in each image of the camera data and/or each point cloud using one or more neural networks, correlating objects within successive images and/or point clouds, removing false positive results on the basis of plausibility criteria, and exporting the annotated sensor data of the driving scenario.
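The claimed steps compose naturally into a pipeline. A minimal sketch with the neural networks, the cross-frame correlation, and the plausibility criteria injected as stand-in callables:

```python
from typing import Callable, Iterable

def annotate(frames: Iterable,
             detect: Callable,          # stand-in for the neural network(s)
             correlate: Callable,       # stand-in for cross-frame correlation
             is_plausible: Callable) -> dict:
    """Recognize objects per frame, correlate them across successive frames,
    drop implausible (false positive) results, and export the annotations."""
    per_frame = [detect(f) for f in frames]           # recognition step
    tracks = correlate(per_frame)                     # correlation step
    tracks = [t for t in tracks if is_plausible(t)]   # plausibility filter
    return {"tracks": tracks}                         # exported annotations
```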