Patent classifications
B60W2554/402
Methods and apparatus for depth estimation using stereo cameras in a vehicle system
A method comprises: receiving, at a processor, a first image from a first camera of a stereo camera pair and a second image from a second camera of the stereo camera pair. The method also includes determining, at the processor using a machine learning model, a first set of objects in the first image and an object type for each object in the first set. The processor identifies a second set of objects in the second image associated with the first set of objects. The method also includes calculating, at the processor, a set of disparity values between the first image and the second image based on (1) an object from the first set of objects, (2) an object from the second set of objects that is associated with the object from the first set of objects, and (3) the object type of the object from the first set of objects.
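A hypothetical sketch of the per-object disparity step described above: detections from the two images are matched by object type, and each disparity is the horizontal offset between matched bounding-box centers. The per-type matching tolerances and dictionary layout are assumptions, not from the abstract.

```python
# Assumed per-object-type matching tolerance in pixels (illustrative values).
TYPE_MATCH_TOLERANCE = {"car": 40.0, "pedestrian": 15.0}

def bbox_center_x(bbox):
    """bbox = (x_min, y_min, x_max, y_max); return horizontal center."""
    return (bbox[0] + bbox[2]) / 2.0

def disparities(first_objects, second_objects):
    """Pair each first-image object with a same-type second-image object
    and return (object_type, disparity) per matched pair, where disparity
    is the first-image x-center minus the second-image x-center."""
    results = []
    used = set()
    for obj in first_objects:
        tol = TYPE_MATCH_TOLERANCE.get(obj["type"], 25.0)  # assumed default
        best, best_gap = None, None
        for j, cand in enumerate(second_objects):
            if j in used or cand["type"] != obj["type"]:
                continue
            gap = abs(bbox_center_x(obj["bbox"]) - bbox_center_x(cand["bbox"]))
            if gap <= tol and (best_gap is None or gap < best_gap):
                best, best_gap = j, gap
        if best is not None:
            used.add(best)
            d = bbox_center_x(obj["bbox"]) - bbox_center_x(second_objects[best]["bbox"])
            results.append((obj["type"], d))
    return results
```

In practice the disparity of each matched pair would then be converted to depth via the camera baseline and focal length; the sketch stops at the disparity values the claim describes.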
Moving body behavior prediction device and moving body behavior prediction method
The present invention improves the accuracy of predicting rarely occurring behavior of moving bodies without reducing the accuracy of predicting commonly occurring behavior. A vehicle 101 is provided with a moving body behavior prediction device 10, which includes a first behavior prediction unit 203 and a second behavior prediction unit 207. The first behavior prediction unit 203 learns a first predicted behavior 204 so as to minimize the error between behavior prediction results for moving bodies and behavior recognition results for those moving bodies after a prediction time has elapsed. The second behavior prediction unit 207 learns a future second predicted behavior 208 of the moving bodies around the vehicle 101 so that the vehicle 101 does not drive in an unsafe manner.
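A minimal sketch of the two training objectives described: the first unit minimizes prediction error against the behavior actually recognized after the prediction horizon, while the second unit adds a penalty when predicted positions would leave the host vehicle an unsafe margin. The function names, 1-D positions, margin, and weight are all assumptions for illustration.

```python
def prediction_error(predicted, recognized):
    """First unit's objective: mean squared error between predicted
    positions and the positions recognized after the prediction time."""
    return sum((p - r) ** 2 for p, r in zip(predicted, recognized)) / len(predicted)

def safety_penalty(predicted, host_position, safe_margin=2.0):
    """Penalize predicted positions closer to the host than safe_margin
    (assumed hinge-style penalty)."""
    return sum(max(0.0, safe_margin - abs(p - host_position)) for p in predicted)

def second_unit_loss(predicted, recognized, host_position, weight=10.0):
    """Second unit's objective: accuracy term plus weighted safety term."""
    return prediction_error(predicted, recognized) + weight * safety_penalty(
        predicted, host_position)
```

The split mirrors the abstract's design: the accuracy term alone covers common behavior, and the safety term biases the second predictor toward cautious predictions for rare, dangerous behavior.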
Driver assistance device
A driver assistance device is configured to determine whether the type of a deceleration target falls into the category of position-fixed objects or of moving objects, and to determine whether the deceleration target has been lost. When the deceleration target is determined to be lost, the device continues deceleration assistance on the assumption that the lost deceleration target still exists. The device notifies the driver of the host vehicle that the deceleration target is lost when the target is determined to be lost and its type falls into the category of moving objects, and does not notify the driver when the target is determined to be lost and its type falls into the category of position-fixed objects.
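The decision logic described can be sketched as follows: assistance always continues on the assumption the lost target still exists, and the driver is notified only when the lost target was a moving object (a lost position-fixed object is assumed to remain where it was). The function and constant names are hypothetical.

```python
POSITION_FIXED = "position_fixed"
MOVING = "moving"

def handle_lost_target(target_type, target_lost):
    """Return (continue_assistance, notify_driver) for the current state."""
    if not target_lost:
        return True, False           # target still tracked: assist, no notice
    continue_assistance = True       # assume the lost target still exists
    notify_driver = target_type == MOVING
    return continue_assistance, notify_driver
```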
Trajectory modifications based on a collision zone
The described techniques relate to modifying a trajectory of a vehicle, such as an autonomous vehicle, based on an overlap area associated with an object in the environment. In examples, map data may be used, in part, to generate an initial trajectory for an autonomous vehicle to follow through an environment. In some cases, a yield trajectory may be generated based on detection of the object, and the autonomous vehicle may evaluate a cost function to determine whether to execute the yield trajectory or follow the initial trajectory. In a similar manner, the autonomous vehicle may determine a merge location of two lanes of a junction, and use the merge location to update the extents of an overlap area to prevent the autonomous vehicle from blocking the junction and/or to provide sufficient space to yield to an oncoming vehicle while merging.
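A minimal sketch, under assumed cost terms, of the yield-or-proceed decision: the initial trajectory is penalized by collision risk in the overlap area, the yield trajectory by the delay it introduces, and the lower-cost option wins. The weights and term definitions are illustrative assumptions.

```python
def trajectory_cost(progress_loss, collision_risk,
                    w_progress=1.0, w_risk=100.0):
    """Weighted sum of progress lost (seconds) and collision risk in [0, 1].
    Weights are assumed; risk is weighted heavily by design."""
    return w_progress * progress_loss + w_risk * collision_risk

def choose_trajectory(initial_risk, yield_delay):
    """Pick the lower-cost option: follow the initial trajectory or yield."""
    initial_cost = trajectory_cost(0.0, initial_risk)
    yield_cost = trajectory_cost(yield_delay, 0.0)
    return "yield" if yield_cost < initial_cost else "initial"
```

The heavy risk weight encodes the usual design choice that a few seconds of delay is always preferable to a meaningful collision probability in the overlap area.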
Collision monitoring using system data
Techniques and methods for performing collision monitoring using system data. For instance, a vehicle may generate sensor data using one or more sensors. The vehicle may then analyze the sensor data using systems in order to determine parameters associated with the vehicle and parameters associated with another object. Additionally, the vehicle may determine uncertainties associated with the parameters and then process the parameters using the uncertainties. Based at least in part on the processing, the vehicle may determine a distribution of estimated locations associated with the vehicle and a distribution of estimated locations associated with the object. Using the distributions of estimated locations, the vehicle may determine the probability of collision between the vehicle and the object.
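A hypothetical Monte Carlo sketch of the probability estimate described: sample estimated locations of the vehicle and the object from distributions whose spread reflects the parameter uncertainties, and count how often the samples come within a collision distance. The Gaussian form, 1-D positions, and collision radius are assumptions.

```python
import random

def collision_probability(vehicle_mean, vehicle_sigma,
                          object_mean, object_sigma,
                          collision_radius=2.0, samples=10000, seed=0):
    """Estimate P(collision) as the fraction of sampled location pairs
    closer than collision_radius (1-D positions for simplicity)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(samples):
        v = rng.gauss(vehicle_mean, vehicle_sigma)   # vehicle location sample
        o = rng.gauss(object_mean, object_sigma)     # object location sample
        if abs(v - o) < collision_radius:
            hits += 1
    return hits / samples
```

Larger uncertainties widen both distributions and raise the estimated collision probability even when the mean locations are well separated, which is exactly why the abstract propagates uncertainties into the location distributions.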
Systems and methods for limiting driver distraction
Systems and methods for limiting driver distraction, such as maintaining or restoring driver attention to driving when distractions are detected, are provided. A system may include at least one sensor for determining the attention of a driver on a travel path and an interface module configured to reengage the attention of the driver on the travel path. An image capturing device may detect the environment surrounding the vehicle. A logic device may determine whether the environment surrounding the vehicle includes an external distraction or whether the driver is distracted by an internal distraction. The at least one sensor may monitor the driver for distracted behavior. The driver may be required to take an action when a distraction is detected. For example, the driver may interact with a driver monitoring system to verify reengagement with driving (e.g., by identifying a second vehicle on the roadway).
Method for controlling emergency stop of autonomous vehicle
A method for controlling emergency stop of an autonomous vehicle is provided. The method includes: recognizing, by the autonomous vehicle, a stop request from an emergency vehicle; determining whether the current situation is an emergency situation or a general situation; and performing a procedure corresponding to the determined situation to stop the autonomous vehicle.
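The branch described above can be sketched as follows; the procedure names and their meanings are assumptions, since the abstract does not say what each situation-specific procedure does.

```python
def control_emergency_stop(stop_request_recognized, is_emergency_situation):
    """Return the stop procedure to perform, or None when no stop request
    from an emergency vehicle has been recognized."""
    if not stop_request_recognized:
        return None
    if is_emergency_situation:
        return "emergency_procedure"   # assumed: stop promptly in place
    return "general_procedure"         # assumed: decelerate and pull over
```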
Planning accommodations for particulate matter
Techniques for detecting an object in an environment, determining a probability that the object is a region of particulate matter, and controlling a vehicle based on the probability. The region of particulate matter may include steam (e.g., emitted from a manhole cover, a dryer exhaust port, etc.), exhaust from a vehicle (e.g., car, truck, motorcycle, etc.), dust, environmental gases (e.g., resulting from sublimation, fog, evaporation, etc.), or the like. Based on the associated probability that the object is a region of particulate matter, a vehicle computing system may substantially maintain a vehicle trajectory, modify the trajectory of the vehicle to ensure the vehicle does not impact the object, stop the vehicle, or otherwise control the vehicle to ensure that the vehicle continues to progress in a safe manner. The vehicle controller may continually adjust the trajectory based on additionally acquired sensor data and associated region probabilities.
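A hypothetical sketch of the control policy described: the planner maps the probability that the detected object is particulate matter to one of the described actions. The thresholds and action names are assumptions, not from the abstract.

```python
def plan_for_object(p_particulate):
    """Map P(object is a region of particulate matter) to a vehicle action."""
    if p_particulate > 0.9:
        return "maintain_trajectory"   # very likely steam/exhaust/fog: drive through
    if p_particulate > 0.5:
        return "modify_trajectory"     # uncertain: keep clearance from the object
    return "stop_or_yield"             # likely a solid object: do not impact it
```

Because the abstract describes continual adjustment, this mapping would be re-evaluated each planning cycle as new sensor data updates the probability.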
Control of Autonomous Vehicle Based on Environmental Object Classification Determined Using Phase Coherent LIDAR Data
Determining classification(s) for object(s) in an environment of an autonomous vehicle, and controlling the vehicle based on the determined classification(s). For example, autonomous steering, acceleration, and/or deceleration of the vehicle can be controlled based on determined pose(s) and/or classification(s) for objects in the environment. The control can be based on the pose(s) and/or classification(s) directly, and/or based on movement parameter(s), for the object(s), determined based on the pose(s) and/or classification(s). In many implementations, pose(s) and/or classification(s) of environmental object(s) are determined based on data from a phase coherent Light Detection and Ranging (LIDAR) component of the vehicle, such as a phase coherent LIDAR monopulse component and/or a frequency-modulated continuous wave (FMCW) LIDAR component.
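A minimal sketch of one way per-point velocity from phase coherent (FMCW) LIDAR could feed a classification, assuming each object's points carry a measured radial velocity: near-zero mean velocity suggests a static object, and a large intra-object velocity spread (e.g., limb motion) suggests a pedestrian rather than a rigid vehicle. The thresholds and class names are assumptions, not the patented classifier.

```python
def classify_object(point_velocities):
    """Classify an object from the radial velocities (m/s) of its points."""
    mean_v = sum(point_velocities) / len(point_velocities)
    if abs(mean_v) < 0.2:              # assumed static threshold
        return "static"
    spread = max(point_velocities) - min(point_velocities)
    # Rigid bodies show nearly uniform point velocities; articulated
    # movers like pedestrians show a wide spread.
    return "pedestrian" if spread > 1.0 else "vehicle"
```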
Semantic Occupancy Grid Management in ADAS/Autonomous Driving
An example driver assistance system includes an object detection (OD) network, a semantic segmentation network, a processor, and a memory. In an example method, an image is received and stored in the memory. An object detection (OD) polygon is generated for each object detected in the image, and each OD polygon encompasses at least a portion of the corresponding object detected in the image. A region of interest (ROI) is associated with each OD polygon. Such method may further comprise generating a mask for each ROI, each mask configured as a bitmap approximating a size of the corresponding ROI; generating at least one boundary polygon for each mask based on the corresponding mask, each boundary polygon having multiple vertices and enclosing the corresponding mask; and reducing a number of vertices of the boundary polygons based on a comparison between points of the boundary polygons and respective points on the bitmaps.
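A hypothetical sketch of the vertex-reduction step: a vertex is dropped when the edge that skips it stays within a tolerance of the skipped vertex, so the simplified boundary still hugs the underlying mask. The greedy strategy, distance test, and tolerance are assumptions; the abstract's comparison against bitmap points is approximated here by comparing against the polygon's own vertices.

```python
def point_segment_distance(p, a, b):
    """Distance from point p to segment a-b (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def reduce_vertices(polygon, tolerance=1.0):
    """Greedily drop vertices whose removal moves the closed boundary by
    less than tolerance; polygon is an ordered list of (x, y) vertices."""
    vertices = list(polygon)
    changed = True
    while changed and len(vertices) > 3:
        changed = False
        for i in range(len(vertices)):
            a = vertices[i - 1]                      # previous vertex
            p = vertices[i]                          # candidate to drop
            b = vertices[(i + 1) % len(vertices)]    # next vertex (wraps)
            if point_segment_distance(p, a, b) < tolerance:
                del vertices[i]
                changed = True
                break
    return vertices
```

For example, a square boundary with a redundant midpoint on one edge reduces to the four corners, since the midpoint lies on the edge connecting its neighbors.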