Patent classifications
G01S2013/9318
Target tracking during acceleration events
Vehicles and methods for tracking an object and controlling a vehicle based on the tracked object. A Range-Doppler (RD) map is received from the radar sensing system of the vehicle, and relative acceleration of an object with respect to the vehicle is detected based on the RD map so as to provide acceleration data. A current frame of detected object data is received from a sensing system of the vehicle. When relative acceleration has been detected, a tracking algorithm is adapted to reduce the influence of the predictive motion model or the historical state of the object, and the object is tracked using the adapted tracking algorithm so as to provide adapted estimated object data based on the object tracking. One or more vehicle actuators are controlled based on the adapted estimated object data.
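The adaptation step can be illustrated with a minimal alpha-filter sketch. All names and gain values below are hypothetical, not taken from the patent; the point is only that a detected acceleration raises the gain on the fresh measurement so the historical state contributes less to the estimate.

```python
def track_position(history_estimate, measurement, accel_detected,
                   base_gain=0.3, accel_gain=0.9):
    """Blend the predicted (historical) state with the new measurement.

    When relative acceleration is detected, a higher gain is applied to
    the measurement, reducing the influence of the predictive motion
    model / historical state on the adapted estimate.
    """
    gain = accel_gain if accel_detected else base_gain
    return history_estimate + gain * (measurement - history_estimate)
```

With no acceleration the estimate stays close to the prediction; with acceleration detected it snaps toward the current measurement.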
METHOD FOR DETECTING AN OBSTACLE ON A ROUTE
A computer-implemented method for detecting an obstacle on a route ahead of a first vehicle. In the method, information on a second vehicle driving ahead on the route is recorded in the first vehicle by at least one sensor of the first vehicle. In the first vehicle, depending on the recorded information, a computer detects an avoidance maneuver of the second vehicle due to an obstacle or detects that the second vehicle has driven over an obstacle. An obstacle is detected on the route depending on the detected avoidance maneuver or the detection that the second vehicle has driven over an obstacle. A measure for protecting the first vehicle and/or the obstacle is initiated depending on the detected obstacle.
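One way the avoidance-maneuver detection could look is sketched below, assuming the sensor yields a time series of the lead vehicle's lateral lane offsets; the function name, threshold, and "excursion then return" heuristic are illustrative assumptions, not the patent's method.

```python
def detect_avoidance(lateral_offsets, threshold=0.8):
    """Flag a possible avoidance maneuver of the lead vehicle: a lateral
    excursion beyond `threshold` metres followed by a return toward the
    original lane position (suggesting it swerved around an obstacle)."""
    peak = max(abs(o) for o in lateral_offsets)
    returned = abs(lateral_offsets[-1]) < threshold / 2
    return peak > threshold and returned
```

A swerve-and-return trace triggers the flag; normal lane wander does not.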
MAP CONSTRUCTION METHOD FOR AUTONOMOUS DRIVING AND RELATED APPARATUS
A map construction method and a related apparatus are provided. The method includes: obtaining, based on manual driving track data and/or an obstacle grid map, road information, intersection information, and lane information of a region through which a vehicle has traveled; obtaining road traffic direction information based on the manual driving track data and the road information, and obtaining lane traffic direction information based on the lane information and the road traffic direction information; obtaining intersection entry and exit point information based on the intersection information and the lane traffic direction information; and performing, based on the intersection entry and exit point information, an operation of generating a virtual topology center line to obtain an autonomous driving map of the region through which the vehicle has traveled, where the virtual topology center line is a traveling boundary line of the vehicle in an intersection region.
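One step of this pipeline, inferring a road's traffic direction from manual-driving track data, can be sketched as a majority vote over track headings. The bucketing into eight compass sectors and all names here are hypothetical simplifications.

```python
from collections import Counter

def road_direction(track_headings):
    """Infer a road's dominant traffic direction from manual-driving
    track headings (degrees), bucketed into 8 compass sectors and
    decided by majority vote."""
    sectors = [int(((h % 360) + 22.5) // 45) % 8 for h in track_headings]
    return Counter(sectors).most_common(1)[0][0]
```

Lane traffic direction could then be derived by intersecting this road-level direction with per-lane geometry, as the abstract describes.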
Driving Assistance System for Vehicle
An embodiment driving assistance system for a vehicle includes a driving information provision unit configured to acquire and provide driving information of a traveling vehicle; a control unit configured to generate and output a control signal for driving assistance when it is determined, based on the driving information provided by the driving information provision unit, that the vehicle is currently in a rough road traveling state; and a steering actuator configured to generate a steering assistance force according to a control value of the control signal output by the control unit and apply it to a steering wheel.
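The control-signal logic might reduce, under stated assumptions, to a gated and clamped counter-torque; the gain, limit, and disturbance input below are invented for illustration and are not disclosed values.

```python
def steering_assist(rough_road, wheel_torque_disturbance,
                    gain=0.5, limit=3.0):
    """Compute a steering assistance torque (Nm) that counteracts a
    road-induced torque disturbance, active only when the vehicle is
    determined to be in a rough-road traveling state."""
    if not rough_road:
        return 0.0
    assist = -gain * wheel_torque_disturbance
    return max(-limit, min(limit, assist))  # actuator saturation
```

On smooth roads the assist is zero; on rough roads it opposes the disturbance up to the actuator limit.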
Systems and methods for high velocity resolution high update rate radar for autonomous vehicles
An autonomous vehicle (AV) includes a radar sensor system and a computing system that computes velocities of an object in a driving environment of the AV based upon radar data that is representative of radar returns received by the radar sensor system. The AV can be configured to compute a first velocity of the object based upon first radar data that is representative of the radar return from a first time to a second time. The AV can further be configured to compute a second velocity of the object based upon second radar data that includes at least a portion of the first radar data and further includes additional radar data representative of a radar return received subsequent to the second time. The AV can further be configured to control one of a propulsion system, a steering system, or a braking system to effectuate motion of the AV based upon the computed velocities.
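The two-window scheme can be sketched as a least-squares velocity fit over an expanding set of returns: the second estimate reuses the first window's data plus a return received after it. The helper below and its sample data are illustrative assumptions, not the patent's processing chain.

```python
def mean_velocity(positions, times):
    """Least-squares slope of position vs. time: the object's velocity
    estimated over the radar returns in the given window."""
    n = len(times)
    mt = sum(times) / n
    mp = sum(positions) / n
    num = sum((t - mt) * (p - mp) for t, p in zip(times, positions))
    den = sum((t - mt) ** 2 for t in times)
    return num / den

# first window: returns from a first time to a second time
v1 = mean_velocity([0.0, 1.0, 2.0], [0.0, 0.1, 0.2])
# second window: the same data plus a return received after the second time
v2 = mean_velocity([0.0, 1.0, 2.0, 3.3], [0.0, 0.1, 0.2, 0.3])
```

The overlapping windows give a high update rate while each estimate still spans enough returns for good velocity resolution.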
Calculating velocity of an autonomous vehicle using radar technology
Examples relating to vehicle velocity calculation using radar technology are described. An example method performed by a computing system may involve, while a vehicle is moving on a road, receiving, from two or more radar sensors mounted at different locations on the vehicle, radar data representative of an environment of the vehicle. The method may involve, based on the data, detecting at least one scatterer in the environment. The method may involve making a determination of a likelihood that the at least one scatterer is stationary with respect to the vehicle. The method may involve, based on the determination being that the likelihood is at least equal to a predefined confidence threshold, calculating a velocity of the vehicle based on the data from the sensors. The calculated velocity may include an angular and linear velocity. Further, the method may involve controlling the vehicle based on the calculated velocity.
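Once scatterers are judged stationary, the ego velocity can be recovered from their Doppler (radial) speeds. A minimal 2-D sketch with two scatterers is shown below; the real method fuses data from multiple sensors and also yields angular velocity, and every name here is an assumption for illustration.

```python
import math

def ego_velocity(doppler_a, angle_a, doppler_b, angle_b):
    """Solve the ego velocity (vx, vy) from the Doppler speeds of two
    stationary scatterers seen at bearings angle_a / angle_b (radians).

    For a stationary scatterer:
        doppler = -(vx * cos(angle) + vy * sin(angle))
    so two scatterers give a 2x2 linear system, solved by Cramer's rule.
    """
    a11, a12 = math.cos(angle_a), math.sin(angle_a)
    a21, a22 = math.cos(angle_b), math.sin(angle_b)
    det = a11 * a22 - a12 * a21
    vx = (-doppler_a * a22 + doppler_b * a12) / det
    vy = (-doppler_b * a11 + doppler_a * a21) / det
    return vx, vy
```

A scatterer straight ahead closing at 10 m/s and one abeam with zero Doppler imply pure forward motion at 10 m/s.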
Parking assistant and method for adaptive parking of a vehicle to optimize overall sensing coverage of a traffic environment
A method can be used for adaptive parking of a vehicle. A parking area is determined around a programmed destination of the vehicle. The parking area has more than one available parking spot for the vehicle. Parking data is acquired via a wireless communication network. The parking data for each parked vehicle includes a parking position and an individual sensing coverage of an environment sensor system of the respective parked vehicle scanning the traffic environment within the parking area. Available parking spots are ranked based on a calculated overall sensing coverage and a recommended parking spot is determined among the available parking spots based on overall sensing coverage of the traffic environment in the parking area.
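The ranking step can be sketched as a set-union coverage score: each candidate spot is scored by the overall coverage that would result from parking there, given the cells already scanned by parked vehicles. The grid-cell representation and names are assumptions for illustration.

```python
def rank_spots(candidate_spots, parked_coverage):
    """Rank available parking spots by overall sensing coverage.

    candidate_spots: {spot_id: set of grid cells the ego vehicle's
                      sensors would cover from that spot}
    parked_coverage: list of cell sets already covered by parked
                     vehicles, received over the wireless network.
    Returns spot ids, best overall coverage first.
    """
    already = set().union(*parked_coverage) if parked_coverage else set()
    scored = [(len(already | cells), spot)
              for spot, cells in candidate_spots.items()]
    scored.sort(reverse=True)
    return [spot for _, spot in scored]
```

A spot whose coverage overlaps what is already scanned scores lower than one that fills a gap, which is the intended "optimize overall sensing coverage" behavior.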
Comprehensive and efficient method to incorporate map features for object detection with LiDAR
According to various embodiments, systems and methods described in the disclosure combine map features with point cloud features to improve the object detection precision of an autonomous driving vehicle (ADV). The map features and the point cloud features can be extracted from a perception area of the ADV within a particular angle of view at each driving cycle based on a position of the ADV. The map features and the point cloud features can be concatenated and provided to a neural network for object detection.
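The concatenation itself is a simple per-cell join of the two feature vectors before the network input; a minimal sketch (names assumed, feature contents illustrative):

```python
def concat_features(point_cloud_feats, map_feats):
    """Concatenate per-cell point-cloud features with the map features
    of the same perception-area cell, producing the combined feature
    vectors fed to the detection network."""
    assert len(point_cloud_feats) == len(map_feats)
    return [pc + mp for pc, mp in zip(point_cloud_feats, map_feats)]
```

Each output row carries both the LiDAR evidence and the map prior for that cell.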
Automatic autonomous vehicle and robot LiDAR-camera extrinsic calibration
Extrinsic calibration of a Light Detection and Ranging (LiDAR) sensor and a camera can comprise constructing a first plurality of reconstructed calibration targets in a three-dimensional space based on physical calibration targets detected from input from the LiDAR and a second plurality of reconstructed calibration targets in the three-dimensional space based on physical calibration targets detected from input from the camera. Reconstructed calibration targets in the first and second plurality of reconstructed calibration targets can be matched and a six-degree of freedom rigid body transformation of the LiDAR and camera can be computed based on the matched reconstructed calibration targets. A projection of the LiDAR to the camera can be computed based on the computed six-degree of freedom rigid body transformation.
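The "compute a rigid body transformation from matched targets" step is classically solved in closed form from matched point pairs. The sketch below shows the 2-D analogue (rotation angle plus translation) of the 6-DoF 3-D case; the function and its centroid-based fit are illustrative, not the patent's exact procedure.

```python
import math

def fit_rigid_2d(lidar_pts, cam_pts):
    """Fit the rigid transform (theta, tx, ty) mapping matched LiDAR
    target centres onto camera target centres: centre both point sets,
    recover the rotation from cross/dot accumulators, then solve for
    the translation (2-D analogue of the Kabsch algorithm)."""
    n = len(lidar_pts)
    cxl = sum(p[0] for p in lidar_pts) / n
    cyl = sum(p[1] for p in lidar_pts) / n
    cxc = sum(p[0] for p in cam_pts) / n
    cyc = sum(p[1] for p in cam_pts) / n
    s = c = 0.0
    for (xl, yl), (xc, yc) in zip(lidar_pts, cam_pts):
        ax, ay = xl - cxl, yl - cyl
        bx, by = xc - cxc, yc - cyc
        c += ax * bx + ay * by   # dot: cos component
        s += ax * by - ay * bx   # cross: sin component
    theta = math.atan2(s, c)
    ct, st = math.cos(theta), math.sin(theta)
    tx = cxc - (ct * cxl - st * cyl)
    ty = cyc - (st * cxl + ct * cyl)
    return theta, tx, ty
```

In 3-D the same centroid-and-rotation logic is solved via SVD, and the resulting transform is what the LiDAR-to-camera projection is computed from.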
Vehicle and method of controlling the same
A vehicle and a method of controlling the same are provided. The method includes: recognizing a forward vehicle in response to the processing of image data captured by an image sensor disposed at the vehicle so as to have a field of view of the outside of the vehicle; obtaining a distance from the forward vehicle in response to the processing of detection data captured by a radar disposed at the vehicle so as to have a detecting area of the outside of the vehicle; obtaining a change amount of vertical movement of the forward vehicle in the image data in response to the distance from the forward vehicle being equal to or less than a reference distance; obtaining a height of an obstacle on a road surface corresponding to the change amount; obtaining the height of the obstacle on the road surface in the image data in response to the distance from the forward vehicle exceeding the reference distance; identifying a driving speed of the vehicle; identifying a reference height corresponding to the driving speed of the vehicle; and outputting deceleration guide information in response to the height of the obstacle on the road surface being greater than or equal to the reference height.
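The final comparison reduces to checking the measured obstacle height against a speed-dependent reference height. The threshold values below are invented for illustration (the patent does not disclose them); only the shape of the rule, faster travel tolerating lower obstacles, is taken from the abstract.

```python
def deceleration_guide(obstacle_height_cm, speed_kph):
    """Return True when deceleration guide information should be output:
    the obstacle height meets or exceeds a reference height that shrinks
    as driving speed rises (illustrative thresholds)."""
    reference = 12.0 if speed_kph < 40 else 8.0 if speed_kph < 80 else 5.0
    return obstacle_height_cm >= reference
```

A 6 cm obstacle triggers guidance at highway speed but not in slow city traffic under these example thresholds.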