Patent classification: B60W2552/00
Traffic light detection auto-labeling and federated learning based on vehicle-to-infrastructure communications
A method for traffic light auto-labeling includes aggregating vehicle-to-infrastructure (V2I) traffic light signals at an intersection to determine transition states of each driving lane at the intersection during operation of an ego vehicle. The method also includes automatically labeling image training data to form auto-labeled image training data for a traffic light recognition model within the ego vehicle according to the determined transition states of each driving lane at the intersection. The method further includes planning a trajectory of the ego vehicle to comply with a right-of-way according to the determined transition states of each driving lane at the intersection according to a trained traffic light detection model. A federated learning module may train the traffic light recognition model using the auto-labeled image training data during the operation of the ego vehicle.
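The auto-labeling step amounts to pairing each camera frame with the V2I-reported light state for the lane in view. The sketch below is a minimal illustration of that pairing under assumed interfaces; `V2ISignal`, `auto_label_frames`, and the timestamp tolerance are hypothetical names and parameters, not the patent's actual method.

```python
from dataclasses import dataclass

@dataclass
class V2ISignal:
    lane_id: int
    state: str        # "red" | "yellow" | "green"
    timestamp: float  # seconds

def auto_label_frames(frames, signals, tolerance=0.5):
    """Attach the V2I-reported light state of each lane to every camera
    frame captured within `tolerance` seconds of a signal message.
    `frames` is a list of (timestamp, frame_id, lane_id) tuples."""
    labeled = []
    for ts, frame_id, lane_id in frames:
        # Nearest-in-time signal for the lane this frame observes.
        match = min(
            (s for s in signals if s.lane_id == lane_id),
            key=lambda s: abs(s.timestamp - ts),
            default=None,
        )
        if match and abs(match.timestamp - ts) <= tolerance:
            labeled.append({"frame": frame_id, "lane": lane_id, "label": match.state})
    return labeled
```

Frames with no sufficiently close V2I message are simply dropped from the training set rather than guessed at.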
System and method for proactive lane assist
A proactive pedal algorithm modifies an accelerator pedal map to ensure that the deceleration produced when the accelerator pedal is released matches driver expectations. Modifying the accelerator pedal map gives the driver of a vehicle the sensation that the vehicle resists moving when travelling in dense scenes with potentially high deceleration requirements and coasts easily in scenes with low deceleration requirements. The accelerator pedal map is modified based on a scene determination that classifies other remote vehicles as in-lane, neighbor-lane, or oncoming.
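One way to picture the scene-dependent pedal map is as a gain applied to the off-pedal deceleration target. The gains and labels below are illustrative assumptions showing the shape of the idea, not parameters from the patent.

```python
# Assumed scene gains: dense in-lane traffic resists coasting,
# open scenes allow the vehicle to coast easily.
SCENE_DECEL_GAIN = {
    "in-lane": 1.8,        # remote vehicle in the ego lane: strong off-pedal decel
    "neighbor-lane": 1.3,  # traffic in adjacent lanes: moderate resistance
    "oncoming": 1.0,       # oncoming traffic only: baseline behavior
}

def off_pedal_decel(base_decel_mps2: float, scene: str) -> float:
    """Target deceleration (m/s^2) applied when the accelerator pedal
    is released, scaled by the classified driving scene."""
    return base_decel_mps2 * SCENE_DECEL_GAIN.get(scene, 1.0)
```

Scenes outside the table fall back to the unmodified pedal map (gain of 1.0).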
Predicting terrain traversability for a vehicle
Embodiments of the present disclosure relate generally to generating and utilizing three-dimensional terrain maps for vehicular control. Other embodiments may be described and/or claimed.
Systems and methods for vehicle navigation
Systems and methods are provided for vehicle navigation. In one implementation, at least one processor may be programmed to receive, from a camera, a captured image representative of features in an environment of the vehicle. The processor may generate a warped image based on the received captured image, which may simulate a view of the features in the environment of the vehicle from a simulated viewpoint elevated relative to an actual position of the camera. The processor may further identify a road feature represented in the warped image, which may be transformed in one or more respects relative to a representation of the road feature in the captured image. The processor may then determine a navigational action for the vehicle based on the identified feature represented in the warped image and cause at least one actuator system of the vehicle to implement the determined navigational action.
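The warped image simulating an elevated viewpoint is, in effect, a perspective transform. A minimal sketch, assuming the simulated viewpoint is expressed as a 3x3 homography `H` (the matrix itself would come from the camera geometry, which the abstract does not specify):

```python
import numpy as np

def warp_points(points_xy, H):
    """Apply a 3x3 homography H to an (N, 2) array of pixel coordinates,
    returning the coordinates as seen from the simulated viewpoint."""
    pts = np.hstack([points_xy, np.ones((len(points_xy), 1))])  # homogeneous coords
    warped = pts @ H.T
    return warped[:, :2] / warped[:, 2:3]  # back to Cartesian
```

In practice, a library routine such as OpenCV's `warpPerspective` applies the same transform densely to the full image, after which road features (lane marks, edges) can be identified in the warped view as the abstract describes.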
Concept For Supporting a Motor Vehicle Being Guided in at Least Partially Automated Manner
A method for at least partially automated driving of a motor vehicle includes the steps of: determining that a need exists for infrastructure-based, at least partially automated driving of the motor vehicle; in response to that determination, transmitting, via a communication network, a request for a plurality of infrastructure data based on which the motor vehicle is drivable in an at least partially automated manner; receiving, via the communication network, the infrastructure data in response to transmitting the request; generating a plurality of control signals for at least partially automated control of a lateral and/or a longitudinal operation of the motor vehicle based on the infrastructure data; and outputting the generated control signals.
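The abstract describes a four-step request/response protocol. The sketch below is a hypothetical message-flow skeleton of those steps; the class and parameter names (`InfrastructureDataRequest`, `InfrastructureData`, the callables) are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class InfrastructureDataRequest:
    vehicle_id: str
    reason: str  # why infrastructure-based automated driving is needed

@dataclass
class InfrastructureData:
    lane_geometry: list   # e.g., centerline points the vehicle can follow
    signal_phases: dict   # e.g., traffic signal states along the route

def guide_vehicle(need_detected, send_request, receive_data, make_controls):
    """Execute the four claimed steps in order: detect the need,
    request infrastructure data, receive it, and generate lateral/
    longitudinal control signals from it."""
    if not need_detected():
        return None
    send_request(InfrastructureDataRequest("ego", "sensor degradation"))
    data = receive_data()
    return make_controls(data)
```

Injecting the network and control functions as callables keeps the step ordering explicit while leaving the transport and controller unspecified, as in the abstract.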
Systems and methods for implementing an autonomous vehicle response to sensor failure
Among other things, we describe techniques for implementing a vehicle response to sensor failure. In general, one innovative aspect of the subject matter described in this specification can be embodied in methods that include receiving information from a plurality of sensors coupled to a vehicle, determining that a level of confidence of the received information from at least one sensor of a first subset of sensors of the plurality of sensors is less than a first threshold, comparing a number of sensors in the first subset of sensors to a second threshold, and adjusting the driving capability of the vehicle to rely on information received from a second subset of sensors of the plurality of sensors, wherein the second subset of sensors excludes the at least one sensor of the first subset of sensors.
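The two-threshold logic can be sketched compactly. This is one hypothetical reading of the abstract: confidence below a first threshold marks a sensor as failed, and the count of failed sensors against a second threshold decides whether driving capability is adjusted to rely on the remaining sensors.

```python
def adjust_for_sensor_failure(confidences, conf_threshold, count_threshold):
    """confidences: {sensor_id: confidence in [0, 1]}.
    Returns (degrade, trusted): whether to adjust driving capability,
    and which sensors to keep relying on (the failed ones excluded)."""
    failed = {s for s, c in confidences.items() if c < conf_threshold}
    trusted = set(confidences) - failed
    degrade = len(failed) >= count_threshold
    return degrade, trusted
```

The exact role of the count comparison is not specified in the abstract; the sketch assumes capability is reduced once the failed count reaches the threshold.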
Lane detection and tracking techniques for imaging systems
A method for tracking a lane on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes generating, by the one or more processors, a predicted spline comprising (i) a first spline and (ii) a predicted extension of the first spline in a direction in which the imaging system is moving. The first spline describes a boundary of a lane and is generated based on the set of pixels. The predicted extension of the first spline is generated based at least in part on a curvature of at least a portion of the first spline.
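A simple way to realize a curvature-based predicted extension is to extrapolate the end of the fitted spline forward assuming its measured curvature stays constant. The sketch below shows that constant-curvature extrapolation; the function name and arc-length stepping are assumptions, not the patent's formulation.

```python
import math

def extend_with_curvature(x, y, heading, curvature, step, n_steps):
    """Extrapolate a lane-boundary endpoint forward along an arc of
    constant curvature (1/radius, in rad/m). `heading` is the tangent
    direction at the spline's end; `step` is the arc length per point."""
    pts = []
    for _ in range(n_steps):
        x += step * math.cos(heading)
        y += step * math.sin(heading)
        heading += curvature * step  # curvature bends the heading each step
        pts.append((x, y))
    return pts
```

Zero curvature yields a straight continuation; positive curvature bends the predicted extension steadily to the left, matching the end of the observed spline.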
Drivable surface identification techniques
The present disclosure relates generally to identification of drivable surfaces in connection with autonomously performing various tasks at industrial work sites and, more particularly, to techniques for distinguishing drivable surfaces from non-drivable surfaces based on sensor data. A framework for the identification of drivable surfaces is provided for an autonomous machine, enabling it to autonomously detect the presence of a drivable surface and to estimate, based on sensor data, attributes of the drivable surface such as road condition, road curvature, degree of inclination or declination, and the like. In certain embodiments, at least one camera image is processed to extract a set of features from which surfaces and objects in a physical environment are identified, and to generate additional images for further processing. The additional images are combined with a 3D representation, derived from LIDAR or radar data, to generate an output representation indicating a drivable surface.
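One common way to combine camera-derived classifications with a 3D representation is to mark LIDAR points as drivable when they project onto pixels labeled as road. The sketch below illustrates that fusion step under assumed conventions (the projection is taken as given, and `ROAD_CLASS` is a hypothetical class ID); it is not the patent's specific pipeline.

```python
import numpy as np

ROAD_CLASS = 1  # assumed class ID for "road" in the segmentation mask

def label_drivable_points(points_uv, seg_mask):
    """points_uv: (N, 2) integer pixel coords of 3D points already
    projected into the camera image; seg_mask: (H, W) per-pixel class
    IDs from the camera. Returns a boolean drivable mask over points."""
    u, v = points_uv[:, 0], points_uv[:, 1]
    # Points projecting outside the image cannot be labeled from this view.
    in_image = (u >= 0) & (u < seg_mask.shape[1]) & (v >= 0) & (v < seg_mask.shape[0])
    drivable = np.zeros(len(points_uv), dtype=bool)
    drivable[in_image] = seg_mask[v[in_image], u[in_image]] == ROAD_CLASS
    return drivable
```

Points outside the camera frustum default to non-drivable here; a fuller system would fall back to other views or to the radar/LIDAR-only estimate for those points.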
System and method for providing vehicle collision avoidance at an intersection
A system and method for estimating and communicating a path of travel of a reference vehicle by road side equipment (RSE) that includes establishing communication between the RSE and an on-board equipment of the reference vehicle and receiving vehicle parameters of the reference vehicle from the on-board equipment of the reference vehicle. The system and method also include estimating the path of travel of the reference vehicle based on the vehicle parameters of the reference vehicle and environmental parameters determined by the RSE. The system and method further include establishing communication between the RSE and an on-board equipment of a target vehicle and communicating the estimated path of travel of the reference vehicle from the RSE to the target vehicle, wherein a probability of collision between the reference vehicle and the target vehicle is determined based on the estimated path of travel of the reference vehicle.
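The simplest form of such a path estimate is constant-velocity extrapolation from the reported vehicle parameters, with a collision flag raised when the two predicted paths come too close at the same timestep. This is a hedged stand-in for the RSE's estimation, which the abstract leaves unspecified; the threshold-based risk check is an assumption, not the claimed probability computation.

```python
def predict_path(x, y, vx, vy, horizon, dt):
    """Constant-velocity path estimate: positions at each timestep
    over `horizon` seconds, sampled every `dt` seconds."""
    return [(x + vx * k * dt, y + vy * k * dt)
            for k in range(1, int(horizon / dt) + 1)]

def collision_risk(path_a, path_b, radius):
    """Flag a potential collision when the two paths come within
    `radius` meters of each other at the same timestep."""
    return any(
        ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 < radius
        for (ax, ay), (bx, by) in zip(path_a, path_b)
    )
```

For example, a reference vehicle heading east through an intersection and a target vehicle heading north can be flagged several seconds before their paths cross, which is what lets the RSE warn the target vehicle in time.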
Driving assist device and driving assist method
A driving assist device includes a first sensor, a second sensor, and a control device. Under a predetermined first condition, the control device does not execute an inter-vehicle distance control upon determining that at least one preceding object is detected based on the output of one of the first and second sensors without being detected based on the output of the other, and that the environment of the non-detection sensor (the sensor that did not detect the object) satisfies a first requirement for determining the reliability of its output. Under a predetermined second condition, the control device executes the inter-vehicle distance control upon determining that the environment of the non-detection sensor satisfies a second requirement for determining the reliability of its output.