B60W2554/402

Full uncertainty for motion planning in autonomous vehicles
11634162 · 2023-04-25

Systems and methods for motion planning by a vehicle computing system of an autonomous vehicle are provided. The vehicle computing system can input sensor data to a machine-learned system including one or more machine-learned models. The computing system can obtain, as an output of the machine-learned model(s), motion prediction(s) associated with object(s) detected by the system. The system can convert a shape of the object(s) into a probability of occupancy by convolving an occupied area of the object(s) with a continuous uncertainty associated with the object(s). The system can determine a probability of future occupancy of a plurality of locations in the environment at future times based at least in part on the motion prediction(s) and the probability of occupancy of the object(s). The system can provide the motion prediction(s) and the probability of future occupancy of the plurality of locations to a motion planning system of the autonomous vehicle.
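The core conversion step — convolving an object's occupied area with a continuous positional uncertainty to obtain a per-cell probability of occupancy — can be sketched on a grid. This is a minimal illustration assuming a Gaussian uncertainty and a binary footprint mask; the function names and grid layout are ours, not the patent's:

```python
import numpy as np

def gaussian_kernel(sigma: float, radius: int) -> np.ndarray:
    """1D Gaussian kernel, normalized to sum to 1."""
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def occupancy_probability(occupied_mask: np.ndarray, sigma_cells: float) -> np.ndarray:
    """Convolve a binary occupied-area mask with a Gaussian position
    uncertainty to get a per-cell probability of occupancy in [0, 1].

    occupied_mask: 2D array of 0/1, the object's footprint on the grid.
    sigma_cells: standard deviation of the position uncertainty, in cells.
    """
    k = gaussian_kernel(sigma_cells, radius=int(3 * sigma_cells) + 1)
    mask = occupied_mask.astype(float)
    # Separable 2D convolution: filter rows, then columns.
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, mask)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

grid = np.zeros((11, 11))
grid[4:7, 4:7] = 1.0  # a 3x3 object footprint
prob = occupancy_probability(grid, sigma_cells=1.0)
```

Because the kernel sums to 1 and the mask is at most 1, the output stays a valid probability; cells near the footprint get high values that decay with distance, which is the "blurred" occupancy the planner consumes.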

Methods and systems for prioritizing computing methods for autonomous vehicles

A method includes receiving sensor data associated with one or more inputs associated with a road portion, determining a level of risk associated with each of the one or more inputs, determining an estimated amount of computing resources that each of a plurality of candidate computing methods will consume, and selecting one or more computing methods from the plurality of candidate computing methods to associate with the one or more inputs based on the levels of risk associated with the one or more inputs and the estimated amount of computing resources that the candidate computing methods will consume.
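The selection logic described above can be sketched as a greedy assignment: handle the riskiest inputs first and give each one the highest-fidelity candidate method that still fits the remaining compute budget. All names, the fidelity metric, and the greedy strategy are illustrative assumptions, not the claimed method:

```python
def assign_methods(inputs, methods, budget):
    """inputs: list of (name, risk) pairs with risk in [0, 1].
    methods: list of (name, cost, fidelity) candidate computing methods.
    budget: total computing resources available.

    Greedy sketch: process inputs from highest to lowest risk, assigning
    each the highest-fidelity method whose cost fits the remaining budget.
    """
    by_fidelity = sorted(methods, key=lambda m: m[2], reverse=True)
    assignment, remaining = {}, budget
    for name, risk in sorted(inputs, key=lambda i: i[1], reverse=True):
        for m_name, cost, fidelity in by_fidelity:
            if cost <= remaining:
                assignment[name] = m_name
                remaining -= cost
                break
        else:
            assignment[name] = None  # nothing fits; flag for a fallback path
    return assignment

inputs = [("pedestrian", 0.9), ("parked_car", 0.2)]
methods = [("lite", 1, 0.6), ("full_nn", 5, 0.95)]
result = assign_methods(inputs, methods, budget=6)
# high-risk pedestrian gets the expensive method; the rest share what's left
```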

Light detection and ranging (LIDAR) system having a polarizing beam splitter
11635502 · 2023-04-25

A LIDAR system includes a plurality of LIDAR units. Each of the LIDAR units includes a housing defining a cavity. Each of the LIDAR units further includes a plurality of emitters disposed within the cavity. Each of the plurality of emitters is configured to emit a laser beam. The LIDAR system includes a rotating mirror and a retarder. The retarder is configurable in at least a first mode and a second mode to control a polarization state of a plurality of laser beams emitted from each of the plurality of LIDAR units. The LIDAR system includes a polarizing beam splitter positioned relative to the retarder such that the polarizing beam splitter receives a plurality of laser beams exiting the retarder. The polarizing beam splitter is configured to transmit or reflect the plurality of laser beams exiting the retarder based on the polarization state of the laser beams exiting the retarder.

SYSTEMS AND METHODS FOR OPERATING AN AUTONOMOUS VEHICLE

An autonomous vehicle (AV) includes features that allow the AV to comply with applicable regulations and statutes for performing safe driving operations. Example embodiments relate to an autonomous vehicle having a trailer coupled to a rear thereof. An example method includes continuously predicting a trailer trajectory that is distinct from a planned trajectory of the autonomous vehicle. The method further includes determining that the predicted trailer trajectory is within a minimum avoidance distance of a stationary vehicle located on a roadway on which the autonomous vehicle is located. The method further includes modifying the planned trajectory of the autonomous vehicle such that the predicted trailer trajectory satisfies the minimum avoidance distance. The method further includes causing the autonomous vehicle to navigate along the modified trajectory based on transmitting instructions to one or more subsystems of the autonomous vehicle.
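The avoidance-distance check at the heart of the method reduces to a point-to-trajectory distance test. A minimal sketch, assuming the predicted trailer trajectory is a list of planar positions and the stationary vehicle is approximated by a single point (the real check would use vehicle footprints, not points):

```python
import math

def min_clearance(trailer_traj, obstacle_xy):
    """Smallest Euclidean distance between any predicted trailer
    position (list of (x, y) points) and a stationary vehicle's position."""
    ox, oy = obstacle_xy
    return min(math.hypot(x - ox, y - oy) for x, y in trailer_traj)

def needs_replan(trailer_traj, obstacle_xy, min_avoidance_m=1.5):
    """True when the predicted trailer trajectory violates the minimum
    avoidance distance, triggering modification of the planned trajectory.
    The 1.5 m threshold is an illustrative placeholder."""
    return min_clearance(trailer_traj, obstacle_xy) < min_avoidance_m

traj = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.3)]
```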

EMERGENCY VEHICLE DETECTION SYSTEM AND METHOD
20230063047 · 2023-03-02

In an embodiment, a method includes: receiving ambient sound; determining if the ambient sound includes a siren; in accordance with determining that the ambient sound includes a siren, determining a first location associated with the siren; receiving a camera image; determining if the camera image includes a flashing light; in accordance with determining that the camera image includes a flashing light, determining a second location associated with the flashing light; receiving 3D data; determining if the 3D data includes an object; in accordance with determining that the 3D data includes an object, determining a third location associated with the object; determining a presence of an emergency vehicle based on the siren, detected flashing light and detected object; determining an estimated location of the emergency vehicle based on the first, second and third locations; and initiating an action related to the vehicle based on the determined presence and location.
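The final fusion steps — declaring presence from the three detections and estimating a single location from the first, second, and third locations — can be sketched very simply. The agreement rule and the confidence-weighted mean below are our assumptions for illustration; the patent does not specify this particular fusion:

```python
def fuse_ev_estimate(detections):
    """detections: dict mapping modality ('siren', 'flash', 'lidar') to a
    ((x, y), confidence) pair; modalities with no detection are omitted.

    Sketch: presence is declared when at least two modalities agree that
    something is there, and the estimated location is the
    confidence-weighted mean of the per-modality locations.
    """
    if len(detections) < 2:
        return False, None
    total = sum(conf for _, conf in detections.values())
    x = sum(loc[0] * conf for loc, conf in detections.values()) / total
    y = sum(loc[1] * conf for loc, conf in detections.values()) / total
    return True, (x, y)

present, where = fuse_ev_estimate({
    "siren": ((0.0, 0.0), 1.0),   # bearing-derived location from audio
    "flash": ((2.0, 0.0), 1.0),   # location from camera flashing light
})
```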

AUTONOMOUS VEHICLE MANEUVER IN RESPONSE TO EMERGENCY PERSONNEL HAND SIGNALS

A control device associated with an autonomous vehicle detects that emergency personnel are altering traffic on a road using an emergency-related hand signal to divert the traffic from a road anomaly, such as a road accident. The control device determines an interpretation of the emergency-related hand signal. The control device determines a proposed trajectory for the autonomous vehicle according to the interpretation of the emergency-related hand signal. In certain embodiments, the control device may navigate the autonomous vehicle according to the interpretation of the emergency-related hand signal. In certain embodiments, the control device may transmit the proposed trajectory to an oversight server for confirmation. In certain embodiments, the oversight server may confirm or override the proposed trajectory.

Vehicle system for recognizing objects
11661068 · 2023-05-30

A vehicle system includes an electronic control unit. The electronic control unit is configured to execute a first program, a second program, and a third program. The first program is configured to recognize an object present around a vehicle, the second program is configured to store information related to the recognized object as time-series map data, and the third program is configured to predict a future position of the object based on the stored time-series map data. The first program and the third program are configured to be (i) first, individually optimized based on first training data corresponding to output of the first program and second training data corresponding to output of the third program, and (ii) then, collectively optimized based on the second training data corresponding to the output of the third program.

PREDICTION SAMPLING TECHNIQUES

Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and, based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a distribution of predicted positions for the object in the future. A predicted position of the object at a subsequent timestep may be determined by sampling from the distribution of predicted positions according to various sampling strategies. Alternatively, the predicted position of the object may be overwritten using a candidate position of the object.
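The "various sampling strategies" over the decoded distribution can be illustrated with a discrete candidate set. Representing the distribution as weighted candidate positions, and the strategy names themselves, are our simplifications; the patent's decoder and strategies may differ:

```python
import numpy as np

def sample_predicted_position(candidates, weights, strategy="most_likely", rng=None):
    """candidates: (N, 2) array of predicted positions decoded from the
    object's graph node; weights: length-N unnormalized probabilities.

    Two illustrative strategies:
      - 'most_likely': deterministic, take the highest-weight candidate.
      - 'random': draw a candidate in proportion to its probability.
    """
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    if strategy == "most_likely":
        return candidates[int(np.argmax(weights))]
    if strategy == "random":
        rng = rng or np.random.default_rng(0)
        return candidates[rng.choice(len(candidates), p=weights)]
    raise ValueError(f"unknown strategy: {strategy}")

candidates = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
weights = [0.1, 0.7, 0.2]
pos = sample_predicted_position(candidates, weights)
```

The "overwrite" alternative in the abstract corresponds to bypassing the sampler entirely and substituting an externally supplied candidate position.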

FOCUSING PREDICTION DISTRIBUTION OUTPUT FOR EFFICIENT SAMPLING

Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and, based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a distribution of predicted positions for the object in the future that meet a criterion, allowing for more efficient sampling. A predicted position of the object in the future may be determined by sampling from the distribution.

ENCODING RELATIVE OBJECT INFORMATION INTO NODE EDGE FEATURES

Techniques for determining unified futures of objects in an environment are discussed herein. Techniques may include determining a first feature associated with an object in an environment and a second feature associated with the environment and, based on a position of the object in the environment, updating a graph neural network (GNN) to encode the first feature and second feature into a graph node representing the object and encode relative positions of additional objects in the environment into one or more edges attached to the node. The GNN may be decoded to determine a predicted position of the object at a subsequent timestep. Further, a predicted trajectory of the object may be determined using predicted positions of the object at various timesteps.
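The last step — building a predicted trajectory from per-timestep predicted positions — is an autoregressive rollout. A minimal sketch in which a stand-in one-step predictor takes the place of the decoded GNN (the constant-velocity lambda below is purely illustrative):

```python
def rollout_trajectory(initial_position, step_fn, horizon):
    """Build a predicted trajectory by repeatedly applying a one-step
    predictor to the most recent position.

    initial_position: (x, y) at the current timestep.
    step_fn: callable mapping a position to the next predicted position
             (standing in for decoding the GNN at each timestep).
    horizon: number of future timesteps to predict.
    """
    traj = [initial_position]
    for _ in range(horizon):
        traj.append(step_fn(traj[-1]))
    return traj

# constant-velocity stand-in for the learned one-step predictor
path = rollout_trajectory((0.0, 0.0), lambda p: (p[0] + 1.0, p[1] + 0.5), horizon=3)
```

In a full system each step would re-encode the updated scene before decoding the next position; the loop structure stays the same.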