Patent classifications
B60W60/00272
Systems and Methods for Mitigating Vehicle Pose Error Across an Aggregated Feature Map
Systems and methods for improved vehicle-to-vehicle communications are provided. A system can obtain sensor data depicting its surrounding environment and input the sensor data (or processed sensor data) to a machine-learned model to perceive its surrounding environment based on its location within the environment. The machine-learned model can generate an intermediate environmental representation that encodes features within the surrounding environment. The system can receive a number of different intermediate environmental representations and corresponding locations from various other systems, aggregate the representations based on the corresponding locations, and perceive its surrounding environment based on the aggregated representations. The system can determine relative poses between each of the systems and an absolute pose for each system based on the representations. Each representation can be aggregated based on the relative or absolute poses of each system and weighted according to an estimated accuracy of the location corresponding to the representation.
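The confidence-weighted aggregation described above can be sketched as follows. This is a minimal illustration only: the function name, the plain weighted average, and the use of 2-D lists as feature maps are assumptions, not the patented learned aggregation.

```python
def aggregate_feature_maps(feature_maps, pose_confidences):
    """Fuse per-vehicle feature maps (2-D lists of floats), weighting each
    map by the estimated accuracy of the pose that aligned it to the ego
    frame. Illustrative sketch; the patent uses a machine-learned model."""
    total = sum(pose_confidences)
    weights = [c / total for c in pose_confidences]  # normalize to sum to 1
    rows, cols = len(feature_maps[0]), len(feature_maps[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for fmap, w in zip(feature_maps, weights):
        for i in range(rows):
            for j in range(cols):
                fused[i][j] += w * fmap[i][j]
    return fused

maps = [[[1.0, 1.0], [1.0, 1.0]],   # vehicle A, low-confidence pose
        [[4.0, 4.0], [4.0, 4.0]]]   # vehicle B, high-confidence pose
fused = aggregate_feature_maps(maps, [1.0, 2.0])  # each cell -> 3.0
```

A map aligned with a more trustworthy pose estimate thus contributes proportionally more to each fused cell.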
Mapped driving paths for autonomous vehicle
A method for receiving autonomous vehicle (AV) driving path data associated with a driving path in a roadway of a geographic location. The driving path data is associated with a trajectory for an AV in the roadway and with trajectory points in that trajectory, which are used to determine, based on the map data, at least one feature of the roadway positioned a lateral distance from a first trajectory of the one or more trajectories of the driving path. The method includes receiving map data associated with a map of a geographic location, determining a driving path for an AV in a roadway, generating driving path information based on a trajectory point in a trajectory of the driving path, and providing driving path data associated with the driving path to an AV for controlling the AV on the roadway.
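Locating a roadway feature a lateral distance from a trajectory can be sketched as a point-to-polyline distance over the trajectory points. The function name and the minimum-over-segments metric are assumptions for illustration; the patent does not specify this computation.

```python
import math

def lateral_distance(feature, trajectory):
    """Distance from a roadway feature (x, y) to a driving-path trajectory,
    taken as the minimum perpendicular distance to any segment between
    consecutive trajectory points. Illustrative sketch only."""
    def seg_dist(p, a, b):
        ax, ay = a; bx, by = b; px, py = p
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:
            return math.hypot(px - ax, py - ay)
        # Project the point onto the segment, clamped to its endpoints.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
        return math.hypot(px - (ax + t * dx), py - (ay + t * dy))
    return min(seg_dist(feature, a, b) for a, b in zip(trajectory, trajectory[1:]))

# Straight path along the x-axis; a lane marker sits 3 m to the side.
path = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]
d = lateral_distance((12.0, 3.0), path)  # -> 3.0
```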
REMOTE SUPPORT SYSTEM AND REMOTE SUPPORT METHOD
A remote support system is configured to determine whether a vehicle will collide with an object to be avoided when the vehicle enters a remote control request situation, refrain from sending a remote control request when the remote support system determines that the vehicle will not collide with the object, generate a first speed plan for the vehicle to continue autonomous driving through a predicted collision position and a second speed plan for the vehicle to stop before reaching the predicted collision position when the remote support system determines that the vehicle will collide with the object, and determine whether to send a remote control request based on the degree of deviation between these speed plans.
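The deviation test between the two speed plans can be sketched as below. The mean-absolute-difference metric, the threshold value, and the function name are assumptions; the patent only says the decision is based on the degree of deviation.

```python
def should_request_remote_control(plan_continue, plan_stop, threshold):
    """Compare two speed plans (speeds at matched time steps): one that
    continues autonomous driving and one that stops before the predicted
    collision position. If the plans deviate strongly, defer to a remote
    operator. Metric and threshold are illustrative assumptions."""
    deviation = sum(abs(a - b) for a, b in zip(plan_continue, plan_stop)) / len(plan_continue)
    return deviation > threshold

# Continuing at ~10 m/s versus braking to a stop: large deviation.
continue_plan = [10.0, 10.0, 10.0, 10.0]
stop_plan = [10.0, 6.0, 2.0, 0.0]
request = should_request_remote_control(continue_plan, stop_plan, threshold=2.0)  # -> True
```

When the two plans barely differ, autonomous driving can continue without operator involvement.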
AUTONOMOUS DRIVING CRASH PREVENTION
Autonomous vehicles must accommodate various road configurations such as straight roads, curved roads, controlled intersections, uncontrolled intersections, and many others. Autonomous driving systems must make decisions about the speed and distance of traffic and about obstacles, including obstacles that obstruct the view of the autonomous vehicle's sensors. For example, at intersections, the autonomous driving system must identify vehicles in the path of the autonomous vehicle, or potentially in the path based on a planned path, estimate the distance to those vehicles, and estimate the speeds of those vehicles. Then, based on those estimates, the road configuration, and environmental conditions, the autonomous driving system must decide whether it is safe to proceed along the planned path, and when it is safe to proceed.
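The intersection decision described above amounts to gap acceptance: compare the approaching vehicle's time to reach the conflict point against the time the ego vehicle needs to clear it. The function, the time-gap criterion, and the margin value are illustrative assumptions.

```python
def safe_to_proceed(distance_m, speed_mps, crossing_time_s, margin_s=2.0):
    """Decide whether the ego vehicle can clear an intersection before an
    approaching vehicle arrives: proceed only if that vehicle's time to
    reach the conflict point exceeds the ego crossing time plus a safety
    margin. Simplified sketch of the gap-acceptance reasoning described."""
    if speed_mps <= 0:
        return True                      # approaching vehicle is stopped
    time_to_arrival = distance_m / speed_mps
    return time_to_arrival > crossing_time_s + margin_s

safe_to_proceed(80.0, 10.0, crossing_time_s=4.0)   # 8 s gap vs 6 s needed -> True
safe_to_proceed(40.0, 10.0, crossing_time_s=4.0)   # 4 s gap vs 6 s needed -> False
```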
SENSOR FUSION FOR AUTONOMOUS MACHINE APPLICATIONS USING MACHINE LEARNING
In various examples, a multi-sensor fusion machine learning model—such as a deep neural network (DNN)—may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of a same object appearing in different input representations.
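The deduplication behavior the fusion network is trained for can be illustrated with a hand-written stand-in: merging detections from overlapping fields of view that fall within a radius of each other. Everything here (names, tuple layout, merge radius, keep-higher-score rule) is an assumption; the patent's fusion is a learned DNN, not this heuristic.

```python
import math

def fuse_detections(per_sensor_detections, merge_radius=1.0):
    """Fuse (x, y, score) detections from several per-sensor models,
    merging detections within merge_radius of each other so the same
    object seen in overlapping fields of view is reported once.
    Hand-written stand-in for the learned fusion network."""
    fused = []
    for detections in per_sensor_detections:
        for x, y, score in detections:
            for i, (fx, fy, fscore) in enumerate(fused):
                if math.hypot(x - fx, y - fy) <= merge_radius:
                    if score > fscore:          # keep the higher-confidence one
                        fused[i] = (x, y, score)
                    break
            else:
                fused.append((x, y, score))
    return fused

cam = [(5.0, 2.0, 0.9)]
lidar = [(5.2, 2.1, 0.8), (20.0, 0.0, 0.7)]
objects = fuse_detections([cam, lidar])  # nearby detections merge -> 2 objects
```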
SYSTEMS AND METHODS FOR ESTIMATING CUBOIDS FROM LIDAR, MAP AND IMAGE DATA
Systems and methods for operating an autonomous vehicle. The methods comprising: obtaining, by a computing device, a LiDAR dataset; plotting, by a computing device, the LiDAR dataset on a 3D graph to define a 3D point cloud; using, by a computing device, the LiDAR dataset and contents of a vector map to define a cuboid on the 3D graph that encompasses points of the 3D point cloud that are associated with an object in proximity to the vehicle, where the vector map comprises lane information; and using the cuboid to facilitate driving-related operations of the autonomous vehicle.
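Defining a cuboid that encompasses an object's points, oriented by lane information from the vector map, can be sketched as rotating the points into the lane-heading frame and taking an axis-aligned bounding box there. The function signature and return layout are assumptions for illustration.

```python
import math

def fit_cuboid(points, lane_heading):
    """Fit a bounding cuboid around a LiDAR point cluster (x, y, z),
    orienting it with the lane heading (radians) from the vector map
    rather than the sensor axes. Returns (center, (length, width, height),
    heading). Simplified sketch of the cuboid estimation described."""
    c, s = math.cos(-lane_heading), math.sin(-lane_heading)
    # Rotate points into the lane-aligned frame.
    local = [(x * c - y * s, x * s + y * c, z) for x, y, z in points]
    mins = [min(p[i] for p in local) for i in range(3)]
    maxs = [max(p[i] for p in local) for i in range(3)]
    size = tuple(maxs[i] - mins[i] for i in range(3))
    cx, cy = [(mins[i] + maxs[i]) / 2 for i in range(2)]
    # Rotate the local-frame center back into the world frame.
    center = (cx * math.cos(lane_heading) - cy * math.sin(lane_heading),
              cx * math.sin(lane_heading) + cy * math.cos(lane_heading),
              (mins[2] + maxs[2]) / 2)
    return center, size, lane_heading

# A 4 m x 2 m x 1.5 m cluster aligned with a lane heading of 0.
pts = [(0, 0, 0), (4, 0, 0), (4, 2, 1.5), (0, 2, 1.5)]
center, size, heading = fit_cuboid(pts, lane_heading=0.0)
```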
CONVOLUTION OPERATOR SELECTION
A vehicle system includes one or more sensors configured to capture aspects of an environment and a computing device. The computing device is configured to receive information about the environment captured by the one or more sensors, determine one or more structures within the environment based on the received information, select a kernel that is parameterized for predicting a vehicle trajectory based on the one or more structures determined within the environment, and perform a convolution of the selected kernel and an array defining the environment, wherein the convolution predicts a future trajectory of a vehicle within the environment.
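The select-then-convolve step can be sketched as follows: a kernel bank keyed by detected road structure, and a plain 2-D convolution over the environment array. The kernel shapes and the structure labels are illustrative assumptions, not the patented parameterizations.

```python
def convolve2d(grid, kernel):
    """Valid-mode 2-D convolution of an environment grid with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(grid) - kh + 1):
        row = []
        for j in range(len(grid[0]) - kw + 1):
            row.append(sum(kernel[a][b] * grid[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Hypothetical kernel bank: one kernel per detected road structure.
KERNELS = {
    "straight": [[0.0, 1.0, 0.0]] * 3,      # propagate occupancy forward
    "intersection": [[1 / 9.0] * 3] * 3,    # spread probability evenly
}

def predict_trajectory(grid, structure):
    """Select a kernel parameterized for the detected structure and
    convolve it with the environment array, as in the abstract."""
    return convolve2d(grid, KERNELS[structure])

grid = [[0, 0, 0, 0],
        [0, 1, 0, 0],      # single occupied cell (the vehicle)
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
out = predict_trajectory(grid, "straight")
```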
VEHICLE TRAJECTORY MODIFICATION FOR FOLLOWING
Techniques for determining to modify a trajectory based on an object are discussed herein. A vehicle can determine a drivable area of an environment, capture sensor data representing an object in the environment, and perform a spot check to determine whether or not to modify a trajectory. Such a spot check may include processing to incorporate an actual or predicted extent of the object into the drivable area to modify the drivable area. A distance between a reference trajectory and the object can be determined at discrete points along the reference trajectory, and based on a cost, distance, or intersection associated with the trajectory and the modified area, the vehicle can modify its trajectory. One trajectory modification includes following, which may include varying a longitudinal control of the vehicle, for example, to maintain a relative distance and velocity between the vehicle and the object.
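The spot check at discrete points along the reference trajectory can be sketched as below. The three-way outcome labels, the corridor half-width, and the follow margin are assumptions for illustration; the patent frames the decision in terms of cost, distance, or intersection with the modified drivable area.

```python
import math

def check_trajectory(reference, obstacle, half_width, follow_margin):
    """Spot-check a reference trajectory against an object: measure the
    distance from each discrete trajectory point to the object and decide
    whether to keep the trajectory, follow the object, or replan.
    Thresholds are illustrative assumptions."""
    min_dist = min(math.hypot(x - obstacle[0], y - obstacle[1])
                   for x, y in reference)
    if min_dist > half_width + follow_margin:
        return "keep"            # object never encroaches on the path
    if min_dist > half_width:
        return "follow"          # close ahead: vary longitudinal control
    return "replan"              # object intersects the drivable corridor

traj = [(float(x), 0.0) for x in range(10)]
check_trajectory(traj, obstacle=(6.0, 2.5), half_width=1.5, follow_margin=2.0)  # -> "follow"
```

In the "follow" case the vehicle would adjust longitudinal control to hold a relative distance and velocity, as the abstract describes.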
SYSTEMS AND METHODS FOR LONG-TERM PREDICTION OF LANE CHANGE MANEUVER
A method comprises making initial predictions of whether a first vehicle will perform a lane change at a plurality of future time steps based on sensor data captured by an ego vehicle; and in response to making an initial prediction that the first vehicle will perform a lane change at a first one of the future time steps, making final predictions that the first vehicle will perform a lane change at each of a plurality of time steps subsequent to the first one of the future time steps.
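The two-stage logic can be sketched as a scan for the first confident initial prediction, after which the final predictions assert a lane change at that step and every subsequent one. The probability threshold and function name are assumptions.

```python
def predict_lane_changes(initial_probs, threshold=0.5):
    """Two-stage sketch: scan initial per-time-step lane-change
    probabilities for the first confident step; final predictions then
    assert a lane change at that step and each subsequent step,
    mirroring the logic in the abstract."""
    first = next((i for i, p in enumerate(initial_probs) if p >= threshold), None)
    if first is None:
        return [False] * len(initial_probs)   # no confident initial prediction
    return [i >= first for i in range(len(initial_probs))]

probs = [0.1, 0.2, 0.7, 0.4, 0.3]            # confident at step 2
final = predict_lane_changes(probs)           # -> [False, False, True, True, True]
```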
Trajectory prediction on top-down scenes and associated model
Techniques are discussed for determining prediction probabilities of an object based on a top-down representation of an environment. Data representing objects in an environment can be captured. Aspects of the environment can be represented as map data. A multi-channel image representing a top-down view of object(s) in the environment can be generated based on the data representing the objects and map data. The multi-channel image can be used to train a machine learned model by minimizing an error between predictions from the machine learned model and a captured trajectory associated with the object. Once trained, the machine learned model can be used to generate prediction probabilities of objects in an environment, and the vehicle can be controlled based on such prediction probabilities.
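Building the multi-channel top-down image can be sketched as rasterizing objects and map lanes into separate channels of one grid. The two-channel layout, grid size, and function name are minimal assumptions; a real input would also encode heading, extent, velocity, and more map channels.

```python
def rasterize_top_down(objects, lanes, size=8, resolution=1.0):
    """Build a two-channel top-down image of the environment:
    channel 0 marks object positions from sensor data, channel 1 marks
    lane cells from map data. Minimal stand-in for the multi-channel
    image described in the abstract."""
    image = [[[0.0] * size for _ in range(size)] for _ in range(2)]
    for channel, points in enumerate((objects, lanes)):
        for x, y in points:
            i, j = int(y / resolution), int(x / resolution)
            if 0 <= i < size and 0 <= j < size:
                image[channel][i][j] = 1.0
    return image

img = rasterize_top_down(objects=[(3.0, 2.0)], lanes=[(3.0, 2.0), (4.0, 2.0)])
# img[0][2][3] == 1.0 (object channel); img[1][2][4] == 1.0 (lane channel)
```

Stacks of such images over time, paired with the objects' captured trajectories, would form the training pairs the abstract describes.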