Patent classifications
B60W2554/4029
SENSOR FUSION FOR AUTONOMOUS MACHINE APPLICATIONS USING MACHINE LEARNING
In various examples, a multi-sensor fusion machine learning model, such as a deep neural network (DNN), may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from the fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of the same object appearing in different input representations.
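The abstract's key claim is that the fused output deduplicates the same object seen by several sensors in overlapping fields of view. A minimal, library-free sketch of that deduplication behaviour (the patent uses a learned DNN; the merge radius, confidence weighting, and `Detection` fields here are illustrative assumptions, not the patented method):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    x: float          # position in a shared world frame (metres) -- assumed layout
    y: float
    confidence: float # per-sensor model confidence in [0, 1]

def fuse_detections(per_sensor_outputs, merge_radius=1.0):
    """Merge detections of the same object reported by several per-sensor
    models: detections closer than merge_radius are treated as one object
    and averaged, weighted by confidence."""
    fused = []
    for detections in per_sensor_outputs:
        for det in detections:
            for f in fused:
                if (f.x - det.x) ** 2 + (f.y - det.y) ** 2 <= merge_radius ** 2:
                    w = f.confidence + det.confidence
                    f.x = (f.x * f.confidence + det.x * det.confidence) / w
                    f.y = (f.y * f.confidence + det.y * det.confidence) / w
                    f.confidence = max(f.confidence, det.confidence)
                    break
            else:
                fused.append(Detection(det.x, det.y, det.confidence))
    return fused

# Two sensors with overlapping fields of view see the same pedestrian:
camera = [Detection(10.0, 2.0, 0.9)]
lidar = [Detection(10.2, 2.1, 0.8), Detection(30.0, -4.0, 0.7)]
result = fuse_detections([camera, lidar])
print(len(result))  # 2 objects, not 3: the overlap region is deduplicated
```

The learned network replaces this hand-written nearest-neighbour merge, but the input/output contract (per-model detections in, one deduplicated representation out) is the same.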
Device, method, and storage medium
A device includes a storage device configured to store a program and a hardware processor, wherein the hardware processor executes the program stored in the storage device to: recognize positions of a plurality of traffic participants; determine a temporary goal that each of the plurality of traffic participants is trying to reach in the future, based on the recognition results; and simulate, using a movement model, a movement process in which each of the plurality of traffic participants moves toward the temporary goal, to estimate a future action of each of the plurality of traffic participants.
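The core step above is rolling a movement model forward toward a temporary goal. A minimal sketch, assuming a constant-speed point-mass model (the function name, step size, and pedestrian speed are illustrative):

```python
import math

def simulate_toward_goal(position, goal, speed, dt, steps):
    """Roll a simple constant-speed movement model forward: at each step
    the traffic participant heads straight for its temporary goal."""
    x, y = position
    gx, gy = goal
    trajectory = [(x, y)]
    for _ in range(steps):
        dx, dy = gx - x, gy - y
        dist = math.hypot(dx, dy)
        if dist <= speed * dt:        # goal reached within this step
            x, y = gx, gy
        else:                          # unit direction vector times step length
            x += speed * dt * dx / dist
            y += speed * dt * dy / dist
        trajectory.append((x, y))
    return trajectory

# A pedestrian 5 m from a crosswalk entry (the temporary goal), 1.4 m/s:
path = simulate_toward_goal((0.0, 0.0), (5.0, 0.0), speed=1.4, dt=0.5, steps=8)
print(path[-1])  # (5.0, 0.0): the goal is reached within 4 s
```

The simulated trajectory is then the estimate of the participant's future action; a richer movement model (acceleration limits, collision avoidance) would slot into the same loop.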
Vehicle control system
The vehicle control system includes a first controller configured to generate a target trajectory for automated driving, and a second controller configured to execute vehicle travel control such that the vehicle follows the target trajectory. During the automated driving, the second controller controls a travel control amount, which is a control amount of the vehicle travel control, acquires driving environment information, and executes preventive safety control for intervening in the travel control amount based on the driving environment information. The first controller includes a memory device in which information of an intervention suppression area is stored. When the vehicle travels in the intervention suppression area during the automated driving, the first controller outputs a suppression instruction for the preventive safety control to the second controller, and the second controller suppresses intervention of the travel control amount by the preventive safety control when the suppression instruction is received.
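The interaction between the two controllers reduces to a small decision rule: the preventive safety control may override the planned travel control amount unless the first controller's suppression instruction is active. A sketch under that reading (the area labels and deceleration values are hypothetical):

```python
SUPPRESSION_AREAS = {"railroad_crossing", "narrow_tunnel"}  # hypothetical stored areas

def suppression_instruction(current_area):
    """First-controller logic: emit a suppression instruction while the
    vehicle travels inside a stored intervention suppression area."""
    return current_area in SUPPRESSION_AREAS

def apply_travel_control(base_amount, safety_intervention, suppress):
    """Second-controller logic: the preventive safety control overrides
    the travel control amount unless suppression is instructed."""
    if safety_intervention is not None and not suppress:
        return safety_intervention
    return base_amount

# Inside a suppression area, the planned deceleration is kept as-is:
suppress = suppression_instruction("railroad_crossing")
amount = apply_travel_control(base_amount=-1.0, safety_intervention=-3.5,
                              suppress=suppress)
print(amount)  # -1.0: the preventive braking intervention is suppressed
```

Outside such an area (`suppress=False`), the same call would return the intervention value `-3.5`, i.e. the safety control takes over.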
Object state tracking and prediction using supplemental information
Systems, methods, and embodiments described herein relate to predicting a future state of an object detected in a vicinity of a vehicle. In one embodiment, a method for predicting a state of an object includes detecting, at a plurality of discrete times [t, t−1, t−2, . . . ], a respective plurality of states of the object; obtaining, based at least in part on a present location of the vehicle, supplemental information, associated with an environment of the present location, that indicates at least a speed reduction factor; executing a prediction operation to determine a predicted state of the object at a time t+1 based at least in part on the detected plurality of states and the supplemental information; determining an actual state of the object at the time t+1 based on data from one or more sensors of the vehicle; and modifying the prediction operation based at least in part on the actual state.
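The described loop (predict from recent states plus a speed reduction factor, compare against the observed actual state, adapt) can be sketched with a constant-velocity model and a simple additive correction; the class name, one-dimensional state, and update rate are illustrative assumptions:

```python
class SpeedAwarePredictor:
    """Sketch of the loop: predict the object's next state from its recent
    states and a map-derived speed reduction factor, then modify the
    prediction operation using the error against the actual state."""

    def __init__(self):
        self.correction = 0.0  # additive bias learned from past errors

    def predict(self, states, speed_reduction):
        # states: 1-D positions at [..., t-1, t]; constant-velocity step,
        # scaled down where supplemental info indicates slowing
        velocity = states[-1] - states[-2]
        return states[-1] + velocity * speed_reduction + self.correction

    def update(self, predicted, actual, rate=0.5):
        # modify the prediction operation based on the actual state
        self.correction += rate * (actual - predicted)

predictor = SpeedAwarePredictor()
states = [0.0, 1.0, 2.0]                             # 1 m per step so far
p1 = predictor.predict(states, speed_reduction=0.5)  # 2.5: expects slowing
predictor.update(p1, actual=2.4)                     # object slowed slightly more
p2 = predictor.predict(states, speed_reduction=0.5)
print(p1, p2)  # 2.5 2.45
```

A school-zone speed limit, for example, would supply `speed_reduction < 1`, biasing the prediction toward slower motion before any slowing is observed.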
Predicting behaviors of road agents using intermediate intention signals
An autonomous vehicle includes sensor subsystem(s) that output a sensor signal. A perception subsystem (i) detects an agent in a vicinity of the autonomous vehicle and (ii) generates a motion signal that describes at least one of a past motion or a present motion of the agent. An intention prediction subsystem processes the sensor signal to generate an intention signal that describes at least one intended action of the agent. A behavior prediction subsystem processes the motion signal and the intention signal to generate a behavior prediction signal that describes at least one predicted behavior of the agent. A planner subsystem processes the behavior prediction signal to plan a driving decision for the autonomous vehicle.
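The abstract describes a four-stage pipeline (perception, intention prediction, behavior prediction, planner) distinguished by the intermediate intention signal. A toy sketch of how the signals chain together; the field names and decision rules are illustrative stand-ins for the learned subsystems:

```python
def perception(sensor_signal):
    """Detect an agent and summarise its past/present motion (motion signal)."""
    return {"agent_id": sensor_signal["agent_id"],
            "speed": sensor_signal["speed"],
            "heading": sensor_signal["heading"]}

def intention_prediction(sensor_signal):
    """Infer an intermediate intention directly from the sensor signal,
    e.g. an active turn signal suggests an intended lane change."""
    return "lane_change" if sensor_signal["turn_signal_on"] else "keep_lane"

def behavior_prediction(motion, intention):
    """Combine the motion signal and intention signal into a behavior."""
    if intention == "lane_change" and motion["speed"] > 0:
        return "merge_ahead"
    return "continue_straight"

def planner(behavior):
    """Turn the behavior prediction signal into a driving decision."""
    return "yield" if behavior == "merge_ahead" else "maintain_speed"

signal = {"agent_id": 7, "speed": 12.0, "heading": 0.1, "turn_signal_on": True}
decision = planner(behavior_prediction(perception(signal),
                                       intention_prediction(signal)))
print(decision)  # yield
```

The point of the architecture is visible even in the toy version: the behavior predictor sees the intention signal, not the raw sensor data, so an intention the motion alone would not reveal (a turn signal before any lateral movement) still reaches the planner.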
Predicting crossing behavior of agents in the vicinity of an autonomous vehicle
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating path prediction data for agents in the vicinity of an autonomous vehicle using machine learning models. One method includes identifying an agent in a vicinity of an autonomous vehicle navigating an environment and determining that the agent is within a vicinity of a crossing zone, which can be marked or unmarked. In response to determining that the agent is within the vicinity of the crossing zone: (i) features of the agent and of the crossing zone can be obtained; (ii) a first input that includes the features can be processed using a first machine learning model that is configured to generate a first crossing prediction that characterizes future crossing behavior of the agent; and (iii) a predicted path for the agent for crossing the roadway can be determined from at least the first crossing prediction.
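Steps (i)–(iii) can be sketched end to end with a logistic scorer standing in for the first machine learning model; the feature choices, weights, and zone geometry are illustrative assumptions, not the trained model:

```python
import math

def crossing_features(agent, crossing_zone):
    """(i) Obtain features of the agent and of the crossing zone."""
    return [agent["distance_to_zone"],
            agent["speed"],
            1.0 if agent["facing_zone"] else 0.0,
            1.0 if crossing_zone["marked"] else 0.0]

def crossing_prediction(features, weights, bias):
    """(ii) Stand-in for the first ML model: a logistic score for
    whether the agent will cross."""
    z = bias + sum(w * f for w, f in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def predicted_path(agent, crossing_zone, p_cross, threshold=0.5):
    """(iii) Derive a coarse predicted path from the crossing prediction."""
    if p_cross >= threshold:
        return [agent["position"], crossing_zone["entry"], crossing_zone["exit"]]
    return [agent["position"]]  # expected to remain on the sidewalk

agent = {"position": (0, 0), "distance_to_zone": 2.0,
         "speed": 1.2, "facing_zone": True}
zone = {"marked": True, "entry": (3, 0), "exit": (3, 8)}
weights, bias = [-0.5, 0.4, 1.5, 0.8], -0.3   # illustrative, not learned
p = crossing_prediction(crossing_features(agent, zone), weights, bias)
path = predicted_path(agent, zone, p)
```

Here a nearby, zone-facing pedestrian at a marked crossing scores above the threshold, so the predicted path runs through the zone's entry and exit points.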
SYSTEMS AND METHODS FOR RECONSTRUCTION OF A VEHICULAR CRASH
A system for notifying emergency services of a vehicular crash may (i) receive sensor data of a vehicular crash from at least one mobile device associated with a user; (ii) generate a scenario model of the vehicular crash based upon the received sensor data; (iii) store the scenario model; and/or (iv) transmit a message to one or more emergency services based upon the scenario model. As a result, the speed and accuracy of deploying emergency services to the vehicular crash location are increased. The system may also utilize vehicle occupant positional data, and internal and external sensor data, to detect potential imminent vehicle collisions, take corrective actions, automatically engage autonomous or semi-autonomous vehicle features, and/or generate virtual reconstructions of the vehicle collision.
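Steps (i)–(iv) form a simple receive-model-store-notify pipeline. A sketch under assumed data shapes (the GPS/accelerometer fields, severity threshold, and callback interface are all hypothetical):

```python
def handle_crash_event(sensor_data, store, notify):
    """(i) receive sensor data; (ii) build a scenario model; (iii) store
    it; (iv) transmit a message to emergency services based on it."""
    peak_g = max(sensor_data["accelerometer"])          # peak acceleration (g)
    model = {
        "location": sensor_data["gps"],
        "peak_g": peak_g,
        "severity": "major" if peak_g > 4.0 else "minor",  # hypothetical threshold
    }
    store(model)
    notify({"to": "emergency_services",
            "location": model["location"],
            "severity": model["severity"]})
    return model

stored, sent = [], []
model = handle_crash_event(
    {"gps": (37.77, -122.42), "accelerometer": [0.9, 6.2, 1.1]},
    store=stored.append, notify=sent.append)
print(model["severity"], len(sent))  # major 1
```

Passing `store` and `notify` as callables keeps the sketch self-contained while mirroring the abstract's separation between building the scenario model and acting on it.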
AUTONOMOUS VEHICLE APPLICATION
Methods and systems for communicating between autonomous vehicles are described herein. Such communication may be performed for signaling, collision avoidance, path coordination, and/or autonomous control. A computing device may receive data for the same road segment from autonomous vehicles, including (i) an indication of a location within the road segment, and (ii) an indication of a condition of the road segment. The computing device may generate, from the data for the same road segment, an overall indication of the condition of the road segment, which may include a recommendation to vehicles approaching the road segment. Additionally, the computing device may receive, from a computing device within a vehicle approaching the road segment, a request to display vehicle data. The overall indication for the road segment may then be displayed on a user interface of that computing device.
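The aggregation step (many per-vehicle reports in, one overall indication plus a recommendation out) can be sketched as a majority vote over reported conditions; the segment name, condition labels, and advisory table are illustrative:

```python
from collections import Counter

def overall_condition(reports):
    """Aggregate per-vehicle reports for the same road segment into an
    overall condition and a recommendation for approaching vehicles."""
    counts = Counter(r["condition"] for r in reports)
    condition, _count = counts.most_common(1)[0]       # majority vote
    advisories = {"icy": "reduce speed", "flooded": "avoid segment"}  # illustrative
    return {"segment": reports[0]["segment"],
            "condition": condition,
            "recommendation": advisories.get(condition, "no action")}

reports = [
    {"segment": "I-80 mile 42", "location": (41.1, -111.9), "condition": "icy"},
    {"segment": "I-80 mile 42", "location": (41.1, -111.9), "condition": "icy"},
    {"segment": "I-80 mile 42", "location": (41.1, -111.9), "condition": "clear"},
]
summary = overall_condition(reports)
print(summary["condition"], "-", summary["recommendation"])  # icy - reduce speed
```

The summary dict is what would be sent, on request, to the in-vehicle computing device for display.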
VIRTUAL TESTING OF AUTONOMOUS ENVIRONMENT CONTROL SYSTEM
Methods and systems for assessing, detecting, and responding to malfunctions involving components of autonomous vehicles and/or smart homes are described herein. Autonomous operation features and related components can be assessed using direct or indirect data regarding operation. Such assessment may be performed to determine the robustness of autonomous systems, including the use of virtual assessment of software components within a simulated environment. To this end, a server may retrieve one or more routines associated with autonomous operation. The server may also generate a set of test data associated with test conditions. The server may also execute an emulator that virtually simulates an autonomous environment. The test data may be presented to the routines executing in the emulator to generate output data, and the server may then analyze the output data to determine a quality metric.
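The server-side loop (generate test data, run routines in the emulator, score outputs into a quality metric) can be sketched as a pass-rate harness; the braking routine, tolerance, and test cases are illustrative assumptions:

```python
def run_virtual_assessment(routine, test_cases, tolerance=0.1):
    """Feed generated test data to a routine, collect its outputs, and
    score them against expected values: the quality metric here is the
    fraction of cases within tolerance, in [0, 1]."""
    passed = 0
    for inputs, expected in test_cases:
        output = routine(inputs)       # routine executes "in the emulator"
        if abs(output - expected) <= tolerance:
            passed += 1
    return passed / len(test_cases)

def braking_routine(inputs):
    """Toy autonomous-operation routine: required deceleration (m/s^2)
    to stop within the gap, capped at a physical limit."""
    gap, speed = inputs
    return min(speed ** 2 / (2 * gap), 8.0)

# Test data generated for a set of test conditions: ((gap m, speed m/s), expected):
test_data = [((50.0, 20.0), 4.0), ((25.0, 20.0), 8.0), ((100.0, 10.0), 0.5)]
quality = run_virtual_assessment(braking_routine, test_data)
print(quality)  # 1.0: all cases within tolerance
```

A real emulator would replace the direct `routine(inputs)` call with a sandboxed simulation step, but the assessment contract (test data in, output data analyzed into a quality metric) is the same.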