Patent classifications
B60W30/18159
METHOD AND APPARATUS FOR DETECTING UNEXPECTED CONTROL STATE IN AUTONOMOUS DRIVING SYSTEM
The disclosure describes various embodiments for detecting an unexpected control state of an autonomous driving system. According to an embodiment, an exemplary method of detecting an unexpected control state of an autonomous driving system includes the operations of: generating environmental data of a vehicle; determining, by the autonomous driving system, a first control state based on the environmental data of the vehicle; determining, by a reference model, a second control state based on the environmental data, wherein the reference model defines at least one scenario, each scenario corresponding to a plurality of expected control states and a state switching condition, and in each of the expected control states corresponding to the scenario, an action of the vehicle in the scenario obeys a traffic rule; and determining the unexpected control state of the autonomous driving system by comparing the first control state with the second control state.
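The comparison step in this abstract can be sketched as follows, assuming a reference model that maps each scenario to its set of rule-compliant expected control states. All scenario and state names here are illustrative assumptions, not taken from the patent.

```python
# A minimal sketch of comparing the ADS-chosen control state against a
# reference model of expected, traffic-rule-obeying control states.
# Scenario and state names are illustrative assumptions.

EXPECTED_STATES = {
    "red_light_ahead": {"decelerate", "stop"},
    "clear_road": {"cruise", "accelerate"},
}

def reference_control_states(scenario):
    """Reference model: the expected control states for a scenario, each of
    which obeys the relevant traffic rule."""
    return EXPECTED_STATES.get(scenario, set())

def detect_unexpected_state(ads_state, scenario):
    """Flag an unexpected control state by comparing the state chosen by the
    autonomous driving system with the reference model's expected states."""
    return ads_state not in reference_control_states(scenario)

detect_unexpected_state("accelerate", "red_light_ahead")  # → True (unexpected)
detect_unexpected_state("stop", "red_light_ahead")        # → False
```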
VEHICLE CONTROL SYSTEM
A vehicle control system capable of ensuring safety at low cost even when a control device fails includes: a first control device that implements at least two automatic-driving-related functions based on information from external sensors and/or a map database; a second control device that implements fewer automatic-driving-related functions than the first control device based on the information from the sensors and/or the map database; and a vehicle motion control device that automatically controls a driving state of a host vehicle based on a function planned by the first or second control device. The vehicle motion control device includes a backup determination unit that determines whether the future function planned by the first or second control device is backed up by the second control device, and an interface that notifies the driver that system responsibility is switched to the driver when the backup is not available.
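The backup-determination logic can be sketched as below, assuming the second (fallback) control device implements a strict subset of the first device's functions. Function names and the notification text are assumptions for illustration.

```python
# Illustrative backup determination: the fallback device supports only a
# subset of functions; the driver is notified when a planned function has
# no backup. Function names are assumptions.

BACKUP_FUNCTIONS = {"lane_keeping", "lane_change"}  # second control device

def backup_available(planned_function):
    """Backup determination unit: is the planned function backed up?"""
    return planned_function in BACKUP_FUNCTIONS

def plan_with_backup_check(planned_function, notify_driver):
    """Notify the driver when no backup exists for a planned function."""
    if not backup_available(planned_function):
        notify_driver(f"No backup for '{planned_function}': "
                      f"system responsibility switches to the driver")

alerts = []
plan_with_backup_check("overtaking", alerts.append)    # not backed up → alert
plan_with_backup_check("lane_keeping", alerts.append)  # backed up → no alert
```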
Trajectory prediction on top-down scenes and associated model
Techniques are discussed for determining prediction probabilities of an object based on a top-down representation of an environment. Data representing objects in an environment can be captured. Aspects of the environment can be represented as map data. A multi-channel image representing a top-down view of object(s) in the environment can be generated based on the data representing the objects and map data. The multi-channel image can be used to train a machine learned model by minimizing an error between predictions from the machine learned model and a captured trajectory associated with the object. Once trained, the machine learned model can be used to generate prediction probabilities of objects in an environment, and the vehicle can be controlled based on such prediction probabilities.
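The multi-channel top-down representation can be sketched as a toy rasterization. The channel layout (occupancy, heading, drivable area), grid size, and cell coordinates are assumptions for illustration; a real pipeline would rasterize continuous object poses and many more map layers before feeding the image to the machine-learned model.

```python
# Toy rasterization of objects and map data into a 3-channel top-down grid.
# Channel layout and grid size are illustrative assumptions.

GRID = 8  # 8x8 cells

def empty_channel():
    return [[0.0] * GRID for _ in range(GRID)]

def rasterize(objects, drivable_cells):
    """Build a 3-channel top-down image from objects given as (x, y, yaw)
    cells and drivable-area map cells given as (x, y)."""
    occupancy, heading, drivable = (empty_channel() for _ in range(3))
    for x, y, yaw in objects:
        occupancy[y][x] = 1.0   # channel 0: object occupancy
        heading[y][x] = yaw     # channel 1: object heading (radians)
    for x, y in drivable_cells:
        drivable[y][x] = 1.0    # channel 2: drivable-area map layer
    return [occupancy, heading, drivable]

image = rasterize(objects=[(2, 3, 1.57)], drivable_cells=[(2, 3), (2, 4)])
```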
Vehicle travel control device, and vehicle travel control system
Provided herein is a vehicle travel control device capable of improving the fuel consumption performance of a host vehicle by rapidly stopping or suppressing the generation of a driving force by the host vehicle in response to receiving vehicle information indicating that a deceleration cause has occurred ahead of the host vehicle.
Driving assistance method and driving assistance device
A controller performs trajectory generation processing and travel control processing. In the trajectory generation processing, when a distance between a turning position of an own vehicle and a parked vehicle satisfies a predetermined condition, a target travel trajectory is generated such that the own vehicle passes beside the parked vehicle at a predetermined side position, with a predetermined interval interposed between the parked vehicle and the own vehicle on one side of the parked vehicle, and such that a turning end position (in a case of turning at a position before the position of the parked vehicle) or a turning start position (in a case of turning after having passed beside the parked vehicle) on the route coincides with the predetermined side position in a width direction of the road on which the parked vehicle is parked. Travel control is then performed based on the target travel trajectory.
Method for predicting exiting intersection of moving obstacles for autonomous driving vehicles
A moving obstacle, such as a vehicle within a proximity of an intersection, and one or more exits of the intersection are identified. An obstacle state evolution of a spatial position of the moving obstacle over a period of time is determined. For each of the exits, an intersection exit encoding of the exit is determined based on intersection exit features of the exit. An aggregated exit encoding is determined by aggregating all of the intersection exit encodings for the exits. For each of the exits, an exit probability, i.e., the probability that the moving obstacle exits the intersection through that exit, is determined based on the obstacle state evolution and the aggregated exit encoding. Thereafter, a trajectory of the autonomous driving vehicle (ADV) is planned to control the ADV to avoid a collision with the moving obstacle based on the exit probabilities of the exits.
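The per-exit probability computation can be sketched as follows: encode each exit from its features, aggregate the encodings (here a mean), score each exit against the obstacle state, and normalize with a softmax. The toy encoder and scorer are stand-ins for the learned components the abstract describes; all numbers are illustrative.

```python
import math

# Hypothetical sketch of intersection exit probabilities: per-exit
# encodings, an aggregated encoding, and a softmax over scores.
# The encoder and scorer are toy stand-ins for learned components.

def encode_exit(features):
    return sum(features) / len(features)  # toy intersection exit encoding

def exit_probabilities(obstacle_state, exit_features):
    encodings = [encode_exit(f) for f in exit_features]
    aggregated = sum(encodings) / len(encodings)  # aggregated exit encoding
    scores = [obstacle_state * e + aggregated for e in encodings]
    exps = [math.exp(s) for s in scores]          # softmax normalization
    total = sum(exps)
    return [e / total for e in exps]

# Two exits; the second has stronger features, so it gets the higher probability.
probs = exit_probabilities(1.0, [[0.2, 0.4], [0.9, 0.7]])
```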
DYNAMIC AUTONOMOUS CONTROL ENGAGEMENT
This application relates to techniques for dynamically determining whether to engage an autonomous controller of a vehicle. A computing system may receive a request to engage the autonomous controller (e.g., autonomous mode) of the vehicle. In some examples, the request may be received from a simulation computing system configured to test an updated autonomous controller in a simulation. Based on a determination that conditions associated with engaging autonomy are satisfied, the computing system engages the autonomous controller. Based on a determination that conditions associated with engaging autonomy are not satisfied, the computing system disables the engagement of the autonomous controller such that the vehicle is controlled according to an initial operational mode (e.g., manual mode, semi-autonomous mode, previous version of the autonomous controller, etc.).
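The engagement gate described here can be sketched as a simple check that all engagement conditions hold. Condition names and mode strings are illustrative assumptions.

```python
# Sketch of the engagement decision: engage the autonomous controller only
# when every condition associated with engaging autonomy is satisfied;
# otherwise keep the initial operational mode. Names are illustrative.

def decide_mode(request_autonomy, conditions, initial_mode="manual"):
    """Return the operational mode after an engagement request."""
    if request_autonomy and all(conditions.values()):
        return "autonomous"
    return initial_mode

decide_mode(True, {"sensors_ok": True, "within_geofence": True})   # → "autonomous"
decide_mode(True, {"sensors_ok": True, "within_geofence": False})  # → "manual"
```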
AUTONOMOUS CONTROL ENGAGEMENT
This application relates to techniques for determining whether to engage an autonomous controller of a vehicle based on previously recorded data. A computing system may receive, from a vehicle computing system, data representative of a vehicle being operated in an environment, such as by an autonomous controller. The computing system may generate a simulation associated with the vehicle operation and configured to test an updated autonomous controller. The computing system may determine one or more first time periods associated with the vehicle operations that satisfy one or more conditions associated with engaging an autonomous controller and one or more second time periods associated with the vehicle operations that fail to satisfy the one or more conditions. The computing system may enable an engagement of the autonomous controller during the one or more first time periods and disable the engagement during the one or more second time periods.
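The splitting of recorded operation into first (engagement-enabled) and second (engagement-disabled) time periods can be sketched from a per-timestep condition check. Integer timesteps and the condition itself are assumptions for the sketch.

```python
# Illustrative partition of recorded vehicle operation into periods where
# engagement of the autonomous controller is enabled vs disabled, based on
# a per-timestep condition check. Data layout is an assumption.

def partition_periods(timesteps, condition_ok):
    """Group consecutive integer timesteps into (enabled, start, end)
    periods, where `enabled` says whether engagement is allowed."""
    periods = []
    for t in timesteps:
        ok = condition_ok(t)
        if periods and periods[-1][0] == ok and periods[-1][2] == t - 1:
            periods[-1] = (ok, periods[-1][1], t)  # extend current period
        else:
            periods.append((ok, t, t))             # start a new period
    return periods

# Conditions fail during timesteps 2-3, so engagement is disabled there.
periods = partition_periods(range(6), lambda t: t not in (2, 3))
# → [(True, 0, 1), (False, 2, 3), (True, 4, 5)]
```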
Driver transition assistance for transitioning to manual control for vehicles with autonomous driving modes
Aspects of the disclosure relate to controlling a transition between a manual driving mode and an autonomous driving mode of a vehicle. For instance, one or more processors of one or more control computing devices may control the vehicle in the autonomous driving mode. While controlling the vehicle in the autonomous driving mode and decelerating at a given rate, the processors may receive, at a user input of the vehicle, input requesting a transition from the autonomous driving mode to the manual driving mode. In response to the input, the processors may transition the vehicle to the manual driving mode. After transitioning the vehicle to the manual driving mode, the processors may send deceleration signals to a deceleration actuator, thereby causing the vehicle to continue to decelerate at the given rate.
Method and system for controlling safety of ego and social objects
A method or system for controlling safety of both an ego vehicle and social objects in an environment of the ego vehicle, comprising: receiving data representative of at least one social object and determining a current state of the ego vehicle based on sensor data; predicting an ego safety value corresponding to the ego vehicle, for each possible behavior action in a set of possible behavior actions, based on the current state; predicting a social safety value corresponding to the at least one social object in the environment of the ego vehicle, based on the current state, for each possible behavior action; and selecting a next behavior action for the ego vehicle, based on the ego safety values, the social safety values, and one or more target objectives for the ego vehicle.
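The action-selection step can be sketched as a weighted combination of per-action ego-safety and social-safety predictions with a target objective (here, progress toward a goal). The action set, safety values, and weights are illustrative assumptions, not taken from the patent.

```python
# Sketch of selecting the next behavior action from predicted ego-safety
# values, social-safety values, and a target objective. Actions, values,
# and weights are illustrative assumptions.

def select_action(actions, ego_safety, social_safety, progress,
                  w_ego=0.5, w_social=0.3, w_goal=0.2):
    """Pick the next behavior action by a weighted score of predicted ego
    safety, social safety, and a target objective for the ego vehicle."""
    def score(a):
        return (w_ego * ego_safety[a]
                + w_social * social_safety[a]
                + w_goal * progress[a])
    return max(actions, key=score)

action = select_action(
    ["keep_lane", "change_left"],
    ego_safety={"keep_lane": 0.9, "change_left": 0.6},
    social_safety={"keep_lane": 0.8, "change_left": 0.5},
    progress={"keep_lane": 0.4, "change_left": 0.9},
)  # safety-weighted scoring favors "keep_lane" with these values
```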