Patent classifications
B60W2554/4029
MALICIOUS EVENT DETECTION FOR AUTONOMOUS VEHICLES
A system comprises an autonomous vehicle (AV) and a control device operably coupled with the AV. The control device detects a series of events within a threshold period of time, where a number of events in the series of events is above a threshold number. The series of events, taken in the aggregate within the threshold period of time, deviates from a normalcy mode. The normalcy mode comprises events that are expected to be encountered by the AV. The control device determines whether the series of events corresponds to a malicious event, where the malicious event indicates tampering with the AV. In response to determining that the series of events corresponds to the malicious event, the series of events is escalated to be addressed.
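A minimal sketch of the detection step described above, assuming illustrative event names, window lengths, and thresholds (none of these specifics come from the patent): events inside a sliding time window are counted, and the series is escalated when the count exceeds the threshold and the aggregate deviates from the expected "normalcy" set.

```python
from collections import deque

# Hypothetical sketch: event types, window, and thresholds are illustrative.
def detect_malicious(events, window_s=10.0, threshold_count=5,
                     expected_types=frozenset({"lane_keep", "brake", "accelerate"})):
    """Flag a series of timestamped events as potentially malicious when more
    than `threshold_count` events fall within `window_s` seconds and the
    aggregate deviates from the normalcy mode (the expected event types)."""
    recent = deque()  # (timestamp, event_type) pairs inside the window
    for t, etype in events:
        recent.append((t, etype))
        while recent and t - recent[0][0] > window_s:
            recent.popleft()
        if len(recent) > threshold_count:
            unexpected = [e for _, e in recent if e not in expected_types]
            if unexpected:  # aggregate deviates from the normalcy mode
                return True, list(recent)  # escalate this series of events
    return False, []
```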
Control system, control method, vehicle, and computer-readable storage medium
A control system of a vehicle is provided, the vehicle including a detection unit for detecting external information related to the surroundings of the vehicle, the external information being used to control a driven state of the vehicle. The control system performs a method comprising: obtaining map information of surroundings of a route on which the vehicle travels based on position information of the vehicle; specifying, from among pieces of detection range information corresponding to the map information, detection range information corresponding to the detection unit; and controlling the driven state of the vehicle based on the specified detection range information and the external information.
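The specify-then-control flow can be sketched as a lookup followed by a simple rule. The map schema, unit names, and the particular control rule below are all assumptions for illustration, not details from the patent.

```python
# Illustrative sketch; the map schema and the control rule are assumptions.
def specify_detection_range(map_info, detection_unit):
    """From the pieces of detection range information attached to the map,
    pick the one corresponding to this vehicle's detection unit."""
    for entry in map_info["detection_ranges"]:
        if entry["unit"] == detection_unit:
            return entry["range_m"]
    return None

def control_driven_state(range_m, external_info):
    """Assumed rule: back off when the nearest detected obstacle sits within
    half of the unit's usable detection range at this map location."""
    obstacle_dist = external_info["nearest_obstacle_m"]
    return "decelerate" if obstacle_dist < 0.5 * range_m else "maintain"
```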
Vehicle operation using a dynamic occupancy grid
Methods for operating a vehicle in an environment include receiving light detection and ranging (LiDAR) data from a LiDAR of the vehicle. The LiDAR data represents objects located in the environment. A dynamic occupancy grid (DOG) is generated based on a semantic map. The DOG includes multiple grid cells. Each grid cell represents a portion of the environment. For each grid cell, a probability density function is generated based on the LiDAR data. The probability density function represents a probability that the portion of the environment represented by the grid cell is occupied by an object. A time-to-collision (TTC) of the vehicle and the object less than a threshold time is determined based on the probability density function. Responsive to determining that the TTC is less than the threshold time, a control circuit of the vehicle operates the vehicle to avoid a collision of the vehicle and the object.
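The per-cell probability and TTC check might look like the following sketch. A real dynamic occupancy grid maintains full probability density functions per cell (often over occupancy and velocity); here a Beta-posterior point estimate from LiDAR hit/pass-through counts stands in, and all names and thresholds are assumptions.

```python
import math

# Minimal sketch with assumed names; a real DOG keeps richer per-cell PDFs.
def cell_occupancy_probability(hits, misses):
    """Beta-posterior mean for the probability that a grid cell is occupied,
    given LiDAR returns that hit the cell vs. passed through it."""
    return (hits + 1) / (hits + misses + 2)

def time_to_collision(distance_m, closing_speed_mps):
    """TTC of the vehicle and the object; infinite when not closing."""
    return distance_m / closing_speed_mps if closing_speed_mps > 0 else math.inf

def should_intervene(hits, misses, distance_m, closing_speed_mps,
                     p_occ_min=0.7, ttc_threshold_s=2.0):
    """Operate to avoid a collision when the cell is likely occupied and the
    TTC falls below the threshold time."""
    p = cell_occupancy_probability(hits, misses)
    ttc = time_to_collision(distance_m, closing_speed_mps)
    return p >= p_occ_min and ttc < ttc_threshold_s
```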
Driving support system that executes a risk avoidance control for reducing a risk of collision with an object in front of a vehicle
A driving support system executes a risk avoidance control for reducing a risk of collision with an object in front of a vehicle. A risk potential field represents a risk value as a function of position. An obstacle potential field is a risk potential field in which the risk value is maximum at a position of the object and decreases as a distance from the object increases. A vehicle center potential field is a risk potential field in which a valley of the risk value extends in a lane longitudinal direction from a position of the vehicle. A first risk potential field is the sum of the vehicle center potential field and the obstacle potential field. The driving support system executes a steering control such that the vehicle follows a first valley of the risk value represented by the first risk potential field.
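The two fields and the valley they produce can be sketched in one lateral dimension. The Gaussian obstacle peak, quadratic vehicle-center valley, gains, and grid are all illustrative assumptions.

```python
import math

# Illustrative risk potential sketch; gains and widths are assumptions.
def obstacle_potential(y, y_obstacle, peak=1.0, sigma=1.0):
    """Risk is maximum at the object's lateral position and decays with distance."""
    return peak * math.exp(-((y - y_obstacle) ** 2) / (2 * sigma ** 2))

def vehicle_center_potential(y, y_vehicle, gain=0.1):
    """A valley of risk centered on the vehicle's lateral position."""
    return gain * (y - y_vehicle) ** 2

def first_valley(y_vehicle, y_obstacle, candidates=None):
    """Lateral position minimizing the first risk potential field (the sum of
    the two fields); the steering control then tracks this valley."""
    if candidates is None:
        candidates = [i * 0.05 - 5.0 for i in range(201)]  # -5 m .. +5 m grid
    total = lambda y: (vehicle_center_potential(y, y_vehicle)
                       + obstacle_potential(y, y_obstacle))
    return min(candidates, key=total)
```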
COMMUNICATING VEHICLE INFORMATION TO PEDESTRIANS
Among other things, techniques are described for expressive vehicle systems. These techniques may include obtaining, with at least one processor, data associated with an environment, the environment comprising a vehicle and at least one object; determining an expressive maneuver including a deceleration of the vehicle such that the vehicle stops at least a first distance away from the at least one object and the vehicle reaches a peak deceleration when the vehicle is a second distance away from the at least one object; generating data associated with control of the vehicle based on the deceleration associated with the expressive maneuver; and transmitting the data associated with the control of the vehicle to cause the vehicle to decelerate based on the deceleration associated with the expressive maneuver.
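A sketch of the maneuver's geometric constraints, under assumed numbers: the planned stop point must leave at least the first distance to the object, and peak deceleration must occur roughly the second distance away. The profile encoding and windows are illustrative, not from the patent.

```python
# Hedged sketch: standoff and peak-location values are illustrative assumptions.
def plan_expressive_stop(object_position_m, standoff_m=3.0, peak_at_m=8.0):
    """Return the planned stop point and the point of peak deceleration: stop
    at least `standoff_m` before the object, peak `peak_at_m` away from it."""
    stop_point = object_position_m - standoff_m
    peak_point = object_position_m - peak_at_m
    return stop_point, peak_point

def is_expressive(profile, object_position_m, min_standoff_m=3.0,
                  peak_window_m=(6.0, 10.0)):
    """Check a profile of (position_m, deceleration_mps2) samples against the
    expressive-maneuver constraints described above."""
    stop_pos = max(p for p, _ in profile)
    peak_pos, _ = max(profile, key=lambda s: s[1])
    gap_ok = object_position_m - stop_pos >= min_standoff_m
    lo, hi = peak_window_m
    peak_ok = lo <= object_position_m - peak_pos <= hi
    return gap_ok and peak_ok
```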
Method and system for conditional operation of an autonomous agent
A method for conditional operation of an autonomous agent includes: collecting a set of inputs; processing the set of inputs; determining a set of policies for the agent; evaluating the set of policies; and operating the ego agent. A system for conditional operation of an autonomous agent includes a set of computing subsystems (equivalently referred to herein as a set of computers) and/or processing subsystems (equivalently referred to herein as a set of processors), which function to implement any or all of the processes of the method.
UNMANNED VEHICLE, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING SYSTEM
An unmanned vehicle is configured to execute an acquisition process that acquires sound identification information selected by one or more other unmanned vehicles located nearby. The sound identification information is included in multiple types of sound identification information, each associated with a sound that has a different characteristic. The unmanned vehicle is further configured to execute a sound selection process that refers to the acquired sound identification information to select sound identification information that is different from the sound identification information selected by at least one of the other unmanned vehicles, and an output control process that outputs a sound having a characteristic corresponding to the selected sound identification information.
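The sound selection process reduces to picking an identifier not already taken by nearby vehicles. The ID table and the lowest-free-ID tie-break below are assumptions for illustration.

```python
# Simple sketch; the sound IDs and the tie-break rule are assumptions.
SOUND_CHARACTERISTICS = {1: "low_hum", 2: "mid_chime", 3: "high_beep", 4: "warble"}

def select_sound(neighbor_selections, available=SOUND_CHARACTERISTICS):
    """Pick sound identification information different from that selected by
    nearby unmanned vehicles (lowest free ID as a deterministic tie-break)."""
    taken = set(neighbor_selections)
    free = sorted(sid for sid in available if sid not in taken)
    return free[0] if free else min(available)  # fall back if all are taken
```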
Leveraging Vehicle-to-Everything (V2X) for Host Vehicle Trajectories
The methods and systems herein enable leveraging V2X for host vehicle trajectories. A host vehicle broadcasts a trajectory request that is received by a remote system. The remote system determines that a tracked object being tracked by the remote system corresponds to the host vehicle and determines a trajectory of the tracked object. The remote system sends a response to the host vehicle indicating the trajectory of the tracked object as a trajectory of the host vehicle. The host vehicle calculates scores for one or more safety metrics for the received trajectory and an internally generated trajectory (e.g., using separate hardware components). Based on the scores, the host vehicle selects from among the trajectories or generates a trajectory and sends the selected/generated trajectory to a vehicle system such that the vehicle system can operate the host vehicle according to the selected/generated trajectory.
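The host vehicle's scoring-and-selection step might be sketched as below. The two safety metrics, their weights, and saturation points are illustrative assumptions; the patent does not specify them.

```python
# Illustrative safety-metric scoring; metrics and weights are assumptions.
def score_trajectory(trajectory, min_clearance_m, max_decel_mps2):
    """Score one trajectory on two assumed safety metrics: clearance to the
    nearest object and the harshest deceleration demanded (higher is safer)."""
    clearance_score = min(min_clearance_m / 5.0, 1.0)     # saturates at 5 m
    comfort_score = max(0.0, 1.0 - max_decel_mps2 / 8.0)  # 8 m/s^2 ~ hard stop
    return 0.6 * clearance_score + 0.4 * comfort_score

def select_trajectory(candidates):
    """Given {name: (trajectory, clearance_m, decel_mps2)}, pick the best
    scorer - e.g. the V2X-derived trajectory vs. the internally generated one -
    and hand it to the vehicle system."""
    scored = {name: score_trajectory(*c) for name, c in candidates.items()}
    return max(scored, key=scored.get), scored
```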
Using image augmentation with simulated objects for training machine learning models in autonomous driving applications
In various examples, systems and methods are disclosed that preserve rich, detail-centric information from a real-world image by augmenting the real-world image with simulated objects to train a machine learning model to detect objects in an input image. The machine learning model may be trained, in deployment, to detect objects and determine bounding shapes to encapsulate detected objects. The machine learning model may further be trained to determine the type of road object encountered, calculate hazard ratings, and calculate confidence percentages. In deployment, detection of a road object, determination of a corresponding bounding shape, identification of road object type, and/or calculation of a hazard rating by the machine learning model may be used as an aid for determining next steps regarding the surrounding environment—e.g., navigating around the road debris, driving over the road debris, or coming to a complete stop—in a variety of autonomous machine applications.
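The core augmentation step, compositing a simulated object into a real image while keeping the detail-rich real pixels elsewhere, can be sketched with nested lists standing in for image arrays. The mask scheme and bounding-shape format are assumptions.

```python
# Minimal compositing sketch; images are nested lists of pixel values and
# the binary alpha-mask scheme is an assumption.
def augment_with_simulated_object(real_image, sim_patch, mask, top, left):
    """Paste `sim_patch` into `real_image` at (top, left) wherever `mask`
    is 1, preserving the real pixels everywhere else. Returns the augmented
    image plus a bounding shape encapsulating the inserted object."""
    out = [row[:] for row in real_image]  # copy; keep the original intact
    for i, patch_row in enumerate(sim_patch):
        for j, value in enumerate(patch_row):
            if mask[i][j]:
                out[top + i][left + j] = value
    bbox = (top, left, top + len(sim_patch), left + len(sim_patch[0]))
    return out, bbox
```

The returned bounding shape doubles as the training label, so each augmented image arrives pre-annotated for the detection model.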
FULL SPEED RANGE ADAPTIVE CRUISE CONTROL SYSTEM FOR DETERMINING AN ADAPTIVE LAUNCH TIME FOR A VEHICLE
A full speed range adaptive cruise control system for a vehicle that stops at an intersection includes one or more controllers that execute instructions to receive localization data and situational data related to the vehicle. The one or more controllers determine, based on the localization data and the situational data, that the vehicle is approaching an intersection and will come to a stop at the intersection, where the vehicle is part of a queue including one or more surrounding vehicles. In response to determining the vehicle has come to a stop, the one or more controllers determine a position of the vehicle within the queue and an overall length of the queue. The one or more controllers calculate an adaptive launch time based on at least the position of the vehicle within the queue, the overall length of the queue, the situational data, and a timing delay associated with the queue.
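One plausible form of the launch-time calculation: the wait grows with the vehicle's position in the queue (motion propagates back one vehicle at a time), with the overall queue length, and with a base timing delay. The coefficients below are illustrative assumptions, not values from the patent.

```python
# Assumed launch-time model; all coefficients are illustrative.
def adaptive_launch_time(queue_position, queue_length_m,
                         per_vehicle_delay_s=1.2, base_reaction_s=0.8,
                         length_factor_s_per_m=0.01):
    """Adaptive launch time for a stopped vehicle: grows with its position in
    the queue, the overall queue length, and the timing delay with which
    stop-and-go motion propagates back through the queue."""
    if queue_position < 1:
        raise ValueError("queue_position is 1-based (1 = first vehicle)")
    propagation = (queue_position - 1) * per_vehicle_delay_s
    return base_reaction_s + propagation + length_factor_s_per_m * queue_length_m
```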