Patent classifications
B60W60/0027
VEHICLE CONTROL SYSTEM, VEHICLE INTEGRATED CONTROL DEVICE, ELECTRONIC CONTROL DEVICE, NETWORK COMMUNICATION DEVICE, VEHICLE CONTROL METHOD AND COMPUTER READABLE MEDIUM
A vehicle control system (500) controls a vehicle on which a plurality of ECUs (30) and a vehicle integrated control device (10) for controlling the plurality of ECUs (30) are mounted. The vehicle integrated control device (10) includes a control target value operation unit that calculates a control target value for controlling the plurality of ECUs (30). The vehicle integrated control device (10) further includes a prediction control value operation unit that estimates a future state of the vehicle and calculates a prediction control value for controlling the plurality of ECUs (30), and an instruction signal generation unit that generates an instruction signal including an operation instruction and a prediction control instruction. Each of the plurality of ECUs (30) includes an actuator control unit that controls an actuator (50) based on the prediction control instruction.
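The instruction signal described above bundles a current operation instruction with a prediction control instruction for each ECU. A minimal structural sketch, with hypothetical names and scalar-valued instructions assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class InstructionSignal:
    # Current control target value computed by the control target value operation unit.
    operation_instruction: float
    # Value for the estimated future vehicle state, computed by the
    # prediction control value operation unit.
    prediction_control_instruction: float

def build_instruction(control_target: float, prediction_control: float) -> InstructionSignal:
    """Combine both values into the single instruction signal sent to each ECU."""
    return InstructionSignal(control_target, prediction_control)
```

Each ECU's actuator control unit would then read `prediction_control_instruction` from this signal when driving its actuator.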
BATCH CONTROL FOR AUTONOMOUS VEHICLES
A system for instructing an Autonomous Vehicle (AV) to perform a minimal risk condition maneuver comprises a fleet of AVs and an oversight server. The oversight server receives macro information that applies to a plurality of AVs from the fleet. The oversight server generates a batch command based on the macro information. The batch command is associated with one or more conditions. The oversight server determines whether each AV meets the one or more conditions. If the oversight server determines that the AV meets the one or more conditions, the oversight server sends the batch command to the AV. The batch command includes instructions to perform a minimal risk condition maneuver.
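The dispatch logic described here — generate a batch command with conditions, then send it only to AVs that meet every condition — can be sketched as follows. Class and field names are illustrative, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class AV:
    av_id: str
    route: str
    region: str

@dataclass
class BatchCommand:
    instruction: str
    conditions: dict  # attribute name -> required value

    def matches(self, av: AV) -> bool:
        # An AV receives the command only if every condition holds for it.
        return all(getattr(av, key) == value for key, value in self.conditions.items())

def dispatch(command: BatchCommand, fleet: list) -> list:
    """Return the IDs of the AVs the oversight server sends the command to."""
    return [av.av_id for av in fleet if command.matches(av)]

fleet = [AV("av1", "I-10", "west"), AV("av2", "I-10", "east"), AV("av3", "US-60", "west")]
cmd = BatchCommand("perform_mrc_maneuver", {"route": "I-10"})
print(dispatch(cmd, fleet))  # ['av1', 'av2']
```

In practice the conditions would cover location, route segment, weather, or software version; the equality check stands in for whatever predicate the oversight server evaluates.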
SENSOR RECOGNITION INTEGRATION DEVICE
Provided is a sensor recognition integration device that reduces the integration-processing load while still satisfying the minimum accuracy required for vehicle travel control, thereby improving ECU processing performance and suppressing cost increases. A sensor recognition integration device B006 that integrates a plurality of pieces of object information related to an object around an own vehicle detected by a plurality of external recognition sensors includes: a prediction update unit 100 that generates predicted object information obtained by predicting an action of the object; an association unit 101 that calculates a relationship between the predicted object information and the plurality of pieces of object information; an integration processing mode determination unit 102 that switches an integration processing mode for determining a method of integrating the plurality of pieces of object information on the basis of a positional relationship between a specific region (for example, a boundary portion) in an overlapping region of detection regions of the plurality of external recognition sensors and the predicted object information; and an integration target information generation unit 104 that integrates the plurality of pieces of object information associated with the predicted object information on the basis of the integration processing mode.
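The mode switch in unit 102 — choosing an integration method from the predicted object's position relative to a specific region of the sensors' overlapping detection area — can be sketched with a simple rectangular region. The geometry, margin, and mode names are assumptions for illustration:

```python
def select_integration_mode(pred_xy, overlap_region, boundary_margin=1.0):
    """Pick an integration processing mode from where the predicted object
    sits relative to the overlapping detection region of two sensors.
    overlap_region is ((xmin, xmax), (ymin, ymax)) in meters."""
    (xmin, xmax), (ymin, ymax) = overlap_region
    x, y = pred_xy
    inside = xmin <= x <= xmax and ymin <= y <= ymax
    if not inside:
        # Only one sensor sees the object: no integration needed.
        return "single_sensor"
    near_boundary = min(x - xmin, xmax - x, y - ymin, ymax - y) < boundary_margin
    # Near the boundary portion, detections are least consistent, so spend
    # the heavier fusion there and use a cheaper method elsewhere.
    return "high_accuracy_fusion" if near_boundary else "lightweight_fusion"
```

The point of the switch is exactly this load shaping: the expensive path runs only where the cheaper path would miss the required accuracy.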
Trajectory classification
Techniques to predict object behavior in an environment are discussed herein. For example, such techniques may include inputting data into a model and receiving an output from the model representing a discretized representation. The discretized representation may be associated with a probability of an object reaching a location in the environment at a future time. A vehicle computing system may determine a trajectory and a weight associated with the trajectory using the discretized representation and the probability. A vehicle, such as an autonomous vehicle, can be controlled to traverse an environment based on the trajectory and the weight output by the vehicle computing system.
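One way to turn the discretized representation into a trajectory weight, as described above, is to score each candidate trajectory by the occupancy probabilities of the grid cells it passes through. The product rule below is one plausible choice (a sum or log-sum would be equally defensible); the function is a sketch, not the patented method:

```python
import numpy as np

def trajectory_weight(prob_grid, trajectory):
    """Weight a candidate trajectory by the discretized probabilities of the
    cells (row, col) it visits at successive future times."""
    return float(np.prod([prob_grid[r, c] for r, c in trajectory]))

# 2x2 grid of probabilities of the object reaching each cell at a future time.
grid = np.array([[0.1, 0.8],
                 [0.5, 0.9]])
traj = [(0, 1), (1, 1)]  # candidate trajectory through two cells
print(round(trajectory_weight(grid, traj), 4))  # 0.72
```

The vehicle computing system would rank candidate trajectories by this weight and control the vehicle using the highest-weighted ones.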
Systems and methods for navigating a vehicle among encroaching vehicles
Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass, if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
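The enable/abort logic in this method reduces to a small decision rule over the lane determinations made from the images. A minimal sketch with assumed inputs (lane indices and a boolean for the target entering the host's lane):

```python
def pass_decision(host_lane: int, target_lane: int,
                  target_entering_host_lane: bool, pass_in_progress: bool) -> str:
    """Decide the passing action:
    - abort an in-progress pass if the target vehicle is entering the host's lane;
    - enable a pass if the target vehicle is in a different lane;
    - otherwise hold."""
    if pass_in_progress and target_entering_host_lane:
        return "abort"
    if target_lane != host_lane:
        return "enable"
    return "hold"
```

The real system derives `host_lane`, `target_lane`, and the entering determination from the lane constraints detected on each side of the user vehicle in the captured images.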
Autonomous vehicle computing system compute architecture for assured processing
Systems and methods are directed to an autonomy computing system of an autonomous vehicle. The autonomy computing system can include first functional circuitry configured to generate a first output associated with a first autonomous compute function of the autonomous vehicle based on sensor data using first neural networks. The autonomy computing system can include second functional circuitry configured to generate a second output associated with the same compute function based on the sensor data using second neural networks. The autonomy computing system can include monitoring circuitry configured to determine a difference between the first output of the first functional circuitry and the second output of the second functional circuitry. The autonomy computing system can include a vehicle control system configured to generate vehicle control signals for the autonomous vehicle based on the outputs.
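The monitoring circuitry's job — comparing two redundant outputs of the same compute function and flagging a discrepancy — can be sketched numerically. The tolerance and return convention are assumptions:

```python
import numpy as np

def monitor(first_output, second_output, tol=1e-2):
    """Compare redundant outputs of the same autonomous compute function.
    Returns (max_abs_difference, agreement_flag); a difference above tol
    indicates a fault in one of the redundant paths."""
    diff = float(np.max(np.abs(np.asarray(first_output) - np.asarray(second_output))))
    return diff, diff <= tol
```

On disagreement, a vehicle control system of this kind would typically discount the affected output or fall back to a safe behavior rather than pick one path arbitrarily.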
Predicting jaywalking behaviors of vulnerable road users
Jaywalking behaviors of vulnerable road users (VRUs) such as cyclists or pedestrians can be predicted. Location data is obtained that identifies a location of a VRU within a vicinity of a vehicle. Environmental data is obtained that describes an environment of the VRU, where the environmental data identifies a set of environmental features in the environment of the VRU. The system can determine a nominal heading of the VRU, and generate a set of predictive inputs that indicate, for each of at least a subset of the set of environmental features, a physical relationship between the VRU and the environmental feature. The physical relationship can be determined with respect to the nominal heading of the VRU and the location of the VRU. The set of predictive inputs can be processed with a heading estimation model to generate a predicted heading offset (e.g., a target heading offset) for the VRU.
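The predictive inputs described above encode, per environmental feature, a physical relationship to the VRU expressed relative to its nominal heading and location. One plausible encoding is a (distance, relative-bearing) pair per feature; everything below is an illustrative assumption, not the patented feature set:

```python
import math

def predictive_inputs(vru_xy, nominal_heading, features):
    """For each environmental feature (x, y), compute its distance from the
    VRU and its bearing relative to the VRU's nominal heading (radians,
    wrapped to [-pi, pi)). These pairs would feed the heading-estimation
    model that outputs the predicted heading offset."""
    inputs = []
    for fx, fy in features:
        dx, dy = fx - vru_xy[0], fy - vru_xy[1]
        distance = math.hypot(dx, dy)
        bearing = (math.atan2(dy, dx) - nominal_heading + math.pi) % (2 * math.pi) - math.pi
        inputs.append((distance, bearing))
    return inputs
```

A crosswalk dead ahead would produce a bearing near zero, while one off to the left would produce a positive bearing, giving the model the directional cues it needs to predict a heading offset toward, say, a crossing point.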
METHOD AND DEVICE FOR PREDICTING A FUTURE ACTION OF AN OBJECT FOR A DRIVING ASSISTANCE SYSTEM FOR A VEHICLE DRIVABLE IN A HIGHLY AUTOMATED FASHION
A method for predicting a future action of an object for a driving assistance system for a highly automated mobile vehicle. At least one sensor signal from at least one vehicle sensor of the vehicle is read in, the sensor signal representing at least one piece of kinematic object information concerning the object that is detected by the vehicle sensor at an instantaneous point in time. A planner signal from a planner of the driving assistance system is read in, the planner signal representing at least one piece of semantic information concerning the object or the surroundings of the object at a point in time in the past. The kinematic object information is fused with the semantic information to obtain a fusion signal. A prediction signal is determined using the fusion signal, the prediction signal representing the future action of the object.
LANE CHANGE NEGOTIATION METHODS AND SYSTEMS
In various embodiments, methods, systems, and vehicles are provided for executing a lane change for a host vehicle. In various embodiments, a method includes: receiving, by a processor, an indication that a lane change from an initial lane to an intended lane is desired for the host vehicle; defining, by the processor, an initial lane center target, a negotiation target, and an intended lane center target based on the desired lane change; and controlling, by the processor, the host vehicle to at least one of the initial lane center target, the negotiation target, and the intended lane center target based on a finite state machine, wherein the initial lane center target is at or in proximity to a determined center of the initial lane, wherein the intended lane center target is at or in proximity to a determined center of the intended lane, and wherein the negotiation target is offset from the initial lane center target and within the initial lane.
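The finite state machine that moves the host vehicle between the three targets — initial lane center, negotiation target, intended lane center — can be sketched as a transition table. States, events, and transitions here are assumptions consistent with the abstract, not the patented machine:

```python
# State names map to the three lateral targets from the abstract; the
# negotiation state holds the vehicle at the offset negotiation target
# while a gap in the intended lane is negotiated.
TRANSITIONS = {
    ("initial_center", "lane_change_requested"): "negotiation",
    ("negotiation", "gap_accepted"): "intended_center",
    ("negotiation", "gap_rejected"): "initial_center",
}

def step(state: str, event: str) -> str:
    """Advance the FSM; unknown (state, event) pairs leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "initial_center"
for event in ["lane_change_requested", "gap_accepted"]:
    state = step(state, event)
print(state)  # intended_center
```

Each state would be tracked by a lateral controller steering to the corresponding target; the negotiation target's offset within the initial lane is what signals the lane-change intent to surrounding traffic.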
Hierarchical vehicle action prediction
This application is directed to predicting vehicle actions according to a hierarchy of interconnected vehicle actions. The hierarchy of interconnected vehicle actions includes a plurality of predefined vehicle actions that are organized to define a plurality of vehicle action sequences. A first vehicle obtains one or more images of a road and a second vehicle, and predicts a sequence of vehicle actions of the second vehicle through the hierarchy of interconnected vehicle actions using the one or more images. The first vehicle is controlled to drive at least partially autonomously based on the predicted sequence of vehicle actions of the second vehicle. In some embodiments, the hierarchy of interconnected vehicle actions includes a first action level that is defined according to a stage of a trip and corresponds to three predefined vehicle actions of: “start a trip,” “move in a trip,” and “complete a trip.”
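The hierarchy of interconnected vehicle actions can be represented as a tree whose root-to-leaf paths are the vehicle action sequences. The first level below comes from the abstract; the deeper levels are illustrative assumptions:

```python
# Parent action -> child actions; leaves end a sequence.
HIERARCHY = {
    "root": ["start a trip", "move in a trip", "complete a trip"],
    "move in a trip": ["keep lane", "change lane", "turn"],   # assumed level
    "change lane": ["change left", "change right"],           # assumed level
}

def sequences(node="root", prefix=()):
    """Enumerate the vehicle action sequences the hierarchy defines
    (every root-to-leaf path), as tuples of action names."""
    children = HIERARCHY.get(node)
    if not children:
        yield prefix + (node,)
        return
    for child in children:
        yield from sequences(child, prefix + (node,) if node != "root" else prefix)

for seq in sequences():
    print(" -> ".join(seq))
```

A predictor over this structure classifies the observed vehicle level by level, so each prediction is constrained to sequences the hierarchy actually allows.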