Patent classifications
B60W60/001
Navigating a vehicle based on data processing using synthetically generated images
A user-generated graphical representation can be input to a generative network to generate a synthetic image of an area including a road, the user-generated graphical representation including at least three different colors, each color representing a feature from a plurality of features. A determination can be made that a discrimination network fails to distinguish between the synthetic image and a sensor-detected image. In response to determining that the discrimination network fails to distinguish between the synthetic image and the sensor-detected image, the synthetic image can be input to an object detector to generate a non-user-generated graphical representation. An objective function can be determined based on a comparison between the user-generated graphical representation and the non-user-generated graphical representation. A perception model can be trained using the synthetic image in response to determining that the objective function is within a predetermined acceptable range.
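The acceptance logic described in this abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation: the function names, the "indistinguishable" discriminator band, the pixel-wise mismatch objective, and all thresholds are assumptions chosen for clarity.

```python
def pixel_mismatch(user_map, detected_map):
    """Fraction of cells where the user-generated color-coded map and the
    map recovered by the object detector disagree (illustrative objective)."""
    total = sum(len(row) for row in user_map)
    diff = sum(
        1
        for u_row, d_row in zip(user_map, detected_map)
        for u, d in zip(u_row, d_row)
        if u != d
    )
    return diff / total

def accept_synthetic(discriminator_score, user_map, detected_map,
                     indistinguishable_band=(0.4, 0.6),
                     max_mismatch=0.05):
    """Use the synthetic image for perception training only if (a) the
    discriminator cannot tell it from a sensor-detected image and (b) the
    objective function is within the acceptable range."""
    lo, hi = indistinguishable_band
    fools_discriminator = lo <= discriminator_score <= hi
    return fools_discriminator and pixel_mismatch(user_map, detected_map) <= max_mismatch
```

A score near 0.5 means the discriminator is effectively guessing, which is the usual signal that a generator output is indistinguishable from real data.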
Method for performing automatic valet parking
A method for performing automatic valet parking, which includes: selecting a road scenario applicable to a roadway; notifying a driver to release the manual control elements of a motor vehicle and to leave the motor vehicle; checking whether the control elements have been released and the driver has left the motor vehicle and, if so, entering an EXPLORE mode in which the motor vehicle drives slowly and autonomously and searches for a free car space or a parking space using the vehicle's own environmental sensors, before the motor vehicle is placed in a parking position; and then changing from the EXPLORE mode to a PARKING mode in which the motor vehicle is parked in the car space or in the parking space from the parking position by means of the longitudinal and lateral controllers, using the environmental data previously obtained from the environmental sensors in the EXPLORE mode.
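The mode transitions described above form a small state machine. A minimal sketch, assuming three states (a hypothetical WAIT state before handover, plus the abstract's EXPLORE and PARKING modes) and boolean inputs for the checks:

```python
def valet_step(state, controls_released, driver_left, space_found):
    """One transition of the valet-parking state machine (illustrative).

    WAIT    -> EXPLORE  once the controls are released and the driver has left
    EXPLORE -> PARKING  once a free car space or parking space is found
    PARKING is terminal in this sketch.
    """
    if state == "WAIT":
        return "EXPLORE" if (controls_released and driver_left) else "WAIT"
    if state == "EXPLORE":
        return "PARKING" if space_found else "EXPLORE"
    return state
```

In a real system each state would also drive behavior (slow autonomous driving and sensor-based search in EXPLORE, longitudinal/lateral control in PARKING); here only the transitions are shown.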
Method and control device for controlling a motor vehicle
A method is proposed for controlling, in an automated manner, a motor vehicle (10) traveling on a road (12) in a current lane (14), wherein the road (12) has at least one further lane (16). The method comprises the following steps: at least two preliminary driving maneuvers are generated and/or received, each including a lane change from the current lane (14) to the at least one further lane (16) and a starting time of the lane change. The starting times of the at least two preliminary driving maneuvers differ. The at least two preliminary driving maneuvers are compared taking into account the respective starting times, and one of the starting times is selected based on the comparison. A control device for a system for controlling a motor vehicle is also proposed.
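The comparison-and-selection step can be sketched as choosing the candidate maneuver that minimizes some cost. The candidate representation and the cost function are assumptions for illustration; the patent does not specify either.

```python
def select_start_time(maneuvers, cost):
    """maneuvers: iterable of (start_time, trajectory) candidates that all
    perform the same lane change, begun at different times.

    cost: maps a candidate to a scalar (e.g. gap risk plus discomfort);
    lower is better. Returns the starting time of the best candidate."""
    best = min(maneuvers, key=cost)
    return best[0]
```

Example usage with a fabricated cost table: the maneuver starting at 1.5 s wins because it has the lowest cost.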
Machine-learned model training for pedestrian attribute and gesture detection
Techniques for detecting attributes and/or gestures associated with pedestrians in an environment are described herein. The techniques may include receiving sensor data associated with a pedestrian in an environment of a vehicle and inputting the sensor data into a machine-learned model that is configured to determine a gesture and/or an attribute of the pedestrian. Based on the input data, an output may be received from the machine-learned model that indicates the gesture and/or the attribute of the pedestrian and the vehicle may be controlled based at least in part on the gesture and/or the attribute of the pedestrian. The techniques may also include training the machine-learned model to detect the attribute and/or the gesture of the pedestrian.
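The final step, controlling the vehicle based on the model's output, amounts to a policy over detected gestures and attributes. The gesture and attribute labels and the resulting actions below are purely illustrative assumptions; the abstract does not enumerate them.

```python
def plan_from_pedestrian_output(gesture, attribute):
    """Map a machine-learned model's (gesture, attribute) output to a
    high-level driving action (illustrative dispatch only)."""
    if gesture == "wave_through":
        return "proceed_slowly"
    if gesture == "hail" or attribute == "child":
        return "yield"
    return "maintain_plan"
```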
Target-orientated navigation system for a vehicle using a generic navigation system and related method
A target-orientated navigation system and related method for a vehicle having a generic navigation system includes one or more processors and a memory. The memory includes one or more modules that cause the processor to receive perception data, discretize the perception data into a plurality of lattices, generate a collision probability array having a plurality of cells that correspond to the plurality of lattices, determine which cells of the collision probability array satisfy a safety criterion, receive an artificial potential field array having a plurality of cells that correspond to the plurality of cells of the collision probability array, generate an objective score array having a plurality of cells corresponding to the cells of the collision probability array, and direct a vehicle control system of the vehicle to guide the vehicle to a location representative of the cell in the objective score array that has the highest value.
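The core selection can be sketched as masking the potential-field scores by the safety criterion and taking the argmax. The safety threshold and the use of the potential value directly as the objective score are simplifying assumptions.

```python
def best_cell(collision_prob, potential, safety_threshold=0.1):
    """Return the (row, col) of the highest-scoring cell among cells whose
    collision probability satisfies the safety criterion (illustrative)."""
    best, best_score = None, float("-inf")
    for i, row in enumerate(collision_prob):
        for j, p in enumerate(row):
            if p > safety_threshold:
                continue  # cell fails the safety criterion
            if potential[i][j] > best_score:
                best, best_score = (i, j), potential[i][j]
    return best
```

Note how a cell with a high potential score can still lose: if its collision probability exceeds the threshold, it is excluded before scoring.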
Processing data for driving automation system
A method of processing data for a driving automation system, the method comprising steps of: obtaining sound data from a microphone of an autonomous vehicle; processing the sound data to obtain a sound characteristic; and updating a context of the autonomous vehicle based on the sound characteristic.
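The context-update step might look like the following sketch. The particular sound characteristics ("siren", "horn") and context keys are hypothetical examples, not taken from the patent.

```python
def update_context(context, sound_characteristic):
    """Return a new driving-automation context updated for a detected
    sound characteristic (illustrative mapping only)."""
    ctx = dict(context)  # do not mutate the caller's context
    if sound_characteristic == "siren":
        ctx["emergency_vehicle_nearby"] = True
    elif sound_characteristic == "horn":
        ctx["alert_level"] = "raised"
    return ctx
```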
Vehicle and passenger transportation system
A vehicle is configured to transport a passenger through autonomous travel. The vehicle includes an in-vehicle camera configured to image the passenger to generate an image, a camera controller configured to control the in-vehicle camera, and a passenger information detection unit configured to detect information about the passenger. The passenger information detection unit measures the number of passengers and the camera controller stops an operation of the in-vehicle camera in a case where the number of passengers is one.
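The camera-control rule is simple enough to state directly in code; this sketch (class and method names are assumptions) just ties the measured passenger count to the camera state:

```python
class CameraController:
    """Runs the in-vehicle camera except when exactly one passenger is
    aboard, per the rule in the abstract (illustrative sketch)."""

    def __init__(self):
        self.camera_on = False

    def on_passenger_count(self, count):
        # Camera stops when the passenger information detection unit
        # reports a single occupant; otherwise it operates.
        self.camera_on = count != 1
        return self.camera_on
```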
Signaling techniques for sensor fusion systems
This disclosure provides methods, devices, and systems for a vehicle user equipment (VUE) to obtain extrinsic information about an object or location. The VUE may transmit a request for information about the object or the location to a road side unit (RSU). The RSU may receive the request and determine a set of extrinsic information for the VUE regarding the object or the location based on a set of information from one or more other UEs. The extrinsic information includes information that is not provided by the VUE. The RSU may transmit the set of extrinsic information to the VUE. The VUE may determine whether to accept a feature of the object or the location based on the set of extrinsic information and a set of intrinsic information detected by the VUE. The VUE may then select an autonomous driving action based on the accepted feature.
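One plausible acceptance rule, fusing intrinsic and extrinsic information, is majority agreement; the patent does not specify the rule, so this is an illustrative assumption:

```python
def accept_feature(feature, intrinsic, extrinsic_reports):
    """Accept a feature when a majority of the extrinsic reports (from
    other UEs, relayed by the RSU) agree with the VUE's own intrinsic
    detection of that feature (illustrative fusion rule)."""
    agreeing = sum(
        1 for report in extrinsic_reports
        if report.get(feature) == intrinsic.get(feature)
    )
    return agreeing > len(extrinsic_reports) / 2
```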
Optimization for distributing autonomous vehicles to perform scouting
Aspects of the disclosure relate to distributing vehicles to perform scouting. This may involve receiving a request for a scouting objective for a vehicle, and in response, identifying a set of scouting objectives that the vehicle is eligible to visit. Each scouting objective of the set is associated with one or more scouting quests, and each scouting quest is associated with a plurality of scouting objectives. For each given scouting objective in the set of scouting objectives, an overall weight may be determined using combined weights for the given scouting objective and any scouting quests with which the given scouting objective is associated. One or more scouting objectives of the set of scouting objectives may be selected using the determined overall weights. The one or more selected scouting objectives may be provided to the vehicle.
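The weight combination and selection can be sketched as below. The abstract says "combined weights" without specifying the rule, so the additive combination here is an assumption, as are all the data-structure names.

```python
def select_objectives(eligible, objective_weight, quest_weight, quests_of, k=1):
    """Pick the top-k eligible scouting objectives by overall weight.

    The overall weight of an objective combines its own weight with the
    weights of every scouting quest it is associated with (combination
    rule assumed additive for illustration)."""
    def overall(obj):
        return objective_weight[obj] + sum(
            quest_weight[q] for q in quests_of.get(obj, [])
        )
    return sorted(eligible, key=overall, reverse=True)[:k]
```

In the usage below, objective "a" wins despite a lower individual weight because the quest it belongs to contributes heavily.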
Exception handling for autonomous vehicles
Aspects of the technology relate to exception handling for a vehicle. For instance, a current trajectory for the vehicle and sensor data corresponding to one or more objects may be received. Based on the received sensor data, projected trajectories of the one or more objects may be determined. Potential collisions with the one or more objects may be determined based on the projected trajectories and the current trajectory. The potential collision that is earliest in time may be identified, and based on it, a safety-time-horizon (STH) may be identified. When a runtime exception occurs, the vehicle may wait no longer than the STH for the runtime exception to resolve before performing a precautionary maneuver to avoid a collision.
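The STH derivation and the bounded wait can be sketched in simulated time. The safety margin, the polling step, and the function names are illustrative assumptions; a real system would use wall-clock deadlines rather than a polling loop.

```python
def safety_time_horizon(predicted_collision_times, margin=0.5):
    """STH from the earliest predicted collision, minus a safety margin
    (margin value is an assumption), floored at zero."""
    return max(0.0, min(predicted_collision_times) - margin)

def wait_or_maneuver(resolve_time, sth, step=0.1):
    """Wait (in simulated time) up to the STH for the runtime exception
    to resolve. resolve_time=None models an exception that never
    resolves within the horizon."""
    t = 0.0
    while t < sth:
        if resolve_time is not None and resolve_time <= t:
            return "resume_trajectory"
        t += step
    return "precautionary_maneuver"
```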