Patent classifications
B60W50/06
Driving scenario machine learning network and driving environment simulation
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating a driving scenario machine learning network and providing a simulated driving environment. The operations include receiving video data that includes multiple video frames depicting an aerial view of vehicles moving about an area. The video data is processed to generate driving scenario data that includes information about the dynamic objects identified in the video. A machine learning network is trained using the generated driving scenario data. A three-dimensional simulated environment is provided that is configured to allow an autonomous vehicle to interact with one or more of the dynamic objects.
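The scenario-extraction step described above can be sketched as grouping per-frame detections from the aerial video into per-object trajectories. This is a minimal illustrative sketch, not the patent's implementation; the `Detection` type and `build_scenario_data` name are assumptions.

```python
# Illustrative sketch: detections from aerial video frames are grouped by
# object id into per-object trajectories, yielding "driving scenario data"
# that a downstream model could be trained on.
from collections import defaultdict
from typing import NamedTuple

class Detection(NamedTuple):
    frame: int       # video frame index
    object_id: int   # identity of the tracked dynamic object
    x: float         # position in the aerial view
    y: float

def build_scenario_data(detections):
    """Group per-frame detections into per-object trajectories."""
    trajectories = defaultdict(list)
    for d in sorted(detections, key=lambda d: d.frame):
        trajectories[d.object_id].append((d.frame, d.x, d.y))
    return dict(trajectories)

dets = [
    Detection(0, 1, 0.0, 0.0),
    Detection(1, 1, 1.0, 0.5),
    Detection(0, 2, 5.0, 5.0),
]
scenario = build_scenario_data(dets)
```

Each trajectory keyed by object id can then be replayed inside the three-dimensional simulated environment as a dynamic object.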
Advanced Neural Network Training System
Disclosed are systems, apparatuses, methods, and computer-readable media to train a neural network model implemented into a perception stack in an autonomous vehicle (AV) for detecting objects. A method includes pretraining an uninitialized ML model to yield a first ML model; training the first ML model with a first testing dataset for a first number of iterations based on a first configuration; analyzing the first ML model based on a convergence of the first ML model and a previous iteration of training; generating a report based on the analysis of the first ML model; and after generating the report, training the first ML model to yield a second ML model.
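The staged flow above (train, check convergence against the previous iteration, report, then continue training) can be sketched with a toy model. The quadratic loss, learning rate, and tolerance are stand-ins, not values from the disclosure.

```python
# Minimal sketch of staged training: first training stage, convergence
# analysis vs. the previous iteration, a report, then a second stage.
def train(param, lr, iterations):
    """Gradient descent on loss(p) = (p - 3)^2; returns (param, losses)."""
    losses = []
    for _ in range(iterations):
        grad = 2 * (param - 3.0)
        param -= lr * grad
        losses.append((param - 3.0) ** 2)
    return param, losses

def analyze(losses, tol=1e-3):
    """Convergence check: loss change vs. the previous iteration below tol."""
    return len(losses) >= 2 and abs(losses[-1] - losses[-2]) < tol

first_model, losses = train(param=0.0, lr=0.1, iterations=50)   # first stage
report = {"converged": analyze(losses), "final_loss": losses[-1]}
second_model, _ = train(first_model, lr=0.1, iterations=50)     # second stage
```

In a real pipeline the report would drive the choice of the second-stage configuration rather than simply being generated before training resumes.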
Automated driving assistance apparatus
An automated driving assistance apparatus includes an occupant's emotion learning section and a control parameter setting section. The occupant's emotion learning section creates an occupant's emotion model based on vehicle driving state information and occupant's emotion information. The occupant's emotion model is used to estimate an emotion of the occupant from a vehicle driving state. The control parameter setting section calculates an ideal driving state based on the occupant's emotion model, and sets a control parameter for automated driving of the vehicle based on the ideal driving state. The control parameter setting section outputs input values relevant to the vehicle driving state to the occupant's emotion model and selects, from the input values received by the occupant's emotion model, the input value that causes the current occupant's emotion to become closer to a target emotion, as the ideal driving state of the vehicle.
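The selection step above (feed candidate driving states to the emotion model, keep the one whose predicted emotion is nearest the target) can be sketched as follows. The linear `emotion_model` and the candidate speeds are placeholders for the learned occupant's emotion model and real driving-state inputs.

```python
# Illustrative sketch of the control-parameter selection: candidate driving
# states are scored by a (toy) emotion model, and the state whose predicted
# emotion lies closest to the target is chosen as the ideal driving state.
def emotion_model(speed_kmh):
    """Toy stand-in: predicted comfort score falls as speed rises."""
    return max(0.0, 1.0 - speed_kmh / 120.0)

def select_ideal_state(candidates, target_emotion):
    """Pick the candidate whose predicted emotion is nearest the target."""
    return min(candidates, key=lambda s: abs(emotion_model(s) - target_emotion))

candidate_speeds = [30.0, 60.0, 90.0]
ideal_speed = select_ideal_state(candidate_speeds, target_emotion=0.5)
```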
Calibrating a drive system for an axle of a motor vehicle
The disclosure relates to a method for calibrating a drive system for an axle of a motor vehicle, wherein the drive system includes at least one electric machine as the drive unit, a drive shaft driven by the drive unit, a first output shaft and a second output shaft, as well as a first clutch connecting the drive shaft to the first output shaft and a second clutch connecting the drive shaft to the second output shaft.
Method and apparatus for calibrating camera
A method and apparatus for calibrating a camera are provided. A specific embodiment of the method includes: acquiring an image-point cloud sequence, the image-point cloud sequence including at least one group of an initial image and point cloud data collected at a same time, the initial image being collected by a camera provided on an autonomous vehicle and the point cloud data being collected by a radar provided on the autonomous vehicle; determining target point cloud data for the initial images corresponding to the groups of image and point cloud data in the image-point cloud sequence; and matching the target point cloud data with the corresponding initial images in the image-point cloud sequence to determine a correction parameter of the camera.
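The matching step above can be sketched as finding the correction parameter that minimizes the distance between projected point cloud points and their corresponding image points. A real calibration would optimize full camera intrinsics and extrinsics; the one-dimensional horizontal offset and grid search here are illustrative assumptions only.

```python
# Hedged sketch: project point cloud points into the image plane under a
# candidate correction offset and keep the offset with the lowest
# reprojection error against the corresponding image points.
def reprojection_error(image_pts, projected_pts, offset):
    """Mean absolute x-error after shifting projected points by offset."""
    return sum(abs(ix - (px + offset)) for (ix, _), (px, _) in
               zip(image_pts, projected_pts)) / len(image_pts)

def calibrate_offset(image_pts, projected_pts, search=range(-10, 11)):
    """Grid-search the correction offset with the lowest reprojection error."""
    return min(search,
               key=lambda o: reprojection_error(image_pts, projected_pts, o))

image_points = [(12.0, 5.0), (22.0, 7.0), (32.0, 9.0)]   # detected in image
cloud_points = [(10.0, 5.0), (20.0, 7.0), (30.0, 9.0)]   # projected point cloud
correction = calibrate_offset(image_points, cloud_points)
```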