Patent classifications
G06T2207/30261
Localizing a moving object
A reference pose of an object in a coordinate system of a map of an area is determined. The reference pose is based on a three-dimensional (3D) reference model representing the object. A first pose of the object is determined as the object moves with respect to the coordinate system. The first pose is determined based on the reference pose and sensor data collected by a sensor at a first time. A second pose of the object is determined as the object continues to move with respect to the coordinate system. The second pose is determined based on the reference pose, the first pose, and sensor data collected by the sensor at a second time consecutive to the first time.
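The abstract describes a sequential pose-update loop: the reference pose anchors the object in the map frame, and each subsequent pose is built from the previous pose plus fresh sensor data. A minimal Python sketch of that loop follows; the 2D pose representation and the `estimate_delta` stub are illustrative assumptions, not taken from the patent.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose2D:
    """Pose in the map coordinate system: position plus heading."""
    x: float
    y: float
    theta: float  # radians

def compose(base: Pose2D, delta: Pose2D) -> Pose2D:
    """Apply a relative motion `delta` (expressed in the base frame) to `base`."""
    c, s = math.cos(base.theta), math.sin(base.theta)
    return Pose2D(
        x=base.x + c * delta.x - s * delta.y,
        y=base.y + s * delta.x + c * delta.y,
        theta=base.theta + delta.theta,
    )

def estimate_delta(sensor_frame) -> Pose2D:
    """Placeholder: match `sensor_frame` against the 3D reference model
    to obtain the relative motion since the previous pose."""
    return Pose2D(*sensor_frame)

# Reference pose of the object in the map frame (illustrative values).
reference_pose = Pose2D(x=10.0, y=5.0, theta=0.0)

# First pose: reference pose plus motion observed at the first time.
first_pose = compose(reference_pose, estimate_delta((1.0, 0.0, 0.05)))

# Second pose: builds on the first pose and the sensor data collected
# at the consecutive (second) time.
second_pose = compose(first_pose, estimate_delta((1.0, 0.1, 0.05)))
print(first_pose, second_pose)
```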
Method for a sensor-based and memory-based representation of the surroundings, display device, and vehicle having the display device
A method for a sensor-based and memory-based representation of the surroundings of a vehicle. The vehicle includes an imaging sensor for detecting the surroundings. The method includes: detecting a sequence of images; determining distance data on the basis of the detected images and/or of a distance sensor of the vehicle, the distance data comprising distances between the vehicle and objects in the surroundings of the vehicle; generating a three-dimensional structure of a surroundings model on the basis of the distance data; recognizing at least one object in the surroundings of the vehicle on the basis of the detected images, in particular by a neural network; loading a synthetic object model on the basis of the recognized object; adapting the generated three-dimensional structure of the surroundings model on the basis of the synthetic object model and on the basis of the distance data; and displaying the adapted surroundings model.
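The method is a seven-step pipeline. The skeleton below restates those steps in Python; every function body is a placeholder standing in for the sensor, neural network, and rendering components named in the abstract, and all data shapes and values are assumptions.

```python
def detect_image_sequence(camera):
    """Step 1: detect a sequence of images from the imaging sensor."""
    return [camera() for _ in range(5)]

def compute_distance_data(images, distance_sensor=None):
    """Step 2: distances between the vehicle and surrounding objects,
    from the images and/or a distance sensor."""
    return [{"object_id": 0, "distance_m": 7.5}]

def build_surroundings_structure(distance_data):
    """Step 3: three-dimensional structure of the surroundings model."""
    return {"points": distance_data}

def recognize_objects(images):
    """Step 4: object recognition on the images (e.g. by a neural network)."""
    return [{"object_id": 0, "class": "car"}]

def load_synthetic_model(recognized):
    """Step 5: load a synthetic (memory-based) object model for the class."""
    return {"object_id": recognized["object_id"], "mesh": "car_mesh"}

def adapt_structure(structure, synthetic_model, distance_data):
    """Step 6: replace the coarse measured geometry of the recognized object
    with the synthetic model, placed at the measured distance."""
    structure["objects"] = [dict(synthetic_model, **distance_data[0])]
    return structure

def display(surroundings_model):
    """Step 7: display the adapted surroundings model."""
    print(surroundings_model)

images = detect_image_sequence(camera=lambda: "frame")
distance_data = compute_distance_data(images)
structure = build_surroundings_structure(distance_data)
recognized = recognize_objects(images)
synthetic = load_synthetic_model(recognized[0])
display(adapt_structure(structure, synthetic, distance_data))
```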
END-TO-END SYSTEM TRAINING USING FUSED IMAGES
Provided are methods for end-to-end perception system training using fused images, which can include fusing different types of images to form a fused image, extracting features from the fused image, calculating a loss, and modifying at least one network parameter of an image semantic network based on the loss. Systems and computer program products are also provided.
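The abstract names four steps: fuse different image types, extract features, calculate a loss, and modify a network parameter based on that loss. A toy NumPy sketch of that loop is shown below; the single linear layer, the mean-squared-error loss, and all array shapes are illustrative assumptions, not the patent's image semantic network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two different image types for the same scene (illustrative shapes):
# an RGB camera image and a single-channel range image.
camera = rng.random((3, 8, 8))
range_img = rng.random((1, 8, 8))

# Fuse the images by channel-wise concatenation into one fused image.
fused = np.concatenate([camera, range_img], axis=0)      # shape (4, 8, 8)
x = fused.reshape(-1)                                     # flattened input

# Toy stand-in for the image semantic network: one linear layer of scores.
num_classes = 3
W = rng.normal(scale=0.01, size=(num_classes, x.size))    # network parameter
target = np.array([0.0, 1.0, 0.0])                        # illustrative label

# Extract features/scores from the fused image and calculate a loss.
scores = W @ x
loss = np.mean((scores - target) ** 2)

# Modify the network parameter based on the loss (one gradient step).
grad_W = np.outer(2.0 * (scores - target) / num_classes, x)
W -= 0.1 * grad_W
print(f"loss={loss:.4f}")
```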
REMOTE MONITORING DEVICE, REMOTE MONITORING SYSTEM, AND REMOTE MONITORING METHOD
A remote operator who remotely assists the travel of an autonomous traveling vehicle refers to a remote monitoring terminal. A remote monitoring device acquires data from a plurality of sensors mounted on the autonomous traveling vehicle, executes image generation processing that generates an around-view of the vehicle by processing the data, calculates the lane width of the lane traveled by the vehicle based on map information, calculates the vehicle width ratio of the vehicle with respect to the lane width, and, when the vehicle width ratio is greater than a predetermined determination threshold, provides the around-view generated by the image generation processing to the remote monitoring terminal.
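The decision rule reduces to simple arithmetic: compute the ratio of vehicle width to lane width and compare it with a threshold. A small Python example follows; the threshold value of 0.8 and the widths are illustrative assumptions, not values from the patent.

```python
def should_provide_around_view(vehicle_width_m: float,
                               lane_width_m: float,
                               threshold: float = 0.8) -> bool:
    """Send the around-view to the remote monitoring terminal only when the
    vehicle occupies a large enough fraction of its lane."""
    vehicle_width_ratio = vehicle_width_m / lane_width_m
    return vehicle_width_ratio > threshold

# Example: a 2.5 m wide vehicle in a 3.0 m lane -> ratio ~0.83 > 0.8,
# so the around-view would be provided to the remote operator.
print(should_provide_around_view(2.5, 3.0))
```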
OPERATIONS USING SPARSE VOLUMETRIC DATA
A volumetric data structure models a particular volume at a plurality of levels of detail. A first entry in the volumetric data structure includes a first set of bits representing voxels at a first level of detail, the first level of detail being the lowest level of detail in the volumetric data structure, and values of the first set of bits indicate whether a corresponding one of the voxels is at least partially occupied by respective geometry. The volumetric data structure further includes a number of second entries representing voxels at a second level of detail higher than the first level of detail, the voxels at the second level of detail represent subvolumes of volumes represented by voxels at the first level of detail, and the number of second entries corresponds to the number of bits in the first set of bits with values indicating that a corresponding voxel volume is occupied.
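The key invariant is that the number of second-level entries equals the number of set bits in the first entry. The Python sketch below illustrates this with 8-bit entries (a 2x2x2 subdivision); the bit width and the specific occupancy patterns are assumptions for illustration only.

```python
# Each entry is a small bitmask: one bit per voxel of a 2x2x2 subdivision
# (8 bits per entry here; real implementations may use wider entries).
# A set bit means the voxel is at least partially occupied by geometry.

first_entry = 0b0000_0101  # lowest level of detail: voxels 0 and 2 occupied

# One second-level entry exists for every set bit in the first entry, in order;
# each one refines the subvolume of the corresponding occupied voxel.
second_entries = [
    0b1111_0000,  # refines occupied voxel 0
    0b0000_0001,  # refines occupied voxel 2
]

def occupied_children(entry: int) -> list[int]:
    """Indices of occupied child voxels encoded by one entry."""
    return [i for i in range(8) if entry >> i & 1]

# The number of second entries matches the popcount of the first entry.
assert len(second_entries) == bin(first_entry).count("1")

for bit, child_entry in zip(occupied_children(first_entry), second_entries):
    print(f"voxel {bit} at level 0 is refined by entry {child_entry:08b}")
```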
AUTO CLEAN MACHINE, CLIFF DETERMINING METHOD AND SURFACE TYPE DETERMINING METHOD
An auto clean machine, comprising: a chassis; a first light source, configured to emit first light; a second light source, configured to emit second light; an optical sensor, configured to sense optical data generated according to reflected light of the second light or according to reflected light of the first light; and a control circuit, configured to analyze optical information of the optical data; wherein, if the first light source is activated, the second light source is deactivated, and the control circuit determines that the variation of the optical information is larger than a variation threshold, the control circuit deactivates the first light source and activates the second light source.
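The control circuit's switching rule can be stated as a single conditional. A minimal Python sketch is given below; the example values and the interpretation of a large variation as a possible cliff edge are assumptions, not claims from the patent.

```python
def update_light_sources(first_active: bool,
                         second_active: bool,
                         variation: float,
                         variation_threshold: float) -> tuple[bool, bool]:
    """Control-circuit rule from the abstract: when the first light source is
    active, the second is inactive, and the measured variation of the optical
    information exceeds the threshold, swap which light source is active."""
    if first_active and not second_active and variation > variation_threshold:
        return False, True   # deactivate first light, activate second light
    return first_active, second_active

# Example: a large variation (e.g. a possible cliff edge) triggers the swap.
print(update_light_sources(True, False, variation=0.9, variation_threshold=0.5))
```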
Method of extracting feature from image using laser pattern and device and robot of extracting feature thereof
Provided herein are a method of extracting a feature from an image using a laser pattern, and an identification device and a robot including the same. The identification device includes a first camera coupled to a laser filter and configured to generate a first image including a pattern of a laser reflected from an object, a second camera configured to capture an area overlapping the area captured by the first camera to generate a second image, and a controller configured to generate a mask for distinguishing an effective area using the pattern included in the first image and to extract a feature from the second image by applying the mask to the second image.
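The core operation is mask generation from the filtered laser image followed by masked feature extraction on the ordinary image. A NumPy sketch of that flow follows; the thresholding step, the array sizes, and the trivial feature extractor are illustrative assumptions.

```python
import numpy as np

# First image (through the laser filter): bright responses where the projected
# laser pattern is reflected from a nearby object. Values are illustrative.
first_image = np.zeros((6, 6))
first_image[2:5, 1:4] = 1.0

# Second image: ordinary camera image of the overlapping area.
rng = np.random.default_rng(1)
second_image = rng.random((6, 6))

# Generate a mask distinguishing the effective area from the laser pattern,
# here by simple thresholding of the pattern response.
mask = first_image > 0.5

# Apply the mask to the second image and extract features only there.
masked = np.where(mask, second_image, 0.0)
features = masked[mask]   # stand-in for a real feature extractor
print(features.shape)
```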
Image processing device, image processing method, and monitoring system
An image processing device includes: a reception unit that receives at least one first image provided from at least one first camera capturing an image of a region in which an object exists and a plurality of second images provided from a plurality of second cameras capturing images of a region including a dead region hidden by the object and invisible from a position of the first camera; and an image processing unit that generates a complementary image, as an image of a mask region in the at least one first image corresponding to the object, from the plurality of second images and generates a synthetic display image by combining the at least one first image and the complementary image.
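The synthesis step fills the mask region of the first image (the part hidden by the object) with a complementary image derived from the second cameras. The NumPy sketch below shows that compositing; the blend weight, the mask layout, and the image contents are illustrative assumptions rather than the patent's actual rendering procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

first_image = rng.random((4, 4))     # from the first camera
complementary = rng.random((4, 4))   # generated from the second cameras

# Mask region of the first image corresponding to the occluding object.
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# Synthetic display image: the first image, with the dead region behind the
# object replaced (here, blended) with the complementary image.
alpha = 0.6  # blend weight, illustrative
display_image = np.where(mask,
                         alpha * complementary + (1 - alpha) * first_image,
                         first_image)
print(display_image.round(2))
```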
Bounding box estimation and lane vehicle association
Disclosed are techniques for estimating a 3D bounding box (3DBB) from a 2D bounding box (2DBB). Conventional techniques to estimate 3DBB from 2DBB rely upon classifying target vehicles within the 2DBB. When the target vehicle is misclassified, the projected bounding box from the estimated 3DBB is inaccurate. To address such issues, it is proposed to estimate the 3DBB without relying upon classifying the target vehicle.
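The abstract does not give the estimation procedure itself, so the sketch below only illustrates the generic geometric relationship it relies on: projecting a candidate 3D box into the image and scoring it against the observed 2D bounding box, with no class-dependent size prior. The pinhole parameters, candidate boxes, and scoring rule are all assumptions for illustration.

```python
import numpy as np

def project_box(corners_3d: np.ndarray, fx: float, fy: float,
                cx: float, cy: float) -> np.ndarray:
    """Project 3D box corners (camera frame, Z forward) with a pinhole model
    and return the enclosing 2D box [xmin, ymin, xmax, ymax]."""
    u = fx * corners_3d[:, 0] / corners_3d[:, 2] + cx
    v = fy * corners_3d[:, 1] / corners_3d[:, 2] + cy
    return np.array([u.min(), v.min(), u.max(), v.max()])

def box_corners(center, size):
    """Axis-aligned 3D box corners from center (x, y, z) and size (w, h, l)."""
    w, h, l = size
    offsets = np.array([[sx * w / 2, sy * h / 2, sz * l / 2]
                        for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
    return np.array(center) + offsets

observed_2dbb = np.array([300.0, 180.0, 420.0, 260.0])   # illustrative 2DBB

# Score candidate 3D boxes by how well their projection matches the observed
# 2DBB, using only geometry (same size hypothesis, different depths).
candidates = [((0.5, 0.2, 12.0), (1.8, 1.5, 4.2)),
              ((0.5, 0.2, 20.0), (1.8, 1.5, 4.2))]
errors = [np.abs(project_box(box_corners(c, s), 700, 700, 320, 240)
                 - observed_2dbb).sum()
          for c, s in candidates]
best = candidates[int(np.argmin(errors))]
print("best candidate 3D box:", best)
```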
OBSTACLE TRAJECTORY PREDICTION METHOD AND APPARATUS
This specification discloses an obstacle trajectory prediction method and apparatus. In embodiments of the present disclosure, a global interaction feature under the joint action of a vehicle and obstacles is determined according to historical status information and current status information of the vehicle, historical status information and current status information of the obstacles, and a future motion trajectory planned by the vehicle; an individual interaction feature of a to-be-predicted obstacle is determined according to the global interaction feature and current status information of the to-be-predicted obstacle; and a future motion trajectory of the to-be-predicted obstacle is predicted based on the individual interaction feature and information about the environment around the vehicle.
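The data flow is: inputs -> global interaction feature -> per-obstacle individual interaction feature -> predicted trajectory. The NumPy sketch below traces that flow with toy encoders and a toy linear decoder; all shapes, the concatenation-based encoder, and the decoder weights are assumptions standing in for the learned networks implied by the abstract.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical inputs named in the abstract (all values illustrative):
ego_history      = rng.random((10, 4))     # vehicle: past states (x, y, v, yaw)
ego_current      = rng.random(4)
ego_planned      = rng.random((10, 2))     # future trajectory planned by the vehicle
obstacle_history = rng.random((5, 10, 4))  # 5 obstacles, 10 past states each
obstacle_current = rng.random((5, 4))
environment      = rng.random(16)          # encoding of the surrounding environment

def encode(*arrays):
    """Toy encoder: concatenate flattened inputs (stands in for a learned net)."""
    return np.concatenate([a.reshape(-1) for a in arrays])

# Global interaction feature under the joint action of the vehicle and obstacles.
global_feature = encode(ego_history, ego_current, ego_planned,
                        obstacle_history, obstacle_current)

# Individual interaction feature of the to-be-predicted obstacle (index 2 here).
target = 2
individual_feature = encode(global_feature, obstacle_current[target])

# Predict the obstacle's future motion trajectory from the individual feature
# and the environment information (toy linear decoder, 10 future (x, y) points).
W = rng.normal(scale=0.01,
               size=(20, individual_feature.size + environment.size))
future_trajectory = (W @ np.concatenate([individual_feature,
                                         environment])).reshape(10, 2)
print(future_trajectory.shape)
```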