G06V10/803

Device and method for virtualizing driving environment, and vehicle

A device for virtualizing a driving environment surrounding a first node, which includes: a data acquisition device, configured to acquire position data of the first node, and position data and sensing data of at least one second node, where the at least one second node and the first node are in a first communication network; and a scene construction device, configured to construct a scene virtualizing the driving environment surrounding the first node based on the position data of the first node and the at least one second node, and on the sensing data of the at least one second node. Accordingly, by utilizing the position data and sensing data of the nodes, a scene virtualizing the driving environment can be constructed in real time for a driver, which improves driving safety.
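The scene construction described above can be illustrated with a minimal sketch, assuming each second node reports its own position plus object positions sensed relative to itself, and assuming translation-only alignment (the patent's scene construction is not limited to this):

```python
def virtualize_scene(ego_pos, nodes):
    """Minimal sketch: place each second node's sensed objects into the
    first (ego) node's coordinate frame using the nodes' position data.
    `nodes` is a list of (node_position, sensed_object_offsets) pairs;
    all names and the 2-D translation-only model are assumptions."""
    scene = []
    for node_pos, sensed in nodes:
        for obj in sensed:
            # object offset relative to the second node -> ego frame
            scene.append((obj[0] + node_pos[0] - ego_pos[0],
                          obj[1] + node_pos[1] - ego_pos[1]))
    return scene
```

In practice the fusion would also account for heading, timestamps, and sensor uncertainty; this only shows how position data of both nodes combines with the second node's sensing data.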

Identifying objects within images from different sources

Techniques are disclosed for providing a notification that a person is at a particular location. For example, a resident device may receive, from a user device, a first image that shows a face of a first person, the first image being captured by a first camera of the user device. The resident device may also receive, from another device having a second camera, a second image showing a portion of a face of a second person, the second camera having a viewable area showing a particular location. The resident device may determine a score indicating a level of similarity between a first set of characteristics associated with the face of the first person and a second set of characteristics associated with the face of the second person. The resident device may then provide to the user device a notification based on determining the score.
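A common way to realize such a similarity score is cosine similarity between characteristic (embedding) vectors; the sketch below assumes the two sets of facial characteristics are already extracted as numeric vectors, which the abstract does not specify (all function names and the 0.9 threshold are illustrative):

```python
import numpy as np

def similarity_score(features_a, features_b):
    """Cosine similarity between two facial-characteristic vectors,
    mapped from [-1, 1] to [0, 1]. The characteristic extraction
    itself (e.g. a face-embedding network) is outside this sketch."""
    a = np.asarray(features_a, dtype=float)
    b = np.asarray(features_b, dtype=float)
    cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return (cos + 1.0) / 2.0

def should_notify(features_a, features_b, threshold=0.9):
    """Provide a notification when the score clears a threshold."""
    return similarity_score(features_a, features_b) >= threshold
```

Identical vectors score 1.0 and orthogonal vectors 0.5 under this mapping, so the threshold directly expresses how alike the two faces must be before the user device is notified.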

UNMANNED FORKLIFT
20220375206 · 2022-11-24

An image obtaining section obtains a taken image from an imaging device. A pallet type identification section has a learning model trained on combinations of images of a plurality of types of pallets and the corresponding pallet types, and identifies a type of a target pallet by inputting, to the learning model, the taken image of the target pallet obtained by the image obtaining section. A pallet position/shape obtaining section obtains position/shape data of the target pallet from a distance measuring device that measures a distance to the target pallet. A pallet deviation detection section stores position/shape data of the pallets in advance, and compares the stored position/shape data corresponding to the identified type of the target pallet with the measured position/shape data of the target pallet.
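The deviation-detection step can be sketched as a field-by-field comparison between the stored data for the identified pallet type and the measured data; the field names and tolerance here are assumptions, not taken from the patent:

```python
def pallet_deviation(stored, measured, tol=0.05):
    """Compare stored position/shape data for the identified pallet type
    with the measured position/shape data of the target pallet, and
    report per-field deviations beyond a tolerance. `stored` and
    `measured` are dicts with illustrative keys (e.g. dimensions in m)."""
    return {key: measured[key] - stored[key]
            for key in stored
            if abs(measured[key] - stored[key]) > tol}
```

An empty result would mean the target pallet matches its stored type within tolerance; non-empty results name the deviating fields, which is the information a deviation-detection section would act on.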

Electronic device for vehicle and method of operating electronic device for vehicle

Disclosed is an electronic device for a vehicle, including a processor receiving first image data from a first camera, receiving second image data from a second camera, receiving first sensing data from a first lidar, generating a depth image based on the first image data and the second image data, and fusing the first sensing data with each of the divided regions of the depth image.
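One way to picture the per-region fusion is to split the stereo depth image into a grid and blend each region's depth toward the lidar ranges that project into it. The grid split, the (row, col, range) point format, and the 50/50 blend are all assumptions for illustration; the patent does not specify the fusion rule:

```python
import numpy as np

def fuse_lidar_into_regions(depth_image, lidar_points, grid=(2, 2)):
    """Illustrative per-region fusion: divide the depth image into a
    grid of regions and blend each region containing a projected lidar
    point toward that point's measured range."""
    fused = depth_image.astype(float).copy()
    h, w = depth_image.shape
    rh, rw = h // grid[0], w // grid[1]
    for row, col, rng in lidar_points:
        # region indices of the projected lidar point
        gr = min(int(row) // rh, grid[0] - 1)
        gc = min(int(col) // rw, grid[1] - 1)
        region = fused[gr * rh:(gr + 1) * rh, gc * rw:(gc + 1) * rw]
        # simple fusion: move stereo depth halfway toward the lidar range
        region += (rng - region) * 0.5
    return fused
```

Regions untouched by any lidar point keep their stereo-derived depth, which mirrors the idea of fusing sensing data region by region rather than globally.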

Automated labeling system for autonomous driving vehicle lidar data
11592570 · 2023-02-28

A system and method for using high-end perception sensors such as high-end LIDARs to automatically label sensor data of low-end LIDARs of autonomous driving vehicles is disclosed. A perception system operating with a high-end LIDAR may process sensed data from the high-end LIDAR to detect objects and generate metadata of objects surrounding the vehicle. The confidence level of correctly identifying the objects using the high-end LIDAR may be further enhanced by fusing the data from the high-end LIDAR with data from other sensors such as cameras and radars. The method may use the detected objects and metadata of the detected objects processed from the data captured by the high-end LIDAR and other sensors as ground truth to label data of a same scene captured by a low-end LIDAR mounted on the vehicle. A neural network may use the labeled sensor data from the low-end LIDAR during offline supervised training.
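The core labeling step, using high-end detections as ground truth for low-end points of the same scene, can be sketched as a nearest-object assignment. The 2-D points, circular object extents, and label names below are assumptions for illustration, not the patent's representation:

```python
import math

def label_low_end_points(points, ground_truth):
    """Assign each low-end LIDAR point the label of the nearest
    high-end detected object whose extent covers it; points covered by
    no object are labeled 'background'. `ground_truth` entries are
    illustrative (center_x, center_y, radius, label) tuples derived
    from the high-end perception output."""
    labels = []
    for p in points:
        label, best_d = 'background', float('inf')
        for cx, cy, radius, name in ground_truth:
            d = math.dist(p, (cx, cy))
            if d <= radius and d < best_d:
                label, best_d = name, d
        labels.append(label)
    return labels
```

The labeled low-end points would then serve as training pairs for offline supervised training of the neural network, with no manual annotation in the loop.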

Flexible multi-channel fusion perception
11592565 · 2023-02-28

A method may include obtaining first sensor data from a first sensor system and second sensor data from a second sensor system. The first and the second sensor systems may capture sensor data from a total measurable world. The method may include identifying a first object included in the first sensor data and a second object included in the second sensor data and determining first parameters corresponding to the first object and second parameters corresponding to the second object. The first parameters may be compared with the second parameters, and whether the first object and the second object are a same object may be determined based on the comparison of the first parameters and the second parameters. Responsive to determining that the first object and the second object are the same object, a set of objects representative of objects in the total measurable world, including the same object, may be generated.
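A minimal sketch of the compare-and-fuse step, assuming each detection's parameters are an illustrative (x, y, size) tuple and using simple distance/size tolerances (the abstract does not fix the parameter set or the comparison rule):

```python
def same_object(params_a, params_b, pos_tol=1.0, size_tol=0.5):
    """Decide whether two detections from different sensor channels
    refer to the same world object by comparing their parameters
    (here: 2-D center position and size, both illustrative)."""
    (xa, ya, sa), (xb, yb, sb) = params_a, params_b
    close = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5 <= pos_tol
    return close and abs(sa - sb) <= size_tol

def fuse_detections(channel_a, channel_b):
    """Generate the set of objects in the total measurable world:
    detections matched across channels appear once, unmatched
    detections from either channel are kept."""
    fused = list(channel_a)
    for b in channel_b:
        if not any(same_object(a, b) for a in channel_a):
            fused.append(b)
    return fused
```

A production system would weight the comparison by sensor uncertainty and track identities over time; the sketch only shows the association-then-union structure the abstract describes.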

Method and apparatus to classify structures in an image

Disclosed is a system and method for segmentation of selected data. In various embodiments, automatic segmentation of fiber tracts in an image data may be performed. The automatic segmentation may allow for identification of specific fiber tracts in an image.

System and method for automated learning from sensors

A computer-implemented method includes receiving first inputs associated with a first modality and second inputs associated with a second modality; processing the received first and second inputs with convolutional neural networks (CNN), wherein a first set of weights is used to handle the first inputs and a second set of weights is used to handle the second inputs; determining a loss for each of the first and the second inputs based on a loss function that applies the first set of weights, the second set of weights, and a presence of a co-occurrence; generating a shared feature space as an output of the CNNs, wherein a distance between cells associated with the first inputs and the second inputs in the shared feature space is determined based on the loss associated with each of the first inputs and the second inputs; and based on the shared feature space, providing an output.
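A loss that ties shared-feature-space distance to co-occurrence is commonly formulated as a contrastive loss; the sketch below shows that formulation, though the patent does not state its exact loss function, and the feature vectors here stand in for the CNN outputs:

```python
import numpy as np

def cooccurrence_loss(feat_a, feat_b, co_occur, margin=1.0):
    """Contrastive-style loss over a shared feature space: features of
    co-occurring first-/second-modality inputs are pulled together
    (squared distance), while non-co-occurring pairs are pushed at
    least `margin` apart. One common formulation, assumed here."""
    d = float(np.linalg.norm(np.asarray(feat_a, dtype=float)
                             - np.asarray(feat_b, dtype=float)))
    return d ** 2 if co_occur else max(0.0, margin - d) ** 2
```

Minimizing this loss over many pairs shapes the shared feature space exactly as the abstract describes: cell distances end up reflecting whether the two modalities' inputs co-occurred.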

DETECTING AN OBJECT IN AN IMAGE USING MULTIBAND AND MULTIDIRECTIONAL FILTERING

A detection method includes performing multiband filtering on a first area to obtain a plurality of band sub-images, the first area being an area in a first video frame, and performing multidirectional filtering on the plurality of band sub-images to obtain a plurality of direction sub-images. The method further includes acquiring a direction-band fused feature of the first area according to the plurality of direction sub-images, inputting the direction-band fused feature into a detection model, and performing detection with the detection model to determine whether the first area comprises an object.
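The directional-filtering and feature-fusion steps can be sketched with simple first-difference filters standing in for the patent's filter bank (the actual filters, angles, and fusion rule are not specified in the abstract and are assumed here):

```python
import numpy as np

def directional_filter(img, angle_deg):
    """First-difference filter along a direction (0 deg = horizontal,
    90 deg = vertical); a simple stand-in for one directional filter."""
    dy = np.diff(img, axis=0, append=img[-1:, :])
    dx = np.diff(img, axis=1, append=img[:, -1:])
    th = np.deg2rad(angle_deg)
    return np.cos(th) * dx + np.sin(th) * dy

def direction_band_feature(band_sub_images, angles=(0, 45, 90, 135)):
    """Fuse the directional responses of every band sub-image into one
    direction-band feature vector (mean absolute response per filter)."""
    return np.array([np.abs(directional_filter(band, a)).mean()
                     for band in band_sub_images for a in angles])
```

The resulting fixed-length vector is what would be fed to the detection model; with B band sub-images and D directions it has B x D entries.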

IMAGE GAZE CORRECTION METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT

An image gaze correction method, apparatus, electronic device, computer-readable storage medium, and computer program product are provided. The image gaze correction method includes: acquiring a to-be-corrected eye image from a to-be-corrected image; generating, based on the to-be-corrected eye image, an eye motion flow field and an eye contour mask, the eye motion flow field being used for adjusting pixel positions in the to-be-corrected eye image, and the eye contour mask being used for indicating a probability that each pixel position in the to-be-corrected eye image belongs to an eye region; performing, based on the eye motion flow field and the eye contour mask, gaze correction processing on the to-be-corrected eye image to obtain a corrected eye image; and generating a gaze corrected image based on the corrected eye image.
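The correction step, warping by the flow field and blending by the contour mask, can be sketched as follows; nearest-neighbor sampling and the mask-weighted blend are simplifying assumptions (the patent's generation of the flow field and mask, e.g. by a network, is outside this sketch):

```python
import numpy as np

def correct_gaze(eye_img, flow, mask):
    """Warp the to-be-corrected eye image by the flow field (per-pixel
    (dy, dx) source offsets, nearest-neighbor sampling for brevity),
    then blend with the original using the eye-contour mask so pixels
    unlikely to belong to the eye region keep their original values."""
    h, w = eye_img.shape
    warped = np.empty_like(eye_img)
    for y in range(h):
        for x in range(w):
            sy = int(np.clip(y + flow[y, x, 0], 0, h - 1))
            sx = int(np.clip(x + flow[y, x, 1], 0, w - 1))
            warped[y, x] = eye_img[sy, sx]
    # mask = probability the pixel belongs to the eye region
    return mask * warped + (1.0 - mask) * eye_img
```

The corrected eye image would then be pasted back into the full face image to produce the final gaze-corrected result.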