G06V10/803

Method and device for fusion of measurements from different information sources
11521027 · 2022-12-06

The invention relates to a method and a device for fusion of measurements from various information sources (I 1, I 2, . . . , I m) in conjunction with filtering of a filter vector, wherein the information sources (I 1, I 2, . . . , I m) comprise one or more environment detection sensors of an ego vehicle, wherein at least one measured quantity derived from the measurements is in each case contained in the filter vector, wherein the measurements from at least one individual information source (I 1; I 2; . . . ; I m) are mapped nonlinearly to the respective measured quantity, wherein at least one of these mapping operations depends on at least one indeterminate parameter, wherein the value of the at least one indeterminate parameter is estimated from the measurements of the different information sources (I 1, I 2, . . . , I m), and wherein the filter vector is not needed for estimating the at least one indeterminate parameter.
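
The key point of the abstract is that the indeterminate parameter is estimated from the raw measurements alone, outside the filter. A minimal sketch, assuming a hypothetical scalar case in which source I 1 observes a quantity directly and source I 2's measurement maps to it nonlinearly through an unknown scale parameter:

```python
import numpy as np

# Hypothetical example: source I1 reports a quantity x directly, while source
# I2's raw measurement maps nonlinearly to it via an unknown scale parameter a:
#   z2 = a * x**2.
# The parameter is estimated purely from paired measurements of both sources,
# without involving the filter vector.

def estimate_scale(z1, z2):
    """Least-squares estimate of a in z2 = a * z1**2 from raw measurements."""
    basis = np.asarray(z1, dtype=float) ** 2
    return float(basis @ np.asarray(z2, dtype=float) / (basis @ basis))

rng = np.random.default_rng(0)
x = rng.uniform(1.0, 5.0, size=200)             # true quantity seen by I1
a_true = 0.75
z2 = a_true * x**2 + rng.normal(0, 0.05, 200)   # nonlinear mapping at I2

a_hat = estimate_scale(x, z2)
print(round(a_hat, 3))   # close to 0.75
```

The particular quadratic mapping and least-squares estimator are illustrative assumptions; the claim only requires that some such parameter be estimable from the cross-source measurements.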

Method and device for detecting body temperature, electronic apparatus and storage medium

A method and device for detecting a body temperature, an electronic apparatus and a storage medium are provided, which relate to the field of infrared temperature measurement. The method includes: performing face recognition on an optical static image to determine at least one face image in the optical static image and coordinates of the face image; performing coordinate transformation on a thermal imaging static image and/or the optical static image to determine thermal imaging information of the face image, wherein the optical static image and the thermal imaging static image include the same image acquisition target with the same face; and determining a body temperature corresponding to the face image according to the thermal imaging information of the face image. In the embodiments of the present application, the efficiency of body temperature detection in public places can be improved and cross-infection can be prevented.
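
A minimal sketch of the transform-and-read-out step, under assumed image sizes and a simple affine calibration between the two cameras (the abstract does not fix the transform or how the temperature is aggregated; taking the ROI maximum is an assumption):

```python
import numpy as np

# Map a face bounding box from optical-image coordinates to thermal-image
# coordinates with an assumed affine transform, then read the body temperature
# as the maximum of the thermal region of interest.

def to_thermal(bbox, scale, offset):
    """Affine coordinate transformation optical -> thermal: p' = scale*p + offset."""
    x0, y0, x1, y1 = bbox
    sx, sy = scale
    ox, oy = offset
    return (int(x0*sx + ox), int(y0*sy + oy), int(x1*sx + ox), int(y1*sy + oy))

def face_temperature(thermal, bbox_optical, scale, offset):
    x0, y0, x1, y1 = to_thermal(bbox_optical, scale, offset)
    return float(thermal[y0:y1, x0:x1].max())

thermal = np.full((120, 160), 30.0)     # ambient background, degrees C
thermal[20:50, 40:70] = 36.6            # warm face region
bbox_optical = (80, 40, 140, 100)       # face found in the larger optical image
temp = face_temperature(thermal, bbox_optical, scale=(0.5, 0.5), offset=(0, 0))
print(temp)   # 36.6
```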

Vehicle accident notification device, system including the same, and method thereof

A vehicle accident notification device includes a processor that determines whether a vehicle accident occurs and the degree of the vehicle accident based on a sensing result of a sensor and vehicle information received from a device of the host vehicle, and that automatically provides a notification of the vehicle accident; a communicator that communicates with other devices; and a storage that stores, in advance, a collision reference value used to determine the degree of the vehicle accident.
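
An illustrative sketch of the degree determination (the threshold names and values are assumptions; the abstract only specifies that a stored collision reference value is compared against the sensing result):

```python
# Classify the degree of a vehicle accident by comparing a sensed collision
# value (e.g. peak deceleration in g) against collision reference values
# stored in advance. Thresholds here are made up for illustration.

COLLISION_REFERENCES = {"minor": 5.0, "moderate": 15.0, "severe": 30.0}

def accident_degree(sensed_value):
    """Return the highest degree whose reference value is met, or None."""
    degree = None
    for name, threshold in COLLISION_REFERENCES.items():
        if sensed_value >= threshold:
            degree = name
    return degree

print(accident_degree(3.0))    # None  (no accident detected)
print(accident_degree(18.2))   # moderate
```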

METHOD AND DEVICE FOR IMAGE FUSION, COMPUTING PROCESSING DEVICE, AND STORAGE MEDIUM
20220383463 · 2022-12-01

The present application relates to an image-fusion method and apparatus, a computing and processing device, and a storage medium. The method includes: acquiring, for a same target scene, a plurality of exposed images of different exposure degrees; acquiring a first exposed-image fusion-weight map corresponding to each of the exposed images, wherein the first exposed-image fusion-weight map contains fusion weights corresponding to the pixel points of the exposed image; acquiring the region area of each of the overexposed regions in each of the exposed images; for each of the exposed images, performing smoothing filtering on the first exposed-image fusion-weight map corresponding to the exposed image by using the region area of each of the overexposed regions in the exposed image, to obtain a second exposed-image fusion-weight map corresponding to the exposed image; and performing image-fusion processing on the plurality of exposed images according to each of the second exposed-image fusion-weight maps, to obtain a fused image. Accordingly, the present application can balance the characteristics of the different overexposed regions and prevent loss of detail in small overexposed regions, so that the obtained fused image is more realistic.
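
A toy sketch of the smooth-then-fuse step on single-channel images. Reducing the area-dependent smoothing to a box filter whose radius would grow with the overexposed region's area is an assumption; the abstract does not fix the filter:

```python
import numpy as np

def box_blur(weight, radius):
    """Smooth a fusion-weight map with a (2*radius+1)^2 box filter."""
    if radius == 0:
        return weight
    pad = np.pad(weight, radius, mode="edge")
    k = 2 * radius + 1
    out = np.zeros_like(weight, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + weight.shape[0], dx:dx + weight.shape[1]]
    return out / (k * k)

def fuse(images, weights):
    """Per-pixel weighted average of the exposed images."""
    w = np.stack(weights) + 1e-8
    w /= w.sum(axis=0)                    # normalise weights per pixel
    return (w * np.stack(images)).sum(axis=0)

dark = np.full((4, 4), 0.2)               # underexposed image
bright = np.full((4, 4), 0.9)             # overexposed image
w_dark = box_blur(np.full((4, 4), 0.25), radius=1)    # second weight maps
w_bright = box_blur(np.full((4, 4), 0.75), radius=1)
fused = fuse([dark, bright], [w_dark, w_bright])
print(fused[0, 0])   # about 0.25*0.2 + 0.75*0.9 = 0.725
```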

METHOD, APPARATUS, ELECTRONIC DEVICE AND MEDIUM FOR IMAGE SUPER-RESOLUTION AND MODEL TRAINING
20220383452 · 2022-12-01

The embodiments of the present application provide a method, an apparatus, an electronic device, and a medium for image super-resolution and model training. The method includes: inputting the image to be processed into a first super-resolution network model and a second super-resolution network model trained in advance, respectively, wherein the first super-resolution network model is a trained convolutional neural network and the second super-resolution network model is a generative network included in a trained generative adversarial network; obtaining a first image output from the first super-resolution network model and a second image output from the second super-resolution network model; and fusing the first image and the second image to obtain a target image, wherein the resolution of the target image is greater than the resolution of the image to be processed.
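
A minimal sketch of the final fusion step, assuming both models have already produced upscaled images of the same shape. The stand-in "models" here are nearest-neighbour upsamplers, and the fixed 50/50 blend is an assumption; the abstract does not specify a fusion rule:

```python
import numpy as np

def upscale_nearest(img, factor):
    """Stand-in for a super-resolution model: nearest-neighbour upsampling."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def fuse_sr(img_cnn, img_gan, alpha=0.5):
    """Blend the CNN output (pixel fidelity) with the GAN output (texture)."""
    return alpha * img_cnn + (1 - alpha) * img_gan

low = np.array([[0.0, 1.0], [1.0, 0.0]])    # 2x2 image to be processed
first = upscale_nearest(low, 2)             # stands in for the CNN model output
second = upscale_nearest(low, 2) + 0.1      # stands in for the GAN model output
target = fuse_sr(first, second)
print(target.shape)   # (4, 4) -- resolution greater than the 2x2 input
```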

VIRTUAL OBJECT LIP DRIVING METHOD, MODEL TRAINING METHOD, RELEVANT DEVICES AND ELECTRONIC DEVICE
20220383574 · 2022-12-01

A virtual object lip driving method performed by an electronic device includes: obtaining a speech segment and target face image data about a virtual object; and inputting the speech segment and the target face image data into a first target model to perform a first lip driving operation, so as to obtain first lip image data about the virtual object driven by the speech segment. The first target model is trained in accordance with a first model and a second model, the first model is a lip-speech synchronization discriminative model with respect to lip image data, and the second model is a lip-speech synchronization discriminative model with respect to a lip region in the lip image data.

IMAGE PROCESSING METHOD, MODEL TRAINING METHOD, RELEVANT DEVICES AND ELECTRONIC DEVICE
20220383626 · 2022-12-01

An image processing method includes: obtaining a first categorical feature and M first image features corresponding to M first images respectively, each first image being associated with a task index, task indices associated with different first images being different from each other, M being a positive integer; fusing the M first image features with the first categorical feature respectively so as to obtain M first target features; performing feature extraction on the M first target features so as to obtain M second categorical features; selecting a second categorical feature corresponding to each task index from the M second categorical features, and performing regularization corresponding to the task index on the second categorical feature, to obtain a third categorical feature corresponding to the task index; and performing image processing in accordance with M third categorical features so as to obtain M first image processing results of the M first images.
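
A toy sketch of the per-task flow described above. Concatenation as the fusion, tanh as the feature extraction, and per-task standardisation with a task-specific gain as the "regularization corresponding to the task index" are all assumptions made for illustration:

```python
import numpy as np

def process(first_categorical, image_features, task_gains):
    """One pass over M first images, each tied to its own task index."""
    results = []
    for task_index, img_feat in enumerate(image_features):
        fused = np.concatenate([img_feat, first_categorical])  # first target feature
        second = np.tanh(fused)                                # feature extraction
        gain = task_gains[task_index]                          # per-task regularization
        third = gain * (second - second.mean()) / (second.std() + 1e-8)
        results.append(third)                                  # third categorical feature
    return results

first_cat = np.array([0.5, -0.5])                    # shared first categorical feature
feats = [np.array([1.0, 2.0]), np.array([-1.0, 0.0])]   # M = 2 first image features
out = process(first_cat, feats, task_gains=[1.0, 0.5])
print(len(out), out[0].shape)   # 2 (4,)
```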

Real-time Map and Prediction Diagnostics

Detecting prediction errors includes detecting a road user; determining respective predicted data for the road user; storing, in a data structure, the respective predicted data; storing, in the data structure, actual data of the road user; obtaining an average prediction displacement error using at least one of the actual data and at least two corresponding respective predicted data; and determining a prediction accuracy based on the average prediction displacement error. Detecting map errors includes detecting a road user; storing, in a data structure, actual data of the road user; storing, in the data structure, map data corresponding to the actual data; obtaining an average map displacement error based on a comparison of at least some of the actual data and corresponding at least some map data; and determining a map accuracy based on the average map displacement error.
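
The average prediction displacement error is concrete enough to sketch directly: the mean Euclidean distance between a road user's predicted positions and its actual positions at the same timestamps, as stored in the data structure:

```python
import math

def average_displacement_error(predicted, actual):
    """Mean Euclidean distance between predicted and actual (x, y) positions."""
    assert len(predicted) == len(actual) and predicted
    total = sum(math.dist(p, a) for p, a in zip(predicted, actual))
    return total / len(predicted)

predicted = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
actual    = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
ade = average_displacement_error(predicted, actual)
print(ade)   # (0 + 1 + 2) / 3 = 1.0
```

The average map displacement error follows the same pattern with map data in place of the predictions.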

AUTOMATED PRODUCT IDENTIFICATION WITHIN HOSTED AND STREAMED VIDEOS
20220382808 · 2022-12-01

Automated product identification within hosted and streamed videos is performed based on video content of a video received at an online video platform and text content associated with the video. First embeddings representative of one or more first candidate products are determined based on video content of the video, such as one or more frames selected from within the video. Second embeddings representative of one or more second candidate products are determined based on text content associated with the video, such as a title, description, or transcript of the video. A product candidate index is produced based on the second embeddings. A product identification representative of a product featured in the video is determined based on a comparison of the first embeddings against entries of the product candidate index, such as by a nearest neighbor search responsive to the comparison. An indication of the product identification is then output at the online video platform.
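
A sketch of the matching step: a frame-derived embedding (the "first embeddings") is compared against a candidate index built from text-derived embeddings (the "second embeddings") with an exact cosine nearest-neighbour search. The product names and vectors are made up for illustration:

```python
import numpy as np

def nearest_product(query, index):
    """Return the index entry with the highest cosine similarity to the query."""
    names = list(index)
    mat = np.stack([index[n] for n in names]).astype(float)
    mat /= np.linalg.norm(mat, axis=1, keepdims=True)   # unit-normalise entries
    q = np.asarray(query, dtype=float)
    q /= np.linalg.norm(q)
    return names[int(np.argmax(mat @ q))]

candidate_index = {                     # built from title/description/transcript
    "running shoe": [0.9, 0.1, 0.0],
    "coffee maker": [0.0, 0.8, 0.6],
}
frame_embedding = [0.85, 0.2, 0.05]     # derived from a selected video frame
match = nearest_product(frame_embedding, candidate_index)
print(match)   # running shoe
```

At the scale of a real platform this exact search would typically be replaced by an approximate nearest-neighbour structure over the same index.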

AUTOMATED ACCESSIBILITY ASSESSMENT AND RATING SYSTEM AND METHOD THEREOF
20220383527 · 2022-12-01

An automated system and method for assessing and rating accessibility are provided. A processor collects raw data corresponding to geographical objects obtained from sensors, the raw data including tagged and non-tagged data; operates on the raw data to extract features and reduce dimensionality of the raw data, thereby generating processed data having extracted features; generates accessibility data from the processed data; uses supervised machine learning techniques to develop models from the processed data; and implements the models and generates accessibility tags based on the extracted features. A database is configured to store geographical data related to the geographical objects and the accessibility tags corresponding to the geographical locations. An API is configured to access the database and provide a user interface for a user device to use an application to display the accessibility data and the accessibility tags on the user device customized to a disability.