G06V20/70

Systems and methods for utilizing images to determine the position and orientation of a vehicle

Described are systems and methods to utilize images to determine the position and/or orientation of a vehicle (e.g., an autonomous ground vehicle) operating in an unstructured environment (e.g., environments, such as sidewalks, that typically lack lane markings, road markings, etc.). The described systems and methods can determine the vehicle's position and orientation based on an alignment of annotated images captured during operation of the vehicle with a known annotated reference map. The translation and rotation applied to obtain alignment of the annotated images with the known annotated reference map can provide the position and the orientation of the vehicle.
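The alignment step described above can be illustrated with a minimal sketch: given matched 2-D landmark coordinates from the annotated image and the reference map, the rigid transform (rotation plus translation) that best aligns them can be recovered in closed form with the Kabsch/Procrustes method. The function name and the point-correspondence input are assumptions for illustration, not the patent's actual procedure.

```python
import numpy as np

def estimate_pose_2d(map_pts, obs_pts):
    """Estimate the 2-D rotation R and translation t that align observed
    annotation points onto matched reference-map points (Kabsch method).

    map_pts, obs_pts: (N, 2) arrays of matched landmark coordinates.
    Returns (R, t) such that map_pts ≈ obs_pts @ R.T + t.
    """
    mu_m, mu_o = map_pts.mean(axis=0), obs_pts.mean(axis=0)
    H = (obs_pts - mu_o).T @ (map_pts - mu_m)   # 2x2 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_m - R @ mu_o                         # translation after rotating
    return R, t
```

The recovered (R, t) pair is exactly the "translation and rotation applied to obtain alignment" that the abstract says yields the vehicle's position and orientation.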

FACE IMAGE PROCESSING METHOD AND APPARATUS, FACE IMAGE DISPLAY METHOD AND APPARATUS, AND DEVICE

A face image processing method and apparatus, a face image display method and apparatus, and a device are provided, belonging to the technical field of image processing. The method includes: acquiring a first face image of a person; invoking an age change model to predict a texture difference map of the first face image at a specified age, the texture difference map being used for reflecting a texture difference between a face texture in the first face image and a face texture of a second face image of the person at the specified age; and performing image processing on the first face image based on the texture difference map to obtain the second face image.
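The final step of the pipeline above, obtaining the second face image from the first image and the texture difference map, can be sketched as a signed per-pixel addition followed by clipping. The function name is hypothetical, and the difference map here is just an array of signed offsets; in the patent it would be predicted by the age change model.

```python
import numpy as np

def apply_texture_difference(face, diff_map):
    """Add a signed texture difference map to a uint8 face image and clip
    the result back to the valid [0, 255] pixel range."""
    out = face.astype(np.int16) + diff_map.astype(np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)
```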

LEARNING DATA GENERATION DEVICE AND DEFECT IDENTIFICATION SYSTEM
20230039064 · 2023-02-09

A learning data generation device capable of generating learning data suitable for training an identification model. The learning data generation device has a function of cutting out part of first image data as second image data, a function of generating a two-dimensional graphic corresponding to the area of the second image data and representing a pseudo defect, a function of generating third image data by combining the second image data and the two-dimensional graphic, and a function of assigning a label corresponding to the two-dimensional graphic to the third image data. By using the third image data to train the identification model, a highly accurate identification model can be generated.
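The four functions described above can be sketched with numpy alone: a square crop stands in for the second image data, and a filled disc stands in for the two-dimensional pseudo-defect graphic. All shapes, the darkening rule, and the label string are assumptions for illustration.

```python
import numpy as np

def make_training_sample(first_image, top, left, size, rng):
    """Sketch of the described pipeline: crop, draw a pseudo defect,
    composite, and label. Returns (third_image_data, label)."""
    # 1) cut out part of the first image data as second image data
    patch = first_image[top:top + size, left:left + size].copy()
    # 2) generate a 2-D graphic (a disc) sized to the patch area
    cy, cx = rng.integers(size // 4, 3 * size // 4, size=2)
    radius = int(rng.integers(2, size // 4))
    yy, xx = np.ogrid[:size, :size]
    disc = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    # 3) combine the patch and the graphic into third image data
    third = patch.copy()
    third[disc] = 0            # darken pixels to imitate a defect
    # 4) assign a label corresponding to the graphic
    return third, "defect"
```

A sample produced this way carries both the synthetic defect pixels and the matching label, which is what makes it usable for supervised training of the identification model.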

Scene change method and system combining instance segmentation and cycle generative adversarial networks

A scene change method and system combining instance segmentation and cycle generative adversarial networks are provided. The method includes: processing a video of a target scene and inputting the video into an instance segmentation network to obtain segmented scene components, that is, mask cut images of the target scene; processing targets in the mask cut images of the target scene with cycle generative adversarial networks, according to the required temporal attributes, to generate style-migrated data; and compositing the style-migrated targets whose spatial attributes are not fixed into a style-migrated static scene along a specific spatial trajectory to achieve the scene change effect.
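The "mask cut image" step above can be sketched as simple boolean masking: keep only the pixels of one segmented instance and zero out everything else. The function name is an assumption; the mask itself would come from the instance segmentation network.

```python
import numpy as np

def mask_cut(frame, mask):
    """Produce a mask cut image: retain the pixels of one instance
    (where the boolean (H, W) mask is True) and zero the rest."""
    cut = np.zeros_like(frame)
    cut[mask] = frame[mask]     # boolean indexing copies only masked pixels
    return cut
```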

Image tagging based upon cross domain context

A method described herein includes receiving a digital image, wherein the digital image includes a first element that corresponds to a first domain and a second element that corresponds to a second domain. The method also includes automatically assigning a label to the first element in the digital image based at least in part upon a computed probability that the label corresponds to the first element, wherein the probability is computed through utilization of a first model that is configured to infer labels for elements in the first domain and a second model that is configured to infer labels for elements in the second domain. The first model receives data that identifies learned relationships between elements in the first domain and elements in the second domain, and the probability is computed by the first model based at least in part upon the learned relationships.
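The core idea above, that one model's label scores are informed by another domain's scores through learned relationships, can be sketched as a toy reweighting. The multiplicative combination rule, the matrix form of the learned relationships, and all names are assumptions for illustration, not the patent's actual math.

```python
import numpy as np

def label_probability(p_first, p_second, relation):
    """Toy cross-domain scoring sketch.

    p_first:  (L1,) label scores from the first-domain model
    p_second: (L2,) label scores from the second-domain model
    relation: (L1, L2) learned relationship strengths between label sets

    The second domain's evidence is propagated through the relationship
    matrix and used to reweight the first model's scores, which are then
    renormalized into a probability distribution.
    """
    prior = relation @ p_second          # evidence from the other domain
    scores = p_first * (1.0 + prior)     # reweight first-domain scores
    return scores / scores.sum()
```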

Ambiguous lane detection event miner
11551459 · 2023-01-10

A computer system obtains a plurality of road images captured by one or more cameras attached to one or more vehicles. The one or more vehicles execute a model that facilitates driving of the one or more vehicles. For each road image of the plurality of road images, the computer system determines, in the road image, a fraction of pixels having an ambiguous lane marker classification. Based on the fraction of pixels, the computer system determines whether the road image is an ambiguous image for lane marker classification. In accordance with a determination that the road image is an ambiguous image for lane marker classification, the computer system enables labeling of the image and adds the labeled image into a corpus of training images for retraining the model.
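The mining test described above reduces to computing the fraction of pixels with an ambiguous classification and comparing it with a threshold. The class id and the threshold value below are assumptions; the patent does not specify them.

```python
import numpy as np

AMBIGUOUS = 2        # hypothetical class id for "ambiguous lane marker"
THRESHOLD = 0.05     # assumed fraction above which an image is mined

def is_ambiguous_image(class_map, threshold=THRESHOLD):
    """Return True when the fraction of pixels whose per-pixel lane-marker
    classification is ambiguous exceeds the threshold, flagging the image
    for labeling and addition to the retraining corpus."""
    fraction = np.mean(class_map == AMBIGUOUS)
    return bool(fraction > threshold)
```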

METHOD AND DEVICE FOR OBTAINING SIMILAR FACE IMAGES AND FACE IMAGE INFORMATION

The present invention provides a method and device for acquiring a similar face image and for acquiring information about a face image. It relates to the field of Internet technology and aims to provide the user with a similar face image, i.e., one containing a similar-looking person, when a similar image is requested. The method comprises: acquiring a face image specified by a user; performing face recognition on the face image to identify a similar face image among face images that have already been collected; and displaying the similar face image to the user.
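The retrieval step can be sketched as a nearest-neighbor search over face embeddings using cosine similarity. Producing the embeddings from pixels (the face-recognition model itself) is outside this sketch, and the function name is an assumption.

```python
import numpy as np

def most_similar_face(query_embedding, gallery_embeddings):
    """Return the index of the collected face embedding most similar to
    the query, by cosine similarity over unit-normalized vectors."""
    q = query_embedding / np.linalg.norm(query_embedding)
    g = gallery_embeddings / np.linalg.norm(
        gallery_embeddings, axis=1, keepdims=True)
    return int(np.argmax(g @ q))   # highest cosine similarity wins
```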