Patent classifications
G06V40/171
Machine Learning Architecture for Imaging Protocol Detector
Systems and methods disclosed herein use a first machine learning architecture and a second machine learning architecture. The first machine learning architecture executes on a first processor, receives a first image representing a mouth of a user, determines user feedback for outputting to the user based on a first machine learning model, and outputs the user feedback for capturing a second image representing the mouth of the user. The second machine learning architecture executes on a second processor, receives the first image and the second image, and generates a 3D model of at least a portion of a dental arch of the user based on the first image and the second image, where the 3D model is generated based on a second machine learning model of the second machine learning architecture.
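The two-architecture split above can be sketched in Python. This is an illustrative toy, not the disclosed implementation: the function names and the brightness-based quality proxy are assumptions standing in for the two machine learning models.

```python
# Toy stand-ins for the two claimed architectures. The real first and
# second models are learned; here simple heuristics mark where each sits.

def on_device_feedback(image):
    """First architecture (first processor): assess a capture and return
    user feedback for capturing the next image."""
    # Assumption: mean brightness of 8-bit pixels as a crude quality proxy.
    quality = sum(image) / (255.0 * len(image))
    if quality < 0.4:
        return "retake: move closer and improve lighting"
    return "capture accepted"

def server_reconstruct(first_image, second_image):
    """Second architecture (second processor): fuse the two captures into
    a (toy) 3D model of part of the dental arch."""
    # Assumption: pair corresponding pixels as pseudo-depth points.
    return [(i, a, b) for i, (a, b) in enumerate(zip(first_image, second_image))]
```

The split mirrors the claim: lightweight feedback runs where the image is captured, while the heavier reconstruction runs on a separate processor.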
METHOD AND APPARATUS FOR PROCESSING IMAGE SIGNAL, ELECTRONIC DEVICE, AND COMPUTER-READABLE STORAGE MEDIUM
A method and apparatus for processing an image signal, an electronic device, and a computer-readable storage medium. The method includes: obtaining a digital image signal of a target image, the target image including object imaging corresponding to an object; identifying, from the digital image signal, a first area of the object imaging in the target image; removing the object imaging from the target image based on the first area to obtain a background image corresponding to an original background; performing image inpainting processing on the first area of the background image to obtain a filled image, the filled image including the original background and a perspective background connected to the original background; identifying a second area in the object imaging, and removing an imaging portion corresponding to the second area from the object imaging to obtain an adjusted object imaging; and superimposing the adjusted object imaging on the first area.
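The remove-inpaint-adjust-superimpose pipeline can be sketched on a 1-D "image". All function names are illustrative assumptions, and the mean-fill inpainting is a toy stand-in for the learned inpainting the method would actually use.

```python
def remove_object(image, first_area):
    """Cut the object imaging out, leaving holes (None) in the background."""
    return [None if m else p for p, m in zip(image, first_area)]

def inpaint(background):
    """Fill the holes from the surrounding original background (mean fill)."""
    known = [p for p in background if p is not None]
    fill = sum(known) / len(known)
    return [fill if p is None else p for p in background]

def adjust_object(obj, second_area):
    """Drop the imaging portion marked by the second area."""
    return [p for p, m in zip(obj, second_area) if not m]

def superimpose(filled, adjusted, positions):
    """Paste the adjusted object imaging back over (part of) the first area."""
    out = list(filled)
    for p, v in zip(positions, adjusted):
        out[p] = v
    return out
```

Because the hole is fully inpainted before the adjusted object is pasted back, the regions the trimmed object no longer covers show the reconstructed "perspective background" rather than a gap.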
IMAGE GAZE CORRECTION METHOD, APPARATUS, ELECTRONIC DEVICE, COMPUTER-READABLE STORAGE MEDIUM, AND COMPUTER PROGRAM PRODUCT
An image gaze correction method, apparatus, electronic device, computer-readable storage medium, and computer program product related to the field of artificial intelligence technologies are provided. The image gaze correction method includes: acquiring an eye image from an image; performing feature extraction processing on the eye image to obtain feature information of the eye image; performing, based on the feature information and a target gaze direction, gaze correction processing on the eye image to obtain an initially corrected eye image and an eye contour mask; performing, by using the eye contour mask, adjustment processing on the initially corrected eye image to obtain a corrected eye image; and generating a gaze corrected image based on the corrected eye image.
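The final adjustment step, using the eye contour mask to reconcile the initially corrected eye image with the original, can be sketched as a per-pixel blend. The soft mask values in [0, 1] are an assumption for this sketch.

```python
def blend_with_mask(original_eye, corrected_eye, contour_mask):
    """Keep corrected pixels inside the eye contour, original ones outside.

    contour_mask values are assumed in [0, 1]; 1 = inside the eye contour.
    """
    return [m * c + (1 - m) * o
            for o, c, m in zip(original_eye, corrected_eye, contour_mask)]
```

Restricting the correction to the masked region keeps the surrounding skin and eyelids untouched, which is why the claim generates the final gaze-corrected image from the blended eye image rather than from the raw correction output.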
SKINCARE AND FACIAL SCANNING SYSTEMS AND METHODS FOR PROVIDING SKINCARE DEVICE CONNECTIVITY AND SETTING CONFIGURATIONS
Skincare and facial scanning systems and methods are disclosed herein for providing skincare device connectivity and setting configurations. A skincare device comprising a sensor is configured to scan, and to deposit a cosmetic ink composition onto, the skin of a user. A skincare application (app), communicatively coupled to the skincare device, generates a user-specific electronic analysis based on use of the skincare device on skin of a face portion of the user, wherein at least a portion of the user-specific electronic analysis is configured for display on a graphical user interface. In some aspects, the skincare device comprises an identification certificate configured to uniquely identify the skincare device to the skincare app upon a connection between the skincare device and the skincare app. The connection may be a persistent connection maintaining connectivity between the skincare device and the skincare app for a plurality of uses of the skincare device.
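The certificate-based handshake and persistent connection can be sketched as follows. The class, the method names, and the use-counter model of "persistence" are all assumptions for illustration; the patent does not specify the handshake mechanics.

```python
# Hedged sketch: a trusted-certificate check plus a per-device session
# counter standing in for a persistent connection across multiple uses.

class SkincareApp:
    def __init__(self, trusted_certificates):
        self.trusted = set(trusted_certificates)
        self.sessions = {}          # certificate -> number of uses on this link

    def connect(self, certificate):
        """Accept only devices whose certificate uniquely identifies them."""
        if certificate not in self.trusted:
            return False
        # Persistent connection: the same link is reused across uses.
        self.sessions[certificate] = self.sessions.get(certificate, 0) + 1
        return True
```

A device presenting an unknown certificate is refused, while a known device accumulates uses over the same maintained connection.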
ASYMMETRIC FACIAL EXPRESSION RECOGNITION
The present disclosure describes techniques for facial expression recognition. A first loss function may be determined based on a first set of feature vectors associated with a first set of images depicting facial expressions and a first set of labels indicative of the facial expressions. A second loss function may be determined based on a second set of feature vectors associated with a second set of images depicting asymmetric facial expressions and a second set of labels indicative of the asymmetric facial expressions. The first loss function and the second loss function may be used to determine a maximum loss function. The maximum loss function may be applied during training of a model. The trained model may be configured to predict at least one asymmetric facial expression in a subsequently received image.
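The maximum-loss construction can be sketched directly: compute the loss on the symmetric-expression batch and on the asymmetric-expression batch, then train against the larger of the two. The (probabilities, label) batch format and the use of cross-entropy are assumptions for this sketch.

```python
import math

def cross_entropy(probs, label):
    """Standard cross-entropy for one softmax output."""
    return -math.log(probs[label])

def maximum_loss(symmetric_batch, asymmetric_batch):
    """Return the larger of the two batch losses, as in the disclosure.

    Each batch is a list of (softmax probabilities, true label) pairs.
    """
    l1 = sum(cross_entropy(p, y) for p, y in symmetric_batch) / len(symmetric_batch)
    l2 = sum(cross_entropy(p, y) for p, y in asymmetric_batch) / len(asymmetric_batch)
    return max(l1, l2)
```

Optimizing the maximum of the two losses prevents the model from trading away accuracy on the harder asymmetric-expression set to improve the easier symmetric one.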
Method and apparatus for waking up device, electronic device, and storage medium
A method and apparatus for waking up a device, an electronic device, and a storage medium are provided, which are related to fields of image processing and deep learning. The method includes: acquiring an environment image of a surrounding environment of a target device in real time, and recognizing a face region of a user in the environment image; acquiring a plurality of facial landmarks in the face region, and acquiring a left eye image and a right eye image according to the facial landmarks; acquiring a left eye sight classification result and a right eye sight classification result according to the left eye image and the right eye image; and waking up the target device in a case of determining that the user is looking at the target device according to the left eye sight classification result and the right eye sight classification result.
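The landmark-based eye cropping and the two-eye wake decision can be sketched as below. The bounding-box margin and the gaze class labels are illustrative assumptions, not the disclosed classifier.

```python
def eye_box(landmarks, eye_indices, margin=2):
    """Toy bounding box around one eye's facial landmarks."""
    xs = [landmarks[i][0] for i in eye_indices]
    ys = [landmarks[i][1] for i in eye_indices]
    return (min(xs) - margin, min(ys) - margin, max(xs) + margin, max(ys) + margin)

def should_wake(left_eye_class, right_eye_class, target="at_device"):
    """Wake only when both per-eye gaze classifiers say the user is
    looking at the target device."""
    return left_eye_class == target and right_eye_class == target
```

Requiring agreement between the left-eye and right-eye classification results reduces false wake-ups from glances that only one eye crop supports.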
Systems and methods for reconstruction and rendering of viewpoint-adaptive three-dimensional (3D) personas
An exemplary method includes maintaining a receiver-side mesh-vertices list, receiving duplicative-vertex information from a sender, and responsively reducing the receiver-side mesh-vertices list in accordance with the received duplicative-vertex information, and rendering, using the reduced receiver-side mesh-vertices list, viewpoint-adaptive three-dimensional (3D) personas of a subject at least in part by weighting video pixel colors from different video-camera vantage points of video cameras that capture video streams of the subject, the weighting being performed according to a respective geometric relationship of each video-camera vantage point to a user-selected viewpoint.
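The pixel-color weighting by vantage point can be sketched for one surface point. Cosine weighting between each camera direction and the user-selected viewpoint is an assumption; the patent specifies only that the weighting follows a geometric relationship.

```python
import math

def viewpoint_weighted_color(colors, camera_dirs, view_dir):
    """Blend per-camera pixel colors for one point, favoring cameras whose
    vantage direction aligns with the user-selected viewpoint."""
    def alignment(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return max(dot / (na * nb), 0.0)   # ignore cameras facing away

    weights = [alignment(d, view_dir) for d in camera_dirs]
    total = sum(weights)
    return sum(c * w for c, w in zip(colors, weights)) / total
```

When the viewpoint coincides with one camera's vantage, that camera's color dominates; between vantages the colors blend, which is what makes the rendered persona viewpoint-adaptive.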
Method for simulating the rendering of a make-up product on a body area
A method for simulating a rendering of a makeup product on a body area including the steps of: acquiring an image of the body area of a subject without makeup, determining first color parameters of the pixels of the image corresponding to the body area without makeup, identifying the pixels of the body area without makeup exhibiting the highest brightness or red component value, and determining second color parameters of the pixels of the image corresponding to the body area, wherein the second color parameters render the body area as made up by the makeup product.
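The brightest-pixel identification and the first-to-second color-parameter step can be sketched as below. Ranking by mean RGB and alpha-blending toward the product color are assumptions for this sketch, not the patent's color model.

```python
def brightest_pixel_indices(pixels, top_k):
    """Indices of the pixels with the highest brightness (mean RGB here;
    the method alternatively uses the red-component value)."""
    return sorted(range(len(pixels)),
                  key=lambda i: sum(pixels[i]), reverse=True)[:top_k]

def apply_makeup(pixel, product_color, alpha=0.5):
    """Second color parameters: blend a bare-skin pixel toward the
    makeup product color (alpha blending is an illustrative choice)."""
    return tuple(round((1 - alpha) * p + alpha * c)
                 for p, c in zip(pixel, product_color))
```

The brightest (or reddest) bare-skin pixels matter because they drive how highlights survive the simulated making up of the area.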
System and method for determining probability that a vehicle driver is associated with a driver identifier
A method for driver identification including recording a first image of a vehicle driver; extracting a set of values for a set of facial features of the vehicle driver from the first image; determining a filtering parameter; selecting a cluster of driver identifiers from a set of clusters, based on the filtering parameter; computing a probability that the set of values is associated with each driver identifier of the cluster; determining, at the vehicle sensor system, driving characterization data for the driving session; and in response to the computed probability exceeding a first threshold probability: determining that the set of values corresponds to one driver identifier within the selected cluster, and associating the driving characterization data with the one driver identifier.
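The cluster-then-score flow can be sketched as follows. The cluster layout, the inverse-distance "probability", and all names are illustrative assumptions standing in for the claimed probability computation.

```python
def identify_driver(face_values, clusters, filtering_parameter, threshold):
    """Select one cluster via the filtering parameter, score every driver
    identifier in it, and return a match only above the threshold.

    clusters maps filtering parameter -> {driver_id: stored feature values}.
    """
    cluster = clusters[filtering_parameter]

    def probability(a, b):
        # Toy score: inverse Euclidean distance mapped into (0, 1].
        dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
        return 1.0 / (1.0 + dist)

    scores = {d: probability(face_values, v) for d, v in cluster.items()}
    best_id, best_p = max(scores.items(), key=lambda kv: kv[1])
    return best_id if best_p > threshold else None
```

Filtering to one cluster before scoring keeps the per-image comparison set small, and the threshold ensures driving characterization data is only associated with a driver identifier when the match is confident.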
Video analysis for obtaining optical properties of a face
Disclosed is a system and method for obtaining optical properties of skin on a human face through face video analysis. Video of the face is captured, landmarks on the face are detected and tracked, regions-of-interest are defined and tracked using the landmarks, initial measurements/optical properties are obtained, the time-based video is transformed into an angular domain, and additional measurements/optical properties are obtained. Such optical properties can be measured using video in real-time or video that has been pre-recorded.
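The time-to-angular-domain transform can be sketched as re-indexing per-frame measurements by head angle. Averaging within angular bins is an assumption; the disclosure states only that the time-based video is transformed into an angular domain.

```python
def to_angular_domain(measurements, head_angles, bin_edges):
    """Re-index per-frame skin measurements by head angle instead of time,
    averaging frames that fall into the same angular bin."""
    bins = {}
    for m, a in zip(measurements, head_angles):
        for lo, hi in zip(bin_edges, bin_edges[1:]):
            if lo <= a < hi:
                bins.setdefault((lo, hi), []).append(m)
    return {edge: sum(v) / len(v) for edge, v in bins.items()}
```

Indexing by angle rather than time lets angle-dependent optical properties (e.g. specular response) be read off directly, regardless of how fast the head moved during capture.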