Patent classifications
G06V40/162
SELF-SERVICE CHECKOUT TERMINAL, METHOD AND CONTROL DEVICE
In accordance with various embodiments, a self-service checkout terminal can comprise: a capture device having at least one sensor, wherein the capture device is configured: to capture first biometric data with reference to a person at the self-service checkout terminal; to capture second biometric data with reference to an official identity certificate if the identity certificate is presented to the capture device; to capture a product identifier of a product if the product is presented to the capture device; a control device configured for: firstly determining a sales restriction to which the product is subject, on the basis of the product identifier; comparing the first biometric data with the second biometric data; secondly determining whether the person satisfies a criterion of the sales restriction on the basis of a result of the comparing and on the basis of the second biometric data.
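As an illustrative sketch of the control-device logic described above: the embedding comparison, the 0.8 match threshold, the `SALES_RESTRICTIONS` table, and the product identifier are all assumptions for illustration, not taken from the patent text.

```python
import numpy as np

# Hypothetical mapping from product identifier to sales restriction.
SALES_RESTRICTIONS = {"0042": {"min_age": 18}}

def cosine_similarity(a, b):
    """Similarity between two biometric feature vectors."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def may_sell(product_id, face_embedding, id_embedding, id_age,
             match_threshold=0.8):
    """True if the product is unrestricted, or if the person at the
    terminal matches the presented identity certificate and the
    certificate satisfies the restriction (here: a minimum age)."""
    restriction = SALES_RESTRICTIONS.get(product_id)
    if restriction is None:
        return True  # product is not subject to a sales restriction
    same_person = cosine_similarity(face_embedding, id_embedding) >= match_threshold
    return same_person and id_age >= restriction["min_age"]
```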
METHOD, APPARATUS, DEVICE, AND STORAGE MEDIUM FOR TRAINING IMAGE PROCESSING MODEL
An image processing model can process a face image with an occluded face more accurately while reducing computation, improving the operation speed of a processing device, and reducing training time and costs. A predicted recognition result of a sample face image and occlusion indication information based on an image processing model is obtained. The occlusion indication information indicates an image feature of a face occlusion area of the sample face image. A recognition error based on the predicted recognition result and a target recognition result is also obtained. A classification error is obtained based on the occlusion indication information and a target occlusion pattern corresponding to the sample face image. An occlusion pattern of the sample face image indicates a position and a size of the face occlusion area. A model parameter of the image processing model is updated based on the recognition error and the classification error.
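A minimal sketch of the combined training objective: using cross-entropy for both the recognition error and the occlusion-pattern classification error, and weighting them with a factor `alpha`, are illustrative assumptions; the patent only states that the parameter update is based on both errors.

```python
import numpy as np

def cross_entropy(predicted_probs, target_index):
    """Negative log-probability of the target class."""
    return -float(np.log(predicted_probs[target_index] + 1e-12))

def combined_loss(recognition_probs, target_identity,
                  occlusion_probs, target_pattern, alpha=0.5):
    """Recognition error plus weighted occlusion-pattern
    classification error; a model parameter would be updated
    by the gradient of this quantity."""
    recognition_error = cross_entropy(recognition_probs, target_identity)
    classification_error = cross_entropy(occlusion_probs, target_pattern)
    return recognition_error + alpha * classification_error
```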
Method and apparatus for determining image quality
Embodiments of the present disclosure disclose a method and apparatus for determining image quality. The method comprises: acquiring a to-be-recognized image and facial region information used for indicating a facial region in the to-be-recognized image; extracting a face image from the to-be-recognized image on the basis of the facial region information; inputting the face image into a pre-trained convolutional neural network to obtain probabilities of each pixel comprised in the face image belonging to a category indicated by each category identifier in a preset category identifier set; inputting the face image into a pre-trained key face point positioning model to obtain coordinates of each key face point comprised in the face image; determining a probability of the face image being obscured on the basis of the probabilities and the coordinates; and determining whether the quality of the face image is up to standard on the basis of the probability.
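An illustrative sketch of the final scoring step: sampling the CNN's per-pixel category probabilities at the key face point coordinates and averaging an "occluded" category, plus the 0.5 threshold, are assumptions standing in for the patent's unspecified combination rule.

```python
import numpy as np

def occlusion_probability(pixel_probs, keypoints, occluded_category=1):
    """pixel_probs: (H, W, C) per-pixel category probabilities from
    the CNN; keypoints: iterable of (row, col) key face point
    coordinates from the positioning model."""
    return float(np.mean([pixel_probs[r, c, occluded_category]
                          for r, c in keypoints]))

def quality_up_to_standard(pixel_probs, keypoints, threshold=0.5):
    """Face image quality is up to standard when the probability of
    the face being obscured stays below the threshold."""
    return occlusion_probability(pixel_probs, keypoints) < threshold
```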
NEURAL NETWORK ARCHITECTURE FOR FACE TRACKING
The present disclosure describes techniques for face tracking. The techniques comprise receiving landmark data associated with a plurality of images indicative of at least one facial part. Representative images corresponding to the plurality of images may be generated based on the landmark data. Each representative image may depict a plurality of segments, and each segment may correspond to a region of the at least one facial part. The plurality of images and corresponding representative images may be input into a neural network to train the neural network to predict a feature associated with a subsequently received image comprising a face. An animation associated with a facial expression may be controlled based on output from the trained neural network.
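A sketch of generating a representative image from landmark data: painting each facial part's landmark pixels with a distinct segment label is one plausible reading; the part names and integer label values are illustrative assumptions.

```python
import numpy as np

def representative_image(landmarks_by_part, height, width):
    """landmarks_by_part: {part_name: [(row, col), ...]}.
    Returns a label map in which each segment (value 1, 2, ...)
    corresponds to a region of a facial part; 0 is background."""
    image = np.zeros((height, width), dtype=np.uint8)
    for label, (part, points) in enumerate(
            sorted(landmarks_by_part.items()), start=1):
        for r, c in points:
            image[r, c] = label
    return image
```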
Object reconstruction with texture parsing
Techniques are provided for generating one or more three-dimensional (3D) models. In one example, an image of an object (e.g., a face or other object) is obtained, and a 3D model of the object in the image is generated. The 3D model includes geometry information. Color information for the 3D model is determined, and a fitted 3D model of the object is generated based on a modification of the geometry information and the color information for the 3D model. In some cases, the color information (e.g., determination and/or modification of the color information) and the fitted 3D model can be based on one or more vertex-level fitting processes. A refined 3D model of the object is generated based on the fitted 3D model and depth information associated with the fitted 3D model. In some cases, the refined 3D model can be based on a pixel-level refinement or fitting process.
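A high-level sketch of the two later pipeline stages. Representing fitting as adding a geometry offset, and refinement as blending each vertex's z coordinate toward observed depth, are illustrative stand-ins for the vertex-level and pixel-level fitting processes the patent leaves unspecified.

```python
import numpy as np

def fit_model(vertices, colors, geometry_offset):
    """Vertex-level fitting: modify the geometry information,
    carrying the per-vertex color information along."""
    return vertices + geometry_offset, colors

def refine_with_depth(vertices, observed_depth, weight=0.5):
    """Refine the fitted model using associated depth information:
    blend each vertex's z coordinate toward the observed depth.
    `vertices` is an (N, 3) NumPy array."""
    refined = vertices.copy()
    refined[:, 2] = (1 - weight) * vertices[:, 2] + weight * observed_depth
    return refined
```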
FACE DETECTION DEVICE, METHOD AND FACE UNLOCK SYSTEM
A face detection device based on a convolutional neural network is provided. The device includes a feature extractor assembly and a detector assembly. The feature extractor assembly includes a first feature extractor, a second feature extractor and a third feature extractor. The first feature extractor is used to apply a first set of convolution kernels on an input grayscale image and thereby generate a set of basic feature maps. The second feature extractor is used to apply a second set of convolution kernels on the set of basic feature maps and thereby generate more than one set of intermediate feature maps, which are concatenated. The third feature extractor is used to perform at least one convolution operation on the concatenated layer. The detector assembly includes at least one detector whose input is derived from one of the second feature extractor and the third feature extractor.
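A structural sketch of the extractor assembly: a plain valid-mode 2-D convolution stands in for each "set of convolution kernels", and the kernel counts are illustrative assumptions.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive valid-mode 2-D convolution (no padding, stride 1)."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def extractor_assembly(gray, first_kernels, second_kernels, third_kernel):
    # First extractor: basic feature maps from the grayscale input.
    basic = [conv2d_valid(gray, k) for k in first_kernels]
    # Second extractor: intermediate feature maps, then concatenated.
    intermediate = [conv2d_valid(b, k) for b in basic
                    for k in second_kernels]
    concatenated = np.stack(intermediate)
    # Third extractor: one more convolution over the concatenated layer.
    return np.stack([conv2d_valid(m, third_kernel) for m in concatenated])
```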
METHOD FOR FACE LIVENESS DETECTION, ELECTRONIC DEVICE AND STORAGE MEDIUM
A method, an electronic device, and a storage medium are disclosed. The method includes: acquiring a color sequence verification code; controlling a screen of an electronic device to sequentially generate colors based on a sequence of the colors included in the color sequence verification code; controlling a camera of the electronic device to collect an image of a face of a target object in each of the colors to acquire an image sequence; performing a face liveness verification on the target object to acquire a liveness score value; acquiring difference images corresponding respectively to the colors of the images of the image sequence based on the image sequence; performing a color verification based on the color sequence verification code and the difference images; and determining a face liveness detection result of the target object based on the liveness score value and a result of the color verification.
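A sketch of the difference-image step: subtracting consecutive frames captured under successive screen colors, then checking which color channel changed most, is one plausible form of the color verification; the dominant-channel criterion is an assumption.

```python
import numpy as np

def difference_images(frames):
    """frames: list of (H, W, 3) uint8 images, one per screen color.
    Returns one signed difference image per consecutive pair."""
    return [frames[i + 1].astype(int) - frames[i].astype(int)
            for i in range(len(frames) - 1)]

def dominant_channel(diff):
    """Index of the color channel with the largest mean absolute
    change; a live face lit by the screen should reflect the
    color that was displayed."""
    return int(np.argmax(np.abs(diff).mean(axis=(0, 1))))
```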
Skin tone assisted digital image color matching
In implementations of skin tone assisted digital image color matching, a device implements a color editing system, which includes a facial detection module to detect faces in an input image and in a reference image, and includes a skin tone model to determine a skin tone value reflective of a skin tone of each of the faces. A color matching module can be implemented to group the faces into one or more face groups based on the skin tone value of each of the faces, match a face group pair as an input image face group paired with a reference image face group, and generate a modified image from the input image based on color features of the reference image, the color features including face skin tones of the respective faces in the face group pair as part of the color features applied to modify the input image.
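An illustrative sketch of grouping faces by skin tone value: representing each tone as a scalar and joining a face to the first group whose mean tone lies within a tolerance are assumptions; the patent does not specify the grouping rule.

```python
def group_by_skin_tone(tones, tolerance=0.1):
    """tones: one skin tone value per detected face.
    Returns groups as lists of (face_index, tone) pairs; faces
    whose tone is within `tolerance` of a group's mean join it."""
    groups = []
    for i, tone in enumerate(tones):
        for group in groups:
            mean = sum(t for _, t in group) / len(group)
            if abs(tone - mean) <= tolerance:
                group.append((i, tone))
                break
        else:
            groups.append([(i, tone)])  # start a new face group
    return groups
```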
DEPTH PROCESSOR
A depth processor including a region of interest determination circuit and a depth decoder is provided. The region of interest determination circuit is configured to determine a size of a region of interest of an input image. The depth decoder is coupled to the region of interest determination circuit and configured to generate a depth map of the input image according to a filter size. The filter size is set according to the size of the region of interest of the input image.
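A sketch of the coupling between the region-of-interest size and the depth decoder's filter size: the specific area brackets and the 3/5/7 filter sizes are illustrative assumptions; the patent states only that the filter size is set according to the region-of-interest size.

```python
def filter_size_for_roi(roi_width, roi_height):
    """Pick the depth decoder's filter size from the size of the
    region of interest of the input image."""
    area = roi_width * roi_height
    if area < 64 * 64:
        return 3  # small region of interest: small filter
    if area < 256 * 256:
        return 5
    return 7      # large region of interest: large filter
```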
System and method for face recognition
A system and a method for face recognition are disclosed. The system includes an image capturing subsystem configured to capture one or more images of faces. The system also includes a feature extraction subsystem configured to extract one or more features from the one or more images of faces. The system also includes a feature comparison subsystem configured to compare the one or more extracted features in a local database. The system also includes a feature transmission subsystem configured to transmit the one or more images and one or more extracted features to a remote server. The feature transmission subsystem is also configured to compare the one or more transmitted features to the one or more features pre-stored in the remote server. The system also includes a feature regeneration subsystem configured to regenerate the one or more matched features in the local database from the remote server.
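A sketch of the local-first lookup with remote fallback and regeneration: the cosine-similarity match, the 0.9 threshold, and the dict-based databases are assumptions; "regeneration" is read here as copying a matched remote feature back into the local database.

```python
import numpy as np

def match(feature, database, threshold=0.9):
    """Compare an extracted feature against stored features and
    return the best-matching identity, or None below threshold."""
    best_id, best_score = None, -1.0
    for face_id, stored in database.items():
        score = float(np.dot(feature, stored) /
                      (np.linalg.norm(feature) * np.linalg.norm(stored)))
        if score > best_score:
            best_id, best_score = face_id, score
    return best_id if best_score >= threshold else None

def recognize(feature, local_db, remote_db):
    """Local comparison first; on a miss, compare against the remote
    server and regenerate the matched feature into the local database."""
    face_id = match(feature, local_db)
    if face_id is None:
        face_id = match(feature, remote_db)
        if face_id is not None:
            local_db[face_id] = remote_db[face_id]  # regeneration step
    return face_id
```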