Patent classifications
G06V40/162
COMPUTER-IMPLEMENTED DETECTION AND PROCESSING OF ORAL FEATURES
Described herein are computer-implemented methods for identifying and classifying one or more regions of interest in a facial region and augmenting an appearance of the regions of interest in an image. For example, a region of interest may include one or more of: a teeth region, a lip region, a mouth region, or a gum region. User-selected templates for teeth, gums, smile, etc. may be used to replace the analogous facial features in an input image provided by the user, for example from an image library or taken with an image sensor. The computer-implemented methods described herein may use one or more trained machine learning models and one or more algorithms to identify and classify regions of interest in an input image.
FACE PARSING METHOD AND RELATED DEVICES
A facial parsing method and apparatus, a facial parsing network training method and apparatus, an electronic device and a non-transitory computer-readable storage medium, relating to the field of artificial intelligence. The facial parsing method includes inputting a facial image into a pre-trained facial parsing neural network; extracting a semantic feature of the facial image by using a semantic perception sub-network; extracting a boundary feature of the facial image by using a boundary perception sub-network; and processing the cascaded semantic feature and boundary feature by using a fusion sub-network, to obtain a facial region to which each pixel in the facial image belongs. The method can improve the network's ability to resolve boundary pixels between different facial regions of a facial image, thereby improving the precision of facial parsing.
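The fusion step of this abstract can be illustrated with a minimal numpy sketch. This is not the patented network: the feature shapes, the linear per-pixel classifier standing in for the fusion sub-network, and the random inputs are all assumptions made for illustration.

```python
import numpy as np

def fuse_and_parse(semantic_feat, boundary_feat, fusion_weights):
    """Toy fusion step: concatenate (cascade) per-pixel semantic and
    boundary features, apply a linear classifier, and take the argmax
    to assign each pixel to a facial region."""
    # semantic_feat: (H, W, Cs), boundary_feat: (H, W, Cb)
    fused = np.concatenate([semantic_feat, boundary_feat], axis=-1)  # (H, W, Cs+Cb)
    logits = fused @ fusion_weights                                  # (H, W, num_regions)
    return logits.argmax(axis=-1)                                    # (H, W) region labels

# Tiny example: 2x2 image, 3 semantic + 1 boundary channel, 4 regions.
rng = np.random.default_rng(0)
sem = rng.normal(size=(2, 2, 3))
bnd = rng.normal(size=(2, 2, 1))
W = rng.normal(size=(4, 4))
labels = fuse_and_parse(sem, bnd, W)
print(labels.shape)
```

In the actual method the classifier would be a trained sub-network; the point here is only the per-pixel concatenate-then-classify structure.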
SYSTEM AND METHOD FOR DIGITAL MEASUREMENTS OF SUBJECTS
A method for performing digital measurements by obtaining a first video stream of a user at a first distance to a camera; using an element appearing in the first video stream to generate a transformation factor to convert pixel distance in the first video stream to actual physical distance in the real world; using the transformation factor to obtain a first digital measurement in the first video stream; obtaining a second video stream at a second distance, larger than the first distance; using the first digital measurement and an angular measurement to an item appearing in the second video stream to determine a measurement of the second distance.
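The geometry in this abstract can be sketched in a few lines. All specifics here are assumptions for illustration: an ID-1 card as the reference element, interpupillary distance as the first digital measurement, and a small-angle thin-lens model for recovering the second distance.

```python
import math

CARD_WIDTH_MM = 85.6  # assumed reference element: ISO/IEC 7810 ID-1 card width

def transformation_factor(ref_pixel_width, ref_physical_width=CARD_WIDTH_MM):
    """mm-per-pixel scale from a reference element in the first stream."""
    return ref_physical_width / ref_pixel_width

def physical_measurement(pixel_length, factor):
    """Convert a pixel distance in the first stream to a physical distance."""
    return pixel_length * factor

def second_distance(physical_length_mm, angular_size_rad):
    """Distance at which a feature of known physical length subtends the
    given angle (small-angle model for the second, farther stream)."""
    return physical_length_mm / (2.0 * math.tan(angular_size_rad / 2.0))

factor = transformation_factor(ref_pixel_width=428)        # card spans 428 px
ipd_mm = physical_measurement(310, factor)                 # pupils 310 px apart
dist_mm = second_distance(ipd_mm, angular_size_rad=0.031)  # IPD subtends ~1.8 deg
print(round(factor, 3), round(ipd_mm, 1), round(dist_mm))
```

The first measurement (here, interpupillary distance) bootstraps the second one: once its physical size is known, its angular size in the farther stream yields the camera-to-subject distance.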
COUNTERFEIT IMAGE DETECTION
A computer, including a processor and a memory, the memory including instructions to be executed by the processor to acquire a first image from a first camera by illuminating a first object with a first light and determine an object status as one of a real object or a counterfeit object by comparing a first measure of pixel values corresponding to the first object to a threshold.
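A minimal sketch of the threshold comparison described above, with assumed specifics: standard deviation of grayscale intensities as the "measure of pixel values" and an arbitrary threshold of 40, neither of which comes from the patent.

```python
import numpy as np

def classify_object(image, threshold=40.0):
    """Compare a simple measure of pixel values (here: standard deviation
    of grayscale intensities under active illumination) to a threshold.
    A flat, low-variance response is treated as counterfeit, e.g. a
    printed photo held up to the camera."""
    measure = float(np.std(image))
    return "real" if measure > threshold else "counterfeit"

rng = np.random.default_rng(1)
textured = rng.integers(0, 256, size=(64, 64)).astype(float)  # high-variance surface
flat = np.full((64, 64), 128.0)                               # uniform, print-like response
print(classify_object(textured), classify_object(flat))
```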
METHOD FOR CLASSIFICATION OF CHILD SEXUAL ABUSIVE MATERIALS (CSAM) IN AN ANIMATED GRAPHICS
There is provided a method of training a machine learning model, comprising: extracting faces from first images, creating an age training dataset comprising records each including a face and a ground truth label indicating whether the face is below a legal age, training an age component on the age training dataset for generating a first outcome indicative of a target face of the target image being below the legal age, creating a sexuality training dataset comprising second records each including a second image and ground truth label indicative of sexuality, training a sexuality component on the sexuality training dataset for generating a second outcome indicative of sexuality depicted in the target image, defining a combination component that receives an input of a combination of the first outcome and the second outcome, and generates a third outcome indicative of child sexual abusive materials (CSAM) depicted in the target image.
Detection of skin reflectance in biometric image capture
In examples, a relative skin reflectance of a captured image of a subject is determined. The determination selects from the captured image pixels of the subject's face and pixels in the background and normalizes luminance values of the skin pixels using the background pixels. The relative skin reflectance value is determined for the captured image, based on the normalized luminance values of the skin pixels. Optionally, the relative skin reflectance value is qualified, based on thresholds of skin reflectance values, as suitable for biometric use. Optionally, a non-qualifying captured image is flagged and, optionally, another image is acquired, or the non-qualifying image is processed further to transform it into a suitable image for biometric analysis.
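The normalize-and-qualify steps can be sketched as follows. The qualification thresholds (0.25 and 0.85) and the use of the background mean as the normalizer are assumptions for illustration, not values from the patent.

```python
import numpy as np

def relative_skin_reflectance(luminance, face_mask, background_mask,
                              low=0.25, high=0.85):
    """Normalize face-pixel luminance by the background-pixel mean and
    qualify the resulting relative reflectance against thresholds."""
    bg_mean = luminance[background_mask].mean()
    normalized = luminance[face_mask] / bg_mean
    value = float(normalized.mean())
    qualified = low <= value <= high      # suitable for biometric use?
    return value, qualified

# 2x2 toy image: top row is background, bottom row is face.
lum = np.array([[200.0, 200.0],
                [120.0, 100.0]])
face = np.array([[False, False], [True, True]])
bg = np.array([[True, True], [False, False]])
value, ok = relative_skin_reflectance(lum, face, bg)
print(round(value, 2), ok)
```

Normalizing by the background makes the value comparable across captures with different exposure or illumination levels, which is what lets fixed thresholds qualify an image.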
Color correction for video communications using display content color information
Video presence systems are described that detect an area of interest (e.g., a facial region) within captured image data and analyze the area of interest using known color information of the content currently being presented by a display, along with measured ambient light color information from a color sensor, to determine whether the area of interest (AOI) is currently illuminated by the display or whether the AOI is unaffected by the display and illuminated only by the ambient light. Upon determining that the display casts a color tint on the AOI, the video presence system pre-processes the image data of the detected AOI to correct the color back to skin color under ambient light prior to performing general white balance correction.
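One way to picture the pre-processing step is a von Kries-style channel rescaling, sketched below. The cast-detection threshold, the mean-based chromaticity gains, and the sample colors are all assumptions; the patent does not specify this particular correction.

```python
import numpy as np

def correct_display_cast(aoi_rgb, display_rgb, ambient_rgb, cast_threshold=0.1):
    """If the display's average content color differs enough from the
    ambient light color, assume the display tints the AOI and divide the
    display gains out, rebalancing toward ambient-lit skin color."""
    display = np.asarray(display_rgb, float)
    ambient = np.asarray(ambient_rgb, float)
    d = display / display.mean()   # display chromaticity gains
    a = ambient / ambient.mean()   # ambient chromaticity gains
    if np.abs(d - a).max() < cast_threshold:
        return aoi_rgb             # AOI unaffected; leave it to white balance
    corrected = aoi_rgb / d * a    # per-channel rescale of the AOI pixels
    return np.clip(corrected, 0, 255)

aoi = np.full((2, 2, 3), [150.0, 100.0, 80.0])  # skin patch under a warm display cast
out = correct_display_cast(aoi, display_rgb=[180, 120, 60],
                           ambient_rgb=[100, 100, 100])
print(out[0, 0])
```

General white balance would then run on the corrected frame, as the abstract describes.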
Sensor device and signal processing method
A sensor device includes an array sensor having a plurality of detection elements arrayed in a one- or two-dimensional manner, a signal processing unit configured to acquire a detection signal from the array sensor and perform signal processing, and a calculation unit. The calculation unit detects an object from the array sensor's detection signal, and gives an instruction, to the signal processing unit, on region information generated on the basis of the detection of the object, as region information regarding the acquisition of the detection signal from the array sensor or the signal processing for the detection signal.
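The detect-then-restrict flow can be sketched as below: derive region information (a bounding box) from a detection signal, then limit acquisition to that region. The thresholding detector, margin, and box representation are illustrative assumptions.

```python
import numpy as np

def region_of_interest(detection_signal, threshold=0.5, margin=1):
    """Calculation-unit sketch: derive region information (a bounding box,
    padded by a margin) from where the detection signal exceeds a threshold."""
    ys, xs = np.nonzero(detection_signal > threshold)
    if ys.size == 0:
        return None  # nothing detected; fall back to full-array readout
    h, w = detection_signal.shape
    y0, y1 = max(int(ys.min()) - margin, 0), min(int(ys.max()) + margin, h - 1)
    x0, x1 = max(int(xs.min()) - margin, 0), min(int(xs.max()) + margin, w - 1)
    return (y0, y1, x0, x1)

def acquire_region(frame, box):
    """Signal-processing-unit sketch: acquire only the instructed region."""
    y0, y1, x0, x1 = box
    return frame[y0:y1 + 1, x0:x1 + 1]

signal = np.zeros((8, 8))
signal[3:5, 2:4] = 1.0  # detected object occupies a 2x2 patch
box = region_of_interest(signal)
print(box, acquire_region(signal, box).shape)
```

Restricting acquisition or processing to the instructed region is what lets the device avoid reading and processing the full array every frame.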
Securing displayed data on computing devices
Techniques for securing displayed data on computing devices are disclosed. One example technique includes, upon determining that the computing device is unlocked, capturing and analyzing an image in a field of view of the camera of the computing device to determine whether the image includes a human face. In response to determining that the image includes a human face, the technique includes determining facial attributes of the human face in the image via facial recognition and whether the human face is that of an authorized user of the computing device. In response to determining that the human face is not that of an authorized user of the computing device, the technique includes converting user data on the computing device from an original language to a new language to output on a display of the computing device, thereby securing the displayed user data even when the computing device is unlocked.
FACIAL RECOGNITION METHOD AND APPARATUS, DEVICE, AND MEDIUM
This application discloses a facial recognition method and apparatus, a device and a medium, which relates to the field of image processing. The method includes: fusing a color map and a depth map of a facial image to obtain a fused image of the facial image, the fused image including two-dimensional information and depth information of the facial image (202); dividing the fused image into blocks to obtain at least two image blocks of the fused image (204); irreversibly shuffling pixels in the at least two image blocks to obtain a pixel-confused facial image (206); and determining an object identifier corresponding to the facial image according to the pixel-confused facial image (208).
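The block-wise shuffling step (204–206) can be sketched with numpy. The block size, the stand-in 2-D array in place of a real fused color+depth map, and the fixed seed (used only to make the sketch reproducible; the patented shuffle is irreversible precisely because the permutation is not retained) are all assumptions.

```python
import numpy as np

def shuffle_blocks(image, block=4, seed=None):
    """Split the image into block x block tiles and randomly permute the
    pixels inside each tile. The original layout within a tile is not
    recoverable without the permutation, while per-block pixel statistics
    are preserved for downstream matching."""
    rng = np.random.default_rng(seed)
    out = image.copy()
    h, w = image.shape[:2]
    for y in range(0, h, block):
        for x in range(0, w, block):
            view = out[y:y + block, x:x + block]
            tile = view.reshape(-1, *image.shape[2:])   # flatten tile to pixels
            out[y:y + block, x:x + block] = rng.permutation(tile).reshape(view.shape)
    return out

fused = np.arange(64, dtype=float).reshape(8, 8)  # stand-in for a fused color+depth map
confused = shuffle_blocks(fused, block=4, seed=0)
# Each tile keeps the same multiset of pixel values, just reordered:
print(sorted(confused[:4, :4].ravel()) == sorted(fused[:4, :4].ravel()))  # prints True
```

Recognition (208) would then run on the pixel-confused image, so the identifiable layout never needs to leave the capture device.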