G06V40/165

Methods and systems for user authentication
11698954 · 2023-07-11

A user device, such as a smartphone or laptop, may be password (passphrase) protected. The user device may combine biometric input analysis, such as facial recognition, with viseme analysis to authenticate a user attempting to use a password (passphrase) to access the user device. Secure authentication methods and systems are described that account for variations in how a password (passphrase) may be presented to the user device based on the user's emotion (e.g., mood, temperament, unique pronunciation, etc.).
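The combination described above can be sketched as a two-factor check: a facial-recognition score gates access, and the viseme sequence of the spoken passphrase is matched with a tolerance that absorbs variation in delivery. The function names, the viseme encoding, and the thresholds below are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch: facial recognition gates access, then the observed
# viseme sequence for the passphrase is compared against the enrolled one
# with a tolerance for emotion-driven variation in pronunciation.

def viseme_distance(observed, enrolled):
    """Fraction of positions where observed visemes differ from enrolled ones."""
    mismatches = sum(1 for o, e in zip(observed, enrolled) if o != e)
    return mismatches / max(len(enrolled), 1)

def authenticate(face_score, observed_visemes, enrolled_visemes,
                 face_threshold=0.8, viseme_tolerance=0.25):
    """Accept only if the face matches AND the viseme sequence is close enough."""
    if face_score < face_threshold:
        return False
    return viseme_distance(observed_visemes, enrolled_visemes) <= viseme_tolerance

# Example: one differing viseme (unique pronunciation) still authenticates.
ok = authenticate(0.92, ["p", "a", "s", "w", "d"], ["p", "a", "s", "w", "t"])
```

The tolerance parameter is what distinguishes this from an exact passphrase match: it lets the system accept the same user speaking the same passphrase in a different mood.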

EMBEDDED AUTHENTICATION SYSTEMS IN AN ELECTRONIC DEVICE

This invention is directed to an electronic device with an embedded authentication system for restricting access to device resources. The authentication system may include one or more sensors operative to detect biometric information of a user. The sensors may be positioned in the device such that the sensors may detect appropriate biometric information as the user operates the device, without requiring the user to perform a step for providing the biometric information (e.g., embedding a fingerprint sensor in an input mechanism instead of providing a fingerprint sensor in a separate part of the device housing). In some embodiments, the authentication system may be operative to detect a visual or temporal pattern of inputs to authenticate a user. In response to authenticating, a user may access restricted files, applications (e.g., applications purchased by the user), or settings (e.g., application settings such as contacts or saved game profile).
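The temporal-pattern authentication mentioned above can be sketched as comparing the rhythm of a user's inputs against an enrolled pattern. The interval-based representation and the tolerance value are illustrative assumptions.

```python
# Illustrative sketch of authenticating via a temporal pattern of inputs:
# the intervals between a user's taps are compared against an enrolled
# rhythm, within a tolerance, so absolute speed matters less than rhythm.

def intervals(timestamps):
    """Gaps between consecutive input events."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def matches_pattern(observed_times, enrolled_times, tolerance=0.15):
    obs, ref = intervals(observed_times), intervals(enrolled_times)
    if len(obs) != len(ref):
        return False
    return all(abs(o - r) <= tolerance for o, r in zip(obs, ref))

# Example: the same rhythm tapped slightly faster still authenticates.
ok = matches_pattern([0.0, 0.31, 0.92], [0.0, 0.30, 1.00])
```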

IMAGE GENERATION DEVICE, IMAGE GENERATION METHOD, AND STORAGE MEDIUM STORING PROGRAM
20230214975 · 2023-07-06

An image generation device includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: select a second face image from a plurality of face images stored in advance based on directions of faces included in the plurality of face images and a direction of a face included in an input first face image; deform the second face image based on feature points of the face included in the first face image and feature points of a face included in the second face image such that a face region of the second face image matches a face region of the first face image; and generate a third face image in which the face region of the first face image is synthesized with a region other than the face region of the deformed second face image.
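The selection and deformation steps can be sketched in miniature, assuming each face's direction is summarized by a yaw angle and its feature points by a small set of 2-D landmarks. The function names and the affine-fit choice are illustrative assumptions; the patent does not specify the deformation model.

```python
import numpy as np

# Sketch: (1) select the stored face whose direction is closest to the input
# face, (2) fit a least-squares affine transform from its landmarks onto the
# input face's landmarks, which is what "deform ... such that the face region
# matches" could look like for rigid-ish alignment.

def select_second_image(first_yaw, stored_yaws):
    """Pick the stored face whose direction is closest to the input face."""
    return int(np.argmin(np.abs(np.asarray(stored_yaws) - first_yaw)))

def estimate_affine(src_pts, dst_pts):
    """Least-squares affine transform mapping src landmarks onto dst landmarks."""
    n = len(src_pts)
    A = np.hstack([src_pts, np.ones((n, 1))])             # n x 3
    params, *_ = np.linalg.lstsq(A, dst_pts, rcond=None)  # 3 x 2
    return params

def warp_points(pts, params):
    n = len(pts)
    return np.hstack([pts, np.ones((n, 1))]) @ params

# Example: choose the stored face nearest a 10-degree yaw, then fit the
# transform that carries its landmarks onto the input face's landmarks.
idx = select_second_image(10.0, [-30.0, 5.0, 40.0])
src = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
dst = src * 2.0 + 1.0  # input-face landmarks: scaled and shifted
T = estimate_affine(src, dst)
```

Compositing the third image would then copy the input face region over the warped second image; that pixel-level step is omitted here.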

INFORMATION ACQUISITION APPARATUS, INFORMATION ACQUISITION METHOD, AND STORAGE MEDIUM
20230214010 · 2023-07-06

There is provided an information acquisition apparatus including an output means for outputting guidance information for guiding a subject to move a head while gazing at a predetermined position, and an image acquisition means for acquiring an image including an iris of the subject after outputting the guidance information.

IMAGE PROCESSING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM
20230215038 · 2023-07-06

The present technology relates to an image processing apparatus, an image processing method, and a recording medium capable of appropriately determining a direction in which a subject being imaged faces.

The present technology includes a detector that detects a face and a predetermined part of a subject in a captured image, a face direction determiner that determines a direction in which the face detected by the detector faces, a part direction determiner that determines a direction in which the predetermined part detected by the detector faces, and a first direction decider that decides a direction in which the subject faces by using a determination result by the face direction determiner and a determination result by the part direction determiner. The present technology can be applied to an image processing apparatus that controls framing.
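The first direction decider's fusion of the two determination results can be sketched with a simple rule. The confidence-weighted tie-break below is an illustrative assumption; the patent does not disclose the combination rule.

```python
# Minimal sketch of fusing the face-direction and part-direction results.
# Each determiner is assumed to output a (direction_label, confidence) pair;
# on disagreement, the more confident determiner wins.

def decide_direction(face_result, part_result):
    face_dir, face_conf = face_result
    part_dir, part_conf = part_result
    if face_dir == part_dir:
        return face_dir
    # Disagreement: trust whichever determiner is more confident.
    return face_dir if face_conf >= part_conf else part_dir

# Example: the face reads "left" with low confidence, but the detected part
# (e.g., the eyes) points "right" with higher confidence.
decided = decide_direction(("left", 0.4), ("right", 0.9))
```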

COMPUTER-IMPLEMENTED DETECTION AND PROCESSING OF ORAL FEATURES

Described herein are computer-implemented methods for identifying and classifying one or more regions of interest in a facial region and augmenting an appearance of the regions of interest in an image. For example, a region of interest may include one or more of: a teeth region, a lip region, a mouth region, or a gum region. User selected templates for teeth, gums, smile, etc. may be used to replace the analogous facial features in an input image provided by the user, for example from an image library or taken with an image sensor. The computer-implemented methods described herein may use one or more trained machine learning models and one or more algorithms to identify and classify regions of interest in an input image.

Efficient convolutional neural networks and techniques to reduce associated computational costs

A computing system is disclosed including a convolutional neural network configured to receive an input that describes a facial image and generate a facial object recognition output that describes one or more facial feature locations with respect to the facial image. The convolutional neural network can include a plurality of convolutional blocks. At least one of the convolutional blocks can include one or more separable convolutional layers configured to apply a depthwise convolution and a pointwise convolution during processing of an input to generate an output. The depthwise convolution can be applied with a kernel size that is greater than 3×3. At least one of the convolutional blocks can include a residual shortcut connection from its input to its output.
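One such block (depthwise convolution with a kernel larger than 3×3, then a pointwise 1×1 convolution, plus a residual shortcut) can be sketched in pure NumPy. This is a minimal reference implementation under assumed conventions ("same" padding, stride 1, channel-first shapes), not the patented architecture.

```python
import numpy as np

# Sketch of one separable convolutional block. Input shape is (C, H, W).

def depthwise_conv(x, kernels):
    """kernels: (C, k, k) -- one k x k filter applied per input channel."""
    C, H, W = x.shape
    k = kernels.shape[1]
    p = k // 2
    xp = np.pad(x, ((0, 0), (p, p), (p, p)))  # "same" padding
    out = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                out[c, i, j] = np.sum(xp[c, i:i + k, j:j + k] * kernels[c])
    return out

def pointwise_conv(x, weights):
    """weights: (C_out, C_in) -- a 1x1 convolution mixing channels."""
    return np.tensordot(weights, x, axes=([1], [0]))

def separable_block(x, dw_kernels, pw_weights):
    """Depthwise then pointwise convolution, plus a residual shortcut."""
    return x + pointwise_conv(depthwise_conv(x, dw_kernels), pw_weights)

# Example with a 5x5 depthwise kernel (i.e., greater than 3x3 as claimed).
x = np.random.rand(4, 8, 8)
dw = np.random.rand(4, 5, 5) * 0.01
pw = np.eye(4)
y = separable_block(x, dw, pw)
```

The cost saving claimed for separable convolutions comes from replacing a full C_in × C_out × k × k convolution with C_in × k × k depthwise filters plus a C_in × C_out pointwise mix.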

Multi-algorithm-based face recognition system and method with optimal dataset partitioning for a cloud environment
11544962 · 2023-01-03

A system and method of face recognition comprising multiple phases implemented in a parallel architecture. The first phase is a normalization phase whereby a captured image is normalized to the same size, orientation, and illumination as the stored images in a preexisting database. The second phase is a feature extraction/distance matrix phase in which a distance matrix is generated for the captured image. In a coarse recognition phase, the generated distance matrix is compared with distance matrices in the database using Euclidean distance matching to create candidate lists, and in a detailed recognition phase, multiple face recognition algorithms are applied to the candidate lists to produce a final result. The distance matrices in the normalized database may be broken into parallel lists for parallelization in the feature extraction/distance matrix phase, and the candidate lists may also be grouped according to a dissimilarity algorithm for parallel processing in the detailed recognition phase.
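The coarse recognition phase can be sketched as a nearest-neighbor search: the captured image's distance matrix, flattened to a feature vector, is compared against database entries by Euclidean distance, and the top-k nearest entries form the candidate list. The function name and the value of k are illustrative assumptions.

```python
import numpy as np

# Sketch of coarse recognition: Euclidean-distance matching of a query
# feature vector against database vectors, returning a candidate list.

def coarse_candidates(query_vec, db_vecs, k=3):
    """Return (indices, distances) of the k database entries nearest the query."""
    dists = np.linalg.norm(db_vecs - query_vec, axis=1)
    order = np.argsort(dists)[:k]
    return order.tolist(), dists[order].tolist()

# Example: three database entries; entry 1 is an exact match.
db = np.array([[1.0, 0.0], [0.2, 0.9], [5.0, 5.0]])
ids, dists = coarse_candidates(np.array([0.2, 0.9]), db, k=2)
```

The parallelization described above would split `db` into sub-lists, run this search on each in parallel, and merge the per-list candidates before the detailed recognition phase.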

Method for aligning a three-dimensional model of a dentition of a patient to an image of the face of the patient recorded by camera

The present invention relates to a computer-implemented method for aligning a three-dimensional model (6) of a patient's dentition to an image of the face of the patient recorded by a camera (3), the image including the mouth opening, comprising: estimating the positioning of the camera (3) relative to the face of the patient during recording of the image to obtain an estimated positioning; retrieving the three-dimensional model (6) of the dentition of the patient; rendering a two-dimensional image (7) of the dentition of the patient using a virtual camera (8) processing the three-dimensional model (6) of the dentition at the estimated positioning; carrying out feature detection in a dentition area in the mouth opening of the image (1) of the patient recorded by the camera (3) and in the rendered image (7) by performing edge detection and/or a color-based tooth likelihood determination in the respective images and forming a detected feature image for the or each detected feature; calculating a measure of deviation between the detected feature image of the image taken by the camera (3) and the detected feature image of the rendered image (7); and varying the positioning of the virtual camera (8) to a new estimated positioning and repeating the preceding three steps in an optimization process that minimizes the deviation measure to determine the best-fitting positioning of the virtual camera (8).
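The render-compare-vary loop at the heart of the method can be sketched abstractly. Rendering and feature detection are stubbed out: the deviation measure below is a simple function of a single 1-D pose parameter, and the optimizer is a coarse-to-fine line search — all illustrative assumptions standing in for the patent's unspecified optimization process.

```python
# Sketch of the pose-refinement loop: evaluate a deviation measure at an
# estimated virtual-camera pose, then vary the pose to minimize it.

def deviation(pose, true_pose=0.7):
    """Stand-in for comparing detected-feature images; minimal at true_pose."""
    return (pose - true_pose) ** 2

def refine_pose(initial, step=0.5, shrink=0.5, iters=20):
    """Coarse-to-fine search: try pose +/- step, keep the best, shrink step."""
    pose = initial
    for _ in range(iters):
        pose = min([pose - step, pose, pose + step], key=deviation)
        step *= shrink
    return pose

best = refine_pose(0.0)
```

In the actual method, each evaluation of `deviation` would involve rendering the model at the candidate pose, detecting features in both images, and comparing the detected feature images; the loop structure is the same.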

Equipment utilizing human recognition and method for utilizing the same

Equipment utilizing human recognition and a method for utilizing the same are provided. The method for utilizing human recognition includes updating a moving image database to include information about a moving image in which a cluster subject appears, the information being extracted based on clustering using a face feature; receiving a search condition; and detecting moving image information using the database. According to the present disclosure, a skeleton can be analyzed and a face can be recognized using an artificial intelligence (AI) model performing deep learning through a fifth generation (5G) network, and, using the analysis result, a photographing composition can be determined and moving image information can be constructed at the edge.