G06V40/171

Method for unlocking mobile device using authentication based on ear recognition and mobile device performing the same

Exemplary embodiments relate to a method for unlocking a mobile device using authentication based on ear recognition, and to a mobile device performing the same. The method includes obtaining, in a lock state, an image of a target showing at least part of the target's body; extracting a set of ear features of the target from the image when the image includes at least part of the target's ear; and determining whether the extracted set of ear features satisfies a preset condition.
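As an illustrative sketch only (the abstract does not define the feature representation or the preset condition; the match-ratio threshold below is a hypothetical stand-in), the claimed unlock flow could be modeled as:

```python
def unlock_with_ear(image, enrolled, extract_ear_features, threshold=0.75):
    """Sketch of the claimed flow: extract ear features from the image
    (if any part of an ear is visible) and unlock when they satisfy a
    preset condition, modeled here as a match-ratio threshold."""
    features = extract_ear_features(image)
    if features is None:          # image contains no part of an ear
        return False              # device stays locked
    # Preset condition (illustrative): fraction of matching feature values.
    matches = sum(f == e for f, e in zip(features, enrolled))
    return matches / len(enrolled) >= threshold
```

The extractor and the enrolled template would in practice come from an ear-recognition model and an enrollment step, neither of which is specified in the abstract.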

DIGITAL MAKEUP ARTIST

A digital makeup artist system includes a mobile device and a database system storing cosmetic routine information, common makeup looks, cosmetic products for skin types and ethnicity, and look preferences of a user. The mobile device includes a user interface for interacting with a digital makeup artist, and computation circuitry. The digital makeup artist conducts an interactive dialog with the user in order to capture the needs of the user, including the type of makeup look, an indoor or outdoor look, skin condition, facial problem areas, and favorite facial features. The computation circuitry analyzes the user's face image to identify face parts, analyzes the face image to determine facial characteristics, and generates image frames to be displayed in synchronization with the interaction with the digital makeup artist based on the analyzed face image, the needs of the user, the stored cosmetic routine information, common makeup looks, cosmetic products for skin types and ethnicity, and the user look preferences.
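A minimal sketch of the claimed pipeline (the callables and the database layout are hypothetical stubs; the real analysis and rendering steps are not described in the abstract):

```python
def makeup_session(face_image, needs, db, analyze_face, compose_frame):
    """Analyze the user's face once, then generate one display frame per
    captured need, combining the analysis with the stored routine data."""
    face_parts = analyze_face(face_image)   # identify face parts / characteristics
    return [compose_frame(face_parts, need, db) for need in needs]
```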

Biometric Authentication Using Head-Mounted Devices
20230222197 · 2023-07-13

A head-mounted wearable device includes a frame mountable on a head of a user; an infrared imaging device arranged to image a face of the user when the frame is mounted on the head of the user; and a computing system configured to perform operations including causing the infrared imaging device to capture an image of the face of the user using infrared light received at the infrared imaging device, and initiating a biometric authentication process based on the image. The head-mounted wearable device may further include a visible-light imaging device to image the face of the user, with the computing system configured to perform operations including causing the visible-light imaging device to capture a second image of the face of the user using visible light received at the visible-light imaging device, the biometric authentication process being based in part on the second image.
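One way to read the two-image variant (the weights and threshold here are invented for illustration; the abstract does not describe any fusion rule) is a weighted fusion of per-image match scores:

```python
def authenticate(ir_image, visible_image, score_fn, ir_weight=0.6, threshold=0.8):
    """Sketch: base authentication on the infrared image, optionally
    refined by a visible-light image (weights/threshold hypothetical)."""
    score = score_fn(ir_image)
    if visible_image is not None:
        # Blend in the visible-light score when a second image exists.
        score = ir_weight * score + (1 - ir_weight) * score_fn(visible_image)
    return score >= threshold
```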

Electronic apparatus and control method

An electronic apparatus includes: a detector that detects an object present within a predetermined detection range; and an embedded controller that, based on a detection result from the detector, determines that the object has left the electronic apparatus when the object is no longer detected within a first detection range of the predetermined detection range after having previously been detected within the first detection range, and determines that the object has approached the electronic apparatus when the object is detected within a second detection range of the predetermined detection range, wider than the first detection range, after no object was previously detected within the predetermined detection range. While an object determined to have left is still detected within the second detection range, the embedded controller determines that the object has re-approached based on the detection position of the object moving toward the first detection range.
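The leave/approach logic can be sketched as a small state machine (the state names and the metre-valued ranges are hypothetical; the abstract does not give concrete units or thresholds):

```python
class PresenceController:
    """Sketch of the embedded controller's leave/approach logic:
    a narrow first range nested inside a wider second range."""

    def __init__(self, first_range, second_range):
        self.first_range = first_range      # e.g. 0.6 (narrow range)
        self.second_range = second_range    # e.g. 1.2 (wider range)
        self.state = "absent"
        self.last_distance = None

    def update(self, distance):
        """distance is the detected object distance, or None if nothing is detected."""
        in_first = distance is not None and distance <= self.first_range
        in_second = distance is not None and distance <= self.second_range
        if self.state == "absent":
            if in_second:                   # detected in the wider range from nothing
                self.state = "approached"
        elif self.state == "approached":
            if in_first:
                self.state = "present"
            elif distance is None:
                self.state = "absent"
        elif self.state == "present":
            if not in_first:                # no longer detected in the first range
                self.state = "left"
        elif self.state == "left":
            # Re-approach only while still in the second range AND
            # moving toward the first range (distance decreasing).
            if in_second and self.last_distance is not None and distance < self.last_distance:
                self.state = "approached"
        self.last_distance = distance
        return self.state
```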

Neural network image processing apparatus

A neural network image processing apparatus is arranged to acquire images from an image sensor and to: identify an ROI containing a face region in an image; determine a plurality of facial landmarks in the face region; use the facial landmarks to transform the face region within the ROI into a face region having a given pose; and use the transformed landmarks within the transformed face region to identify a pair of eye regions. Each identified eye region is fed to a respective one of first and second convolutional neural networks, each network configured to produce a respective feature vector. Each feature vector is fed to a respective eyelid opening level neural network to obtain a respective measure of eyelid opening for each eye region. The feature vectors are also combined and fed to a gaze angle neural network to generate gaze yaw and pitch values substantially simultaneously with the eyelid opening values.
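The two-branch architecture can be sketched as follows (the networks are passed in as callables here; their internals, and the names used, are hypothetical stand-ins for the claimed CNNs):

```python
def eye_pipeline(left_eye, right_eye, eye_cnn, eyelid_net, gaze_net):
    """Sketch of the claimed architecture: each eye region goes through a
    CNN to a feature vector; each vector feeds an eyelid-opening network,
    and the concatenated vectors feed a gaze network producing yaw and
    pitch together with the eyelid values."""
    f_left = eye_cnn(left_eye)
    f_right = eye_cnn(right_eye)
    open_left = eyelid_net(f_left)
    open_right = eyelid_net(f_right)
    yaw, pitch = gaze_net(f_left + f_right)   # list concatenation = combined vector
    return (open_left, open_right), (yaw, pitch)
```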

Transparent display system, parallax correction method and image outputting method

A parallax correction method for a transparent display system is provided. The transparent display system includes a transparent display device located between a background object and a user. The parallax correction method includes the following steps. A gaze point is displayed on the transparent display device. An image including the transparent display device, the background object and the user is captured. At least two display anchor points and at least two corresponding background object anchor points are detected according to the image. The display anchor points are located on the transparent display device, and the background object anchor points are located on the background object. A plurality of visual extension lines extending from the display anchor points and the corresponding background object anchor points are obtained. An equivalent eye position of the ocular dominance of the user is obtained according to an intersection of the visual extension lines.
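In two dimensions, the equivalent eye position can be recovered as the intersection of lines extended through corresponding anchor-point pairs (a simplified planar sketch; the actual system presumably works in calibrated camera coordinates):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the line through p1-p2 with the line through p3-p4
    (2-D, using the standard determinant formula)."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        return None  # parallel lines: no unique intersection
    a = x1 * y2 - y1 * x2
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

def equivalent_eye_position(display_pts, background_pts):
    """Extend visual lines through corresponding background/display anchor
    pairs and return their intersection as the equivalent eye position."""
    (d1, d2), (b1, b2) = display_pts, background_pts
    return line_intersection(b1, d1, b2, d2)
```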

System and method of acquiring coordinates of pupil center point

A system and a method of calculating coordinates of a pupil center point are provided. The system for acquiring the coordinates of the pupil center point includes a first camera, a second camera, a storage and a processor. The first camera is configured to capture a first image including a face and output the first image to the processor; the second camera is configured to capture a second image including a pupil and output the second image to the processor; a resolution of the first camera is lower than a resolution of the second camera; and the storage is configured to store processing data. The processor is configured to: acquire the first image and the second image; extract a first eye region corresponding to an eye from the first image; map the first eye region into the second image to acquire a second eye region corresponding to the eye in the second image; and detect a pupil in the second eye region to acquire the coordinates of the pupil center point.
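The mapping from the low-resolution first image into the high-resolution second image could, under the simplifying assumption that both cameras cover approximately the same field of view, be a per-axis coordinate scaling (the abstract does not specify the actual conversion):

```python
def map_region(region, size1, size2):
    """Map an (x, y, w, h) eye region detected in the low-resolution first
    image into the coordinate frame of the high-resolution second image,
    assuming the two cameras share roughly the same field of view."""
    x, y, w, h = region
    sx = size2[0] / size1[0]   # horizontal scale factor
    sy = size2[1] / size1[1]   # vertical scale factor
    return (x * sx, y * sy, w * sx, h * sy)
```

A real system would additionally need to account for the offset and distortion between the two cameras (e.g. via a calibrated homography).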

IMAGE GENERATION DEVICE, IMAGE GENERATION METHOD, AND STORAGE MEDIUM STORING PROGRAM
20230214975 · 2023-07-06

An image generation device includes: at least one memory storing a set of instructions; and at least one processor configured to execute the set of instructions to: select a second face image from a plurality of face images stored in advance based on directions of faces included in the plurality of face images and a direction of a face included in an input first face image; deform the second face image based on feature points of the face included in the first face image and feature points of a face included in the second face image such that a face region of the second face image matches a face region of the first face image; and generate a third face image in which the face region of the first face image is synthesized with a region other than the face region of the deformed second face image.
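The selection step might, for example, pick the stored face image whose face direction is closest to the input face's direction (the squared-distance metric and the record layout are assumptions, not specified in the abstract):

```python
def select_second_face(stored_faces, input_direction):
    """Select the stored face image whose face direction (yaw, pitch)
    is closest to the input face's direction."""
    def dist(direction):
        return sum((a - b) ** 2 for a, b in zip(direction, input_direction))
    return min(stored_faces, key=lambda face: dist(face["direction"]))
```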

ANIMATED EXPRESSIVE ICON

Embodiments described herein include an expressive icon system to present an animated graphical icon, wherein the animated graphical icon is generated by capturing facial tracking data at a client device. In some embodiments, the system may track and capture facial tracking data of a user via a camera associated with a client device (e.g., a front-facing camera, or a paired camera), and process the facial tracking data to animate a graphical icon.

Interaction Method for Electronic Device for Skin Detection, and Electronic Device
20230215208 · 2023-07-06

An interaction method for an electronic device for skin detection includes recognizing a hand action and a face of a user to determine a target hand action; determining a detection target corresponding to the target hand action; determining, from an extended content library and based on the detection target and a shape of the detection target, extended content associated with the detection target and the shape of the detection target; and outputting the extended content.
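The lookup chain can be sketched as two table lookups (the gesture names, targets, and library keys below are hypothetical; the abstract does not enumerate concrete gestures or content):

```python
# Hypothetical mapping from recognized hand actions to detection targets.
ACTION_TO_TARGET = {
    "point_at_cheek": "cheek",
    "point_at_forehead": "forehead",
}

def extended_content(hand_action, shape, library):
    """Resolve the detection target implied by the recognized hand action,
    then look up extended content by (target, detected shape)."""
    target = ACTION_TO_TARGET[hand_action]
    return library.get((target, shape), "no content available")
```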