G06V40/169

Gaze tracking method and gaze tracking device using the same

A gaze tracking method includes the following steps. Firstly, a to-be-analyzed facial image is captured. Then, whether the to-be-analyzed facial image conforms to a customized 3D face model is determined. If not, a customized 3D face model of the to-be-analyzed facial image is created. If yes, an eye area image of the to-be-analyzed facial image is obtained. Then, according to the customized 3D face model and the to-be-analyzed facial image, head posture information is obtained. Then, an eye camera coordinate value referenced to a camera coordinate system is obtained. Then, the eye camera coordinate value is converted into an eye frame coordinate value referenced to a display frame coordinate system. Then, according to the eye frame coordinate value, the head posture information and an eyeball radius, an eyeball center point coordinate value is obtained, and accordingly a gaze coordinate value is obtained.
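
The coordinate-conversion steps above lend themselves to a short illustration. The sketch below assumes a known rigid transform (R, t) between the camera and display-frame coordinate systems, a forward axis taken from the head posture information, and a display plane at z = 0; none of these specifics come from the abstract.

```python
# Minimal sketch of the coordinate-conversion and gaze steps described above.
# All names, the rigid transform, and the ray/plane intersection are assumptions
# for illustration; they are not the patented implementation.
import numpy as np

def camera_to_frame(p_cam, R, t):
    """Convert a camera-coordinate point to display-frame coordinates
    using an assumed rigid transform (R, t)."""
    return R @ p_cam + t

def eyeball_center(eye_frame, gaze_forward, eyeball_radius):
    """Offset the eye surface point backward along the head-pose forward
    axis by the eyeball radius to approximate the eyeball center."""
    forward = gaze_forward / np.linalg.norm(gaze_forward)
    return eye_frame - eyeball_radius * forward

def gaze_on_display(center, direction):
    """Intersect the gaze ray with the display plane z = 0 (assumed)."""
    d = direction / np.linalg.norm(direction)
    s = -center[2] / d[2]
    return (center + s * d)[:2]  # (x, y) on the display frame

# Example with made-up numbers
R, t = np.eye(3), np.array([0.0, 0.05, 0.0])
eye_cam = np.array([0.01, -0.02, 0.45])
eye_frm = camera_to_frame(eye_cam, R, t)
center = eyeball_center(eye_frm, np.array([0.0, 0.0, -1.0]), 0.012)
print(gaze_on_display(center, np.array([0.05, -0.02, -1.0])))
```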

COMPENSATION FOR FACE COVERINGS IN CAPTURED AUDIO

The technology disclosed herein enables compensation for attenuation caused by face coverings in captured audio. In a particular embodiment, a method includes determining that a face covering is positioned to cover the mouth of a user of a user system. The method further includes receiving audio that includes speech from the user and adjusting amplitudes of frequencies in the audio to compensate for the face covering.
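
As a rough illustration of "adjusting amplitudes of frequencies", the sketch below applies per-band gains in the frequency domain; the band edges and gain values are placeholders, since the actual compensation curve for a given face covering is not specified here.

```python
# Minimal sketch of frequency-dependent amplitude compensation, assuming a
# simple per-band gain curve; the real compensation curve, band edges, and
# detection of the face covering are not specified here.
import numpy as np

def compensate(audio, sample_rate, band_gains_db):
    """Boost amplitudes of frequency bands attenuated by a face covering.
    band_gains_db: list of ((low_hz, high_hz), gain_db) tuples (assumed)."""
    spectrum = np.fft.rfft(audio)
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / sample_rate)
    for (low, high), gain_db in band_gains_db:
        mask = (freqs >= low) & (freqs < high)
        spectrum[mask] *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spectrum, n=len(audio))

# Example: boost the high frequencies a cloth mask typically attenuates.
sr = 16000
audio = np.random.randn(sr)  # stand-in for captured speech
out = compensate(audio, sr, [((2000, 4000), 3.0), ((4000, 8000), 6.0)])
```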

Makeup item presenting system, makeup item presenting method, and makeup item presenting server

A terminal captures first and second images respectively showing a user's face before and after makeup, acquires information on a type or region of the makeup performed by the user, and transmits the first and second images and the information on the type or region of the makeup, in association with each other, to a server. The server deduces a makeup color of the makeup performed by the user based on the first and second images and the information on the type or region of the makeup, extracts at least one similar makeup item having the makeup color based on the information on the makeup color and a makeup item database, and transmits information on the at least one similar makeup item to the terminal. The terminal displays, on a display unit, the information on the at least one similar makeup item transmitted from the server.
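
A minimal sketch of the server-side color deduction and similarity search might look like the following, assuming the makeup region is given as a pixel mask and similarity is plain RGB distance; both are stand-ins rather than the claimed method.

```python
# Minimal sketch of deducing a makeup color from before/after images and
# matching it against an item database; the region mask, color space, and
# distance metric are assumptions, not the patented method.
import numpy as np

def deduce_makeup_color(before, after, region_mask):
    """Average the after-makeup color over the region where makeup was applied."""
    return after[region_mask].reshape(-1, 3).mean(axis=0)

def find_similar_items(color, item_database, k=1):
    """Return the k items whose catalog color is closest to the deduced color."""
    ranked = sorted(item_database,
                    key=lambda item: np.linalg.norm(np.array(item["rgb"]) - color))
    return ranked[:k]

# Example with synthetic data
before = np.full((4, 4, 3), 120, dtype=float)
after = before.copy(); after[1:3, 1:3] = [180, 60, 90]   # "lip" region after makeup
mask = np.zeros((4, 4), dtype=bool); mask[1:3, 1:3] = True
db = [{"name": "Rose Tint", "rgb": (185, 65, 95)},
      {"name": "Coral Glow", "rgb": (240, 130, 110)}]
print(find_similar_items(deduce_makeup_color(before, after, mask), db))
```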

Cleaning system for cosmetic dispensing device
11478063 · 2022-10-25

An apparatus is described for dispensing cosmetic material from at least one cartridge onto a dispensing surface of a detachable portion. The apparatus includes a retractable plate disposed beneath the detachable portion, where a dispensing end of the at least one cartridge is configured to penetrate the retractable plate and the detachable portion. When the retractable plate is at a stable highest position, the dispensing end of the at least one cartridge is flush with the surface of the retractable plate. When the retractable plate is moved to a predetermined stable downward position below the highest position and the detachable portion is placed on the retractable plate, the dispensing end of the at least one cartridge is flush with the dispensing surface, and the retractable plate is configured to move even further downward to an unstable position in response to a downward force on the detachable portion.

IMAGE ANNOTATION USING PRIOR MODEL SOURCING
20220335239 · 2022-10-20

A method of image annotation includes selecting a plurality of annotation models related to an annotation task for an image, obtaining a candidate annotation map for the image from each of the plurality of annotation models, and selecting at least one of the candidate annotation maps to be displayed via a user interface, the candidate annotation maps comprising suggested annotations for the image. The method further includes receiving user selections or modifications of at least one of the suggested annotations from the candidate annotation map and generating a final annotation map based on the user selections or modifications.
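
The flow of sourcing candidate maps, displaying one, and folding in user edits can be sketched as below; the model interface, the rule for choosing which candidate to display, and the edit format are all assumptions for illustration.

```python
# Minimal sketch of sourcing candidate annotation maps from several prior
# models and merging user selections into a final map; model interfaces and
# the selection rule are assumptions for illustration.
import numpy as np

def candidate_maps(image, models):
    """Obtain one candidate annotation map (here: a label mask) per model."""
    return [model(image) for model in models]

def pick_for_display(candidates):
    """Assumed selection rule: show the candidate with the most labeled pixels."""
    return max(candidates, key=lambda m: int((m > 0).sum()))

def final_map(displayed, user_edits):
    """Apply user selections/modifications (pixel, label) on top of the
    displayed suggestions to produce the final annotation map."""
    out = displayed.copy()
    for (y, x), label in user_edits.items():
        out[y, x] = label
    return out

# Example with two toy "models" that label dark vs. bright pixels
image = np.random.rand(8, 8)
models = [lambda im: (im > 0.5).astype(int), lambda im: (im > 0.3).astype(int) * 2]
shown = pick_for_display(candidate_maps(image, models))
print(final_map(shown, {(0, 0): 3}))
```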

System And Method For Video Processing

The present invention relates to a system for video processing, wherein the system (10) comprises an input unit (11), a processing unit (12), and an output unit (13). The input unit (11) inputs a video which includes one or more events, each defining a boundary of a respective scene within the video. The processing unit (12) processes the video to identify the one or more events and insert a cue point at each boundary. The output unit (13) outputs the processed video. A method (20) for video processing is also disclosed.
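
One simple way to realize the event-detection step is a frame-difference heuristic, sketched below; the threshold and the representation of cue points as timestamps are assumptions, and a production system would write cue points into container metadata rather than return a list.

```python
# Minimal sketch of detecting scene-boundary events and recording cue points;
# the frame-difference threshold is an assumption for illustration only.
import numpy as np

def find_cue_points(frames, fps, threshold=30.0):
    """Return timestamps (seconds) where consecutive frames differ strongly,
    treating each such change as a scene-boundary event."""
    cues = []
    for i in range(1, len(frames)):
        diff = np.abs(frames[i].astype(float) - frames[i - 1].astype(float)).mean()
        if diff > threshold:
            cues.append(i / fps)
    return cues

# Example: three "scenes" of flat synthetic frames
frames = [np.full((16, 16), v, dtype=np.uint8) for v in (10,) * 5 + (120,) * 5 + (240,) * 5]
print(find_cue_points(frames, fps=25))
```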

Periocular and audio synthesis of a full face image
11636652 · 2023-04-25

Systems and methods for synthesizing an image of the face by a head-mounted device (HMD) are disclosed. The HMD may not be able to observe a portion of the face. The systems and methods described herein can generate a mapping from a conformation of the portion of the face that is not imaged to a conformation of the portion of the face observed. The HMD can receive an image of a portion of the face and use the mapping to determine a conformation of the portion of the face that is not observed. The HMD can combine the observed and unobserved portions to synthesize a full face image.
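
The mapping between observed and unobserved face conformations could, in the simplest case, be a learned linear regression over paired parameter vectors, as sketched below; the least-squares model and the parameter layout are assumptions, not the HMD's actual mapping.

```python
# Minimal sketch of learning a mapping from the observed (periocular) face
# conformation to the unobserved (lower-face) conformation and using it to
# synthesize a full set of parameters; the linear model is an assumed stand-in.
import numpy as np

def fit_mapping(observed_train, unobserved_train):
    """Fit W so that unobserved ≈ observed @ W (ordinary least squares)."""
    W, *_ = np.linalg.lstsq(observed_train, unobserved_train, rcond=None)
    return W

def synthesize_full_face(observed, W):
    """Predict the unobserved conformation and concatenate it with the
    observed one to form the full-face parameter vector."""
    unobserved = observed @ W
    return np.concatenate([observed, unobserved])

# Example with random paired training data
rng = np.random.default_rng(0)
obs_train, unobs_train = rng.normal(size=(200, 12)), rng.normal(size=(200, 20))
W = fit_mapping(obs_train, unobs_train)
print(synthesize_full_face(rng.normal(size=12), W).shape)  # (32,)
```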

Computer application method and apparatus for generating three-dimensional face model, computer device, and storage medium

A computer application method for generating a three-dimensional (3D) face model is provided, performed by a face model generation model running on a computer device, the method including: obtaining a two-dimensional (2D) face image as input to the face model generation model; extracting global features and local features of the 2D face image; obtaining a 3D face model parameter based on the global features and the local features; and outputting a 3D face model corresponding to the 2D face image based on the 3D face model parameter.
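
A toy version of the pipeline (global features, local features, parameter regression, mesh decoding) is sketched below; every component, from the hand-rolled feature descriptors to the linear morphable-model decoder, is an assumed stand-in for the trained face model generation model.

```python
# Minimal sketch of the described pipeline: extract global and local features
# from a 2D face image, regress 3D face model parameters from the combined
# features, and decode a mesh; all components are illustrative assumptions.
import numpy as np

def global_features(image):
    """Assumed global descriptor: coarse mean intensities over a 4x4 grid."""
    h, w = image.shape[:2]
    return np.array([image[i*h//4:(i+1)*h//4, j*w//4:(j+1)*w//4].mean()
                     for i in range(4) for j in range(4)])

def local_features(image, patches):
    """Assumed local descriptor: mean intensity of patches around landmarks."""
    return np.array([image[y:y+8, x:x+8].mean() for (y, x) in patches])

def regress_parameters(features, W, b):
    """Assumed linear regressor mapping features to 3D face model parameters."""
    return W @ features + b

def decode_mesh(params, mean_shape, basis):
    """Linear morphable model: vertices = mean + basis · params."""
    return mean_shape + basis @ params

# Example with synthetic sizes
img = np.random.rand(64, 64)
feats = np.concatenate([global_features(img), local_features(img, [(10, 10), (30, 40)])])
W, b = np.random.rand(50, feats.size), np.zeros(50)
params = regress_parameters(feats, W, b)
mesh = decode_mesh(params, np.zeros(3000), np.random.rand(3000, 50))
print(mesh.shape)  # flattened (x, y, z) coordinates of 1000 vertices
```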

Method and apparatus with blur estimation

A processor-implemented method with blur estimation includes: acquiring size information of an input image; resizing the input image to generate a target image of a preset size; estimating a blur of the target image; and estimating a blur of the input image based on the size information of the input image.
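
As a concrete reading of the two-stage estimate, the sketch below measures blur on a fixed-size target image with a variance-of-Laplacian proxy and then rescales the estimate using the input size; both the blur measure and the linear size correction are assumptions rather than the claimed method.

```python
# Minimal sketch of the two-stage estimate: estimate blur on a fixed-size
# target image, then rescale the estimate using the input size information.
import numpy as np

def laplacian_variance(img):
    """Sharpness proxy: variance of a discrete Laplacian (higher = sharper)."""
    lap = (-4 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def estimate_input_blur(image, preset=256):
    """Resize to the preset size (nearest-neighbor for simplicity), estimate
    blur there, then scale by the ratio of input to preset resolution."""
    h, w = image.shape
    ys = (np.arange(preset) * h / preset).astype(int)
    xs = (np.arange(preset) * w / preset).astype(int)
    target = image[np.ix_(ys, xs)]
    target_blur = 1.0 / (laplacian_variance(target) + 1e-8)  # larger = blurrier
    return target_blur * (max(h, w) / preset)                # size correction

print(estimate_input_blur(np.random.rand(720, 1280)))
```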

Optical data exchange while preserving social distancing

For scanning optical patterns, such as two-dimensional QR codes, with a mobile device at increased distances, a first image is acquired. A region of interest likely containing the optical pattern in the first image is identified. The mobile device then zooms in on the region of interest and a second image is acquired. The optical pattern is then decoded using the second image.
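
The acquire–locate–zoom–decode loop can be sketched with OpenCV's QRCodeDetector standing in for the pattern detector and decoder; the capture_frame and set_zoom callables below are hypothetical camera hooks, not part of the described method.

```python
# Minimal sketch of the two-step scan: find a likely region of interest in a
# first frame, zoom the camera in on it, then decode from a second frame.
import cv2
import numpy as np

def scan_at_distance(capture_frame, set_zoom):
    """capture_frame() -> grayscale image; set_zoom(factor) -> None (assumed hooks)."""
    detector = cv2.QRCodeDetector()

    first = capture_frame()
    found, points = detector.detect(first)          # locate region of interest
    if not found:
        return None

    # Zoom so the detected region fills most of the view, then re-capture.
    pts = points.reshape(-1, 2)
    w = pts[:, 0].max() - pts[:, 0].min()
    h = pts[:, 1].max() - pts[:, 1].min()
    zoom = min(first.shape[1] / max(w, 1.0), first.shape[0] / max(h, 1.0))
    set_zoom(zoom)
    second = capture_frame()

    data, _, _ = detector.detectAndDecode(second)   # decode from the zoomed frame
    return data or None
```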