G06V40/168

Information processing apparatus, information processing method, and program
11710347 · 2023-07-25 ·

An information processing apparatus (100) includes an acquisition unit (122) and a display processing unit (130). The acquisition unit (122) acquires: a first image from which person region feature information is extracted for a region of a retrieval target person other than the face; a second image for which a collation result with the person region feature information indicates a match and in which a facial region is detected; and result information indicating a collation result between face information stored in a storage unit and face information extracted from the facial region. The display processing unit (130) displays at least two of the first image, the second image, and the result information on an identical screen.
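The two-stage flow the abstract describes (match on whole-body features first, then verify by face collation) can be sketched in Python. The feature representation, similarity measure, and thresholds below are illustrative assumptions, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    image_id: str
    body_feature: list   # person-region (non-face) feature vector
    face_feature: list   # feature extracted from the detected facial region

def similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

def collate(query_body, stored_face, candidates, body_th=0.8, face_th=0.9):
    """Return (second_image_id, face_match_result) for display on one screen."""
    for c in candidates:
        if similarity(query_body, c.body_feature) >= body_th:   # stage 1: body match
            face_match = similarity(stored_face, c.face_feature) >= face_th  # stage 2
            return c.image_id, face_match
    return None, False
```

The display unit would then show the query image, the matched second image, and the boolean collation result together.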

SYSTEM AND METHOD FOR AUTOMATIC DRIVER IDENTIFICATION
20180012092 · 2018-01-11 ·

A method for driver identification including: recording a first image of a vehicle driver; extracting a set of values for a set of facial features of the vehicle driver from the first image; determining a filtering parameter; selecting a cluster of driver identifiers from a set of clusters based on the filtering parameter; computing a probability that the set of values is associated with each driver identifier of the cluster; determining, at the vehicle sensor system, driving characterization data for the driving session; and, in response to the computed probability exceeding a first threshold probability, determining that the set of values corresponds to one driver identifier within the selected cluster and associating the driving characterization data with that driver identifier.
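The claimed filter-then-score flow can be sketched as follows. The cluster keying, the toy distance-based probability, and the threshold value are all assumptions for illustration; the patent does not specify them:

```python
def identify_driver(face_values, clusters, filtering_param, threshold=0.7):
    """clusters: {filter_key: {driver_id: reference_feature_values}}.

    Selects one cluster by the filtering parameter, scores every driver
    identifier in it, and returns a match only above the threshold.
    """
    cluster = clusters.get(filtering_param, {})
    best_id, best_p = None, 0.0
    for driver_id, ref in cluster.items():
        # Toy probability: inverse of mean absolute feature distance.
        dist = sum(abs(a - b) for a, b in zip(face_values, ref)) / len(ref)
        p = 1.0 / (1.0 + dist)
        if p > best_p:
            best_id, best_p = driver_id, p
    return (best_id, best_p) if best_p > threshold else (None, best_p)
```

Driving characterization data recorded during the session would then be associated with the returned driver identifier.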

SYSTEM, METHOD, AND COMPUTER PROGRAM FOR CAPTURING AN IMAGE WITH CORRECT SKIN TONE EXPOSURE
20230005294 · 2023-01-05 ·

A system and method are provided for capturing an image with correct skin tone exposure. In use, one or more faces having a threshold skin tone are detected within a scene. Based on the detected faces, a high dynamic range (HDR) capture mode is enabled, and an image of the scene is then captured using the HDR capture mode.
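The decision rule reduces to a simple gate: if any detected face's skin-tone luminance crosses a threshold, switch the capture pipeline to HDR. The luminance scale and threshold below are illustrative assumptions; the face detector is treated as an external input:

```python
def should_enable_hdr(face_luminances, threshold=90):
    """face_luminances: mean luminance (0-255) of each detected face region.

    Enable HDR when any face is darker than the threshold, so shadowed
    skin tones are not crushed by a single standard exposure.
    """
    return any(lum < threshold for lum in face_luminances)

def capture(face_luminances):
    mode = "HDR" if should_enable_hdr(face_luminances) else "standard"
    return f"captured scene in {mode} mode"
```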

METHOD AND SYSTEM FOR DETECTING CHILDREN'S SITTING POSTURE BASED ON FACE RECOGNITION OF CHILDREN
20230237694 · 2023-07-27 ·

A method and system for detecting children’s sitting posture based on face recognition of children are provided in the disclosure, which relates to the technical field of children’s sitting posture correction. By automatically identifying children’s ages, real-time detection and intelligent supervision can be performed on children’s sitting posture according to the different ages of the children. According to the disclosure, the human bone relation information can be obtained simply by calculating bone position information of several key parts of the human body, such as the eyes, shoulders, nose, legs, knees and feet, and the sitting posture of the human body can then be determined by comparing the human bone relation information with corresponding set thresholds. It is not necessary to carry out separate model training on sitting postures, only to measure the key data, which greatly reduces the time required for sitting posture detection and improves its accuracy.
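The threshold-comparison idea (geometric relations between keypoints checked against age-specific limits, with no posture model training) can be sketched like this. The keypoint names, pixel units, and threshold values are illustrative assumptions:

```python
# Age-specific limits: max allowed shoulder tilt (px) and minimum
# eye height above the desk surface (px). Values are placeholders.
AGE_THRESHOLDS = {
    "under_6": {"tilt": 15, "eye_height": 250},
    "over_6":  {"tilt": 10, "eye_height": 300},
}

def sitting_posture_ok(keypoints, desk_y, age_group):
    """keypoints: {name: (x, y)} in image coordinates, y grows downward."""
    th = AGE_THRESHOLDS[age_group]
    # Shoulder level difference indicates leaning to one side.
    tilt = abs(keypoints["left_shoulder"][1] - keypoints["right_shoulder"][1])
    # Distance from eye level up to the desk surface indicates slouching.
    eye_height = desk_y - keypoints["left_eye"][1]
    return tilt <= th["tilt"] and eye_height >= th["eye_height"]
```

A face-recognition stage would supply the age group; the check itself needs only the measured keypoints.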

FACIAL STRUCTURE ESTIMATION APPARATUS, METHOD FOR ESTIMATING FACIAL STRUCTURE, AND PROGRAM FOR ESTIMATING FACIAL STRUCTURE
20230237834 · 2023-07-27 ·

A facial structure estimation apparatus includes a controller 13. The controller 13 stores, as learning data, a parameter indicating a relationship between a face image and the structure of that face image. The controller 13 learns a relationship between a first face image and a facial structure corresponding to the first face image, learns a relationship between a second face image of a certain person and a facial structure corresponding to the second face image, and further learns a relationship between a second face image of the certain person detected using infrared light and the corresponding facial structure.

OBJECT TRACKING METHOD AND OBJECT TRACKING DEVICE
20230237835 · 2023-07-27 ·

The present disclosure provides an object tracking method and an object tracking device. The method includes: acquiring a human-face region of an image frame so as to determine a human-body region; extracting a human-body feature from the human-body region and determining whether any of a plurality of historical object trajectories matches the human-body feature; in response to one of the plurality of historical object trajectories matching the human-body feature, updating an age of the human-body feature to a preset value; and in response to none of the plurality of historical object trajectories matching the human-body feature, adding an object trajectory corresponding to the human-body feature to the plurality of historical object trajectories. A better tracking effect may thus be achieved.
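The match-or-create update rule can be sketched directly. The trajectory representation, the toy distance-based matching test, and the constants are assumptions; real trackers would use learned re-identification embeddings:

```python
PRESET_AGE = 0    # age value assigned on a successful match
MATCH_DIST = 0.5  # max per-dimension feature distance to count as a match

def update_trajectories(trajectories, body_feature):
    """trajectories: list of {'feature': [...], 'age': int} entries.

    Match the frame's human-body feature against historical trajectories;
    on a match, refresh that trajectory's feature and reset its age to the
    preset value; otherwise start a new trajectory.
    """
    for traj in trajectories:
        dist = max(abs(a - b) for a, b in zip(traj["feature"], body_feature))
        if dist <= MATCH_DIST:
            traj["feature"] = body_feature
            traj["age"] = PRESET_AGE
            return trajectories
    trajectories.append({"feature": body_feature, "age": PRESET_AGE})
    return trajectories
```

Trajectories whose age grows past some limit without a match would typically be retired, though the abstract does not detail that step.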

OBJECT RECOGNITION METHOD AND APPARATUS, ELECTRONIC DEVICE AND READABLE STORAGE MEDIUM
20230005296 · 2023-01-05 ·

Provided is an object recognition method which includes: obtaining a first visible-light image acquired by a first camera device and a second visible-light image acquired by a second camera device; performing exposure processing on the first visible-light image according to the luminance information of the bright area image of the first visible-light image, and performing exposure processing on the second visible-light image according to the luminance information of the dark area images of the first visible-light image and/or the second visible-light image, where a dark area image is an area image having a luminance value less than or equal to a preset value; and performing target object detection on the two exposure-processed visible-light images and recognizing and verifying a target object according to the detection result.
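The complementary-exposure idea (one image exposed for its bright areas, the other for the dark areas at or below a preset luminance) can be sketched with per-image gain factors. The gain formula and constants are illustrative placeholders, not the patented processing:

```python
PRESET = 128  # luminance split between "bright" and "dark" area pixels

def mean(values):
    return sum(values) / len(values)

def exposure_gains(img1, img2, target=110):
    """img1, img2: flat lists of pixel luminances (0-255).

    Returns (gain1, gain2): gain1 exposes img1 for its bright areas,
    gain2 exposes img2 for the dark areas of both images.
    """
    bright = [p for p in img1 if p > PRESET]
    dark = [p for p in img1 + img2 if p <= PRESET]
    gain1 = target / mean(bright) if bright else 1.0
    gain2 = target / mean(dark) if dark else 1.0
    return gain1, gain2
```

Detection then runs on both gain-adjusted images, so highlights and shadows each get a usable exposure.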

Method and apparatus of adaptive infrared projection control

A processor or control circuit of an apparatus receives data of an image based on sensing by one or more image sensors. The processor or control circuit also detects a region of interest (ROI) in the image. The processor or control circuit then adaptively controls a light projector with respect to projecting light toward the ROI.
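One plausible reading of "adaptively controls a light projector with respect to the ROI" is mapping ROI properties to a projector output level. The mapping below (darker or smaller ROIs receive more light) is purely an illustrative assumption:

```python
def projector_level(roi_area, roi_brightness, frame_area):
    """Return a projector output level in [0, 1] from ROI properties.

    roi_area / frame_area: pixel areas; roi_brightness: mean ROI
    luminance on a 0-255 scale. No ROI means the projector stays off.
    """
    if roi_area == 0:
        return 0.0
    coverage = roi_area / frame_area
    # Toy rule: boost illumination for dim ROIs, back off for large ones.
    level = (1.0 - roi_brightness / 255) * (1.0 - coverage)
    return max(0.0, min(1.0, level))
```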

Systems and methods for automated makeup application
11568675 · 2023-01-31 ·

Systems and methods for automated makeup application allow a user to select and apply desired makeup styles to the user's face. The systems and methods include a computer application with a graphical user interface that allows selection of a look from a plurality of preconfigured looks. A camera coupled with a robotic arm records a face map and color coding and sends that data to be stored in a virtual server database. The application calculates the formula quantity, and a pump extracts the desired formula amounts from the appropriate formula cartridges and releases them into reservoirs on the robotic arm's head. An airbrush compressor mixes the formula, and plug triggers release one of several airbrush nozzles to begin spraying the user's face with the formula. A cleaning mechanism is provided between makeup applications and after the final application.

Electronic device and controlling method thereof

An electronic device and a controlling method thereof are provided. A controlling method of an electronic device according to the disclosure includes: performing first learning for a neural network model for acquiring a video sequence including a talking head of a random user, based on a plurality of learning video sequences including talking heads of a plurality of users; performing second learning for fine-tuning the neural network model, based on at least one image including a talking head of a first user different from the plurality of users and first landmark information included in the at least one image; and acquiring a first video sequence including the talking head of the first user, based on the at least one image and pre-stored second landmark information, using the neural network model for which the first learning and the second learning were performed.
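The two-phase training structure (meta-train on many users, then fine-tune on a few images of one new user) can be shown schematically. The "model" here is a trivial mean-embedding stand-in, not a neural network; the learning rate and embedding shapes are assumptions:

```python
def first_learning(user_sequences):
    """Phase 1: meta-train over many users' video sequences.

    user_sequences: list of sequences, each a list of frame embeddings.
    Stand-in model: the mean embedding over all frames of all users.
    """
    flat = [frame for seq in user_sequences for frame in seq]
    dim = len(flat[0])
    return [sum(f[i] for f in flat) / len(flat) for i in range(dim)]

def second_learning(base_model, new_user_frames, lr=0.5):
    """Phase 2: fine-tune toward a few frames of one new user."""
    dim = len(base_model)
    target = [sum(f[i] for f in new_user_frames) / len(new_user_frames)
              for i in range(dim)]
    # Move the base model part-way toward the new user's statistics.
    return [b + lr * (t - b) for b, t in zip(base_model, target)]
```

The fine-tuned model would then drive generation of the new user's talking-head sequence from the stored landmark information.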