Patent classifications
G06V40/164
Video analysis for obtaining optical properties of a face
Disclosed is a system and method for obtaining optical properties of skin on a human face through face video analysis. Video of the face is captured, landmarks on the face are detected and tracked, regions of interest are defined and tracked using the landmarks, initial measurements/optical properties are obtained, the time-based video is transformed into an angular domain, and additional measurements/optical properties are obtained. These optical properties can be measured from video in real-time or from video that has been pre-recorded.
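The landmark-to-measurement step above could look like the following minimal sketch, which assumes landmark coordinates are already available from a tracker and uses mean intensity within a landmark-bounded region as a stand-in for an optical property; the function name and bounding-box ROI are illustrative, not taken from the patent.

```python
import numpy as np

def roi_mean_intensity(frame, landmarks, roi_indices):
    """Average pixel intensity inside a region-of-interest defined by
    the bounding box of a subset of tracked face landmarks.

    frame       -- HxW grayscale image as a NumPy array
    landmarks   -- (N, 2) array of (x, y) landmark coordinates
    roi_indices -- landmark indices whose bounding box defines the ROI
    """
    pts = landmarks[roi_indices]
    x0, y0 = pts.min(axis=0).astype(int)
    x1, y1 = pts.max(axis=0).astype(int)
    roi = frame[y0:y1 + 1, x0:x1 + 1]
    return float(roi.mean())
```

A real system would repeat this per frame as the landmarks move, yielding a time series of measurements that can then be transformed into the angular domain.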
Systems and methods to adapt and optimize human-machine interaction using multimodal user-feedback
Systems and methods for human-machine interaction. An adaptive behavioral control system of a human-machine interaction system controls an interaction sub-system to perform a plurality of actions for a first action type in accordance with a computer-behavioral policy, each action being a different alternative action for the action type. The adaptive behavioral control system detects a human reaction of an interaction participant to the performance of each action of the first action type from data received from a human reaction detection sub-system. The adaptive behavioral control system stores information indicating each detected human reaction in association with information identifying the associated action. In a case where the stored information indicating detected human reactions for the first action type satisfies an update condition, the adaptive behavioral control system updates the computer-behavioral policy for the first action type.
FACIAL STRUCTURE ESTIMATION APPARATUS, METHOD FOR ESTIMATING FACIAL STRUCTURE, AND PROGRAM FOR ESTIMATING FACIAL STRUCTURE
A facial structure estimation apparatus includes a controller 13. The controller 13 stores, as learning data, a parameter indicating a relationship between a face image and a structure of the face image. The controller 13 learns a relationship between a first face image and a facial structure corresponding to the first face image, and a relationship between a second face image of a certain person, detected using infrared light, and a facial structure corresponding to the second face image.
Face recognition method, terminal device using the same, and computer readable storage medium
A backlight face recognition method, a terminal device using the same, and a computer readable storage medium are provided. The method includes: performing face detection on each original face image in an original face image sample set to obtain a face frame for that image; cropping the corresponding original face images from the sample set and obtaining, for each cropped image, a new face image that also contains surrounding background pixels; preprocessing all of the new face images to obtain a backlight sample set and a normal-lighting sample set; and training a convolutional neural network on the two sample sets until the convolutional neural network reaches a preset stopping condition. The trained convolutional neural network improves the accuracy of face recognition against complex backgrounds and strong light.
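One simple way the preprocessing step could separate backlight from normal-lighting samples is by mean brightness, sketched below; the threshold value and function name are hypothetical, since the patent does not specify the partitioning criterion.

```python
import numpy as np

def split_by_backlight(images, threshold=60.0):
    """Partition preprocessed face crops into a backlight sample set and
    a normal-lighting sample set using mean brightness as a crude proxy.

    images    -- list of HxW grayscale arrays with pixel values 0-255
    threshold -- mean-brightness cutoff below which a face is treated
                 as backlit (hypothetical value, not from the patent)
    """
    backlight, normal = [], []
    for img in images:
        (backlight if img.mean() < threshold else normal).append(img)
    return backlight, normal
```

The two resulting sets would then serve as the labeled training data for the convolutional neural network.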
HUMAN ABNORMAL BEHAVIOR RESPONSE METHOD AND MOBILITY AID ROBOT USING THE SAME
Response methods to human abnormal behaviors for a mobility aid robot having a user-facing camera are disclosed. While aiding a human to move, the robot detects the human's face through the camera, compares the initial size of the face with its current size, determines that the human is exhibiting abnormal behavior in response to the current size of the face being smaller than the initial size, and performs one or more responses corresponding to the abnormal behavior, where the responses include slowing down the robot.
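The size-comparison logic reduces to a few lines, sketched here under the assumption that face size is measured as face-frame area in pixels; the slow-down factor is a hypothetical parameter the patent does not specify.

```python
def respond_to_face_size(initial_size, current_size, speed, slow_factor=0.5):
    """If the tracked face appears smaller than when tracking began,
    treat it as abnormal behavior (e.g. the user falling behind) and
    slow the robot.

    Sizes are face-frame areas in pixels; slow_factor is a
    hypothetical parameter, not taken from the patent.
    """
    abnormal = current_size < initial_size
    new_speed = speed * slow_factor if abnormal else speed
    return abnormal, new_speed
```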
ADAPTING AUTOMATED ASSISTANT BASED ON DETECTED MOUTH MOVEMENT AND/OR GAZE
Adapting an automated assistant based on detecting: movement of a mouth of a user; and/or that a gaze of the user is directed at an assistant device that provides an automated assistant interface (graphical and/or audible) of the automated assistant. The detecting of the mouth movement and/or the directed gaze can be based on processing of vision data from one or more vision components associated with the assistant device, such as a camera incorporated in the assistant device. The mouth movement that is detected can be movement that is indicative of a user (to whom the mouth belongs) speaking.
Method and device for sending information
Disclosed in the embodiments of the present disclosure are a method and device for sending information. A particular embodiment of the method comprises: acquiring user input information entered at a user terminal; determining, from a target expression image set, at least one expression image that matches the user input information and is to be sent to the user terminal, together with a presentation order for the at least one expression image; and sending presentation information to the user terminal in response to determining that, during a historical time period, the user terminal has presented the at least one expression image in the presentation order no more than a target number of times, wherein the presentation information instructs the user terminal to present the at least one expression image in the presentation order.
Fake video detection
Detection of whether a video is a fake video derived from an original video and altered is undertaken using both image analysis and frequency domain analysis of one or more frames of the video. The analysis may be implemented using neural networks.
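As an illustration of the frequency-domain side of such an analysis, the sketch below computes the fraction of a frame's spectral energy that lies outside a central low-frequency region; GAN-generated or re-encoded frames often show atypical high-frequency spectra, so a feature like this could be fed to a classifier. The cutoff fraction and function name are hypothetical; the patent itself relies on neural networks rather than this specific feature.

```python
import numpy as np

def high_freq_energy_ratio(frame, cutoff=0.25):
    """Fraction of a frame's 2-D spectral energy outside a centered
    low-frequency square (after fftshift, low frequencies are at the
    center of the spectrum)."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(frame))) ** 2
    h, w = spectrum.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    low = spectrum[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw]
    total = spectrum.sum()
    return float((total - low.sum()) / total) if total else 0.0
```

A constant frame concentrates all energy at the DC component, giving a ratio of zero, while sharp artificial edges push energy outward.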
APPARATUS, METHOD, AND PROGRAM PRODUCT FOR ENHANCING PRIVACY VIA FACIAL FEATURE OBFUSCATION
There is disclosed an information handling system including a camera, which may include an input lens; an image signal processor, which may include circuitry that converts an analog image received at the input lens into a digital image data structure; a device interface, which may include circuitry that provides the image data structure to an information handling device; and a modifier circuit that modifies human features of the digital image data structure before it is provided to the device interface.
ELECTRONIC DEVICE AND OPERATING METHOD OF ELECTRONIC DEVICE FOR CORRECTING ERROR DURING GAZE DIRECTION RECOGNITION
Disclosed is an electronic device for correcting an error during recognition of a gaze direction. The electronic device includes a camera module with a camera configured to generate an image frame, a memory, and a processor. The processor may be configured to: perform face recognition on the image frame; extract a feature point of the face in response to the face being recognized; recognize the direction of the face in response to the feature point being extracted; generate gaze direction information by recognizing the gaze direction of the face in response to the face direction being recognized; generate filtered gaze direction information by filtering the gaze direction information in response to the gaze direction being recognized; and store the filtered gaze direction information in the memory.
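The filtering step could be as simple as an exponential moving average over per-frame gaze angles, sketched below; the patent does not name a specific filter, so the smoothing constant and function are illustrative assumptions.

```python
def ema_filter(gaze_angles, alpha=0.3):
    """Smooth a sequence of gaze-direction angles (in degrees) with an
    exponential moving average, one simple way to suppress per-frame
    recognition jitter. alpha is a hypothetical smoothing constant."""
    filtered = []
    state = None
    for angle in gaze_angles:
        state = angle if state is None else alpha * angle + (1 - alpha) * state
        filtered.append(state)
    return filtered
```

Higher alpha tracks the raw gaze estimate more closely; lower alpha suppresses more jitter at the cost of lag.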