Patent classifications: G06V40/165
REALISTIC HEAD TURNS AND FACE ANIMATION SYNTHESIS ON MOBILE DEVICE
Provided are systems and methods for realistic head turns and face animation synthesis. An example method includes: receiving a source frame of a source video, where the source frame includes a head and a face of a source actor; generating source pose parameters corresponding to a pose of the head and a facial expression of the source actor; receiving a target image including a target head and a target face of a target person; determining target identity information associated with the target head and the target face of the target person; replacing source identity information in the source pose parameters with the target identity information to obtain further source pose parameters; and generating an output frame of an output video that includes a modified image of the target face and the target head adopting the pose of the head and the facial expression of the source actor.
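A minimal sketch of the identity-replacement step, assuming the pose parameters decompose into separate identity, head-pose, and expression components (the PoseParameters layout and retarget function are illustrative assumptions, not the patented model):

    import numpy as np
    from dataclasses import dataclass

    @dataclass
    class PoseParameters:
        # Hypothetical split of per-frame parameters into a static identity
        # part and the motion parts that change from frame to frame.
        identity: np.ndarray    # face shape/proportions of the person
        pose: np.ndarray        # head rotation and translation
        expression: np.ndarray  # facial expression coefficients

    def retarget(source: PoseParameters, target_identity: np.ndarray) -> PoseParameters:
        # Keep the source actor's head pose and expression for this frame,
        # but replace the identity information with the target person's.
        return PoseParameters(identity=target_identity.copy(),
                              pose=source.pose.copy(),
                              expression=source.expression.copy())

Applied frame by frame, the retargeted parameters would then drive a renderer that produces the output video of the target person performing the source actor's motion.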
Method for generating special effect program file package, method for generating special effect, electronic device, and storage medium
A method for generating a special effect program file package and a method for generating a special effect are provided. The method for generating a special effect program file package includes: importing a sub-material; obtaining a parameter value of a playback parameter of the sub-material and establishing a correspondence between a display position of the sub-material and at least one predetermined key point; and generating a special effect program file package according to the sub-material, the correspondence and the parameter value. The method for generating a special effect includes: importing a special effect program file package; obtaining a parameter value of a playback parameter of a sub-material in the special effect program file package; performing key point detection on a video image; and generating a special effect of the sub-material on the video image based on the detected key point and the parameter value of the playback parameter.
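A rough sketch of how the two methods could fit together, with a hypothetical package layout (the config.json schema, the anchor_key_point field, and the effect_placement helper are assumptions):

    import json
    import zipfile

    def build_effect_package(path, sub_material_png, playback_params, anchor_key_point):
        # Bundle the imported sub-material with its playback parameter values
        # and its correspondence to a predetermined face key point.
        config = {
            "sub_material": "sticker.png",
            "playback": playback_params,          # e.g. {"loop": True, "fps": 25}
            "anchor_key_point": anchor_key_point, # e.g. "nose_tip"
        }
        with zipfile.ZipFile(path, "w") as pkg:
            pkg.write(sub_material_png, "sticker.png")
            pkg.writestr("config.json", json.dumps(config))

    def effect_placement(detected_key_points, config):
        # At playback time, key point detection on the video image yields a
        # mapping from key point names to coordinates; the correspondence
        # stored in the package tells us where to draw the sub-material.
        return detected_key_points[config["anchor_key_point"]]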
Occlusion detection for facial recognition processes
Occlusion of facial features may be detected and assessed in an image captured by a camera on a device. Landmark heat maps may be used to estimate the location of landmarks such as the eyes, mouth, and nose of a user's face in the captured image. An occlusion heat map may also be generated for the captured image. The occlusion heat map may include values representing the amount of occlusion in regions of the face. The estimated locations of the eyes, mouth, and nose may be used in combination with the occlusion heat map to assess occlusion scores for the landmarks. The occlusion scores for the landmarks may be used to control one or more operations of the device.
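A rough sketch of the scoring logic, under the assumption that landmark positions are heat-map peaks and occlusion is averaged over a small window around each peak (the window size and threshold are illustrative):

    import numpy as np

    def landmark_location(heat_map):
        # The peak of a landmark heat map is the estimated landmark position.
        return np.unravel_index(np.argmax(heat_map), heat_map.shape)

    def occlusion_score(occlusion_map, location, radius=4):
        # Average occlusion value in a small window around the landmark.
        r, c = location
        window = occlusion_map[max(r - radius, 0):r + radius + 1,
                               max(c - radius, 0):c + radius + 1]
        return float(window.mean())

    def recognition_allowed(scores, threshold=0.5):
        # Example device-control decision: proceed with facial recognition
        # only if no landmark (eyes, mouth, nose) is too heavily occluded.
        return all(s < threshold for s in scores.values())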
Augmented reality experiences of color palettes in a messaging system
The subject technology receives image data including a representation of a physical item. The subject technology analyzes the image data to determine an object corresponding to the physical item. The subject technology identifies a set of colors corresponding to a set of regions of the determined object. The subject technology analyzes second image data to detect a second object corresponding to a representation of a particular body part of a user. The subject technology generates augmented reality content based at least in part on the identified set of colors and the detected second object. The subject technology causes display, at a client device, of the augmented reality content applied to the detected second object.
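A simplified sketch of the palette extraction and application steps, assuming rectangular object regions and a boolean mask for the detected body part (both functions are illustrative, not the messaging system's actual pipeline):

    import numpy as np

    def region_palette(image, regions):
        # Mean colour of each region of the detected object; regions are
        # hypothetical (row, col, height, width) boxes over an HxWx3 image.
        return [image[r:r + h, c:c + w].reshape(-1, 3).mean(axis=0)
                for (r, c, h, w) in regions]

    def apply_palette_color(image, body_part_mask, color, alpha=0.6):
        # Blend one palette colour over the detected body part to preview
        # the physical item's colour as augmented reality content.
        out = image.astype(float)
        out[body_part_mask] = (1 - alpha) * out[body_part_mask] + alpha * np.asarray(color)
        return out.astype(np.uint8)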
ENTITY IDENTIFICATION AND AUTHENTICATION USING A COMBINATION OF INDEPENDENT IDENTIFICATION TECHNOLOGIES OR PLATFORMS AND APPLICATIONS THEREOF
Techniques are described for identifying and/or authenticating entities using a combination of independent identification technologies and/or platforms. In one embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a reception component that receives, from an entity, a request to authorize the entity based on image data of a person, wherein the request comprises the image data, and an authentication component that determines whether the person included in the image data corresponds to the entity.
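A minimal sketch of combining independent checks, assuming an injected face-matching function and a list of external identification platforms (the names and the all-checks-must-pass policy are assumptions):

    from dataclasses import dataclass

    @dataclass
    class AuthRequest:
        entity_id: str     # the entity the requester claims to be
        image_data: bytes  # image of the person, included with the request

    def authenticate(request, face_match_score, id_platforms, threshold=0.9):
        # The reception component hands the request to this authentication
        # step, which requires the face match and every independent
        # identification platform to agree before authorizing the entity.
        if face_match_score(request.image_data, request.entity_id) < threshold:
            return False
        return all(platform.verify(request.entity_id) for platform in id_platforms)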
HEAD MOUNTED DISPLAYS WITH AN ADJUSTABLE LAYER
In example implementations, an apparatus is provided. The apparatus includes an eye mount, an adjustable layer, and a foam layer. A first side of the eye mount is to be coupled to a display portion of a head mounted display. The adjustable layer is coupled to an outer perimeter of a second side of the eye mount. The foam layer is coupled to the adjustable layer.
Method and apparatus for processing image
Embodiments of the present disclosure disclose a method and apparatus for processing an image. A specific embodiment of the method includes: acquiring a feature map of a target image, where the target image contains a target object; determining a local feature map of a target size in the feature map; combining features of different channels in the local feature map to obtain a local texture feature map; and obtaining location information of the target object based on the local texture feature map.
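A toy version of the channel-combination step, assuming channels-last feature maps and a simple mean over channels as the combination (the actual combination and location computation are model-specific):

    import numpy as np

    def local_texture_map(feature_map, top_left, size):
        # Crop a local feature map of the target size, then combine the
        # features of its channels into one local texture feature map.
        r, c = top_left
        h, w = size
        local = feature_map[r:r + h, c:c + w, :]  # (h, w, channels)
        return local.mean(axis=-1)                # combine across channels

    def object_location(texture_map, top_left):
        # Peak response in the local texture map, mapped back to the
        # coordinates of the target image.
        r, c = np.unravel_index(np.argmax(texture_map), texture_map.shape)
        return top_left[0] + r, top_left[1] + c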
System and method of utilizing computer-aided identification with medical procedures
The disclosure provides a system that may receive an identification of a first patient; may receive a first template that includes first multiple locations associated with a face of the first patient and associated with the identification of the first patient; may determine second multiple locations associated with a face of a current patient; may determine a second template of the face of the current patient based at least on the second multiple locations associated with the face of the current patient; may determine if the first template matches the second template; if the first template matches the second template, may provide an indication that the current patient has been correctly identified as the first patient; and if the first template does not match the second template, may provide an indication that the current patient has not been identified.
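A sketch of the template comparison, assuming a template is simply a centered array of facial landmark locations and a match means the mean landmark distance falls within a tolerance (both the representation and the tolerance are assumptions):

    import numpy as np

    def face_template(locations):
        # A template is the stacked facial locations, centred so the
        # comparison does not depend on where the patient is positioned.
        pts = np.asarray(locations, dtype=float)
        return pts - pts.mean(axis=0)

    def templates_match(stored, current, tolerance=3.0):
        # Mean distance between corresponding locations of the stored
        # template and the current patient's template.
        return float(np.linalg.norm(stored - current, axis=1).mean()) <= tolerance

If templates_match returns True, the system would indicate the current patient has been correctly identified as the first patient; otherwise it would indicate the mismatch before the procedure continues.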
Geometrically constrained, unsupervised training of convolutional autoencoders for extraction of eye landmarks
The disclosure relates to systems, methods and programs for geometrically constrained, unsupervised training of convolutional autoencoders on unlabeled images for extracting eye landmarks. Disclosed systems for unsupervised deep learning of gaze estimation from eye image data are implementable in a computerized system. Disclosed methods include capturing an unlabeled image comprising the eye region of a user, and training a plurality of convolutional autoencoders on that image using an initial geometrically regularized loss function to determine a plurality of eye landmarks.
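One guess at what a geometrically regularized loss might look like, assuming the constraint pulls predicted landmarks toward a circle around the eye center (the specific prior and weighting are assumptions, not the disclosed loss function):

    import numpy as np

    def geometric_regularizer(landmarks, eye_center, radius):
        # Hypothetical geometric constraint: predicted eye landmarks should
        # lie near a circle of the given radius around the eye center.
        dists = np.linalg.norm(landmarks - eye_center, axis=1)
        return float(((dists - radius) ** 2).mean())

    def regularized_loss(reconstruction, image, landmarks, eye_center, radius, lam=0.1):
        # Unsupervised objective: autoencoder reconstruction error plus the
        # geometric prior, so no landmark labels are required.
        rec = float(((reconstruction - image) ** 2).mean())
        return rec + lam * geometric_regularizer(landmarks, eye_center, radius)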
EYE CENTER LOCALIZATION METHOD AND LOCALIZATION SYSTEM THEREOF
An eye center localization method includes performing an image sketching step, a frontal face generating step, an eye center marking step and a geometric transforming step. The image sketching step is performed to drive a processing unit to sketch a face image from an input image. The frontal face generating step is performed to drive the processing unit to transform the face image into a frontal face image according to a frontal face generating model. The eye center marking step is performed to drive the processing unit to mark frontal eye center position information on the frontal face image. The geometric transforming step is performed to drive the processing unit to calculate two rotating variables between the face image and the frontal face image, and to calculate the eye center position information according to the two rotating variables and the frontal eye center position information.
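A simplified sketch of the geometric transforming step, treating the two rotating variables as yaw and pitch angles and rotating the frontal eye center position back toward the original face image (the real mapping depends on the frontal face generating model):

    import numpy as np

    def rotate_back(frontal_center, yaw, pitch):
        # Lift the frontal eye center to 3D, apply the two estimated
        # rotations, and read off the image-plane coordinates.
        x, y = frontal_center
        p = np.array([x, y, 0.0])
        ry = np.array([[np.cos(yaw), 0.0, np.sin(yaw)],
                       [0.0, 1.0, 0.0],
                       [-np.sin(yaw), 0.0, np.cos(yaw)]])
        rx = np.array([[1.0, 0.0, 0.0],
                       [0.0, np.cos(pitch), -np.sin(pitch)],
                       [0.0, np.sin(pitch), np.cos(pitch)]])
        x_out, y_out, _ = rx @ ry @ p
        return x_out, y_out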