Patent classifications
G06V40/175
SYSTEMS AND METHODS FOR PROVIDING MEDIA RECOMMENDATIONS
Systems and methods are described for presenting identifiers for media assets recommended to users identified using facial recognition. Each of a first and a second user in a vicinity of user equipment is identified, and a first recommended media asset and a second recommended media asset are determined based on respective user profiles of the first user and the second user. A first identifier selectable to access the first recommended media asset and a second identifier selectable to access the second recommended media asset are generated for display, and the recommended media asset associated with a selected identifier is generated for display.
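The per-user recommendation step described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the helper name `recommend_for_detected_users`, the `genre_weights` profile field, and the catalog layout are all assumptions for the example.

```python
def recommend_for_detected_users(detected_user_ids, profiles, catalog):
    """For each user identified in the vicinity of the equipment, pick the
    highest-scoring media asset according to that user's profile and return
    a selectable identifier for it."""
    identifiers = []
    for user_id in detected_user_ids:
        prefs = profiles[user_id]["genre_weights"]  # e.g. {"drama": 0.9, ...}
        # Score each catalog asset by the user's preference for its genre.
        best = max(catalog, key=lambda asset: prefs.get(asset["genre"], 0.0))
        identifiers.append({"user": user_id,
                            "asset_id": best["id"],
                            "label": best["title"]})
    return identifiers
```

Selecting one of the returned identifiers would then trigger display of the associated asset.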
System, method and apparatus for detecting facial expression for motion capture
A system, method and apparatus for detecting facial expressions based on EMG signals.
Emotion detection
Estimating emotion may include obtaining an image of at least part of a face, and applying, to the image, an expression convolutional neural network (“CNN”) to obtain a latent vector for the image, where the expression CNN is trained from a plurality of pairs each comprising a facial image and a 3D mesh representation corresponding to the facial image. Estimating emotion may further include comparing the latent vector for the image to a plurality of previously processed latent vectors associated with known emotion types to estimate an emotion type for the image.
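The comparison step above (matching a latent vector against previously processed latent vectors with known emotion labels) can be sketched as a nearest-neighbour lookup. This is a simplified illustration under the assumption that cosine similarity is the comparison metric; the abstract does not specify one, and the expression CNN itself is out of scope here.

```python
import numpy as np

def estimate_emotion(latent, known_latents, known_labels):
    """Return the emotion label of the stored latent vector most similar
    (by cosine similarity) to the query latent vector."""
    known = np.asarray(known_latents, dtype=float)
    query = np.asarray(latent, dtype=float)
    sims = known @ query / (np.linalg.norm(known, axis=1) *
                            np.linalg.norm(query))
    return known_labels[int(np.argmax(sims))]
```

In practice the stored latents would come from applying the same expression CNN to images whose emotion types are already known.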
SYSTEMS AND METHODS FOR FACIAL ATTRIBUTE MANIPULATION
Systems and techniques are described for image processing. An imaging system receives an identity image and an attribute image. The identity image depicts a first person having an identity. The attribute image depicts a second person having an attribute, such as a facial feature, an accessory worn by the second person, and/or an expression. The imaging system uses trained machine learning model(s) to generate a combined image based on the identity image and the attribute image. The combined image depicts a virtual person having both the identity of the first person and the attribute of the second person. The imaging system outputs the combined image, for instance by displaying the combined image or sending the combined image to a receiving device. In some examples, the imaging system updates the trained machine learning model(s) based on the combined image.
System and method for recognition and annotation of facial expressions
The innovation disclosed and claimed herein, in aspects thereof, comprises systems and methods of identifying AUs and emotion categories in images. The systems and methods utilize a set of images that include facial images of people. The systems and methods analyze the facial images to determine AUs and facial color due to facial blood-flow variations that are indicative of an emotion category. In aspects, the analysis can include Gabor transforms to determine the AUs, AU intensities and emotion categories. In other aspects, the systems and methods can include color variance analysis to determine the AUs, AU intensities and emotion categories. In further aspects, the analysis can include deep neural networks that are trained to determine the AUs, emotion categories and their intensities.
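The Gabor-transform analysis mentioned above rests on filtering facial patches with oriented Gabor kernels and using the responses as texture features for AU classification. A minimal sketch of one such kernel and its response, with illustrative (not patent-specified) parameter values:

```python
import numpy as np

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
    """Real-valued Gabor kernel: a cosine grating at orientation `theta`
    modulated by a Gaussian envelope."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * x_rot / wavelength)

def gabor_response(patch, kernel):
    """Magnitude of the filter response for one image patch
    (patch and kernel must have the same shape)."""
    return float(abs(np.sum(patch * kernel)))
```

A feature vector for AU classification would typically stack responses from a bank of such kernels at several orientations and wavelengths.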
TECHNIQUE FOR CONTROLLING VIRTUAL IMAGE GENERATION SYSTEM USING EMOTIONAL STATES OF USER
A method of operating a virtual image generation system comprises allowing an end user to interact with a three-dimensional environment comprising at least one virtual object, presenting a stimulus to the end user in the context of the three-dimensional environment, sensing at least one biometric parameter of the end user in response to the presentation of the stimulus, generating biometric data for each of the sensed biometric parameter(s), determining whether the end user is in at least one specific emotional state based on the biometric data for each of the sensed biometric parameter(s), and performing an action discernible to the end user to facilitate a current objective based at least partially on whether the end user is determined to be in the specific emotional state(s).
UTILIZING A MACHINE LEARNING MODEL TRAINED TO DETERMINE SUBTLE POSE DIFFERENTIATIONS TO AUTOMATICALLY CAPTURE DIGITAL IMAGES
The present disclosure describes systems, non-transitory computer-readable media, and methods for utilizing a machine learning model trained to determine subtle pose differentiations to analyze a repository of captured digital images of a particular user to automatically capture digital images portraying the user. For example, the disclosed systems can utilize a convolutional neural network to determine a pose/facial expression similarity metric between a sample digital image from a camera viewfinder stream of a client device and one or more previously captured digital images portraying the user. The disclosed systems can determine that the similarity metric satisfies a similarity threshold, and automatically capture a digital image utilizing a camera device of the client device. Thus, the disclosed systems can automatically and efficiently capture digital images, such as selfies, that accurately match previous digital images portraying a variety of unique facial expressions specific to individual users.
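The capture decision described above reduces to a threshold test on a pose/expression similarity metric. A minimal sketch, assuming cosine similarity over embedding vectors and an illustrative threshold value (the disclosure does not fix either choice):

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.9  # assumed value for illustration

def cosine_similarity(a, b):
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def should_capture(frame_embedding, previous_embeddings,
                   threshold=SIMILARITY_THRESHOLD):
    """Fire the camera when the viewfinder frame's pose/expression
    embedding is close enough to any previously captured image
    of the same user."""
    return any(cosine_similarity(frame_embedding, prev) >= threshold
               for prev in previous_embeddings)
```

In the described system the embeddings would come from a convolutional neural network applied to the viewfinder stream and to the user's repository of captured images.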
Methods and systems for processing image data
A computer-implemented system for processing image data. The system comprises: a sensor operable to capture image data comprising an image of an environment of the sensor; and a processing circuit comprising: a processor; and a computer-readable storage medium comprising computer-readable instructions which, when executed, cause the processor to: receive the image data from the sensor; process the image data using a first path through a neural network to obtain first data, the first path configured to indicate a presence in the environment of one or more objects of a predetermined object type; process the image data using a second path through a neural network to obtain second data, the second path configured to indicate a presence in the environment of one or more object characteristics corresponding to the predetermined object type; and generate output data using the first and second data, wherein the first and second paths are arranged to enable the first and second data to be obtained in parallel.
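The two-path arrangement can be sketched as a shared stem feeding two independent heads whose forward passes do not depend on each other, so they can run in parallel. The weight shapes, activations, and the way the two outputs are combined are all assumptions for this toy example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared stem, then two independent heads (the two "paths").
W_stem = rng.standard_normal((8, 16))
W_presence = rng.standard_normal((16, 1))  # first path: object presence
W_attrib = rng.standard_normal((16, 4))    # second path: object characteristics

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(image_features):
    shared = np.maximum(image_features @ W_stem, 0.0)  # ReLU stem
    presence = sigmoid(shared @ W_presence)   # neither head reads the
    attributes = sigmoid(shared @ W_attrib)   # other's output: parallelizable
    # Combine the two paths: characteristics gated by detected presence.
    return presence, attributes, presence * attributes

presence, attributes, output = forward(rng.standard_normal(8))
```

Because `presence` and `attributes` are computed from `shared` alone, a real implementation could dispatch the two matrix products to separate execution units before combining the results.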
AUGMENTED REALITY SPEECH BALLOON SYSTEM
Disclosed is an augmented reality system to generate and cause display of an augmented reality interface at a client device. Various embodiments may detect speech, identify a source of the speech, transcribe the speech to a text string, generate a speech bubble based on properties of the speech and that includes a presentation of the text string, and cause display of the speech bubble at a location in the augmented reality interface based on the source of the speech.
INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND RECORDING MEDIUM
[Problem] To provide an information processing system, an information processing method, and a recording medium capable of assisting in a search for a moving image using graphs of data that are not obtained from image analysis associated with the moving image.
[Solution] Provided is an information processing system, including: a moving image data acquiring unit configured to acquire moving image data; a communication unit configured to receive sensor data associated with the moving image data and chronological data corresponding to a shooting time of the moving image data; an image signal processing unit configured to perform image analysis on the moving image data and generate image analysis result data; and a control unit configured to generate an interface including the moving image data and graphs of at least two pieces of data among the sensor data, the chronological data, and the image analysis result data.