Patent classifications
G06T13/40
Exercise Method and Equipment
An exercise method and exercise equipment are provided. The exercise method includes: determining an exercise guiding video according to a selected music input/audio signal, wherein the exercise guiding video includes a first exercise guiding video and/or a second exercise guiding video, the first exercise guiding video being a live video automatically generated according to the selected music input/audio signal, and the second exercise guiding video being a video previously recorded according to the selected music input/audio signal; generating CGA and special-effect/animated feedback corresponding to the music information/audio signal and instructions/cuing in the exercise guiding video; playing the exercise guiding video, the CGA, the special-effect/animated feedback and the selected music input/audio signal on a display and computing device; receiving user performance data; and displaying interactive feedback data on the display and computing device according to a result obtained by matching the user performance data with music information/audio signal analyzed from the selected music input/audio signal.
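The matching step at the end of this abstract can be illustrated with a toy sketch (my own illustration, not from the patent): beat timestamps analyzed from the selected music are compared against timestamps of user actions, and the hit ratio becomes the interactive feedback data. The function name and the 0.15-second tolerance are assumptions.

```python
def score_performance(beat_times, user_times, tolerance=0.15):
    # Illustrative matcher (not from the patent): count user actions that
    # land within `tolerance` seconds of an analyzed beat, consuming each
    # beat at most once, and report the hit ratio as feedback data.
    hits = 0
    remaining = list(beat_times)
    for t in user_times:
        match = next((b for b in remaining if abs(b - t) <= tolerance), None)
        if match is not None:
            hits += 1
            remaining.remove(match)
    return hits / len(beat_times) if beat_times else 0.0
```

For example, with beats at 0.5, 1.0, 1.5 and 2.0 seconds and user actions at 0.52, 1.1, 1.48 and 3.0 seconds, three of four beats are hit, giving a score of 0.75.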
ASYMMETRIC FACIAL EXPRESSION RECOGNITION
The present disclosure describes techniques for facial expression recognition. A first loss function may be determined based on a first set of feature vectors associated with a first set of images depicting facial expressions and a first set of labels indicative of the facial expressions. A second loss function may be determined based on a second set of feature vectors associated with a second set of images depicting asymmetric facial expressions and a second set of labels indicative of the asymmetric facial expressions. The first loss function and the second loss function may be used to determine a maximum loss function. The maximum loss function may be applied during training of a model. The trained model may be configured to predict at least one asymmetric facial expression in a subsequently received image.
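The training objective described above, taking the maximum of the two loss functions so that the harder subset (ordinary or asymmetric expressions) dominates each update, can be sketched as follows. The abstract does not name the underlying per-set loss; softmax cross-entropy is an assumption here.

```python
import numpy as np

def cross_entropy(logits, labels):
    # Softmax cross-entropy over a batch; labels are integer class ids.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def maximum_loss(sym_logits, sym_labels, asym_logits, asym_labels):
    # First loss: the set of ordinary facial expressions; second loss:
    # the set of asymmetric facial expressions. The training objective
    # is the larger of the two.
    first = cross_entropy(sym_logits, sym_labels)
    second = cross_entropy(asym_logits, asym_labels)
    return max(first, second)
```

In a framework with automatic differentiation, gradients of this maximum flow only through the currently larger loss, which is what pushes training effort toward the worse-performing expression set.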
Eye image selection
Systems and methods for eye image set selection, eye image collection, and eye image combination are described. Embodiments of the systems and methods for eye image set selection can include comparing a determined image quality metric with an image quality threshold to identify eye images passing the threshold, and selecting, from a plurality of eye images, a set of eye images that passes the image quality threshold.
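The selection step above is a straightforward filter once a quality metric is defined. The abstract leaves the metric open; the variance-of-Laplacian sharpness measure below is a hypothetical stand-in (real systems might also check blur, occlusion, or gaze).

```python
import numpy as np

def sharpness_metric(image):
    # Hypothetical image-quality metric: variance of a Laplacian-style
    # second difference. Sharp, high-contrast images score high; flat or
    # blurred images score near zero.
    lap = (image[:-2, 1:-1] + image[2:, 1:-1] +
           image[1:-1, :-2] + image[1:-1, 2:] - 4 * image[1:-1, 1:-1])
    return float(lap.var())

def select_eye_images(images, quality_threshold):
    # Keep only the eye images whose quality metric passes the threshold.
    return [img for img in images if sharpness_metric(img) >= quality_threshold]
```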
Context sensitive avatar captions
Systems and methods are provided for performing operations including: receiving, by a messaging application, input that selects an option to generate a message using an avatar with a caption; presenting, by the messaging application, the avatar and a caption entry region proximate to the avatar; populating, by the messaging application, the caption entry region with a text string comprising one or more words; determining, by the messaging application, context based on the one or more words in the text string; and modifying, by the messaging application, an expression of the avatar based on the determined context.
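The "determine context from the words, then modify the expression" steps can be sketched with a simple keyword lookup. The keyword-to-expression table and function name below are hypothetical; the patent does not specify how context is derived from the caption text.

```python
# Hypothetical mapping from caption keywords to avatar expressions.
EXPRESSION_KEYWORDS = {
    "happy": {"happy", "great", "yay", "love"},
    "sad": {"sad", "sorry", "miss"},
    "angry": {"angry", "furious", "mad"},
}

def expression_for_caption(caption: str, default: str = "neutral") -> str:
    # Determine context from the one or more words in the caption text
    # string, then pick the avatar expression accordingly.
    words = {w.strip(".,!?").lower() for w in caption.split()}
    for expression, keywords in EXPRESSION_KEYWORDS.items():
        if words & keywords:
            return expression
    return default
```

A production messaging application would more plausibly use a learned sentiment or intent classifier; the table merely makes the control flow concrete.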
Methods and system for adaptive avatar-based real-time holographic communication
Systems and methods provide an adaptive avatar-based real-time holographic communication service. End devices implement a customized avatar for holographic communications. With the assistance of a network edge platform, a user's facial motions and gestures are extracted in real time and applied to the customized avatar in the form of an augmented-reality-based or virtual-reality-based holographic entity. When a connection to the network edge platform is interrupted or not available, a master holographic entity provides a graceful fallback to a less resource-intensive avatar presentation using, for example, a user's prerecorded motions as a basis for rendering avatar movement. The customized avatar may be automatically adjusted/altered depending on with whom a user is communicating (e.g., a friend vs. a business associate) or a purpose for the communication (e.g., a professional meeting vs. a social activity).
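The two decisions in this abstract, adjusting the avatar to the contact and purpose and falling back gracefully when the edge platform is unreachable, can be sketched together. The style table and all names below are hypothetical; only the decision logic is taken from the abstract.

```python
# Hypothetical avatar styles keyed by (relationship, purpose); the
# abstract says the avatar is adjusted, not how.
AVATAR_STYLES = {
    ("friend", "social"): "casual",
    ("business", "professional"): "formal",
}

def choose_avatar(relationship, purpose, edge_connected,
                  live_motion=None, prerecorded_motion=None):
    # Pick a style for the current contact and call purpose.
    style = AVATAR_STYLES.get((relationship, purpose), "default")
    # Graceful fallback: drive the avatar with motions extracted in real
    # time while the network edge platform is reachable; otherwise use
    # the user's prerecorded motions as the basis for rendering movement.
    motion = live_motion if edge_connected else prerecorded_motion
    return {"style": style, "motion": motion}
```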
VIEWING TERMINAL, VIEWING METHOD, VIEWING SYSTEM, AND PROGRAM
A student terminal is provided for viewing a class given in an immersive virtual space. The student terminal includes: a VR function section configured to display the virtual space according to virtual space information; and an input section for receiving a video capturing the desk of a student who views the class. The VR function section extracts, from the video, an area including a top plate of the desk corresponding to a desk object in the virtual space, and performs image composition to fit the video of the extracted area onto the top plate of the desk object.
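The extraction-and-fitting step can be sketched as follows. This is a deliberately simplified, hypothetical version: it crops the axis-aligned bounding box of the detected desk-top corners and resamples it to the desk object's texture size, whereas a production viewer would apply a full perspective (homography) warp.

```python
import numpy as np

def extract_desk_texture(frame, corners, out_h, out_w):
    # `corners` are (x, y) pixel positions of the detected desk top plate.
    # Crop their axis-aligned bounding box, then resample (nearest
    # neighbour) to the texture size expected by the desk object in the
    # virtual space. A real system would warp with a homography instead.
    xs = [c[0] for c in corners]
    ys = [c[1] for c in corners]
    x0, x1 = max(0, min(xs)), min(frame.shape[1], max(xs))
    y0, y1 = max(0, min(ys)), min(frame.shape[0], max(ys))
    crop = frame[y0:y1, x0:x1]
    rows = np.arange(out_h) * crop.shape[0] // out_h
    cols = np.arange(out_w) * crop.shape[1] // out_w
    return crop[rows][:, cols]
```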