G10L2021/105

SYSTEM, METHOD, AND COMPUTER PROGRAM FOR TRANSMITTING FACE MODELS BASED ON FACE DATA POINTS
20220239866 · 2022-07-28 ·

A system, method, and computer program are provided for receiving face models based on face nodal points. In use, a real-time face model is received, wherein the real-time face model includes one or more face nodal points. Real-time face nodal points are received, including one or more additional face nodal points. The real-time face model is manipulated based on the real-time face nodal points.
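
The abstract does not specify how the manipulation step works; below is a minimal sketch assuming the face model is a dictionary of named 3-D nodal points and incoming real-time points either replace or blend with the stored ones. All names (FaceModel, apply_realtime_points, "mouth_left") are hypothetical illustrations, not from the patent.

    from dataclasses import dataclass, field

    @dataclass
    class FaceModel:
        # Nodal points keyed by a semantic name, e.g. "mouth_left" -> (x, y, z).
        points: dict = field(default_factory=dict)

    def apply_realtime_points(model: FaceModel, updates: dict, blend: float = 1.0) -> None:
        """Manipulate the stored model with freshly received nodal points.

        blend=1.0 replaces a stored point; smaller values interpolate toward it.
        """
        for name, (x, y, z) in updates.items():
            if name in model.points and blend < 1.0:
                px, py, pz = model.points[name]
                model.points[name] = (px + blend * (x - px),
                                      py + blend * (y - py),
                                      pz + blend * (z - pz))
            else:
                model.points[name] = (x, y, z)

    # Example: nudge a stored mouth corner halfway toward a received position.
    m = FaceModel(points={"mouth_left": (0.0, 0.0, 0.0)})
    apply_realtime_points(m, {"mouth_left": (1.0, 0.5, 0.0)}, blend=0.5)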

Using machine-learning models to determine movements of a mouth corresponding to live speech
11211060 · 2021-12-28 ·

Disclosed systems and methods predict visemes from an audio sequence. In an example, a viseme-generation application accesses a first audio sequence that is mapped to a sequence of visemes. The first audio sequence has a first length and represents phonemes. The application adjusts a second length of a second audio sequence such that the second length equals the first length and the second audio sequence represents the same phonemes. The application aligns the sequence of visemes to the second audio sequence such that phonemes in the second audio sequence correspond to the phonemes in the first audio sequence. The application trains a machine-learning model with the second audio sequence and the sequence of visemes. The machine-learning model predicts an additional sequence of visemes based on an additional sequence of audio.
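
The key preprocessing step is stretching the second audio sequence to the first sequence's length so the known viseme timings carry over. A minimal sketch, assuming uniform linear time-stretching over raw samples (the patent's warp may instead be phoneme-aware); stretch_to_length is a hypothetical name:

    import numpy as np

    def stretch_to_length(audio: np.ndarray, target_len: int) -> np.ndarray:
        """Uniformly time-stretch a 1-D sample array to target_len samples
        via linear interpolation (a stand-in for a phoneme-aware warp)."""
        src = np.linspace(0.0, 1.0, num=len(audio))
        dst = np.linspace(0.0, 1.0, num=target_len)
        return np.interp(dst, src, audio)

    # Toy example: a 3-sample clip stretched to 6 samples now shares a time
    # axis with the first sequence, so its viseme labels can be reused.
    second = np.array([0.0, 1.0, 0.0])
    warped = stretch_to_length(second, target_len=6)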

SYSTEMS AND METHODS FOR PHONEME AND VISEME RECOGNITION
20210390949 · 2021-12-16 ·

The disclosed computer-implemented method may include training a machine-learning algorithm to use look-ahead to improve the effectiveness of identifying visemes corresponding to audio signals. For one or more audio segments in a set of training audio signals, the training evaluates an audio segment, where the audio segment includes at least a portion of a phoneme, together with a subsequent segment that includes contextual audio coming after the audio segment and potentially containing context about a viseme that maps to the phoneme. The method may also include using the trained machine-learning algorithm to identify one or more probable visemes corresponding to speech in a target audio signal. Additionally, the method may include recording, as metadata of the target audio signal, where a probable viseme occurs within the target audio signal. Various other methods, systems, and computer-readable media are also disclosed.
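
One way to realize the look-ahead idea is to build training examples where each frame is paired with the frames that follow it, so the classifier sees the contextual audio described above. A minimal sketch under that assumption; make_lookahead_examples and the toy labels are hypothetical:

    def make_lookahead_examples(frames, labels, lookahead=2):
        """Pair each audio frame with `lookahead` subsequent frames so a
        model sees contextual audio that comes after the current phoneme.

        frames: list of per-frame feature vectors; labels: per-frame viseme ids.
        """
        examples = []
        for i in range(len(frames) - lookahead):
            context = frames[i : i + 1 + lookahead]  # current frame + look-ahead
            examples.append((context, labels[i]))
        return examples

    # E.g. with lookahead=2, frame 0 is classified using frames 0, 1, and 2.
    pairs = make_lookahead_examples([[0.1], [0.2], [0.3], [0.4]], ["p", "p", "a", "a"])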

Periocular and audio synthesis of a full face image
11200736 · 2021-12-14 ·

Systems and methods for synthesizing an image of a face with a head-mounted device (HMD) are disclosed. The HMD may not be able to observe a portion of the face. The systems and methods described herein can generate a mapping from a conformation of the portion of the face that is not imaged to a conformation of the portion of the face that is observed. The HMD can receive an image of a portion of the face and use the mapping to determine a conformation of the portion of the face that is not observed. The HMD can combine the observed and unobserved portions to synthesize a full face image.
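
The patent does not state the mapping's form; one minimal stand-in is a linear least-squares regression fitted on paired observations captured while the whole face was visible (e.g., an enrollment session). A sketch under that assumption, with random placeholder data and hypothetical names:

    import numpy as np

    # Hypothetical paired training data: upper-face (periocular) features
    # alongside the lower-face conformation parameters they co-occurred with.
    periocular = np.random.rand(200, 16)   # observed upper-face features
    lower_face = np.random.rand(200, 8)    # lower-face conformation params

    # Fit a linear map W so that periocular @ W approximates lower_face.
    W, *_ = np.linalg.lstsq(periocular, lower_face, rcond=None)

    def synthesize_lower_face(obs: np.ndarray) -> np.ndarray:
        """Infer the unobserved lower-face conformation from periocular input."""
        return obs @ W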

ARTIFICIAL INTELLIGENCE-BASED ANIMATION CHARACTER DRIVE METHOD AND RELATED APPARATUS

This application discloses an artificial intelligence (AI) based animation character drive method. A first expression base of a first animation character corresponding to a speaker is determined by acquiring media data that includes the speaker's facial expression changes while delivering a speech, and the first expression base may reflect different expressions of the first animation character. After target text information is obtained, an acoustic feature and a target expression parameter corresponding to the target text information are determined according to the target text information, the acquired media data, and the first expression base. A second animation character having a second expression base may be driven according to the acoustic feature and the target expression parameter, so that the second animation character may simulate the speaker's voice and facial expression when speaking the target text information, thereby improving the experience of interaction between the user and the animation character.
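
An expression base is naturally modeled as a set of blendshapes: a deformed mesh is the neutral mesh plus a weighted sum of expression offsets, and driving a second character means reusing the same weight vector with that character's own, semantically matching base. A minimal sketch under that common convention (the patent's exact representation may differ); all array shapes and names are illustrative:

    import numpy as np

    def apply_expression(neutral: np.ndarray, basis: np.ndarray, weights: np.ndarray) -> np.ndarray:
        """Deform a neutral mesh with an expression base.

        neutral: (V, 3) vertex positions; basis: (K, V, 3) per-expression offsets;
        weights: (K,) expression parameters predicted from text/audio.
        """
        return neutral + np.tensordot(weights, basis, axes=1)

    # Retargeting: the same weight vector drives a second character that has
    # its own neutral mesh and a semantically matching expression base.
    neutral_a = np.zeros((4, 3)); basis_a = np.random.rand(2, 4, 3)
    neutral_b = np.ones((5, 3));  basis_b = np.random.rand(2, 5, 3)
    w = np.array([0.7, 0.1])                          # target expression parameter
    face_a = apply_expression(neutral_a, basis_a, w)
    face_b = apply_expression(neutral_b, basis_b, w)  # second character mimics it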

Video frame replacement based on auxiliary data

Audio content and played frames may be received. The audio content may correspond to first video content. The played frames may be included in the first video content. The first video content may further include a replaced frame. The played frames and the replaced frame may include a face of a person. Location data may also be received that indicates locations of facial features of the face of the person within the replaced frame. A replacement frame may be generated, such as by rendering the facial features in the replacement frame based at least in part on the locations indicated by the location data and positions indicated by a portion of the audio content that is associated with the replaced frame. Second video content may be played including the played frames and the replacement frame. The replacement frame may replace the replaced frame in the second video content.
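
The abstract leaves the rendering step abstract; one toy reading is that audio-derived values (e.g., mouth openness) offset the stored landmark locations before the replacement frame is rasterized. A sketch under that assumption; adjust_landmarks and the landmark names are hypothetical:

    def adjust_landmarks(landmarks: dict, mouth_openness: float) -> dict:
        """Offset lip landmarks by an openness value derived from the audio
        associated with the replaced frame (a stand-in for a learned model)."""
        adjusted = dict(landmarks)
        if "lip_top" in adjusted and "lip_bottom" in adjusted:
            x, y = adjusted["lip_top"]
            adjusted["lip_top"] = (x, y - mouth_openness / 2)
            x, y = adjusted["lip_bottom"]
            adjusted["lip_bottom"] = (x, y + mouth_openness / 2)
        return adjusted

    # The replacement frame would then be rendered from the adjusted locations.
    frame_landmarks = {"lip_top": (120.0, 200.0), "lip_bottom": (120.0, 210.0)}
    print(adjust_landmarks(frame_landmarks, mouth_openness=6.0))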

METHOD AND APPARATUS FOR GENERATING ANIMATION, ELECTRONIC DEVICE, AND COMPUTER READABLE MEDIUM
20220180584 · 2022-06-09 ·

The present disclosure discloses a method and apparatus for generating animation. An implementation of the method may include: processing a to-be-processed material to generate a normalized text; analyzing the normalized text to generate a Chinese pinyin sequence of the normalized text; generating a reference audio based on the to-be-processed material; and obtaining an animation of facial expressions corresponding to the timing sequence of the reference audio based on the Chinese pinyin sequence and the reference audio.
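
A pinyin sequence can drive facial animation because its letters map onto mouth shapes. A deliberately toy sketch of that lookup step; real systems use a far richer mapping (multi-letter initials like "zh", tones, and per-phone durations taken from the reference audio's alignment), and the table below is invented for illustration:

    # Toy pinyin-letter-to-viseme lookup, not the disclosure's actual mapping.
    PINYIN_VISEME = {"b": "closed_lips", "a": "open_wide", "o": "rounded",
                     "m": "closed_lips", "i": "spread"}

    def pinyin_to_visemes(syllables):
        """Expand a pinyin sequence into a per-letter viseme sequence."""
        visemes = []
        for syllable in syllables:
            for letter in syllable:
                visemes.append(PINYIN_VISEME.get(letter, "neutral"))
        return visemes

    print(pinyin_to_visemes(["ma", "bo"]))  # -> ['closed_lips', 'open_wide', ...]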

Speech-driven facial animation generation method

The present disclosure discloses a speech-driven facial animation generation method. The method is divided into six steps: extracting speech features, collecting frequency information, summarizing time information, decoding action features, driving a facial model, and sliding a signal window. According to an input speech audio signal, the present disclosure can drive any facial model in real time, under a particular delay, to generate animation. The quality of the animation reaches the current state of the art in speech animation, and the method is lightweight and robust. The present disclosure can be used to generate speech animation in different scenarios, such as VR social networking, virtual speech assistants, and games.
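
The six steps amount to a loop over a sliding signal window. A skeleton of that loop; every helper below is a trivial placeholder standing in for the corresponding stage, not the disclosure's actual computation:

    # Placeholder stages, one per step of the method above.
    def extract_speech_features(chunk):  return sum(chunk) / len(chunk)
    def collect_frequency_info(f):       return f          # e.g. FFT magnitudes
    def summarize_time_info(f):          return f          # e.g. temporal pooling
    def decode_action_features(f):       return {"jaw_open": abs(f)}
    def drive_facial_model(action):      print(action)

    def run_pipeline(signal, window=4, hop=2):
        """Run the six-step loop over a sliding window of the audio signal."""
        pos = 0
        while pos + window <= len(signal):
            chunk = signal[pos : pos + window]        # current audio window
            feats = extract_speech_features(chunk)    # 1. extract speech features
            freq = collect_frequency_info(feats)      # 2. collect frequency info
            summary = summarize_time_info(freq)       # 3. summarize time info
            action = decode_action_features(summary)  # 4. decode action features
            drive_facial_model(action)                # 5. drive the facial model
            pos += hop                                # 6. slide the signal window

    run_pipeline([0.0, 0.2, -0.1, 0.4, 0.3, -0.2])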

INTERACTIVE SYSTEMS AND METHODS

A method of producing an avatar video, the method comprising the steps of: providing a reference image of a person's face; providing a plurality of characteristic features representative of a facial model X0 of the person's face, the characteristic features defining a facial pose dependent on the person speaking; providing a target phrase to be rendered over a predetermined time period during the avatar video and providing a plurality of time intervals t within the predetermined time period; generating, for each of said time intervals t, speech features from the target phrase, to provide a sequence of speech features; and generating, using the plurality of characteristic features and the sequence of speech features, a sequence of facial models Xt for each of said time intervals t.
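
The claim's structure maps onto a loop that produces one facial model Xt per interval t from the characteristic features plus per-interval speech features. A sketch of that shape only; speech_features_at and facial_model_from are hypothetical stand-ins for the claim's generation steps:

    def generate_avatar_models(char_features, target_phrase, num_intervals):
        """Produce a facial model Xt for each time interval t."""

        def speech_features_at(phrase, t, n):
            # Toy: which slice of the phrase is being spoken at interval t.
            i = int(t / n * len(phrase))
            return phrase[i : i + 1]

        def facial_model_from(features, speech):
            return {"pose": features, "speech": speech}  # stand-in for Xt

        return [facial_model_from(char_features,
                                  speech_features_at(target_phrase, t, num_intervals))
                for t in range(num_intervals)]

    # Five intervals over the phrase yield the sequence X0..X4.
    models = generate_avatar_models({"jaw": 0.0}, "hello", num_intervals=5)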

SYSTEMS AND METHODS FOR GENERATING COMPOSITE MEDIA USING DISTRIBUTED NETWORKS
20230274545 · 2023-08-31 ·

Distributed systems and methods for generating composite media include receiving a media context that defines the media to be generated, the media context including a definition of a sequence of media segment specifications and an identification of a set of remote devices. For each media segment specification, a reference segment may be generated and transmitted to at least one remote device. A media segment may be received from each of the remote devices, the media segment having been recorded by a camera. Verified media segments may replace the corresponding reference segments. The media segments may be aggregated and an updated sequence of media segments may be defined. An instance of the media context that includes a subset of the updated sequence of media segments may then be generated.
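
The media context is essentially a data structure pairing ordered segment specifications with remote devices, plus a dispatch-and-collect loop. A minimal sketch under that reading; MediaContext, assemble, and the record_on_device callback are hypothetical illustrations, not the patent's interfaces:

    from dataclasses import dataclass

    @dataclass
    class MediaContext:
        segment_specs: list   # ordered media segment specifications
        device_ids: list      # remote devices that will record segments

    def assemble(context: MediaContext, record_on_device) -> list:
        """Replace each reference segment with a verified device recording.

        record_on_device(spec, device_id) is a hypothetical callback that sends
        a reference segment to a device and returns the segment it recorded.
        """
        sequence = []
        for i, spec in enumerate(context.segment_specs):
            device = context.device_ids[i % len(context.device_ids)]
            reference = f"reference:{spec}"          # generated reference segment
            recorded = record_on_device(spec, device)
            sequence.append(recorded if recorded else reference)
        return sequence

    # Toy usage: every device simply echoes the spec it was asked to record.
    ctx = MediaContext(segment_specs=["intro", "verse"], device_ids=["phone-1"])
    print(assemble(ctx, lambda spec, dev: f"{dev} recorded {spec}"))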