Patent classifications
G10L2021/105
System and method for synthesizing photo-realistic video of a speech
A system and a method for obtaining a photo-realistic video from a text. The method includes: providing the text and an image of a talking person; synthesizing speech audio from the text; extracting an acoustic feature from the speech audio by an acoustic feature extractor; and generating the photo-realistic video from the acoustic feature and the image by a video generation neural network. The video generation neural network is pre-trained by: providing a training video and a training image; extracting a training acoustic feature from the training audio of the training video by the acoustic feature extractor; generating video frames from the training image and the training acoustic feature by the video generation neural network; and comparing the generated video frames with ground truth video frames using a generative adversarial network (GAN). The ground truth video frames correspond to frames of the training video.
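As a rough illustration of the training loop this abstract describes, the sketch below pairs a generator (acoustic feature plus a still image in, one frame out) with a frame discriminator and runs a single adversarial step. Every module, layer size, and tensor shape here is an illustrative assumption, not the patent's architecture.

```python
import torch
import torch.nn as nn

class VideoGenerator(nn.Module):
    """Maps (acoustic feature, reference image) to one video frame."""
    def __init__(self, feat_dim=80, img_ch=3):
        super().__init__()
        self.audio_proj = nn.Linear(feat_dim, 16)   # embed the acoustic feature
        self.decode = nn.Conv2d(img_ch + 16, img_ch, 3, padding=1)

    def forward(self, feat, image):
        b, _, h, w = image.shape
        a = self.audio_proj(feat).view(b, 16, 1, 1).expand(b, 16, h, w)
        return torch.tanh(self.decode(torch.cat([image, a], dim=1)))

class FrameDiscriminator(nn.Module):
    """Scores a frame as ground truth or generated."""
    def __init__(self, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(img_ch, 8, 4, stride=2),
                                 nn.LeakyReLU(), nn.Flatten(), nn.LazyLinear(1))

    def forward(self, frame):
        return self.net(frame)

G, D = VideoGenerator(), FrameDiscriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

feat = torch.randn(4, 80)           # stand-in training acoustic features
image = torch.randn(4, 3, 64, 64)   # stand-in training image
real = torch.randn(4, 3, 64, 64)    # stand-in ground truth frames

fake = G(feat, image)
d_loss = bce(D(real), torch.ones(4, 1)) + bce(D(fake.detach()), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

g_loss = bce(D(fake), torch.ones(4, 1))   # generator tries to fool D
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```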
SYSTEM, METHOD, AND COMPUTER PROGRAM FOR TRANSMITTING FACE MODELS BASED ON FACE DATA POINTS
A system, method, and computer program are provided for receiving face models based on face nodal points. In use, a real-time face model is received, wherein the real-time face model includes one or more face nodal points. Real-time face nodal points are then received, including one or more additional face nodal points. The real-time face model is manipulated based on the real-time face nodal points.
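A minimal sketch of the manipulation step, assuming a face model is simply an array of landmark ("nodal point") coordinates and that newly received real-time nodal points overwrite the matching entries; all names, indices, and shapes are illustrative.

```python
import numpy as np

def manipulate_face_model(model_points: np.ndarray,
                          indices: np.ndarray,
                          new_points: np.ndarray) -> np.ndarray:
    """Return a copy of the face model with selected nodal points replaced."""
    updated = model_points.copy()
    updated[indices] = new_points
    return updated

model = np.zeros((68, 2))            # e.g. a 68-landmark face model
moved = manipulate_face_model(model,
                              np.array([48, 54]),   # mouth corners
                              np.array([[0.1, 0.2], [0.9, 0.2]]))
```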
Audio-visual dialogue system and method
The present invention provides an audio-visual dialogue system that allows a user to create an ‘avatar’ which may be customised to look and sound a particular way. The avatar may be created to resemble, for example, a person, animal or mythical creature, and generated to have a variable voice which may be female or male. The system then employs real-time voice conversion to transform any audio input, for example spoken word, into a target voice that is selected and customised by the user. The system is arranged to facially animate the avatar using a real-time lip-synching algorithm such that the generated avatar and the target voice are synchronised.
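A minimal sketch of the real-time pipeline this abstract outlines: each incoming audio chunk is converted toward the user's chosen target voice and, in the same pass, drives the avatar's mouth so audio and animation stay synchronised. `convert_voice` and `mouth_openness` are hypothetical placeholders, not components named in the patent.

```python
import numpy as np

def convert_voice(chunk: np.ndarray, target_params: dict) -> np.ndarray:
    # Placeholder: a real converter would remap pitch and timbre to the
    # user-customised target voice described by target_params.
    return chunk

def mouth_openness(chunk: np.ndarray) -> float:
    # Placeholder lip-sync cue: drive the avatar's mouth from chunk energy.
    return float(np.sqrt(np.mean(chunk ** 2)))

def dialogue_loop(chunks, target_params):
    for chunk in chunks:                       # audio arrives chunk by chunk
        out = convert_voice(chunk, target_params)
        yield out, mouth_openness(out)         # voice and lip pose in step

for audio, mouth in dialogue_loop([np.random.randn(512) for _ in range(3)],
                                  {"voice": "female-1"}):
    pass  # send `audio` to playback and `mouth` to the avatar renderer
```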
Systems and methods for generating composite media using distributed networks
Distributed systems and methods for generating composite media include receiving a media context that defines the media to be generated, the media context including a definition of a sequence of media segment specifications and an identification of a set of remote devices. For each media segment specification, a reference segment may be generated and transmitted to at least one remote device. A media segment may be received from each of the remote devices, the media segment having been recorded by a camera. Verified media segments may replace the corresponding reference segments. The media segments may be aggregated and an updated sequence of media segments may be defined. An instance of the media context that includes a subset of the updated sequence of media segments may then be generated.
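A minimal sketch of the data flow, assuming simple record types for the media context and its segment specifications; every field name and the `aggregate` helper are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class SegmentSpec:
    description: str          # what the remote device should record
    duration_s: float

@dataclass
class MediaContext:
    segments: list            # ordered SegmentSpec entries
    remote_devices: list      # identifiers of the recording devices

def aggregate(context: MediaContext, verified: dict) -> list:
    """Replace each reference segment with its verified recording, if any."""
    return [verified.get(i, spec) for i, spec in enumerate(context.segments)]

ctx = MediaContext([SegmentSpec("intro", 5.0), SegmentSpec("outro", 3.0)],
                   ["phone-a", "phone-b"])
final = aggregate(ctx, {0: "intro.mp4"})   # segment 1 keeps its reference
```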
VOICE RECEIVING METHOD AND DEVICE
A voice receiving device configured for accurate listening includes a microphone array, a camera, a capturing module, a determining module, a time module, a calculating module, and a de-noising module. The microphone array captures a first voice signal and a second voice signal, and the camera captures mouth pictures of a user. The determining module determines whether the first voice signal is synchronized with the mouth pictures and, if so, compares the first voice signal to a preset model voice signal of the user to determine a target voice signal. The time module obtains the time delay differences of one voice signal reaching the different microphones. The calculating module calculates a position of the sound source of the target voice signal. According to the position of the sound source, the de-noising module de-noises the target voice signal by reference to the second voice signal. The disclosure further provides a voice receiving method.
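The time module estimates how much later the same voice reaches one microphone than another. The patent does not name an algorithm; the sketch below uses GCC-PHAT, a common stand-in for this kind of time-delay estimation.

```python
import numpy as np

def gcc_phat_delay(sig_a: np.ndarray, sig_b: np.ndarray, fs: int) -> float:
    """Arrival-time offset in seconds; negative means sig_b lags sig_a."""
    n = len(sig_a) + len(sig_b)
    spec = np.fft.rfft(sig_a, n) * np.conj(np.fft.rfft(sig_b, n))
    cc = np.fft.irfft(spec / (np.abs(spec) + 1e-12), n)  # PHAT weighting
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return float(np.argmax(np.abs(cc)) - max_shift) / fs

fs = 16000
t = np.arange(fs) / fs
voice = np.sin(2 * np.pi * 220 * t)
delayed = np.roll(voice, 40)               # second mic hears it 40 samples late
print(gcc_phat_delay(voice, delayed, fs))  # about -0.0025 s
```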
ANIMATION SYNTHESIS SYSTEM AND LIP ANIMATION SYNTHESIS METHOD
An animation display system is provided. The animation display system includes a display; a storage configured to store a language model database, a phonetic-symbol lip-motion matching database, and a lip motion synthesis database; and a processor electronically connected to the storage and the display. The processor includes a speech conversion module, a phonetic-symbol lip-motion matching module, and a lip motion synthesis module. A lip animation display method is also provided.
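A minimal sketch of the phonetic-symbol / lip-motion matching this abstract describes: speech is converted to phonetic symbols, each symbol is looked up in a matching table, and the resulting lip shapes form the synthesized animation. The table entries below are illustrative, not the patent's databases.

```python
# Hypothetical phonetic-symbol -> lip-motion matching table.
PHONEME_TO_LIP = {
    "AA": "open-wide", "IY": "spread", "UW": "rounded",
    "M": "closed", "F": "teeth-on-lip",
}

def synthesize_lip_motion(phonemes):
    """Map a phoneme sequence to a sequence of lip-motion keyframes."""
    return [PHONEME_TO_LIP.get(p, "neutral") for p in phonemes]

print(synthesize_lip_motion(["M", "AA", "M", "IY"]))
# ['closed', 'open-wide', 'closed', 'spread']
```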
METHOD OF GENERATING 3D VIDEO, METHOD OF TRAINING MODEL, ELECTRONIC DEVICE, AND STORAGE MEDIUM
A method of generating a 3D video, a method of training a neural network model, an electronic device, and a storage medium, which relate to the field of image processing, and in particular to the technical fields of computer vision, augmented/virtual reality and deep learning. The method includes: determining, based on an input speech feature, a principal component analysis (PCA) coefficient by using a first network, wherein the PCA coefficient is used to generate the 3D video; correcting the PCA coefficient by using a second network; generating lip movement information based on the corrected PCA coefficient and a PCA parameter for a neural network model, wherein the neural network model includes the first network and the second network; and applying the lip movement information to a pre-constructed 3D basic avatar model to obtain a 3D video with a lip movement effect.
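A minimal sketch of the PCA pipeline this abstract describes: a first network predicts PCA coefficients from a speech feature, a second network corrects them, and lip movement is reconstructed from the PCA parameters (mean and components). The single-layer networks and all shapes are illustrative assumptions.

```python
import torch
import torch.nn as nn

first_net = nn.Linear(80, 10)    # speech feature -> raw PCA coefficients
second_net = nn.Linear(10, 10)   # correction network

n_vertices = 468                            # hypothetical lip-region mesh size
pca_mean = torch.zeros(n_vertices * 3)      # PCA parameters of the model
pca_components = torch.randn(10, n_vertices * 3)

speech_feat = torch.randn(1, 80)
coeff = second_net(first_net(speech_feat))  # corrected PCA coefficients
lip_motion = (pca_mean + coeff @ pca_components).view(n_vertices, 3)
# `lip_motion` would then be applied to the pre-constructed 3D base avatar.
```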
APPARATUS AND METHOD FOR GENERATING LIP SYNC IMAGE
An apparatus for generating a lip sync image according to a disclosed embodiment has one or more processors and a memory which stores one or more programs executed by the one or more processors. The apparatus includes a first artificial neural network model configured to generate an utterance synthesis image by using a person background image and an utterance audio signal corresponding to the person background image as inputs, and to generate a silence synthesis image by using only the person background image as an input, and a second artificial neural network model configured to use the silence synthesis image from the first artificial neural network model as an input and to output classification values for a preset utterance maintenance image and the silence synthesis image.
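A minimal sketch of the two-model arrangement this abstract describes: one generator synthesizes a face either from the person background image plus an utterance audio signal or from the background image alone, and a second model scores the image-only ("silence") output. Architectures and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LipSyncGenerator(nn.Module):
    """First model: (background image [, audio]) -> synthesized face image."""
    def __init__(self, img_ch=3, audio_dim=80):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, 8)
        self.decode = nn.Conv2d(img_ch + 8, img_ch, 3, padding=1)

    def forward(self, image, audio=None):
        b, _, h, w = image.shape
        a = torch.zeros(b, 8) if audio is None else self.audio_proj(audio)
        a = a.view(b, 8, 1, 1).expand(b, 8, h, w)
        return torch.tanh(self.decode(torch.cat([image, a], dim=1)))

# Second model: classifies the silence synthesis image against a preset
# utterance maintenance image (here reduced to a binary score per image).
classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))

G = LipSyncGenerator()
bg, audio = torch.randn(2, 3, 32, 32), torch.randn(2, 80)
utterance_img = G(bg, audio)     # utterance synthesis image
silence_img = G(bg)              # silence synthesis image
score = classifier(silence_img)  # classification value for the silence image
```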
APPARATUS AND METHOD FOR GENERATING LIP SYNC IMAGE
An apparatus for generating a lip sync image according to a disclosed embodiment has one or more processors and a memory which stores one or more programs executed by the one or more processors. The apparatus includes a first artificial neural network model configured to generate an utterance match synthesis image by using a person background image and an utterance match audio signal corresponding to the person background image as inputs, and to generate an utterance mismatch synthesis image by using the person background image and an utterance mismatch audio signal not corresponding to the person background image as inputs, and a second artificial neural network model configured to take as inputs pairs in which an image and a voice match and pairs in which an image and a voice do not match, and to output classification values for those pairs.
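A minimal sketch of the match/mismatch training signal this abstract describes: the second model scores (image, audio) pairs, learning to output 1 for matching pairs and 0 for mismatched ones. Everything below is an illustrative assumption, not the patent's architecture.

```python
import torch
import torch.nn as nn

class SyncClassifier(nn.Module):
    """Second model: scores whether an image and a voice match."""
    def __init__(self, audio_dim=80):
        super().__init__()
        self.img_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(32))
        self.aud_enc = nn.Linear(audio_dim, 32)
        self.head = nn.Linear(64, 1)

    def forward(self, image, audio):
        z = torch.cat([self.img_enc(image), self.aud_enc(audio)], dim=1)
        return self.head(z)

clf = SyncClassifier()
bce = nn.BCEWithLogitsLoss()
img = torch.randn(2, 3, 32, 32)            # stand-in synthesis images
match_aud, mismatch_aud = torch.randn(2, 80), torch.randn(2, 80)

loss = (bce(clf(img, match_aud), torch.ones(2, 1)) +      # matching pair -> 1
        bce(clf(img, mismatch_aud), torch.zeros(2, 1)))   # mismatched pair -> 0
loss.backward()
```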
DEVICE AND METHOD FOR SYNTHESIZING IMAGE CAPABLE OF IMPROVING IMAGE QUALITY
An image synthesis device according to a disclosed embodiment has one or more processors and a memory which stores one or more programs executed by the one or more processors. The image synthesis device includes a first artificial neural network model provided to learn both a first task of using a damaged image as an input to output a restored image and a second task of using an original image as an input to output a reconstructed image, and a second artificial neural network model trained to use the reconstructed image output from the first artificial neural network model as an input and to improve the image quality of the reconstructed image.
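A minimal sketch of the two-stage arrangement this abstract describes: the first model is trained both to restore a damaged image and to reconstruct an original image, and the second model takes the reconstruction as input and improves its quality. The single-layer models and L1 losses are illustrative placeholders.

```python
import torch
import torch.nn as nn

restorer = nn.Conv2d(3, 3, 3, padding=1)   # first model: restore / reconstruct
enhancer = nn.Conv2d(3, 3, 3, padding=1)   # second model: quality improvement

damaged = torch.randn(1, 3, 64, 64)
original = torch.randn(1, 3, 64, 64)

restored = restorer(damaged)        # task 1: damaged image -> restored image
reconstructed = restorer(original)  # task 2: original image -> reconstruction
enhanced = enhancer(reconstructed)  # second model refines the reconstruction

loss = (nn.functional.l1_loss(restored, original) +
        nn.functional.l1_loss(enhanced, original))
loss.backward()
```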