G10L13/08

APPARATUS, METHOD, AND COMPUTER PROGRAM FOR PROVIDING LIP-SYNC VIDEO AND APPARATUS, METHOD, AND COMPUTER PROGRAM FOR DISPLAYING LIP-SYNC VIDEO
20230023102 · 2023-01-26

Provided is a lip-sync video providing apparatus for providing a video in which a voice and lip shapes are synchronized. The apparatus is configured to obtain a template video including at least one frame and depicting a target object, obtain a target voice to be used as the voice of the target object, generate a lip image corresponding to the target voice for each frame of the template video by using a trained first artificial neural network, and generate lip-sync data including frame identification information of a frame in the template video, the lip image, and position information indicating where the lip image is placed within that frame.
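
As a rough illustration of the lip-sync data described in this abstract, the following Python sketch packages a per-frame lip image together with its frame identifier and position. The data layout, the bounding-box convention, and the generate_lip_image stub are assumptions for illustration, not details from the publication.

```python
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class LipSyncEntry:
    frame_id: int                          # frame identification information
    lip_image: np.ndarray                  # lip image generated for this frame
    position: Tuple[int, int, int, int]    # assumed (x, y, width, height) within the frame


def generate_lip_image(frame: np.ndarray, voice_window: np.ndarray) -> np.ndarray:
    """Placeholder for the trained first artificial neural network."""
    return np.zeros((64, 64, 3), dtype=np.uint8)


def build_lip_sync_data(frames: List[np.ndarray],
                        voice_windows: List[np.ndarray],
                        lip_positions: List[Tuple[int, int, int, int]]) -> List[LipSyncEntry]:
    """Package a lip image, frame ID, and position for every template-video frame."""
    return [
        LipSyncEntry(frame_id=i,
                     lip_image=generate_lip_image(frame, voice),
                     position=pos)
        for i, (frame, voice, pos) in enumerate(zip(frames, voice_windows, lip_positions))
    ]
```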

TEXT-TO-SPEECH SYNTHESIS METHOD AND SYSTEM, AND A METHOD OF TRAINING A TEXT-TO-SPEECH SYNTHESIS SYSTEM

A text-to-speech synthesis method includes receiving text, inputting the received text into a synthesizer that includes a prediction network configured to convert the received text into speech data having a speech attribute that includes emotion, intention, projection, pace, and/or accent, and outputting said speech data. The prediction network is obtained by obtaining a first sub-dataset and a second sub-dataset, where the first sub-dataset and the second sub-dataset each include audio samples and corresponding text, and the speech attribute of the audio samples of the second sub-dataset is more pronounced than that of the audio samples of the first sub-dataset; training a first model using the first sub-dataset until a performance metric reaches a first predetermined value; training a second model by further training the first model using the second sub-dataset until the performance metric reaches a second predetermined value; and selecting one of the trained models as the prediction network.
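
The two-stage training scheme can be sketched as follows. The toy model, metric, and thresholds are placeholders; only the control flow (train on the first sub-dataset to one threshold, continue training on the more pronounced second sub-dataset to another, then select a trained model) mirrors the abstract.

```python
import copy
import random


class ToyPredictionNetwork:
    """Stand-in for the text-to-speech prediction network."""
    def __init__(self):
        self.quality = 0.0

    def train_one_epoch(self, dataset):
        # Placeholder update: the metric improves a little each epoch.
        self.quality += 0.1 * random.random()


def evaluate(model, dataset):
    """Placeholder performance metric."""
    return model.quality


def train_until(model, dataset, target):
    """Train until the performance metric reaches the given predetermined value."""
    while evaluate(model, dataset) < target:
        model.train_one_epoch(dataset)
    return model


def build_prediction_network(first_sub_dataset, second_sub_dataset,
                             first_target=0.5, second_target=0.8):
    # Stage 1: train a first model on the less pronounced sub-dataset.
    first_model = train_until(ToyPredictionNetwork(), first_sub_dataset, first_target)

    # Stage 2: further train a copy of the first model on the more pronounced sub-dataset.
    second_model = train_until(copy.deepcopy(first_model), second_sub_dataset, second_target)

    # Select one of the trained models as the prediction network
    # (here, simply whichever scores better on the second sub-dataset).
    return max([first_model, second_model],
               key=lambda m: evaluate(m, second_sub_dataset))
```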

Automated call requests with status updates

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, relating to status updates for automated calls. In some implementations, a method includes determining, by a task manager module, that a triggering event has occurred for providing a current status of a user call request; determining, by the task manager module, the current status of the user call request; generating a representation of that current status; and providing the generated representation to the user.
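
A minimal sketch of the described status-update sequence follows, assuming a time-based triggering event and a textual representation; the TaskManager interface shown here is illustrative, not taken from the publication.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class CallStatus(Enum):
    PENDING = "pending"
    IN_PROGRESS = "in progress"
    ON_HOLD = "on hold"
    COMPLETED = "completed"


@dataclass
class CallRequest:
    request_id: str
    status: CallStatus = CallStatus.PENDING
    seconds_since_update: int = 0


class TaskManager:
    TRIGGER_INTERVAL_SECONDS = 120  # assumed triggering condition: time since last update

    def triggering_event_occurred(self, request: CallRequest) -> bool:
        return request.seconds_since_update >= self.TRIGGER_INTERVAL_SECONDS

    def current_status(self, request: CallRequest) -> CallStatus:
        return request.status

    def generate_representation(self, status: CallStatus) -> str:
        return f"Your call request is currently {status.value}."

    def provide_update(self, request: CallRequest) -> Optional[str]:
        """Detect the trigger, look up the status, then build and return a representation."""
        if not self.triggering_event_occurred(request):
            return None
        representation = self.generate_representation(self.current_status(request))
        # In a real system this would be delivered to the user (e.g. as a notification).
        return representation
```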

DYNAMIC ADJUSTMENT OF CONTENT DESCRIPTIONS FOR VISUAL COMPONENTS
20230230577 · 2023-07-20

In some implementations, a mobile application may receive at least one string included in at least one content description associated with at least one visual component of a screen generated by the mobile application. The mobile application may apply a function to the at least one string, where the function performs a targeted replacement of characters included in the at least one string based on at least one optimization associated with a text-to-speech algorithm. Accordingly, the mobile application may receive, from the function, output that includes at least one modified string based on the at least one string, and may generate an audio signal, based on the at least one modified string, using the text-to-speech algorithm.
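
The targeted replacement can be illustrated with a small sketch. The particular character-to-word table and the engine interface are assumptions, since the abstract only specifies that characters are replaced according to an optimization associated with the text-to-speech algorithm.

```python
# Assumed replacement table: characters that TTS engines often mispronounce or skip.
REPLACEMENTS = {
    "&": " and ",
    "/": " or ",
    "#": " number ",
    "@": " at ",
}


def adjust_content_description(description: str) -> str:
    """Return a modified string that reads more naturally when spoken aloud."""
    for char, spoken in REPLACEMENTS.items():
        description = description.replace(char, spoken)
    return " ".join(description.split())  # collapse the extra whitespace introduced above


def describe_component(description: str, tts_engine) -> bytes:
    """Apply the targeted replacement, then synthesize the modified string."""
    modified = adjust_content_description(description)
    return tts_engine.synthesize(modified)  # assumed engine interface
```

With this table, adjust_content_description("Save & exit #2") would return "Save and exit number 2", which a speech engine reads more naturally than the raw string.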

Real-time neural text-to-speech

Embodiments of a production-quality text-to-speech (TTS) system constructed from deep neural networks are described. System embodiments comprise five major building blocks: a segmentation model for locating phoneme boundaries, a grapheme-to-phoneme conversion model, a phoneme duration prediction model, a fundamental frequency prediction model, and an audio synthesis model. For embodiments of the segmentation model, phoneme boundary detection was performed with deep neural networks using Connectionist Temporal Classification (CTC) loss. For embodiments of the audio synthesis model, a variant of WaveNet was created that requires fewer parameters and trains faster than the original. By using a neural network for each component, system embodiments are simpler and more flexible than traditional TTS systems, where each component requires laborious feature engineering and extensive domain expertise. Inference with system embodiments may be performed faster than real time.
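
At inference time the building blocks chain roughly as sketched below (the segmentation model is used during training to locate phoneme boundaries, so it does not appear here). Each stage is a placeholder callable standing in for a trained network, such as the WaveNet variant used for audio synthesis; the exact interfaces are assumptions.

```python
from typing import Callable, List

import numpy as np


def synthesize(text: str,
               grapheme_to_phoneme: Callable[[str], List[str]],
               predict_durations: Callable[[List[str]], List[float]],
               predict_f0: Callable[[List[str], List[float]], np.ndarray],
               audio_model: Callable[[List[str], List[float], np.ndarray], np.ndarray]
               ) -> np.ndarray:
    """Chain the inference-time stages: text -> phonemes -> durations/F0 -> waveform."""
    phonemes = grapheme_to_phoneme(text)         # grapheme-to-phoneme conversion model
    durations = predict_durations(phonemes)      # phoneme duration prediction model
    f0 = predict_f0(phonemes, durations)         # fundamental frequency prediction model
    return audio_model(phonemes, durations, f0)  # audio synthesis model (WaveNet variant)
```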

Stylizing text-to-speech (TTS) voice response for assistant systems

In one embodiment, a method includes receiving a voice input from a user and determining a first style of the voice input based on first features extracted from the voice input. A second style, having second features, may then be determined for a voice response based on the first style. Finally, the voice response may be generated based on the second features of the second style, and this voice response may be provided in response to the voice input.
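
A schematic sketch of this style-matching flow follows, assuming a small feature dictionary (pace, pitch, energy) and a synthesizer that accepts per-utterance style features; both assumptions go beyond what the abstract states.

```python
from dataclasses import dataclass
from typing import Dict


@dataclass
class SpeechStyle:
    features: Dict[str, float]   # e.g. pace, pitch, energy


def extract_style(voice_input_samples) -> SpeechStyle:
    """Placeholder for extracting the first style's features from the voice input."""
    return SpeechStyle(features={"pace": 1.0, "pitch": 0.5, "energy": 0.7})


def choose_response_style(input_style: SpeechStyle) -> SpeechStyle:
    """Derive the second style; this toy version simply mirrors the user's style."""
    return SpeechStyle(features=dict(input_style.features))


def respond(voice_input_samples, response_text: str, synthesizer):
    """Generate a voice response whose style is based on the style of the input."""
    first_style = extract_style(voice_input_samples)
    second_style = choose_response_style(first_style)
    # The synthesizer is assumed to accept per-utterance style features.
    return synthesizer(response_text, style=second_style.features)
```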