G10L13/047

Injecting Text in Self-Supervised Speech Pre-training

A method includes receiving training data that includes unspoken text utterances and un-transcribed non-synthetic speech utterances. Each unspoken text utterance is not paired with any corresponding spoken utterance of non-synthetic speech. Each un-transcribed non-synthetic speech utterance is not paired with a corresponding transcription. The method also includes generating a corresponding synthetic speech representation for each unspoken text utterance of the received training data using a text-to-speech model. The method also includes pre-training an audio encoder on the synthetic speech representations generated for the unspoken text utterances and on the un-transcribed non-synthetic speech utterances to teach the audio encoder to jointly learn shared speech and text representations.
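
As an illustration only (it is not part of the disclosure), the pre-training flow described in this abstract could be sketched in Python roughly as follows; the TTS model, audio encoder, self-supervised loss, and data batching are all hypothetical placeholders rather than anything specified by the patent:

```python
import torch

def pretrain_audio_encoder(audio_encoder, tts_model, ssl_loss, optimizer,
                           unspoken_text_batches, untranscribed_speech_batches):
    """Pre-train a shared audio encoder on synthetic speech generated from
    unpaired text plus real, un-transcribed speech (all models are placeholders)."""
    for text_batch, real_speech_batch in zip(unspoken_text_batches,
                                             untranscribed_speech_batches):
        # The TTS model turns each unspoken text utterance into a synthetic
        # speech representation; no gradients flow into the TTS model here.
        with torch.no_grad():
            synthetic_speech = tts_model(text_batch)

        # The same encoder sees both synthetic and non-synthetic speech, so the
        # self-supervised objective pushes it toward a shared speech/text space.
        loss = (ssl_loss(audio_encoder(synthetic_speech))
                + ssl_loss(audio_encoder(real_speech_batch)))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```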

Two-Level Text-To-Speech Systems Using Synthetic Training Data

A method includes obtaining training data including a plurality of training audio signals and corresponding transcripts. Each training audio signal is spoken by a target speaker in a first accent/dialect. For each training audio signal of the training data, the method includes generating a training synthesized speech representation spoken by the target speaker in a second accent/dialect different from the first accent/dialect, and training a text-to-speech (TTS) system based on the corresponding transcript and the training synthesized speech representation. The method also includes receiving an input text utterance to be synthesized into speech in the second accent/dialect. The method also includes obtaining conditioning inputs that include a speaker embedding and an accent/dialect identifier that identifies the second accent/dialect. The method also includes generating an output audio waveform corresponding to a synthesized speech representation of the input text utterance that clones the voice of the target speaker in the second accent/dialect.
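
For illustration, assuming hypothetical placeholder models (an accent-conversion model that produces the synthetic training targets and a conditionable TTS system), the two stages described in this abstract might look roughly like the sketch below; none of these names or interfaces come from the disclosure:

```python
import torch

def build_synthetic_targets(accent_converter, training_data):
    """For each (first-accent audio, transcript) pair, produce a training
    synthesized speech representation of the same speaker in the second
    accent/dialect. `accent_converter` is a placeholder for that model."""
    return [(transcript, accent_converter(audio))
            for audio, transcript in training_data]

def train_tts(tts_system, synthetic_targets, speaker_embedding, accent_id, optimizer):
    """Fit the TTS system to map transcripts plus conditioning inputs onto the
    synthesized second-accent targets."""
    for transcript, target in synthetic_targets:
        predicted = tts_system(transcript,
                               speaker_embedding=speaker_embedding,
                               accent_id=accent_id)
        loss = torch.nn.functional.l1_loss(predicted, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

def synthesize(tts_system, input_text, speaker_embedding, accent_id):
    """Generate an output waveform cloning the target speaker's voice in the
    second accent/dialect."""
    with torch.no_grad():
        return tts_system(input_text,
                          speaker_embedding=speaker_embedding,
                          accent_id=accent_id)
```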

METHOD AND ELECTRONIC DEVICE FOR INTELLIGENTLY READING DISPLAYED CONTENTS

A method for intelligently reading displayed contents by an electronic device is provided. The method includes obtaining a screen representation based on a plurality of contents displayed on a screen of the electronic device. The method includes extracting from the plurality of contents, based on the screen representation, a plurality of insights comprising at least one of an intent, an importance, an emotion, a sound representation, and an information sequence of the plurality of contents. The method includes generating audio that emulates the extracted plurality of insights.
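
As a rough, non-authoritative sketch of the pipeline this abstract describes (screen contents to screen representation, to insights, to audio), with every model name and interface invented here purely for illustration:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Insights:
    """Insight types named in the abstract; all optional, since the method
    extracts 'at least one of' them."""
    intent: Optional[str] = None
    importance: Optional[float] = None
    emotion: Optional[str] = None
    sound_representation: Optional[str] = None
    information_sequence: list = field(default_factory=list)

def read_screen_aloud(displayed_contents, screen_encoder, insight_extractor, tts_engine):
    """Pipeline: displayed contents -> screen representation -> insights ->
    audio emulating those insights. All three callables are placeholders."""
    screen_representation = screen_encoder(displayed_contents)
    insights = insight_extractor(screen_representation)
    # The TTS engine is expected to honour ordering (information_sequence),
    # emphasis (importance), and expressive style (intent, emotion, sound
    # representation) when rendering the audio.
    return tts_engine(insights)
```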

SPEECH SYNTHESIS METHOD AND SYSTEM
20230215420 · 2023-07-06

Disclosed is a speech synthesis method including: acquiring fundamental frequency information and acoustic feature information from original speech; generating an impulse train from the fundamental frequency information and inputting it to a harmonic time-varying filter; inputting the acoustic feature information into a neural network filter estimator to obtain corresponding impulse response information; generating a noise signal by a noise generator; determining, by the harmonic time-varying filter, harmonic component information through filtering processing on the impulse train and the impulse response information; determining, by a noise time-varying filter, noise component information based on the impulse response information and the noise signal; and generating synthesized speech from the harmonic component information and the noise component information. Acoustic features are processed to obtain corresponding impulse response information, and the harmonic component information and the noise component information are modeled respectively, thereby reducing the computation of speech synthesis and improving the quality of the synthesized speech.
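
A minimal sketch of the signal flow described above, using NumPy and assuming a hypothetical `filter_estimator` callable standing in for the neural network filter estimator; the disclosed filters are time-varying, whereas this sketch applies a single impulse response per branch purely to keep the example short:

```python
import numpy as np

def impulse_train_from_f0(f0_per_frame, sample_rate=16000, hop=80):
    """Turn frame-level fundamental frequency into an excitation impulse train;
    unvoiced frames (f0 == 0) emit no impulses."""
    excitation = np.zeros(len(f0_per_frame) * hop)
    phase = 0.0
    for i, f0 in enumerate(f0_per_frame):
        for n in range(hop):
            if f0 <= 0:
                continue
            phase += f0 / sample_rate
            if phase >= 1.0:
                phase -= 1.0
                excitation[i * hop + n] = 1.0
    return excitation

def synthesize(f0_per_frame, acoustic_features, filter_estimator,
               rng=np.random.default_rng(0)):
    """The neural `filter_estimator` maps acoustic features to impulse responses;
    the harmonic branch filters the impulse train, the noise branch filters
    white noise, and the two components are summed into the output speech."""
    excitation = impulse_train_from_f0(f0_per_frame)
    noise = rng.standard_normal(len(excitation))
    harmonic_ir, noise_ir = filter_estimator(acoustic_features)
    harmonic_component = np.convolve(excitation, harmonic_ir, mode="same")
    noise_component = np.convolve(noise, noise_ir, mode="same")
    return harmonic_component + noise_component
```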

END-TO-END NEURAL TEXT-TO-SPEECH MODEL WITH PROSODY CONTROL

Methods and systems are provided for generating an end-to-end neural text-to-speech (TTS) model that processes an input text to generate speech representations. An annotated set of text documents, including annotations inserted therein to indicate prosodic features, is input into the TTS model. The TTS model is trained using the annotated dataset and a corresponding dataset of speech representations of the text documents that include prosody associated with the indicated prosodic features. The trained TTS model learns to associate the prosody with the annotations.
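
Purely as an illustration, and assuming a hypothetical inline tag scheme for the annotations (the abstract does not specify one), training on annotated text could be sketched like this:

```python
import torch

def annotate(text, spans):
    """Insert hypothetical prosody tags (e.g. emphasis, pause) around character
    spans of the text; the abstract does not fix a concrete annotation scheme."""
    pieces, prev = [], 0
    for start, end, tag in sorted(spans):
        pieces.append(text[prev:start])
        pieces.append(f"<{tag}>{text[start:end]}</{tag}>")
        prev = end
    pieces.append(text[prev:])
    return "".join(pieces)

def train_step(tts_model, tokenizer, optimizer, text, spans, target_speech):
    """One training step: the model reads the annotated text while the target
    speech realises the corresponding prosody, so the model learns to associate
    the annotations with that prosody."""
    tokens = tokenizer(annotate(text, spans))
    predicted = tts_model(tokens)
    loss = torch.nn.functional.l1_loss(predicted, target_speech)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

For example, `annotate("please wait here", [(7, 11, "emphasis")])` yields `please <emphasis>wait</emphasis> here`, and the paired speech target would carry the corresponding emphasis.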
