
Method And Apparatus For Training Model, Method And Apparatus For Synthesizing Speech, Device And Storage Medium
20210390943 · 2021-12-16

The present disclosure discloses a method and apparatus for training a model, a method and apparatus for synthesizing speech, a device and a storage medium, and relates to the field of natural language processing and deep learning technology. The method for training a model may include: determining a phoneme feature and a prosodic word boundary feature of sample text data; inserting a pause character into the phoneme feature according to the prosodic word boundary feature to obtain a combined feature of the sample text data; and training an initial speech synthesis model according to the combined feature of the sample text data, to obtain a target speech synthesis model.
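
A minimal sketch of the pause-insertion step this abstract describes, assuming phonemes grouped by prosodic word and a three-level boundary label scheme; the "PW"/"PPH"/"IPH" labels and the "<pau>" symbol are illustrative assumptions, not the patent's encoding:

    # Illustrative sketch: insert a pause character into a phoneme sequence
    # according to prosodic word boundary labels to form the combined feature.
    PAUSE = "<pau>"  # assumed pause symbol

    def combined_feature(phonemes_per_word, boundary_labels):
        """phonemes_per_word: one phoneme list per prosodic word.
        boundary_labels: per-word labels, e.g. "PW" (prosodic word),
        "PPH" (phonological phrase), "IPH" (intonational phrase)."""
        combined = []
        for phonemes, label in zip(phonemes_per_word, boundary_labels):
            combined.extend(phonemes)
            if label in ("PPH", "IPH"):  # pause only at stronger boundaries
                combined.append(PAUSE)
        return combined

    words = [["n", "i"], ["h", "ao"], ["sh", "i", "j", "ie"]]
    labels = ["PW", "PPH", "IPH"]
    print(combined_feature(words, labels))
    # ['n', 'i', 'h', 'ao', '<pau>', 'sh', 'i', 'j', 'ie', '<pau>']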

GENERATING AUDIO DATA USING UNALIGNED TEXT INPUTS WITH AN ADVERSARIAL NETWORK

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for using a generative neural network to convert conditioning text inputs to audio outputs. The generative neural network includes an alignment neural network that is configured to receive a generative input that includes the conditioning text input and to process the generative input to generate an aligned conditioning sequence that comprises a respective feature representation at each of a plurality of first time steps and that is temporally aligned with the audio output.
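
One common way to realize the temporal alignment this abstract describes is to upsample each token's feature representation by a predicted number of audio time steps; the sketch below assumes that reading and uses made-up shapes, not the patent's actual network:

    import numpy as np

    def align_conditioning(token_features, steps_per_token):
        """Repeat each token's feature vector for its predicted number of
        time steps, yielding a sequence aligned with the audio output.
        token_features: (num_tokens, feature_dim) array."""
        return np.repeat(token_features, steps_per_token, axis=0)

    feats = np.random.randn(4, 8)        # 4 text tokens, 8-dim features
    steps = [3, 1, 5, 2]                 # predicted time steps per token
    aligned = align_conditioning(feats, steps)
    print(aligned.shape)                 # (11, 8): one row per time step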

ELECTRONIC DEVICE AND METHOD OF CONTROLLING SPEECH RECOGNITION BY ELECTRONIC DEVICE
20210375265 · 2021-12-02

An electronic device for adjusting a speech output rate (speech rate) of speech output data.
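
The abstract does not say how the rate adjustment works; one simple mechanism, sketched below purely as an assumption, is to scale predicted phoneme durations by a rate factor before synthesis:

    # Hypothetical sketch: speech-rate control by scaling phoneme durations.
    def scale_durations(durations_s, rate):
        """rate > 1.0 speeds speech up; rate < 1.0 slows it down."""
        return [d / rate for d in durations_s]

    # Each duration is shortened by a factor of 1.25 (25% faster speech).
    print(scale_durations([0.08, 0.12, 0.05], rate=1.25))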

DURATION INFORMED ATTENTION NETWORK (DURIAN) FOR AUDIO-VISUAL SYNTHESIS
20210375259 · 2021-12-02

A method and apparatus include receiving a text input that includes a sequence of text components. Respective temporal durations of the text components are determined using a duration model. A spectrogram frame is generated based on the duration model. An audio waveform is generated based on the spectrogram frame. Video information is generated based on the audio waveform. The audio waveform is provided as an output along with a corresponding video.
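
A small arithmetic sketch of one step in this pipeline: turning predicted durations into spectrogram frame counts at a fixed analysis hop. The sample rate and hop length are illustrative, not taken from the patent:

    # Sketch: convert predicted per-component durations (seconds) into
    # spectrogram frame counts for a fixed analysis hop.
    SAMPLE_RATE = 22050   # assumed audio sample rate (Hz)
    HOP_LENGTH = 256      # assumed samples per spectrogram frame

    def durations_to_frames(durations_s):
        return [round(d * SAMPLE_RATE / HOP_LENGTH) for d in durations_s]

    print(durations_to_frames([0.08, 0.12, 0.05]))   # [7, 10, 4]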

SYSTEM AND METHODOLOGY FOR MODULATION OF DYNAMIC GAPS IN SPEECH
20220189500 · 2022-06-16

A system capable of speech gap modulation is configured to: receive at least one composite speech portion, which comprises at least one speech portion and at least one dynamic-gap portion, wherein the speech portion(s) comprise at least one variable-value speech portion and the dynamic-gap portion(s) are associated with a pause in speech; receive at least one synchronization point, wherein each synchronization point associates a point in time in the composite speech portion(s) with a point in time in other media portion(s); and modulate the dynamic-gap portion(s), based at least partially on the variable-value speech portion(s) and on the synchronization point(s), thereby generating at least one modulated composite speech portion. This facilitates improved synchronization of the modulated composite speech portion(s) and the other media portion(s) at the synchronization point(s) when combining the other media portion(s) and the audio-format modulated composite speech portion(s) into a synchronized multimedia output.
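
One way to realize the gap modulation described above, sketched here under the assumption that the composite speech portion can be represented as a flat list of (kind, duration) segments, is to rescale the dynamic gaps so the speech ends exactly at the synchronization point:

    # Sketch: scale dynamic-gap durations so the composite speech portion
    # ends at a synchronization point; the segment format is an assumption.
    def modulate_gaps(segments, sync_time_s):
        """segments: list of ("speech" | "gap", duration_s) pairs.
        Speech durations stay fixed; gaps are rescaled so the total
        duration equals sync_time_s."""
        speech_total = sum(d for kind, d in segments if kind == "speech")
        gap_total = sum(d for kind, d in segments if kind == "gap")
        if gap_total == 0:
            return segments
        scale = (sync_time_s - speech_total) / gap_total
        return [(k, d * scale if k == "gap" else d) for k, d in segments]

    segs = [("speech", 1.2), ("gap", 0.5), ("speech", 0.8), ("gap", 0.5)]
    print(modulate_gaps(segs, sync_time_s=3.5))
    # each 0.5 s gap is stretched to 0.75 s so the speech ends at 3.5 s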

METHOD AND SYSTEM FOR SYNTHESIZING CROSS-LINGUAL SPEECH

A method for synthesizing cross-lingual speech includes receiving a request for synthesizing speech, the request for synthesizing speech including a target text document and a target language. Phonetic transcriptions are generated for the target text document. Prosodic annotations for the target text document are generated based on the target text document and the target language. Phone durations and acoustic features are generated based on the phonetic transcriptions and the prosodic annotations using a neural network. A speech corresponding to the target text document in the target language is synthesized based on the generated phone durations and acoustic features.
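
A runnable skeleton of the four stages in the order the abstract lists them; every stage below is a trivial stand-in added for illustration, not the patented models:

    # Pipeline skeleton: G2P -> prosodic annotation -> neural duration and
    # acoustic prediction -> waveform synthesis. All stages are stand-ins.
    def phonetic_transcription(text):
        return [c for c in text if not c.isspace()]   # toy G2P

    def prosodic_annotation(text, language):
        return {"language": language, "breaks": text.count(" ")}

    def acoustic_model(phones, prosody):
        durations = [0.08] * len(phones)              # uniform stand-in durations
        acoustics = [[0.0] * 80 for _ in phones]      # fake 80-dim features
        return durations, acoustics

    def vocoder(durations, acoustics):
        return [0.0] * int(sum(durations) * 16000)    # silent stand-in waveform

    def synthesize(target_text, target_language):
        phones = phonetic_transcription(target_text)
        prosody = prosodic_annotation(target_text, target_language)
        durations, acoustics = acoustic_model(phones, prosody)
        return vocoder(durations, acoustics)

    samples = synthesize("hello world", "fr")
    print(len(samples))   # number of waveform samples at 16 kHz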

Speech synthesis method and device

In a speech synthesis method, an emotion intensity feature vector is set for a target synthesis text, an acoustic feature vector corresponding to an emotion intensity is generated based on the emotion intensity feature vector by using an acoustic model, and a speech corresponding to the emotion intensity is synthesized based on the acoustic feature vector. The emotion intensity feature vector is continuously adjustable, and emotional speech of different intensities can be generated based on the values of different emotion intensity feature vectors, so that the emotion types of synthesized speech are more diversified. This application may be applied to a human-computer interaction process in the artificial intelligence (AI) field, to perform intelligent emotional speech synthesis.
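
The continuously adjustable intensity could be realized, for example, by scaling a learned emotion embedding before it conditions the acoustic model; the sketch below assumes that reading and uses random stand-in embeddings:

    import numpy as np

    # Sketch: an emotion intensity feature vector as a scaled embedding.
    rng = np.random.default_rng(0)
    EMOTION_EMBEDDINGS = {            # stand-ins for learned embeddings
        "happy": rng.standard_normal(16),
        "sad": rng.standard_normal(16),
    }

    def emotion_feature(emotion, intensity):
        """intensity is continuous, e.g. in [0, 1]; larger means stronger."""
        return intensity * EMOTION_EMBEDDINGS[emotion]

    for level in (0.2, 0.5, 1.0):
        vec = emotion_feature("happy", level)
        print(level, round(float(np.linalg.norm(vec)), 3))  # norm grows with intensity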

Variable-speed phonetic pronunciation machine

A machine causes a touch-sensitive screen to present a graphical user interface that depicts a slider control aligned with a word that includes a first alphabetic letter and a second alphabetic letter. A first zone of the slider control corresponds to the first alphabetic letter, and a second zone of the slider control corresponds to the second alphabetic letter. The machine detects a touch-and-drag input that begins within the first zone and enters the second zone. In response to the touch-and-drag input beginning within the first zone, the machine presents a first phoneme that corresponds to the first alphabetic letter, and the presenting of the first phoneme may include audio playback of the first phoneme. In response to the touch-and-drag input entering the second zone, the machine presents a second phoneme that corresponds to the second alphabetic letter, which may include audio playback of the second phoneme.
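
A minimal sketch of the zone logic this abstract describes, assuming a horizontal slider of known pixel width and a toy grapheme-to-phoneme map (both are assumptions for illustration):

    # Sketch: map a drag x-coordinate to the letter zone under it and
    # "play" that letter's phoneme (printed here instead of audio).
    WORD = "go"
    PHONEMES = {"g": "/g/", "o": "/oU/"}   # toy per-letter phonemes

    def zone_at(x, slider_width=200):
        """Index of the letter zone containing x-coordinate x."""
        zone_width = slider_width / len(WORD)
        return min(int(x // zone_width), len(WORD) - 1)

    def on_drag(x):
        letter = WORD[zone_at(x)]
        print(f"playing {PHONEMES[letter]} for letter '{letter}'")

    on_drag(40)    # drag begins in the first zone  -> /g/
    on_drag(150)   # drag enters the second zone    -> /oU/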

Clockwork Hierarchal Variational Encoder

A method for providing a frame-based mel spectral representation of speech includes receiving a text utterance having at least one word, and selecting a mel spectral embedding for the text utterance. Each word in the text utterance has at least one syllable and each syllable has at least one phoneme. For each phoneme, using the selected mel spectral embedding, the method also includes: predicting a duration of the corresponding phoneme by encoding linguistic features of the corresponding phoneme with a corresponding syllable embedding for the syllable that includes the corresponding phoneme; and generating a plurality of fixed-length predicted mel-frequency spectrogram frames based on the predicted duration for the corresponding phoneme. Each fixed-length predicted mel-frequency spectrogram frame represents mel-spectral information of the corresponding phoneme.
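
A sketch of the word, syllable, and phoneme hierarchy feeding the fixed-length frame generation; the structure, durations, and 12.5 ms frame length below are illustrative assumptions:

    # Sketch: expand per-phoneme predicted durations into counts of
    # fixed-length mel-spectrogram frames.
    FRAME_MS = 12.5   # assumed fixed frame length

    utterance = [     # one word with two syllables; (phoneme, duration_ms)
        {"word": "hello",
         "syllables": [
             {"phonemes": [("h", 50.0), ("@", 60.0)]},
             {"phonemes": [("l", 70.0), ("oU", 120.0)]},
         ]},
    ]

    def frame_counts(utt):
        counts = []
        for word in utt:
            for syllable in word["syllables"]:
                for phoneme, dur_ms in syllable["phonemes"]:
                    counts.append((phoneme, max(1, round(dur_ms / FRAME_MS))))
        return counts

    print(frame_counts(utterance))
    # [('h', 4), ('@', 5), ('l', 6), ('oU', 10)]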

TEXT-BASED VIRTUAL OBJECT ANIMATION GENERATION METHOD, APPARATUS, STORAGE MEDIUM, AND TERMINAL

A text-based virtual object animation generation method includes acquiring text information, where the text information includes an original text of a virtual object animation to be generated; analyzing an emotional feature of the text information; performing speech synthesis according to the emotional feature, a prosodic boundary, and the text information to obtain audio information, where the audio information includes emotional speech obtained by conversion based on the original text; and generating a corresponding virtual object animation based on the text information and the audio information, where the virtual object animation is synchronized in time with the audio information.
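
One simple way to keep the generated animation synchronized in time with the audio, sketched here purely as an assumption, is to derive animation frame timestamps directly from the audio duration at a fixed frame rate:

    # Sketch: one timestamp per animation frame spanning the audio.
    FPS = 30   # assumed animation frame rate

    def animation_timeline(audio_duration_s):
        n_frames = round(audio_duration_s * FPS)
        return [i / FPS for i in range(n_frames)]

    timeline = animation_timeline(2.0)   # e.g. 2 s of synthesized speech
    print(len(timeline))                 # 60 frames, aligned to the audio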