Patent classifications
G10L13/10
System Providing Expressive and Emotive Text-to-Speech
A text-to-speech system includes a text and labels module receiving a text input and providing a text analysis and a label with a phonetic description of the text. A label buffer receives the label from the text and labels module. A parameter generation module accesses the label from the label buffer and generates a speech generation parameter. A parameter buffer receives the parameter from the parameter generation module. An audio generation module receives the text input, the label, and/or the parameter and generates a plurality of audio samples. A scheduler monitors and schedules the text and labels module, the parameter generation module, and/or the audio generation module. The parameter generation module is further configured to initialize a voice identifier with a Voice Style Sheet (VSS) parameter, receive an input indicating a modification to the VSS parameter, and modify the VSS parameter according to the modification.
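The staged pipeline described above (modules, buffers, and a scheduler) can be illustrated with a minimal sketch. The class names, the deque-based buffers, and the toy phonetic description below are assumptions for illustration; the patent does not specify these interfaces.

```python
# Minimal sketch of the described pipeline, assuming hypothetical module names
# (TextAndLabels, ParameterGeneration, AudioGeneration, Scheduler) and toy logic.
from collections import deque

class TextAndLabels:
    def process(self, text):
        # Produce a label carrying a (toy) phonetic description of the text.
        return {"text": text, "phonemes": list(text.lower())}

class ParameterGeneration:
    def __init__(self):
        # Voice Style Sheet (VSS) parameters initialised for a voice identifier.
        self.vss = {"voice_id": "default", "pitch": 1.0, "rate": 1.0}

    def modify_vss(self, **changes):
        # Apply a requested modification to the VSS parameters.
        self.vss.update(changes)

    def process(self, label):
        return {"label": label, "vss": dict(self.vss)}

class AudioGeneration:
    def process(self, params):
        # Stand-in for real synthesis: one "sample" per phoneme.
        return [0.0 for _ in params["label"]["phonemes"]]

class Scheduler:
    """Monitors the label and parameter buffers and runs each stage in turn."""
    def __init__(self):
        self.labels, self.params = deque(), deque()
        self.text_stage = TextAndLabels()
        self.param_stage = ParameterGeneration()
        self.audio_stage = AudioGeneration()

    def synthesize(self, text):
        self.labels.append(self.text_stage.process(text))
        self.params.append(self.param_stage.process(self.labels.popleft()))
        return self.audio_stage.process(self.params.popleft())

scheduler = Scheduler()
scheduler.param_stage.modify_vss(pitch=1.2)   # expressive modification of a VSS parameter
print(len(scheduler.synthesize("Hello world")))
```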
GLOBAL PROSODY STYLE TRANSFER WITHOUT TEXT TRANSCRIPTIONS
A computer-implemented method is provided of using a machine learning model for disentanglement of prosody in spoken natural language. The method includes encoding, by a computing device, the spoken natural language to produce content code. The method further includes resampling, by the computing device without text transcriptions, the content code to obscure the prosody by applying an unsupervised technique to the machine learning model to generate prosody-obscured content code. The method additionally includes decoding, by the computing device, the prosody-obscured content code to synthesize speech indirectly based upon the content code.
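The encode, resample, and decode steps can be sketched as below. The encoder and decoder are placeholder linear projections, and the random segment resampling is only a plausible stand-in for the unsupervised technique named in the abstract, not the actual models.

```python
# Toy sketch of encode -> resample (to obscure prosody) -> decode.
import numpy as np

rng = np.random.default_rng(0)

def encode(speech_frames):
    # Placeholder content encoder: project each frame to a small content code.
    proj = rng.standard_normal((speech_frames.shape[1], 8))
    return speech_frames @ proj

def random_resample(code, max_segment=4):
    # Split the content code into short segments and randomly re-sample each
    # segment in time, so utterance-level rhythm (prosody) is obscured.
    out, i = [], 0
    while i < len(code):
        seg = code[i:i + rng.integers(1, max_segment + 1)]
        new_len = max(1, int(len(seg) * rng.uniform(0.5, 1.5)))
        idx = np.linspace(0, len(seg) - 1, new_len).round().astype(int)
        out.append(seg[idx])
        i += len(seg)
    return np.concatenate(out)

def decode(code, n_mels=80):
    # Placeholder decoder back to a spectrogram-like representation.
    proj = rng.standard_normal((code.shape[1], n_mels))
    return code @ proj

speech = rng.standard_normal((120, 40))          # 120 frames of dummy features
prosody_obscured = random_resample(encode(speech))
print(decode(prosody_obscured).shape)            # synthesized from prosody-obscured code
```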
METHOD OF CONVERTING SPEECH, ELECTRONIC DEVICE, AND READABLE STORAGE MEDIUM
A method of converting speech, an electronic device, and a readable storage medium are provided, which relate to the field of artificial intelligence technology such as speech and deep learning, and in particular to speech conversion technology. The method of converting speech includes: acquiring a first speech of a target speaker; acquiring a speech of an original speaker; extracting a first feature parameter from the first speech of the target speaker; extracting a second feature parameter from the speech of the original speaker; processing the first feature parameter and the second feature parameter to obtain Mel spectrum information; and converting the Mel spectrum information to output a second speech of the target speaker having a tone identical to that of the first speech of the target speaker and a content identical to that of the speech of the original speaker.
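A rough sketch of this conversion flow follows. The feature extractors, the combination step producing Mel spectrum information, and the vocoder are all placeholders invented for illustration; the abstract does not define the actual models or feature parameters.

```python
# Toy voice-conversion sketch: target-speaker features + original-speaker content
# -> Mel spectrum information -> converted speech.
import numpy as np

rng = np.random.default_rng(0)

def speaker_features(speech):
    # First feature parameter: a timbre/tone summary of the target speaker.
    return speech.mean(axis=0)

def content_features(speech):
    # Second feature parameter: frame-level content from the original speaker.
    return speech

def to_mel(speaker_vec, content):
    # Combine speaker and content features into Mel spectrum information.
    proj = rng.standard_normal((content.shape[1] + speaker_vec.size, 80))
    joined = np.hstack([content, np.tile(speaker_vec, (len(content), 1))])
    return joined @ proj

def vocode(mel):
    # Dummy vocoder: turn the Mel spectrum information into a waveform-length signal.
    return np.zeros(len(mel) * 256)

target_speech = rng.standard_normal((100, 40))   # first speech (target speaker)
source_speech = rng.standard_normal((150, 40))   # speech of the original speaker

mel = to_mel(speaker_features(target_speech), content_features(source_speech))
converted = vocode(mel)                          # second speech of the target speaker
print(mel.shape, converted.shape)
```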
Text-to-speech synthesis method and apparatus using machine learning, and computer-readable storage medium
A text-to-speech synthesis method using machine learning is disclosed. The method includes generating a single artificial neural network text-to-speech synthesis model by performing machine learning based on a plurality of learning texts and speech data corresponding to the plurality of learning texts, receiving an input text, receiving an articulatory feature of a speaker, and generating output speech data for the input text that reflects the articulatory feature of the speaker by inputting the articulatory feature of the speaker to the single artificial neural network text-to-speech synthesis model.
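The speaker-conditioned synthesis step might look like the sketch below, assuming the articulatory feature of the speaker can be represented as a fixed-length vector appended to character embeddings; the linear projection stands in for the trained single neural network.

```python
# Toy sketch of a single TTS model conditioned on a per-speaker feature vector.
import numpy as np

rng = np.random.default_rng(0)

class SingleTTSModel:
    def __init__(self, n_chars=128, text_dim=16, speaker_dim=8, n_mels=80):
        self.char_table = rng.standard_normal((n_chars, text_dim))
        self.out_proj = rng.standard_normal((text_dim + speaker_dim, n_mels))

    def synthesize(self, text, speaker_feature):
        # Embed characters, append the speaker feature to every frame,
        # and project to output speech data (a Mel-like spectrogram).
        char_emb = self.char_table[[ord(c) % 128 for c in text]]
        cond = np.tile(speaker_feature, (len(text), 1))
        return np.hstack([char_emb, cond]) @ self.out_proj

model = SingleTTSModel()
speaker_feature = rng.standard_normal(8)         # articulatory feature of a speaker
speech_data = model.synthesize("hello world", speaker_feature)
print(speech_data.shape)                         # output speech data reflecting the speaker
```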
INTELLIGENT VOICE RECOGNITION METHOD AND APPARATUS
An intelligent voice recognition method and apparatus are disclosed. An intelligent voice recognition apparatus according to one embodiment of the present invention recognizes a user's speech and outputs a response determined on the basis of the speech. When a plurality of candidate responses related to the speech exist, the response is determined from among the plurality of candidate responses on the basis of device state information about the voice recognition apparatus, so ambiguity in a conversation between the user and the voice recognition apparatus can be reduced and more natural conversation processing is possible. The intelligent voice recognition apparatus and/or an artificial intelligence (AI) apparatus of the present invention can be associated with an AI module, a drone (an unmanned aerial vehicle (UAV)), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device related to a 5G service, and the like.
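The device-state-based disambiguation can be illustrated with a small sketch; the candidate structure, state keys, and matching score below are invented for illustration and are not taken from the patent.

```python
# Toy sketch: pick one response from several candidates using device state information.
def choose_response(candidates, device_state):
    # Score each candidate by how many of its preconditions match the current
    # device state, and return the best-matching response text.
    def score(candidate):
        return sum(1 for k, v in candidate["requires"].items()
                   if device_state.get(k) == v)
    return max(candidates, key=score)["text"]

candidates = [
    {"text": "Turning the volume up.",   "requires": {"audio_playing": True}},
    {"text": "Starting music playback.", "requires": {"audio_playing": False}},
]
device_state = {"audio_playing": False}

print(choose_response(candidates, device_state))  # -> "Starting music playback."
```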
METHOD AND APPARATUS FOR SPEECH SYNTHESIS, AND STORAGE MEDIUM
A method for speech synthesis includes obtaining text to be synthesized and an identifier of a speaker, the text being written in a first language; obtaining pronunciation information of each character in the text; generating linguistic features of the text by performing feature extraction on the pronunciation information of each character in the text based on the first language; and obtaining a target speech in a second language other than the first language, by performing speech synthesis based on the linguistic features and the identifier of the speaker.
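The listed steps (per-character pronunciation, language-dependent linguistic features, speaker-conditioned synthesis in another language) can be sketched as below, using a toy pronunciation table and placeholder projections; the actual extraction rules and speech model are not given in the abstract.

```python
# Toy sketch of cross-lingual synthesis conditioned on a speaker identifier.
import numpy as np

rng = np.random.default_rng(0)

PRONUNCIATIONS = {"n": "ni3", "h": "hao3"}       # toy per-character lookup

def pronunciation_info(text):
    # Obtain pronunciation information for each character of the first-language text.
    return [PRONUNCIATIONS.get(ch, ch) for ch in text]

def linguistic_features(pron_info, first_language="zh"):
    # Extract language-dependent linguistic features from the pronunciations.
    lang_offset = hash(first_language) % 7
    return np.array([[len(p), lang_offset] for p in pron_info], dtype=float)

def synthesize(features, speaker_id, n_mels=80):
    # Placeholder synthesis of target speech conditioned on the speaker identifier.
    proj = rng.standard_normal((features.shape[1] + 1, n_mels))
    cond = np.hstack([features, np.full((len(features), 1), float(speaker_id))])
    return cond @ proj

features = linguistic_features(pronunciation_info("nh"), first_language="zh")
target_speech = synthesize(features, speaker_id=3)   # speech in a second language
print(target_speech.shape)
```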
TEXT-TO-SPEECH (TTS) PROCESSING
During text-to-speech processing, a speech model creates output audio data, including speech, that corresponds to input text data that includes a representation of the speech. A spectrogram estimator estimates a frequency spectrogram of the speech; the corresponding frequency-spectrogram data is used to condition the speech model. A plurality of acoustic features corresponding to different segments of the input text data, such as phonemes, syllable-level features, and/or word-level features, may be separately encoded into context vectors; the spectrogram estimator uses these separate context vectors to create the frequency spectrogram.
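The separate context-vector encoding and spectrogram conditioning might be sketched as follows; the encoders, the spectrogram estimator, and the speech model are dummy linear maps, and only the data flow mirrors the abstract.

```python
# Toy sketch: phoneme-, syllable-, and word-level features -> separate context
# vectors -> estimated spectrogram -> conditioned (placeholder) speech model.
import numpy as np

rng = np.random.default_rng(0)

def encode(features, dim=16):
    # Encode one level of acoustic features into a single context vector.
    proj = rng.standard_normal((features.shape[1], dim))
    return (features @ proj).mean(axis=0)

def estimate_spectrogram(context_vectors, frames=50, n_mels=80):
    # Spectrogram estimator: combine the separate context vectors.
    cond = np.concatenate(context_vectors)
    proj = rng.standard_normal((cond.size, frames * n_mels))
    return (cond @ proj).reshape(frames, n_mels)

def speech_model(text_ids, spectrogram):
    # Placeholder speech model producing output audio data, conditioned on the
    # estimated frequency spectrogram (length scales with the spectrogram).
    return np.zeros(len(spectrogram) * 256) + 0.0 * len(text_ids)

phoneme_feats = rng.standard_normal((40, 12))
syllable_feats = rng.standard_normal((15, 12))
word_feats = rng.standard_normal((6, 12))

contexts = [encode(f) for f in (phoneme_feats, syllable_feats, word_feats)]
spec = estimate_spectrogram(contexts)
audio = speech_model([1, 2, 3], spec)
print(spec.shape, audio.shape)
```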