Patent classifications
G10L13/06
INTELLIGENT VOICE RECOGNITION METHOD AND APPARATUS
An intelligent voice recognition method and apparatus are disclosed. An intelligent voice recognition apparatus according to one embodiment of the present invention recognizes a user's speech and outputs a response determined on the basis of that speech. When a plurality of candidate responses related to the speech exist, the response is selected from among them on the basis of device state information about the voice recognition apparatus. This reduces ambiguity in the conversation between the user and the voice recognition apparatus, enabling more natural conversation processing. The intelligent voice recognition apparatus and/or an artificial intelligence (AI) apparatus of the present invention can be associated with an AI module, a drone (unmanned aerial vehicle, UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, a device related to a 5G service, and the like.
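As a rough illustration of the selection step described in this abstract, the following Python sketch ranks candidate responses against the device's current state and picks the best match. The Candidate class, choose_response function, and the state keys are hypothetical illustrations, not the patent's actual implementation.

from dataclasses import dataclass, field

@dataclass
class Candidate:
    text: str                                             # response the device could speak
    preferred_state: dict = field(default_factory=dict)   # device state this response assumes

def choose_response(candidates, device_state):
    """Pick the candidate whose assumed state best matches the current device state."""
    def score(c):
        return sum(1 for k, v in c.preferred_state.items()
                   if device_state.get(k) == v)
    return max(candidates, key=score)

# Example: the utterance "turn it up" is ambiguous between volume and brightness.
state = {"screen_on": False, "audio_playing": True}
candidates = [
    Candidate("Increasing the volume.",     {"audio_playing": True}),
    Candidate("Raising screen brightness.", {"screen_on": True}),
]
print(choose_response(candidates, state).text)  # -> "Increasing the volume."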
METHOD AND APPARATUS FOR SPEECH SYNTHESIS, AND STORAGE MEDIUM
A method for speech synthesis includes obtaining text to be synthesized and an identifier of a speaker, the text being written in a first language; obtaining pronunciation information of each character in the text; generating linguistic features of the text by performing feature extraction on the pronunciation information of each character in the text based on the first language; and obtaining a target speech in a second language other than the first language, by performing speech synthesis based on the linguistic features and the identifier of the speaker.
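To make the data flow in this abstract concrete, here is a minimal Python sketch of the pipeline: source-language text plus a speaker identifier in, target-language speech out. Every function below is a hypothetical stand-in (a real system would use a trained grapheme-to-phoneme model, acoustic model, and vocoder), not the patented method.

def get_pronunciation(char, language="zh"):
    # stand-in for per-character pronunciation lookup (e.g. pinyin for Chinese)
    return {"char": char, "phonemes": ["?"], "tone": 0}

def extract_linguistic_features(pronunciations, language="zh"):
    # stand-in for feature extraction that depends on the first language
    return [{"phonemes": p["phonemes"], "tone": p["tone"]} for p in pronunciations]

def synthesize(features, speaker_id, target_language="en"):
    # stand-in for the acoustic model and vocoder producing a waveform
    return b"\x00" * 16000  # one second of silent placeholder PCM

def text_to_cross_lingual_speech(text, speaker_id):
    prons = [get_pronunciation(c) for c in text]
    feats = extract_linguistic_features(prons)
    return synthesize(feats, speaker_id)

audio = text_to_cross_lingual_speech("你好", speaker_id=7)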
AUDIO GENERATION METHOD, RELATED APPARATUS, AND STORAGE MEDIUM
Embodiments of this application provide an audio generation method, a related apparatus, and a storage medium that offer an improved audio generation solution for a user. In the embodiments, a text is obtained, a song clip corresponding to the text is found through matching, and the song clip is used as the audio corresponding to the text. In this way, the text can be expressed in the form of a song clip.
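A toy Python illustration of the matching step described in this abstract: given a text, find the catalogue song clip whose lyrics overlap it most and return that clip as the audio for the text. The catalogue, scoring rule, and field names are assumptions made for the example.

song_clips = [
    {"clip_id": 1, "lyrics": "happy birthday to you",         "audio_path": "clips/1.wav"},
    {"clip_id": 2, "lyrics": "we wish you a merry christmas", "audio_path": "clips/2.wav"},
]

def match_song_clip(text, catalogue):
    # simple word-overlap scoring between the text and each clip's lyrics
    words = set(w.strip(".,!?") for w in text.lower().split())
    def overlap(clip):
        return len(words & set(clip["lyrics"].split()))
    best = max(catalogue, key=overlap)
    return best if overlap(best) > 0 else None

clip = match_song_clip("Happy birthday!", song_clips)
print(clip["audio_path"] if clip else "no match")  # -> clips/1.wav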
TEXT-TO-SPEECH (TTS) PROCESSING
During text-to-speech processing, a speech model creates output audio data, including speech, that corresponds to input text data containing a representation of that speech. A spectrogram estimator estimates a frequency spectrogram of the speech; the corresponding frequency-spectrogram data is used to condition the speech model. A plurality of acoustic features corresponding to different segments of the input text data, such as phonemes, syllable-level features, and/or word-level features, may be separately encoded into context vectors; the spectrogram estimator uses these separate context vectors to create the frequency spectrogram.
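The following Python sketch mirrors the architecture described in this abstract, with random stubs in place of trained networks: phoneme-, syllable-, and word-level features are encoded into separate context vectors, a spectrogram estimator turns them into a frequency spectrogram, and a speech model conditioned on that spectrogram emits audio samples. All shapes and function names are assumptions, not the patented models.

import numpy as np

rng = np.random.default_rng(0)

def encode(features, dim=8):
    # stand-in encoder: one context vector per input segment
    return rng.standard_normal((len(features), dim))

def estimate_spectrogram(phone_ctx, syll_ctx, word_ctx, n_frames=50, n_bins=80):
    # stand-in spectrogram estimator conditioned on the three context streams
    conditioning = np.concatenate([c.mean(axis=0) for c in (phone_ctx, syll_ctx, word_ctx)])
    return rng.standard_normal((n_frames, n_bins)) + conditioning.mean()

def speech_model(spectrogram, samples_per_frame=200):
    # stand-in vocoder-style speech model conditioned on the spectrogram
    return rng.standard_normal(spectrogram.shape[0] * samples_per_frame)

phonemes  = ["HH", "AH", "L", "OW"]
syllables = ["HH-AH", "L-OW"]
words     = ["hello"]

spec  = estimate_spectrogram(encode(phonemes), encode(syllables), encode(words))
audio = speech_model(spec)
print(spec.shape, audio.shape)  # (50, 80) (10000,)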
Voice synthesis method, voice synthesis apparatus, and recording medium
A voice synthesis method and apparatus generate second control data by supplying an intermediate trained model with first input data that includes first control data designating phonetic identifiers; change the second control data in accordance with a first user instruction provided by a user; generate synthesis data representing frequency characteristics of a voice to be synthesized by supplying a final trained model with final input data that includes the first control data and the changed second control data; and generate a voice signal based on the generated synthesis data.
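A minimal Python sketch of the two-stage flow described in this abstract, with plain functions standing in for the intermediate and final trained models: the intermediate model predicts second control data (here, per-phoneme pitch), the user edits it, the final model produces frequency characteristics, and a waveform is generated from them. The dictionaries, keys, and numbers are illustrative assumptions, not the actual apparatus.

def intermediate_model(first_control):
    # stand-in for the intermediate trained model: predict pitch per phoneme
    return {"pitch": [120.0 for _ in first_control["phonemes"]]}

def final_model(first_control, second_control):
    # stand-in for the final trained model producing frequency characteristics
    return {"spectral_frames": [[p / 100.0] * 4 for p in second_control["pitch"]]}

def vocoder(synthesis_data):
    # stand-in for waveform generation from the synthesis data
    return [sum(frame) for frame in synthesis_data["spectral_frames"]]

first_control = {"phonemes": ["s", "a", "y"]}
second_control = intermediate_model(first_control)

# user instruction: raise the pitch of the second phoneme before final synthesis
second_control["pitch"][1] = 150.0

voice_signal = vocoder(final_model(first_control, second_control))
print(voice_signal)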