G10L21/10

END-TO-END SPEECH CONVERSION

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for end-to-end speech conversion are disclosed. In one aspect, a method includes the actions of receiving first audio data of a first utterance of one or more first terms spoken by a user. The actions further include providing the first audio data as an input to a model that is configured to receive first given audio data in a first voice and output second given audio data in a synthesized voice without performing speech recognition on the first given audio data. The actions further include receiving second audio data of a second utterance of the one or more first terms spoken in the synthesized voice. The actions further include providing, for output, the second audio data of the second utterance of the one or more first terms spoken in the synthesized voice.
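Stripped to its data flow, the claim describes an audio-in/audio-out pipeline with no transcription step in between. A minimal Python sketch, where `convert_voice` and the peak-normalizing transform are hypothetical stand-ins for the trained conversion model, not the claimed network:

```python
def convert_voice(first_audio: list[float]) -> list[float]:
    """Hypothetical stand-in for the audio-to-audio model: input frames
    map straight to output frames; no transcript is ever produced."""
    # Toy transform: re-level the utterance to a fixed target peak,
    # standing in for re-rendering it in one canonical synthesized voice.
    peak = max(abs(s) for s in first_audio) or 1.0
    target_peak = 0.5
    return [s * target_peak / peak for s in first_audio]

def handle_utterance(first_audio: list[float]) -> list[float]:
    # 1) receive first audio data of the first utterance
    # 2) provide it to the model -- note: no ASR step anywhere
    second_audio = convert_voice(first_audio)
    # 3) provide the second audio data for output
    return second_audio

out = handle_utterance([0.1, -0.2, 0.05])
```

The point of the sketch is the shape of the interface: nothing in the path ever materializes words, only audio.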

AUTOMATED MIXING OF AUDIO DESCRIPTION

A computer-implemented method of audio processing, the method comprising: receiving audio object data and audio description data, wherein the audio object data includes a first plurality of audio objects; calculating a long-term loudness of the audio object data and a long-term loudness of the audio description data; calculating a plurality of short-term loudnesses of the audio object data and a plurality of short-term loudnesses of the audio description data; reading a first plurality of mixing parameters that correspond to the audio object data; generating a second plurality of mixing parameters based on the first plurality of mixing parameters, the long-term loudness of the audio object data, the long-term loudness of the audio description data, the plurality of short-term loudnesses of the audio object data, and the plurality of short-term loudnesses of the audio description data; generating a gain adjustment visualization corresponding to the second plurality of mixing parameters, the audio object data and the audio description data; and generating mixed audio object data by mixing the audio object data and the audio description data according to the second plurality of mixing parameters, wherein the mixed audio object data includes a second plurality of audio objects, wherein the second plurality of audio objects correspond to the first plurality of audio objects mixed with the audio description data according to the second plurality of mixing parameters.
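The loudness-driven parameter generation can be illustrated with a toy RMS-based sketch in Python. The function names, the -3 dB offset, and the activity threshold are illustrative assumptions, not values from the claim, and RMS stands in for a real loudness model:

```python
import math

def rms_db(samples):
    """Loudness proxy: RMS level in dB (toy stand-in for a loudness model)."""
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9))

def short_term(samples, win):
    """Short-term loudnesses over non-overlapping windows."""
    return [rms_db(samples[i:i + win])
            for i in range(0, len(samples) - win + 1, win)]

def derive_mix_params(obj, desc, win=4, offset_db=-3.0):
    """Second plurality of mixing parameters: per-window gains (dB) that
    duck the audio objects while the description is active, leaving the
    description |offset_db| above the programme."""
    lt_desc = rms_db(desc)                       # long-term loudness
    gains = []
    for o_db, d_db in zip(short_term(obj, win), short_term(desc, win)):
        if d_db > lt_desc - 20.0:                # description active here
            gains.append(min(0.0, d_db + offset_db - o_db))
        else:
            gains.append(0.0)                    # leave the mix untouched
    return gains

def mix(obj, desc, gains, win=4):
    """Mixed audio object data: gain-adjusted objects plus description."""
    out = []
    for w, g in enumerate(gains):
        lin = 10 ** (g / 20)
        out.extend(obj[i] * lin + desc[i] for i in range(w * win, w * win + win))
    return out
```

The per-window gains are also exactly the data a gain adjustment visualization would plot against the two signals.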

COMPUTER-BASED TECHNIQUES FOR VISUALLY NARRATING RECORDED MEETING CONTENT

In various embodiments, a meeting narration application generates visualizations of recorded meeting data. The meeting narration application generates a first visualization of a set of parameters based on a set of transcript sentences associated with the recorded meeting data. The meeting narration application displays the first visualization and a first expanded content visualization of a first transcript sentence included in the set of transcript sentences within a graphical user interface (GUI). Subsequently, the meeting narration application receives a user event associated with the first visualization via the GUI. The meeting narration application modifies a first parameter selection associated with the set of parameters based on the user event to generate a modified parameter selection. Based on the modified parameter selection, the meeting narration application displays a first compressed content visualization of the first transcript sentence within the GUI.
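The event-driven switch between expanded and compressed content can be sketched as a small state machine. `MeetingNarrator` and its rendering rules are hypothetical stand-ins for the claimed GUI logic:

```python
class MeetingNarrator:
    """Toy model of the claimed GUI flow: each transcript sentence carries
    a parameter selection ('expanded' or 'compressed'); a user event on
    the visualization flips the selection and the display follows."""

    def __init__(self, transcript_sentences):
        self.sentences = transcript_sentences
        # initial parameter selection: everything expanded
        self.selection = ["expanded"] * len(transcript_sentences)

    def render(self, i):
        if self.selection[i] == "expanded":
            return self.sentences[i]                  # expanded content
        return self.sentences[i].split()[0] + " ..."  # compressed content

    def on_user_event(self, i):
        # modify the parameter selection based on the user event
        self.selection[i] = ("compressed" if self.selection[i] == "expanded"
                             else "expanded")
        return self.render(i)
```

A real implementation would summarize rather than truncate, but the selection-modifies-rendering loop is the same.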

SEMIAUTOMATED RELAY METHOD AND APPARATUS

A call captioning system for captioning a hearing user's (HU's) voice signal during an ongoing call with an assisted user (AU) includes: an AU communication device with a display screen and a caption service activation feature, and a first processor programmed to, during an ongoing call, receive the HU's voice signal. Prior to activating the caption service via the activation feature, the processor uses an automated speech recognition (ASR) engine to generate HU voice signal captions, detect errors in the HU voice signal captions, use the errors to train the ASR engine on the HU's voice signal to increase accuracy of the HU captions generated by the ASR engine, and store the trained ASR engine for subsequent use. Upon activating the caption service during the ongoing call, the processor uses the trained ASR engine to generate HU voice signal captions and present them to the AU via the display screen.
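The pre-activation training loop can be sketched as an error-driven adaptation layer over a generic engine. `AdaptiveCaptioner` and the correction-table approach are illustrative stand-ins; a real system would retrain acoustic and language models rather than keep a lookup table:

```python
class AdaptiveCaptioner:
    """Sketch of the claimed flow: a generic ASR stand-in produces
    captions, each detected error becomes a stored speaker-specific
    correction, and the stored corrections are applied once the
    caption service is activated."""

    def __init__(self, base_asr):
        self.base_asr = base_asr       # generic engine (hypothetical)
        self.corrections = {}          # learned for this HU's voice

    def train_on_call(self, hu_audio, truth):
        hyp = self.base_asr(hu_audio)
        if hyp != truth:               # error detected in the captions
            self.corrections[hyp] = truth

    def caption(self, hu_audio):
        hyp = self.base_asr(hu_audio)
        return self.corrections.get(hyp, hyp)

# Toy base engine that systematically mishears one word for this speaker.
base = lambda word: "there" if word == "their" else word
captioner = AdaptiveCaptioner(base)
captioner.train_on_call("their", "their")   # pre-activation training
```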

Method and apparatus for predicting mouth-shape feature, and electronic device

A method and apparatus for predicting a mouth-shape feature, and an electronic device, are provided. A specific implementation of the method comprises: recognizing a phonetic posteriorgram (PPG) of a phonetic feature; and performing a prediction on the PPG by using a neural network model, to predict a mouth-shape feature of the phonetic feature, the neural network model being obtained by training with training samples, its input including a PPG and its output including a mouth-shape feature, and the training samples including a PPG training sample and a mouth-shape feature training sample.
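The PPG-to-mouth-shape mapping can be illustrated with a linear stand-in for the neural network model, fitted by stochastic gradient descent on toy (PPG, mouth-shape) training pairs. All names, phoneme classes, and jaw-opening values here are hypothetical:

```python
def predict_mouth_shape(ppg, weights):
    """Linear stand-in for the neural network model: PPG in, a single
    mouth-shape feature (jaw opening, hypothetical) out."""
    return sum(p * w for p, w in zip(ppg, weights))

def train(ppg_samples, mouth_samples, epochs=200, lr=0.1):
    """Fit the stand-in on (PPG, mouth-shape) training pairs by SGD."""
    weights = [0.0] * len(ppg_samples[0])
    for _ in range(epochs):
        for ppg, target in zip(ppg_samples, mouth_samples):
            err = predict_mouth_shape(ppg, weights) - target
            weights = [w - lr * err * p for w, p in zip(weights, ppg)]
    return weights

# PPG rows: posteriors over three toy phoneme classes /a/, /m/, /i/.
ppgs = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
mouths = [0.9, 0.0, 0.4]   # jaw opening: /a/ wide, /m/ closed, /i/ narrow
weights = train(ppgs, mouths)
```

Because a PPG is speaker-independent by construction, the same trained mapping can drive mouth shapes for voices unseen in training, which is the usual motivation for this design.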

Method and apparatus for controlling avatars based on sound
11562520 · 2023-01-24

Provided is a method for controlling avatar motion, which is operated in a user terminal and includes receiving an input audio by an audio sensor, and controlling, by one or more processors, a motion of a first user avatar based on the input audio.
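A minimal sketch of audio-driven avatar control, mapping short-term energy of the input audio to a mouth-open motion parameter. The mapping and the `mouth_open` parameter are illustrative; the claim does not fix which audio features drive the motion:

```python
def control_avatar(input_audio, frame=4):
    """Per-frame motion parameters for the first user avatar, derived
    from the input audio's short-term energy (one plausible
    amplitude-to-motion mapping, chosen here for illustration)."""
    motions = []
    for i in range(0, len(input_audio) - frame + 1, frame):
        window = input_audio[i:i + frame]
        energy = sum(abs(s) for s in window) / frame
        # clamp so the motion parameter stays in [0, 1]
        motions.append({"mouth_open": min(1.0, energy * 4.0)})
    return motions
```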

VOICE INTERACTION METHOD AND ELECTRONIC DEVICE
20230017274 · 2023-01-19

Embodiments of this application provide a voice interaction method and an electronic device, and relate to the field of artificial intelligence (AI) technologies and the field of voice processing technologies. In a specific solution, an electronic device receives first voice information sent by a second user and recognizes the first voice information in response. The first voice information is used to request a voice conversation with a first user. On the basis that the electronic device recognizes the first voice information as voice information of the second user, the electronic device may hold a voice conversation with the second user by imitating the voice of the first user, in the mode in which the first user has voice conversations with the second user.
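The recognize-then-imitate flow can be sketched as a cosine-similarity speaker check gating a styled reply. The embeddings, the 0.9 threshold, and the `respond` helper are illustrative assumptions; real systems use trained speaker encoders and TTS:

```python
import math

def _cos(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def recognize_speaker(embedding, enrolled, threshold=0.9):
    """Stand-in speaker recognition: cosine similarity against enrolled
    voice embeddings."""
    best = max(enrolled, key=lambda name: _cos(embedding, enrolled[name]))
    return best if _cos(embedding, enrolled[best]) >= threshold else None

def respond(first_voice_info, enrolled, reply_styles):
    """Only if the caller is recognized as an enrolled speaker does the
    device converse in the first user's imitated voice, using the style
    the first user normally uses with that speaker."""
    speaker = recognize_speaker(first_voice_info, enrolled)
    if speaker is None:
        return None    # do not engage the imitated-voice conversation
    return {"voice": "imitated_first_user", "style": reply_styles[speaker]}
```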

METHOD AND APPARATUS FOR PROCESSING SPEECH, ELECTRONIC DEVICE AND STORAGE MEDIUM

A method for processing speech includes: acquiring an original speech; extracting a spectrogram from the original speech; acquiring a speech synthesis model, where the speech synthesis model comprises a first generation sub-model and a second generation sub-model; generating a harmonic structure of the spectrogram, by invoking the first generation sub-model to process the spectrogram; and generating a target speech, by invoking the second generation sub-model to process the harmonic structure and the spectrogram.
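The two-sub-model pipeline can be sketched with signal-processing stand-ins: a crude harmonic-structure extractor for the first generation sub-model and an additive synthesizer for the second. Both are toy substitutes for the claimed learned sub-models:

```python
import math

def first_sub_model(spectrogram):
    """Stand-in for the first generation sub-model: keep only bins at
    integer multiples of the loudest bin, i.e. a crude harmonic
    structure of the spectrogram."""
    f0_bin = max(range(1, len(spectrogram)), key=lambda k: spectrogram[k])
    return [mag if k > 0 and k % f0_bin == 0 else 0.0
            for k, mag in enumerate(spectrogram)]

def second_sub_model(harmonic, spectrogram, n_samples=8):
    """Stand-in for the second generation sub-model: additive synthesis
    driven by the harmonic structure (the spectrogram conditioning is
    trivial here; a real model would use both inputs jointly)."""
    return [sum(h * math.sin(2 * math.pi * k * t / n_samples)
                for k, h in enumerate(harmonic))
            for t in range(n_samples)]

def process_speech(spectrogram):
    harmonic = first_sub_model(spectrogram)          # first sub-model
    return second_sub_model(harmonic, spectrogram)   # second sub-model
```

Separating the harmonic structure from waveform generation is a common vocoder design choice: the first stage pins down pitch and harmonics, so the second stage only has to fill in the rest.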