G10L13/08

Speech synthesizer for evaluating quality of synthesized speech using artificial intelligence and method of operating the same
11705105 · 2023-07-18

A speech synthesizer for evaluating quality of a synthesized speech using artificial intelligence includes a database configured to store a synthesized speech corresponding to text, a correct speech corresponding to the text, and a speech quality evaluation model for evaluating the quality of the synthesized speech, and a processor configured to compare a first speech feature set indicating a feature of the synthesized speech and a second speech feature set indicating a feature of the correct speech, acquire a quality evaluation index set including indices used to evaluate the quality of the synthesized speech according to a result of the comparison, and determine weights as model parameters of the speech quality evaluation model using the acquired quality evaluation index set and the speech quality evaluation model.
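The comparison step above can be sketched roughly as follows. The per-feature difference formula and the weighted combination are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: compute simple quality indices from two speech
# feature sets, then combine them with weights standing in for the
# speech quality evaluation model's parameters.

def quality_indices(synth_features, correct_features):
    """Per-feature absolute differences between synthesized and correct speech."""
    return [abs(s - c) for s, c in zip(synth_features, correct_features)]

def quality_score(indices, weights):
    """Weighted sum acting as a stand-in for the evaluation model."""
    return sum(w * i for w, i in zip(weights, indices))

synth = [0.8, 1.2, 0.5]      # e.g. pitch, energy, duration features
correct = [1.0, 1.0, 0.5]
indices = quality_indices(synth, correct)
score = quality_score(indices, [0.5, 0.3, 0.2])
```

In the patent, the weights themselves are what get determined; here they are fixed only to show how the index set feeds the model.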

Visual responses to user inputs

Techniques for generating a visual response to a user input are described. A system may receive input data corresponding to a user input, determine that a first skill component is to determine a response to the user input, and determine that a second skill component is to determine supplemental content related to the user input. The system may also determine a template for presenting a visual response to the user input, where the template is configured for presenting the response and the supplemental content. The system may receive, from the first skill component, first image data corresponding to the response. The system may also receive, from the second skill component, second image data corresponding to the supplemental content. The system may send, to a device including a display, a command to present the first image data and the second image data using the template.
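The routing and template-filling flow can be sketched as below. The two-slot template and the skill-component callables are hypothetical names for illustration only:

```python
# Minimal sketch: route a user input to two skill components and fill a
# two-slot display template with their image data.

def build_visual_response(user_input, response_skill, supplemental_skill):
    template = {"primary": None, "secondary": None}         # two-slot template
    template["primary"] = response_skill(user_input)        # main response image
    template["secondary"] = supplemental_skill(user_input)  # related content image
    return template

weather_skill = lambda q: f"weather_card({q})"
related_skill = lambda q: f"related_banner({q})"
cmd = build_visual_response("weather today", weather_skill, related_skill)
```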

Systems And Methods For Presenting Social Network Communications In Audible Form Based On User Engagement With A User Device
20230223004 · 2023-07-13

Methods and systems are described herein for generating an audible presentation of a communication received from a remote server. A presentation of a media asset on a user equipment device is generated for a first user. A textual-based communication is received, at the user equipment device from the remote server. The textual-based communication is transmitted to the remote server by a second user and the remote server transmits the textual-based communication to the user equipment device responsive to determining that the second user is on a list of users associated with the first user. An engagement level of the first user with the user equipment device is determined. Responsive to determining that the engagement level does not exceed a threshold value, a presentation of the textual-based communication is generated in audible form.
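The engagement-gated delivery logic can be sketched as follows. The engagement metric, threshold value, and function names are assumptions for illustration:

```python
# Illustrative sketch: present an incoming textual communication audibly
# only when the viewer's engagement with the device is below a threshold.

ENGAGEMENT_THRESHOLD = 0.6   # assumed scale of 0.0 (idle) to 1.0 (attentive)

def deliver_communication(text, sender, friends, engagement):
    if sender not in friends:          # server-side filter: list of users
        return None
    if engagement > ENGAGEMENT_THRESHOLD:
        return ("display", text)       # user is watching: show as text
    return ("speak", text)             # user is not engaged: read aloud

out = deliver_communication("Great episode!", "bob", {"alice", "bob"}, 0.2)
```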

Injecting Text in Self-Supervised Speech Pre-training

A method includes receiving training data that includes unspoken text utterances and un-transcribed non-synthetic speech utterances. Each unspoken text utterance is not paired with any corresponding spoken utterance of non-synthetic speech. Each un-transcribed non-synthetic speech utterance is not paired with a corresponding transcription. The method also includes generating a corresponding synthetic speech representation for each unspoken text utterance of the received training data using a text-to-speech model. The method also includes pre-training an audio encoder on the synthetic speech representations generated for the unspoken text utterances and the un-transcribed non-synthetic speech utterances to teach the audio encoder to jointly learn shared speech and text representations.
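Assembling such a pre-training pool can be sketched as below. The `tts` placeholder stands in for the text-to-speech model in the abstract; its output format is purely illustrative:

```python
# Rough sketch: mix synthetic speech generated from unspoken text with
# un-transcribed real speech into one pre-training batch.

def tts(text):
    # placeholder synthesizer: one "frame" per character
    return [ord(c) % 10 for c in text]

def build_pretraining_batch(unspoken_texts, untranscribed_speech):
    synthetic = [tts(t) for t in unspoken_texts]   # speech derived from text only
    return synthetic + untranscribed_speech        # joint speech/text training pool

batch = build_pretraining_batch(["hi"], [[1, 2, 3]])
```

The point of the mix is that neither source is paired: the text side has no recorded speech, the speech side has no transcript, yet both feed the same encoder.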

Two-Level Text-To-Speech Systems Using Synthetic Training Data

A method includes obtaining training data including a plurality of training audio signals and corresponding transcripts. Each training audio signal is spoken by a target speaker in a first accent/dialect. For each training audio signal of the training data, the method includes generating a training synthesized speech representation spoken by the target speaker in a second accent/dialect different than the first accent/dialect and training a text-to-speech (TTS) system based on the corresponding transcript and the training synthesized speech representation. The method also includes receiving an input text utterance to be synthesized into speech in the second accent/dialect. The method also includes obtaining conditioning inputs that include a speaker embedding and an accent/dialect identifier that identifies the second accent/dialect. The method also includes generating an output audio waveform corresponding to a synthesized speech representation of the input text utterance that clones the voice of the target speaker in the second accent/dialect.
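The conditioning interface described above can be sketched as a single call that combines text with a speaker embedding and an accent/dialect identifier. The function body and all names are assumptions; a real system would run a neural synthesizer:

```python
# Hypothetical sketch of the TTS conditioning inputs: text plus a speaker
# embedding (which voice) plus an accent/dialect identifier (which accent).

def synthesize(text, speaker_embedding, accent_id):
    # stand-in for the two-level TTS system; we just tag the output
    return {"text": text, "speaker": speaker_embedding, "accent": accent_id}

speaker_embedding = [0.1, 0.9]   # fixed vector identifying the target voice
waveform = synthesize("hello world", speaker_embedding, accent_id="en-GB")
```

Separating the two conditioning inputs is what lets the system clone the same voice across accents: the embedding stays fixed while the accent identifier changes.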

Preprocessor System for Natural Language Avatars

A preprocessor for use with natural language processors that control computerized avatars embeds avatar control information in the speech response file of the natural language processor, giving avatars an improved perception of emotional intelligence. Rapid avatar response is provided by independent end-of-speech detection and a response cache that bypasses text-to-speech conversion times. The preprocessor may be shared among multiple websites to provide a shared analysis for query optimization.