Patent classifications
G10L13/0335
Methods and systems for providing changes to a live voice stream
Methods and systems for providing a change to a voice interacting with a user are described. Information indicating a change that can be made to the voice can be received. The voice can be changed based on the information.
TEXT-TO-SPEECH SYNTHESIS METHOD AND APPARATUS USING MACHINE LEARNING, AND COMPUTER-READABLE STORAGE MEDIUM
A text-to-speech synthesis method using machine learning is disclosed. The method includes generating a single artificial neural network text-to-speech synthesis model by performing machine learning based on a plurality of learning texts and speech data corresponding to the plurality of learning texts, receiving an input text, receiving an articulatory feature of a speaker, and generating output speech data for the input text that reflects the articulatory feature of the speaker by inputting the articulatory feature of the speaker to the single artificial neural network text-to-speech synthesis model.
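The conditioning step the abstract describes (one shared model, steered per speaker by an articulatory feature vector) can be sketched minimally in NumPy. All names, dimensions, and weights below are illustrative stand-ins, not taken from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the patent).
TEXT_DIM, SPEAKER_DIM, MEL_BINS = 16, 8, 80

# Random weights stand in for the trained single artificial neural
# network text-to-speech synthesis model.
W = rng.normal(size=(TEXT_DIM + SPEAKER_DIM, MEL_BINS))

def synthesize(text_embedding, speaker_feature):
    """Condition one shared model on a per-speaker articulatory feature.

    text_embedding:  (frames, TEXT_DIM) encoding of the input text
    speaker_feature: (SPEAKER_DIM,) articulatory feature of the speaker
    Returns a (frames, MEL_BINS) mel-spectrogram-like array.
    """
    frames = text_embedding.shape[0]
    # Broadcast the speaker feature to every frame and concatenate,
    # so the same network produces speaker-dependent output speech data.
    speaker = np.tile(speaker_feature, (frames, 1))
    return np.concatenate([text_embedding, speaker], axis=1) @ W

text = rng.normal(size=(100, TEXT_DIM))
mel_a = synthesize(text, rng.normal(size=SPEAKER_DIM))
mel_b = synthesize(text, rng.normal(size=SPEAKER_DIM))
# Same input text, different articulatory features -> different output.
```

The point of the design is that only one model is trained and stored; the speaker identity enters as an extra input rather than as a separate per-speaker network.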
REAL-TIME SPEECH-TO-SPEECH GENERATION (RSSG) AND SIGN LANGUAGE CONVERSION APPARATUS, METHOD AND A SYSTEM THEREFOR
A real-time speech-to-speech generator and sign-gesture converter system is disclosed. Existing systems remain challenging for deaf or hearing-impaired people. Embodiments of the invention provide a direct speech-to-speech translation system with further conversion to sign gestures. Direct speech-to-speech translation and subsequent sign-gesture conversion use a one-tier approach, creating a unified model for the whole application. The single-model ecosystem takes audio (a MEL spectrogram) as input and gives out audio (a MEL spectrogram) as output to a speech-sign converter device with a display. This solves the bottleneck problem by converting the translated speech from the first language directly to sign-language gestures with emotion, preserving phonetic information along the way. The model needs parallel audio samples in two languages. The training methodology involves augmenting or changing both sides of the audio equally; the result is later converted to sign gestures, which are displayed on the speech-sign converter device.
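The MEL spectrogram that this abstract uses as both the model's input and output representation can be computed from a waveform with a short-time Fourier transform and a triangular mel filterbank. The sketch below is a minimal, self-contained NumPy version; the sample rate, FFT size, and filter count are illustrative choices, not parameters from the patent:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=128, n_mels=40):
    """Mel spectrogram: the shared audio representation a single
    speech-to-speech model could consume and emit."""
    # Frame the signal, window each frame, take magnitude spectra.
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frame = signal[start:start + n_fft] * np.hanning(n_fft)
        frames.append(np.abs(np.fft.rfft(frame)))
    spec = np.array(frames)                      # (frames, n_fft//2 + 1)

    # Triangular mel filterbank mapping FFT bins to mel bands.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)
    return spec @ fb.T                           # (frames, n_mels)

sr = 16000
t = np.arange(sr) / sr
mel = mel_spectrogram(np.sin(2 * np.pi * 440.0 * t), sr=sr)
```

Because the same representation appears on both sides of the model, the translated output can be passed straight to a vocoder or, as here, to a downstream sign-gesture converter.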
COORDINATING AND MIXING AUDIOVISUAL CONTENT CAPTURED FROM GEOGRAPHICALLY DISTRIBUTED PERFORMERS
Audiovisual performances, including vocal music, are captured and coordinated with those of other users in ways that create compelling user experiences. In some cases, the vocal performances of individual users are captured (together with performance synchronized video) on mobile devices, television-type display and/or set-top box equipment in the context of karaoke-style presentations of lyrics in correspondence with audible renderings of a backing track. Contributions of multiple vocalists are coordinated and mixed in a manner that selects for visually prominent presentation performance synchronized video of one or more of the contributors. Prominence of particular performance synchronized video may be based, at least in part, on computationally-defined audio features extracted from (or computed over) captured vocal audio. Over the course of a coordinated audiovisual performance timeline, these computationally-defined audio features are selective for performance synchronized video of one or more of the contributing vocalists.
UNSUPERVISED ALIGNMENT FOR TEXT TO SPEECH SYNTHESIS USING NEURAL NETWORKS
Generation of synthetic speech from an input text sequence may be difficult when durations of individual phonemes forming the input text sequence are unknown. A predominantly parallel process may model speech rhythm as a separate generative distribution such that phoneme duration may be sampled at inference. Additional information such as pitch or energy may also be sampled to provide improved diversity for synthetic speech generation.
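The idea of treating speech rhythm as a separate generative distribution, so phoneme durations can be sampled at inference and the rest of the model runs in parallel, can be illustrated with a short NumPy sketch. The log-normal duration model and all parameter values are illustrative assumptions, not the patent's specification:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_durations(n_phonemes, mean_log_dur, std_log_dur):
    """Model rhythm as a generative distribution: sample a frame
    count per phoneme (log-normal here, an illustrative choice)
    instead of predicting a single fixed value."""
    log_d = rng.normal(mean_log_dur, std_log_dur, size=n_phonemes)
    return np.maximum(1, np.round(np.exp(log_d)).astype(int))

def length_regulate(encodings, durations):
    """Expand each phoneme encoding to its sampled duration so every
    output frame is known up front and can be decoded in parallel."""
    return np.repeat(encodings, durations, axis=0)

phonemes = ["HH", "AH", "L", "OW"]
enc = rng.normal(size=(len(phonemes), 4))   # toy phoneme encodings
dur = sample_durations(len(phonemes), mean_log_dur=2.0, std_log_dur=0.3)
frames = length_regulate(enc, dur)
```

Sampling (rather than taking the distribution's mean) is what provides the diversity the abstract mentions: repeated synthesis of the same text yields different, but plausible, rhythms. Pitch or energy could be sampled per frame the same way.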
Systems and methods for enhancing responsiveness to utterances having detectable emotion
Methods, systems, and related products are described that provide emotion-sensitive responses to users' commands and other utterances received at an utterance-based user interface. Acknowledgements of a user's utterances are adapted to the user and/or the user device, and to emotions detected in the user's utterance, which have been mapped from one or more emotion features extracted from the utterance. In some examples, a user's changing emotion, extracted during a sequence of interactions, is used to generate a response to the user's uttered command. In some examples, emotion processing and command processing of natural utterances are performed asynchronously.
SYSTEMS AND METHODS FOR GENERATING SYNTHESIZED SPEECH RESPONSES TO VOICE INPUTS
The system provides a synthesized speech response to a voice input based on the prosodic character of the voice input. The system receives the voice input and calculates at least one prosodic metric of the voice input. The at least one prosodic metric can be associated with a word, a phrase, a grouping thereof, or the entire voice input. The system also determines a response to the voice input, which may include the sequence of words that form the response. The system generates the synthesized speech response by determining prosodic characteristics based on the response and on the prosodic character of the voice input. The system outputs the synthesized speech response, which provides a more natural answer, a more relevant answer, or both, to the voice input. The prosodic character of the voice input and/or response may include pitch, note, duration, prominence, timbre, rate, and rhythm, for example.
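One of the prosodic metrics the abstract lists, pitch, can be estimated from a voiced frame by autocorrelation. The sketch below is a minimal illustration of computing such a metric, not the patent's actual method; the sample rate and pitch search range are assumed values:

```python
import numpy as np

def pitch_autocorr(frame, sr, fmin=80, fmax=400):
    """Estimate the pitch of one voiced frame by autocorrelation:
    the strongest self-similarity lag within the plausible pitch
    range corresponds to the fundamental period."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # lag bounds in samples
    lag = lo + np.argmax(ac[lo:hi])           # best period in range
    return sr / lag                           # period -> frequency (Hz)

sr = 16000
t = np.arange(2048) / sr
# A 220 Hz sine stands in for a voiced speech frame.
f0 = pitch_autocorr(np.sin(2 * np.pi * 220.0 * t), sr)
```

Metrics like this, computed per word or per phrase, give the system the "prosodic character" it then mirrors or complements when shaping the synthesized response.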
METHODS AND SYSTEMS FOR PROVIDING CHANGES TO A LIVE VOICE STREAM
Methods and systems for providing changes to a live voice stream of a person are disclosed. A change to be made to the live voice stream based on user information can be identified. The live voice stream can be changed based on the user information.
PREDICTING PRONUNCIATIONS WITH WORD STRESS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for generating word pronunciations. One of the methods includes determining, by one or more computers, spelling data that indicates the spelling of a word, providing the spelling data as input to a trained recurrent neural network, the trained recurrent neural network being trained to indicate characteristics of word pronunciations based at least on data indicating the spelling of words, receiving output indicating a stress pattern for pronunciation of the word generated by the trained recurrent neural network in response to providing the spelling data as input, using the output of the trained recurrent neural network to generate pronunciation data indicating the stress pattern for a pronunciation of the word, and providing, by the one or more computers, the pronunciation data to a text-to-speech system or an automatic speech recognition system.
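The pipeline in this abstract (spelling data in, a recurrent network, a stress pattern out) can be sketched with a toy character-level RNN. The network sizes, random weights, and three-class stress labels below are illustrative assumptions standing in for a trained model:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative sizes; random weights stand in for a trained
# recurrent neural network mapping spelling to stress.
ALPHABET = "abcdefghijklmnopqrstuvwxyz"
EMB, HID = 8, 16
E  = rng.normal(size=(len(ALPHABET), EMB))   # letter embeddings
Wx = rng.normal(size=(EMB, HID)) * 0.1
Wh = rng.normal(size=(HID, HID)) * 0.1
Wo = rng.normal(size=(HID, 3))   # 3 classes: none/secondary/primary stress

def stress_pattern(word):
    """Provide spelling data as input to a recurrent net and read out
    one stress class per letter (0=unstressed, 1=secondary, 2=primary)."""
    h = np.zeros(HID)
    out = []
    for ch in word.lower():
        x = E[ALPHABET.index(ch)]
        h = np.tanh(x @ Wx + h @ Wh)   # recurrent state update
        out.append(int(np.argmax(h @ Wo)))
    return out

pattern = stress_pattern("pronunciation")
```

The recurrence matters here: each letter's stress decision depends on the letters seen so far, which is how spelling context (e.g. suffixes that shift stress) can influence the predicted pattern before it is handed to a text-to-speech or speech recognition system.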
LATENT-SEGMENTATION INTONATION MODEL
The intonation model of the present technology disclosed herein assigns different words within a sentence to be prominent, analyzes multiple prominence possibilities (in some cases, all prominence possibilities), and learns the parameters of the model using large amounts of data. Unlike previous systems, intonation patterns are discovered from data. Speech data is sub-segmented into words, the different segments are analyzed and used for learning, and a determination is made as to whether the segmentations predict pitch.
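The core loop this abstract describes (try each prominence assignment, score how well it explains the observed pitch, keep the best) can be sketched in a few lines. The scoring rule and the pitch values below are made-up illustrations, not the patent's learned model:

```python
import numpy as np

def score_segmentation(pitch_by_word, prominent_idx):
    """Toy score: how well does making word `prominent_idx` prominent
    explain the observed per-word pitch? Prominent words are modeled
    as higher-pitched; the baseline/raised values are illustrative."""
    target = np.full(len(pitch_by_word), 120.0)   # baseline pitch, Hz
    target[prominent_idx] = 180.0                 # raised on prominent word
    return -np.sum((np.asarray(pitch_by_word, dtype=float) - target) ** 2)

def best_prominence(pitch_by_word):
    """Analyze all prominence possibilities and keep the one whose
    predicted contour best matches the data."""
    scores = [score_segmentation(pitch_by_word, i)
              for i in range(len(pitch_by_word))]
    return int(np.argmax(scores))

# Observed mean pitch per word of a four-word sentence (made-up numbers):
pitch = [118, 125, 176, 121]
idx = best_prominence(pitch)
```

In the patent's framing, the per-assignment score would come from model parameters learned over large amounts of data rather than a fixed target contour, but the enumerate-and-compare structure is the same.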