
MINIMUM WORD ERROR RATE TRAINING FOR ATTENTION-BASED SEQUENCE-TO-SEQUENCE MODELS

Methods, systems, and apparatus, including computer programs encoded on computer-readable storage media, for speech recognition using attention-based sequence-to-sequence models. In some implementations, audio data indicating acoustic characteristics of an utterance is received. A sequence of feature vectors indicative of the acoustic characteristics of the utterance is generated. The sequence of feature vectors is processed using a speech recognition model that has been trained using a loss function that uses a set of speech recognition hypothesis samples, the speech recognition model including an encoder, an attention module, and a decoder. The encoder and decoder each include one or more recurrent neural network layers. A sequence of output vectors representing distributions over a predetermined set of linguistic units is obtained. A transcription for the utterance is obtained based on the sequence of output vectors. Data indicating the transcription of the utterance is provided.
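A minimal sketch of the kind of expected-word-error objective this abstract describes, computed over a set of sampled hypotheses whose probabilities are renormalized over the sample set. The function names, the variance-reduction step (subtracting the mean error), and the toy data are illustrative assumptions, not the patented method itself.

```python
import numpy as np

def edit_distance(ref, hyp):
    """Word-level Levenshtein distance between two token lists."""
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)
    d[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[-1, -1]

def mwer_loss(log_probs, hyps, reference):
    """Expected word-error count over sampled hypotheses, with probabilities
    renormalized over the sample set (a common MWER-style approximation)."""
    errors = np.array([edit_distance(reference, h) for h in hyps], dtype=float)
    probs = np.exp(log_probs - np.logaddexp.reduce(log_probs))  # renormalize
    # Subtracting the mean error is a standard variance-reduction choice.
    return float(np.sum(probs * (errors - errors.mean())))

# Toy usage: three sampled hypotheses for the reference "turn on the lights".
reference = "turn on the lights".split()
hyps = [h.split() for h in
        ["turn on the lights", "turn on the light", "turn off the lights"]]
log_probs = np.array([-1.2, -1.9, -2.5])
print(mwer_loss(log_probs, hyps, reference))
```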

Generating topic-specific language models

Speech recognition may be improved by generating and using a topic-specific language model. A topic-specific language model may be created by performing an initial pass on an audio signal using a generic or basis language model. A speech recognition device may then determine topics relating to the audio signal based on the words identified in the initial pass and retrieve a corpus of text relating to those topics. Using the retrieved corpus of text, the speech recognition device may create a topic-specific language model. In one example, the speech recognition device may adapt or otherwise modify the generic language model based on the retrieved corpus of text.
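A minimal sketch of the two-pass idea above, assuming simple unigram models: the first-pass hypothesis suggests topics, a topic corpus is retrieved, and the generic model is adapted by interpolation. The topic classifier, retrieved corpus, and interpolation weight are stand-ins, not the disclosed implementation.

```python
from collections import Counter

def unigram_lm(corpus_tokens, vocab):
    """Add-one smoothed unigram probabilities over a fixed vocabulary."""
    counts = Counter(corpus_tokens)
    total = sum(counts[w] for w in vocab) + len(vocab)
    return {w: (counts[w] + 1) / total for w in vocab}

def adapt_lm(generic_lm, topic_lm, weight=0.5):
    """Interpolate the generic model with the topic-specific model."""
    return {w: (1 - weight) * generic_lm[w] + weight * topic_lm.get(w, 0.0)
            for w in generic_lm}

# First pass: decode with the generic model (stubbed as a fixed hypothesis).
first_pass_words = ["book", "flight", "to", "boston"]
topics = ["travel"]                       # hypothetical topic-classifier output
retrieved_corpus = "flight airline boston ticket gate travel flight".split()

vocab = set(first_pass_words) | set(retrieved_corpus)
generic = unigram_lm("the a to book read flight".split(), vocab)
topic = unigram_lm(retrieved_corpus, vocab)
adapted = adapt_lm(generic, topic, weight=0.6)
print(sorted(adapted.items(), key=lambda kv: -kv[1])[:5])
```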

Multi-modal spoken language understanding systems

A spoken language understanding (SLU) system may include an automatic speech recognizer (ASR), an audio feature extractor, an optional synchronizer, and a language understanding module. The ASR may produce a first set of input data representing transcripts of utterances. The audio feature extractor may produce a second set of input data representing audio features of the utterances, in particular, non-transcript-specific characteristics of the speaker in one or more portions of the utterances. The two sets of input data may be provided to the language understanding module to predict intents and slot labels for the utterances. The SLU system may use the optional synchronizer to align the two sets of input data before providing them to the language understanding module.
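A minimal sketch of the data flow described above: transcript features and non-transcript audio features are aligned per utterance and fused before intent/slot prediction. The feature extractors, the averaging-based alignment, and the dimensions are placeholder assumptions, not the patent's actual components.

```python
import numpy as np

def asr_features(transcript, dim=8):
    """Stand-in transcript encoder: one vector per token (random embedding table)."""
    rng = np.random.default_rng(0)
    table = {w: rng.normal(size=dim) for w in set(transcript.split())}
    return np.stack([table[w] for w in transcript.split()])

def audio_features(num_frames, dim=4):
    """Stand-in audio feature extractor, e.g. per-frame prosody/speaker statistics."""
    return np.random.default_rng(1).normal(size=(num_frames, dim))

def synchronize(text_feats, audio_feats):
    """Align frame-level audio features to token positions by chunked averaging,
    then concatenate the two modalities per token."""
    frames_per_token = np.array_split(audio_feats, len(text_feats))
    pooled = np.stack([chunk.mean(axis=0) for chunk in frames_per_token])
    return np.concatenate([text_feats, pooled], axis=1)

transcript = "play some jazz music"
fused = synchronize(asr_features(transcript), audio_features(num_frames=40))
print(fused.shape)  # (tokens, text_dim + audio_dim), input to the SLU predictor
```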

Adaptive batching to reduce recognition latency

Acoustic features are batched into two different batches. The second batch of the two batches is made in response to a detection of a word hypothesis output by a speech recognition network that received the first batch. The number of acoustic feature frames of the first batch is equal to a first batch size, and the number of acoustic feature frames of the second batch is equal to a second batch size greater than the first batch size. The second batch is also provided to the speech recognition network for processing.
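A minimal sketch of the adaptive batching idea: feed the recognizer small batches of acoustic frames at first, then switch to a larger batch size once the recognizer emits a word hypothesis. The stub recognizer and the specific batch sizes are illustrative assumptions.

```python
from typing import Iterator, List, Optional

class StubRecognizer:
    """Stand-in speech recognition network: emits a word hypothesis once it
    has consumed enough acoustic feature frames."""
    def __init__(self, frames_needed: int = 30) -> None:
        self.seen = 0
        self.frames_needed = frames_needed

    def accept(self, batch: List[int]) -> Optional[str]:
        self.seen += len(batch)
        return "hello" if self.seen >= self.frames_needed else None

def stream(frames: List[int], small: int = 10, large: int = 40) -> Iterator[str]:
    recognizer = StubRecognizer()
    batch_size, start = small, 0
    while start < len(frames):
        batch = frames[start:start + batch_size]
        start += len(batch)
        hyp = recognizer.accept(batch)
        if hyp is not None and batch_size == small:
            batch_size = large  # first word hypothesis seen; grow the batch size
        if hyp is not None:
            yield hyp

print(list(stream(list(range(200)))))
```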

Language models using domain-specific model components
11557289 · 2023-01-17

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for language models using domain-specific model components. In some implementations, context data for an utterance is obtained. A domain-specific model component is selected from among multiple domain-specific model components of a language model based on the non-linguistic context of the utterance. A score for a candidate transcription for the utterance is generated using the selected domain-specific model component and a baseline model component of the language model that is domain-independent. A transcription for the utterance is determined using the score, and the transcription is provided as output of an automated speech recognition system.
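A minimal sketch of the scoring step: a domain-specific component is picked from non-linguistic context (here, a hypothetical foreground-app signal) and combined with a domain-independent baseline. The toy unigram scorers and the linear combination weight are assumptions for illustration only.

```python
import math

BASELINE = {"play": 0.05, "music": 0.04, "call": 0.05, "mom": 0.01}
DOMAIN_COMPONENTS = {
    "media_app": {"play": 0.20, "music": 0.15},
    "dialer_app": {"call": 0.25, "mom": 0.10},
}

def component_score(model, words, floor=1e-4):
    """Log-probability of the candidate under one model component, with a floor."""
    return sum(math.log(model.get(w, floor)) for w in words)

def score(candidate, context, domain_weight=0.5):
    words = candidate.split()
    domain = DOMAIN_COMPONENTS.get(context["foreground_app"], {})
    return ((1 - domain_weight) * component_score(BASELINE, words)
            + domain_weight * component_score(domain, words))

context = {"foreground_app": "media_app"}   # non-linguistic context signal
print(score("play music", context), score("call mom", context))
```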

GENERATING CONTEXTUALLY RELEVANT TEXT TRANSCRIPTS OF VOICE RECORDINGS WITHIN A MESSAGE THREAD

The present disclosure relates to systems, non-transitory computer-readable media, and methods for generating contextually relevant transcripts of voice recordings based on social networking data. For instance, the disclosed systems receive a voice recording from a user corresponding to a message thread including the user and one or more co-users. The disclosed systems analyze acoustic features of the voice recording to generate transcription-text probabilities. The disclosed systems generate term weights for terms corresponding to objects associated with the user within a social networking system by analyzing user social networking data. Using the contextually aware term weights, the disclosed systems adjust the transcription-text probabilities. Based on the adjusted transcription-text probabilities, the disclosed systems generate a transcript of the voice recording for display within the message thread.
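A minimal sketch of the final adjustment step described above: candidate-term probabilities are scaled by contextual term weights and renormalized. The candidate terms, weights, and boost factor are toy values, not the disclosed system's actual signals.

```python
def adjust_probabilities(term_probs, term_weights, boost=1.0):
    """Scale each candidate term's probability by its contextual weight,
    then renormalize so the adjusted probabilities sum to one."""
    scaled = {t: p * (1.0 + boost * term_weights.get(t, 0.0))
              for t, p in term_probs.items()}
    total = sum(scaled.values())
    return {t: p / total for t, p in scaled.items()}

# The acoustic model is ambiguous between two similar-sounding names.
term_probs = {"kara": 0.48, "cara": 0.52}
# Hypothetical social-graph signal: the user frequently messages "Kara".
term_weights = {"kara": 0.8}
print(adjust_probabilities(term_probs, term_weights))
```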