G10L15/183

Methods and systems for predicting non-default actions against unstructured utterances

A method for adaptively predicting non-default actions against unstructured utterances by an automated assistant operating in a computing system is provided. The method includes extracting voice features upon receiving an input utterance from at least one speaker by an automatic speech recognition (ASR) device; identifying the input utterance as an unstructured utterance based on the extracted voice features and a mapping, drawn by the ASR device, between the input utterance and one or more default actions; and obtaining, through a dynamic Bayesian network (DBN), at least one probable action to be performed in response to the unstructured utterance. The method further includes providing the at least one probable action obtained by the DBN to the speaker in order of the posterior probability of each action.
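The ranking step of this abstract can be sketched as a simple Bayesian posterior computation: each candidate action's prior is combined with the likelihood of the observed voice features, and actions are returned in descending posterior order. The function name and the toy prior/likelihood values below are illustrative assumptions, not taken from the patent.

```python
def rank_actions(priors, likelihoods):
    """Return actions sorted by posterior P(action | utterance), descending.

    priors[a]      -- P(action a)
    likelihoods[a] -- P(observed voice features | action a)
    """
    # Bayes' rule: posterior = prior * likelihood / evidence
    evidence = sum(priors[a] * likelihoods[a] for a in priors)
    posteriors = {a: priors[a] * likelihoods[a] / evidence for a in priors}
    return sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)

ranked = rank_actions(
    priors={"play_music": 0.5, "set_alarm": 0.3, "call_contact": 0.2},
    likelihoods={"play_music": 0.2, "set_alarm": 0.6, "call_contact": 0.1},
)
```

A full DBN would condition on a sequence of observations rather than a single one, but the ordered-by-posterior output is the same shape.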

System and method for identifying spoken language in telecommunications relay service
11705131 · 2023-07-18

A system for identifying spoken language in a telecommunications relay service includes a call serving entity and a plurality of automatic speech recognition groups, where each group includes an associated automatic speech recognition engine that recognizes and transcribes speech in a predefined language. One of the groups is set as a default automatic speech recognition group, and the automatic speech recognition engines transcribe and convert peer voices into text packets. The text packets are scored by the automatic speech recognition engines and transmitted to the call serving entity, which determines whether the text packets meet a predetermined threshold based on their respective scores; the text packet with the highest score that meets or exceeds the predetermined threshold is transmitted to a user.
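The selection step described above reduces to picking the highest-scoring transcript that clears a threshold. The sketch below assumes a simple `(language, transcript, score)` tuple per ASR group; all names and scores are hypothetical.

```python
def select_text_packet(packets, threshold):
    """Return the highest-scoring packet at or above the threshold.

    packets: list of (language, transcript, score) tuples,
             one per automatic speech recognition group.
    """
    qualifying = [p for p in packets if p[2] >= threshold]
    if not qualifying:
        return None  # no group produced a sufficiently confident transcript
    return max(qualifying, key=lambda p: p[2])

best = select_text_packet(
    [("en", "hello there", 0.91), ("es", "helado ahi", 0.42), ("fr", "allo", 0.55)],
    threshold=0.6,
)
```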

Priority and context-based routing of speech processing

A speech processing system uses contextual data to determine the specific domains, subdomains, and applications appropriate for taking action in response to spoken commands and other utterances. Some applications may be given priority over others: general request applications are assigned responsibility for processing an intent as long as their contextual criteria are satisfied, while specific request applications are assigned responsibility only if they are specifically requested, if the contextual criteria of the priority applications are not satisfied, and/or if certain contextual criteria associated with the specific request applications are satisfied.
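The routing policy above can be sketched as two passes over the registered applications: general request applications win whenever their contextual criteria hold, and specific request applications are considered only afterward. The app names, criteria predicates, and context keys below are invented for illustration.

```python
def route_intent(context, apps):
    """apps: list of dicts with keys 'name', 'kind' ('general' or 'specific'),
    and 'criteria' (a predicate over the context)."""
    # Priority pass: general request applications take the intent
    # whenever their contextual criteria are satisfied.
    for app in apps:
        if app["kind"] == "general" and app["criteria"](context):
            return app["name"]
    # Fallback pass: specific request applications take the intent only
    # if explicitly requested or if their own criteria are satisfied.
    for app in apps:
        if app["kind"] == "specific" and (
            context.get("requested_app") == app["name"] or app["criteria"](context)
        ):
            return app["name"]
    return None

apps = [
    {"name": "music", "kind": "general",
     "criteria": lambda c: c.get("media_session", False)},
    {"name": "timers", "kind": "specific",
     "criteria": lambda c: c.get("kitchen_device", False)},
]
chosen = route_intent({"media_session": False, "kitchen_device": True}, apps)
```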

Visual responses to user inputs

Techniques for generating a visual response to a user input are described. A system may receive input data corresponding to a user input, determine that a first skill component is to determine a response to the user input, and determine that a second skill component is to determine supplemental content related to the user input. The system may also determine a template for presenting a visual response to the user input, where the template is configured for presenting the response and the supplemental content. The system may receive, from the first skill component, first image data corresponding to the response. The system may also receive, from the second skill component, second image data corresponding to the supplemental content. The system may send, to a device including a display, a command to present the first image data and the second image data using the template.
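As a minimal sketch, the final step amounts to assembling a display command that slots the two skill components' image data into the chosen template. The template identifier, slot names, and payloads below are illustrative assumptions.

```python
def build_display_command(template, primary_image, supplemental_image):
    """Assemble a command pairing a template with the two image payloads."""
    return {
        "template": template,
        "slots": {
            "response": primary_image,         # from the first skill component
            "supplemental": supplemental_image,  # from the second skill component
        },
    }

cmd = build_display_command("split_pane", b"<weather-card>", b"<traffic-card>")
```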

Language models using domain-specific model components
11557289 · 2023-01-17

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for language models using domain-specific model components. In some implementations, context data for an utterance is obtained. A domain-specific model component is selected from among multiple domain-specific model components of a language model based on the non-linguistic context of the utterance. A score for a candidate transcription for the utterance is generated using the selected domain-specific model component and a baseline model component of the language model that is domain-independent. A transcription for the utterance is determined using the score, and the transcription is provided as output of an automated speech recognition system.
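One common way to combine a domain-independent baseline score with a domain-specific component score is linear interpolation of the two probabilities; the patent abstract does not specify the combination formula, so the weight and scores below are purely illustrative.

```python
import math

def combined_score(baseline_logprob, domain_logprob, weight=0.5):
    """Interpolate two language-model log-probabilities in linear space.

    weight -- how much mass the domain-specific component receives.
    """
    p = (1 - weight) * math.exp(baseline_logprob) + weight * math.exp(domain_logprob)
    return math.log(p)

# Example: baseline assigns 0.2, domain component assigns 0.4; an even
# interpolation yields probability 0.3 for the candidate transcription.
score = combined_score(math.log(0.2), math.log(0.4), weight=0.5)
```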

Processing multimodal user input for assistant systems
20230222605 · 2023-07-13

In one embodiment, a method includes receiving at a head-mounted device a speech input from a user and a visual input captured by cameras of the head-mounted device, wherein the visual input comprises subjects and attributes associated with the subjects, and wherein the speech input comprises a co-reference to one or more of the subjects, resolving entities corresponding to the subjects associated with the co-reference based on the attributes and the co-reference, and presenting a communication content responsive to the speech input and the visual input at the head-mounted device, wherein the communication content comprises information associated with executing results of tasks corresponding to the resolved entities.
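The resolution step can be sketched as matching the attributes implied by the co-reference ("that red mug") against the attributes of subjects detected in the visual input. The data shapes, entity identifiers, and attribute sets below are invented for illustration.

```python
def resolve_coreference(coref_attributes, subjects):
    """Return the entity of the first subject whose detected attributes
    cover all attributes implied by the co-reference.

    subjects: list of dicts with 'entity' and 'attributes' (a set).
    """
    for subject in subjects:
        if coref_attributes <= subject["attributes"]:  # subset test
            return subject["entity"]
    return None  # no subject in view matches the co-reference

entity = resolve_coreference(
    {"red", "mug"},
    [
        {"entity": "mug_17", "attributes": {"red", "mug", "ceramic"}},
        {"entity": "mug_18", "attributes": {"blue", "mug"}},
    ],
)
```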

Real time correction of accent in speech audio signals
20230223011 · 2023-07-13

Systems and methods for real-time correction of an accent in a speech audio signal are provided. A method includes dividing the speech audio signal into a stream of input chunks, where an input chunk includes a pre-defined number of frames of the speech audio signal; extracting acoustic features, by an acoustic features extraction module, from the input chunk and a context associated with the input chunk, where the context is a pre-determined number of frames preceding the input chunk in the stream; extracting linguistic features, by a linguistic features extraction module, from the input chunk and the context; receiving a speaker embedding for a human speaker; providing the speaker embedding, the acoustic features, and the linguistic features to a synthesis module to generate a mel-spectrogram with a reduced accent; and providing the mel-spectrogram to a vocoder to generate an output chunk of an output audio signal.
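The chunking scheme described above can be sketched as iterating over a frame stream in fixed-size chunks, each paired with a context window of the frames that precede it. The chunk and context sizes below are hypothetical.

```python
def chunk_with_context(frames, chunk_size, context_size):
    """Yield (context, chunk) pairs over the frame stream.

    context is the (up to) context_size frames immediately preceding
    the chunk; the first chunk has an empty context.
    """
    for start in range(0, len(frames), chunk_size):
        chunk = frames[start:start + chunk_size]
        context = frames[max(0, start - context_size):start]
        yield context, chunk

pairs = list(chunk_with_context(list(range(10)), chunk_size=4, context_size=2))
```

In the streaming setting each `(context, chunk)` pair would be fed through feature extraction, synthesis, and the vocoder before the next chunk arrives, which is what keeps the correction real-time.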

Server side hotwording

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for detecting hotwords using a server. One of the methods includes receiving an audio signal encoding one or more utterances including a first utterance; determining whether at least a portion of the first utterance satisfies a first threshold of being at least a portion of a key phrase; in response to determining that at least the portion of the first utterance satisfies the first threshold of being at least a portion of a key phrase, sending the audio signal to a server system that determines whether the first utterance satisfies a second threshold of being the key phrase, the second threshold being more restrictive than the first threshold; and receiving tagged text data representing the one or more utterances encoded in the audio signal when the server system determines that the first utterance satisfies the second threshold.
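The two-stage check described above can be sketched as a permissive on-device threshold that gates whether audio is sent to the server, followed by the server's stricter threshold deciding whether the key phrase was actually spoken. The threshold values, scores, and return labels below are illustrative assumptions.

```python
DEVICE_THRESHOLD = 0.5  # permissive first threshold (on-device)
SERVER_THRESHOLD = 0.8  # more restrictive second threshold (server)

def hotword_pipeline(device_score, server_score):
    """Simulate the two-threshold flow for one utterance."""
    if device_score < DEVICE_THRESHOLD:
        return "ignored"      # audio never leaves the device
    # Audio is sent to the server, which applies the stricter check.
    if server_score < SERVER_THRESHOLD:
        return "rejected"     # server vetoes the weaker on-device match
    return "tagged_text"      # server returns tagged text for the utterances

result = hotword_pipeline(0.6, 0.9)
```

Because the first threshold is deliberately loose, most false triggers are filtered by the server rather than missed on the device.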