G10L15/32

Methods and systems for detecting and processing speech signals

Provided are methods, systems, and apparatuses for detecting, processing, and responding to audio signals, including speech signals, within a designated area or space. A platform for multiple media devices connected via a network is configured to process speech, such as voice commands, detected at the media devices, and respond to the detected speech by causing the media devices to simultaneously perform one or more requested actions. The platform is capable of scoring the quality of a speech request, handling speech requests from multiple end points of the platform using a centralized processing approach, a de-centralized processing approach, or a combination thereof, and combining partially processed speech requests from multiple end points into a coherent whole when necessary.
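The quality-scoring step described above, where the platform picks the best of several endpoint captures of the same request, might be sketched as follows. The `snr_db` field, the 30 dB ceiling, and the equal weighting are illustrative assumptions, not the patent's formula:

```python
def select_best_capture(captures):
    """Pick the highest-scoring capture of a single spoken request
    heard by several endpoint devices. Each capture is a tuple of
    (device_id, snr_db, asr_confidence)."""
    def score(capture):
        _, snr_db, confidence = capture
        # Normalize SNR to [0, 1] against an assumed 30 dB ceiling,
        # then average it with the recognizer's confidence.
        return 0.5 * min(snr_db / 30.0, 1.0) + 0.5 * confidence
    return max(captures, key=score)[0]
```

In a centralized approach a hub would run this over all captures; in a de-centralized one, each device could compute its own score and defer to the highest bidder.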

Audio communication in a vehicle

An audio communication system for communication between vehicle occupants, including: an image capturing device configured to monitor a first vehicle occupant; a processor configured to receive an image of the first vehicle occupant from the image capturing device and determine whether the first vehicle occupant is attracting attention from a second vehicle occupant; a first microphone associated with the first vehicle occupant and configured to receive an audio input from the first vehicle occupant in response to the determination that the first vehicle occupant is attracting the second vehicle occupant's attention; and a first speaker associated with the second vehicle occupant and configured to apply an audio augmentation to the received audio input and output the augmented audio input.
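The gaze-triggered routing logic above reduces to a simple gate: augment and forward the first occupant's microphone signal only when the image check fires. The fixed gain factor below is an assumed stand-in for the patent's audio augmentation:

```python
def route_audio(attention_detected, mic_samples, gain=2):
    """If the image-based check decides occupant A is attracting
    occupant B's attention, amplify A's microphone samples for
    playback on B's speaker; otherwise output nothing."""
    if not attention_detected:
        return []
    return [sample * gain for sample in mic_samples]
```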

Pronunciation error detection apparatus, pronunciation error detection method and program

The present invention provides a pronunciation error detection apparatus capable of following a text without the need for a correct sentence even when erroneous recognition such as a reading error occurs. The pronunciation error detection apparatus comprises: a speech recognition part that recognizes the speech in speech data based on a speech recognition model for a non-native speaker, and outputs speech recognition results, reliability and time information; a reliability determination part that outputs the speech recognition results with higher reliability than a predetermined threshold and the corresponding time information as the determined speech recognition results and the determined time information; and a pronunciation error detection part that outputs a phoneme as a pronunciation error when reliability for each phoneme in the speech recognition results using the native speaker speech recognition model under a weakly constraining grammar is greater than the reliability of the corresponding phoneme in the speech recognition results using the native speaker acoustic model under a constraining grammar in which the determined speech recognition results are correct for the speech data in a segment specified by the determined time information.
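The core comparison in the pronunciation error detection part can be sketched as below: a phoneme is flagged when free (weakly constrained) recognition is more confident than recognition forced to the trusted text. The dict-of-position-to-reliability interface is an assumption for illustration:

```python
def detect_pronunciation_errors(weak_grammar_reliability,
                                constrained_reliability):
    """Flag a phoneme position as a pronunciation error when its
    reliability under a weakly constraining grammar exceeds its
    reliability under a grammar constrained to the trusted
    recognition result."""
    return [pos for pos in constrained_reliability
            if weak_grammar_reliability.get(pos, 0.0)
            > constrained_reliability[pos]]
```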

Intelligent Voice Interface for Handling Out-of-Context Dialog

In a method for handling out-of-sequence caller dialog, an intelligent voice interface is configured to lead callers through pathways of an algorithmic dialog that includes available voice prompts for requesting different types of caller information. The method may include, during a voice communication with a caller via a caller device, receiving from the caller device caller input data indicative of a voice input of the caller, without having first provided to the caller device any voice prompt that requests a first type of caller information, and determining, by processing the caller input data, that the voice input includes caller information of the first type. The method also includes, after determining that the voice input includes the caller information of the first type, bypassing one or more voice prompts, of the available voice prompts, that request the first type of caller information.
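The bypass step amounts to filtering the prompt pathway against the information types already extracted from the caller's out-of-sequence input. A minimal sketch, with an assumed ordered list of `(info_type, prompt_text)` pairs:

```python
def remaining_prompts(available_prompts, collected_info):
    """Return the prompts still needed, bypassing any prompt whose
    information type the caller has already supplied out of sequence.
    collected_info maps info_type -> value already extracted."""
    return [prompt for info_type, prompt in available_prompts
            if info_type not in collected_info]
```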

Speech-to-text transcription with multiple languages

One embodiment provides a method that includes obtaining a default language corpus. A second language corpus is obtained based on a second language preference. A first transcription of an utterance is received using the default language corpus and natural language processing (NLP). At least one problem word in the first transcription is determined based on an associated grammatical relevance to neighboring words in the first transcription. Upon determining that a first probability score is below a first threshold, an acoustic lookup is performed for an audible match for the problem word in the first transcription based on an associated acoustical relevance. Upon determining that a second probability score is below a second threshold, it is determined whether a match for the problem word exists in the second language corpus. Upon determining that the match exists in the second language corpus, a second transcription for the utterance is provided.
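The cascade above, where each fallback only triggers when the previous score misses its threshold, might look like this. The thresholds and the argument shapes are illustrative assumptions, not the patent's interface:

```python
def resolve_problem_word(word, grammar_score, acoustic_match,
                         acoustic_score, second_corpus,
                         grammar_threshold=0.5, acoustic_threshold=0.5):
    """Cascading resolution of a problem word: accept it if its
    grammatical-relevance score clears the first threshold, else try
    the acoustic lookup, else fall back to the second language corpus."""
    if grammar_score >= grammar_threshold:
        return word
    if acoustic_score >= acoustic_threshold:
        return acoustic_match
    # Both probability scores too low: consult the second language corpus.
    return second_corpus.get(word, word)
```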

Sequence-to-sequence speech recognition with latency threshold

A computing system including one or more processors configured to receive an audio input. The one or more processors may generate a text transcription of the audio input at a sequence-to-sequence speech recognition model, which may assign a respective plurality of external-model text tokens to a plurality of frames included in the audio input. Each external-model text token may have an external-model alignment within the audio input. Based on the audio input, the one or more processors may generate a plurality of hidden states. Based on the plurality of hidden states, the one or more processors may generate a plurality of output text tokens. Each output text token may have a corresponding output alignment within the audio input. For each output text token, a latency between the output alignment and the external-model alignment may be below a predetermined latency threshold. The one or more processors may output the text transcription.
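The per-token latency constraint can be expressed as a simple check over paired alignments. Treating alignments as frame indices and using an absolute difference is an illustrative reading of the constraint:

```python
def within_latency_threshold(output_alignments, external_alignments,
                             max_latency):
    """Check that every output token's alignment stays within a fixed
    latency of the external model's alignment for the same token."""
    return all(abs(out - ext) <= max_latency
               for out, ext in zip(output_alignments, external_alignments))
```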

METHODS AND APPARATUS TO DETERMINE AN AUDIENCE COMPOSITION BASED ON VOICE RECOGNITION

Methods, apparatus, systems and articles of manufacture are disclosed. An example apparatus includes a controller to cause a people meter to emit a prompt for input of audience identification information at a first time and determine a first audience count based on the input, an audio detector to determine a second audience count based on signatures generated from audio data captured in the media environment, and a comparator to cause the people meter to not emit the prompt for at least a first time period after the first time when the first audience count is equal to the second audience count.
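The comparator's suppression rule reduces to: skip the prompt while the manual count and the audio-signature count agree and the quiet period has not elapsed. The time units and parameter names below are illustrative assumptions:

```python
def should_prompt(meter_count, audio_count, seconds_since_prompt,
                  quiet_period):
    """Suppress the people-meter prompt for a quiet period after a
    prompt whose manually entered audience count matched the count
    inferred from audio signatures."""
    if meter_count == audio_count and seconds_since_prompt < quiet_period:
        return False
    return True
```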

Adaptive batching to reduce recognition latency

Acoustic feature frames are grouped into two batches. The second of the two batches is formed in response to detection of a word hypothesis output by a speech recognition network that received the first batch. The number of acoustic feature frames in the second batch is equal to a second batch size that is greater than the first batch size. The second batch is also provided to the speech recognition network for processing.
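The adaptive batching scheme, small batches until the first word hypothesis appears, then larger ones, can be sketched as a generator. The `hypothesis_frame` parameter simulating where the first hypothesis is detected is an assumption for illustration:

```python
def adaptive_batches(frames, first_size, second_size, hypothesis_frame):
    """Yield small batches for low first-word latency, then switch to
    the larger batch size once the recognizer has emitted its first
    word hypothesis (simulated by hypothesis_frame)."""
    i = 0
    size = first_size
    while i < len(frames):
        yield frames[i:i + size]
        i += size
        if i >= hypothesis_frame:
            size = second_size
```

Small early batches keep first-word latency low; larger later batches amortize per-batch overhead once recognition is underway.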