Patent classifications
G10L25/87
SPEECH ENDPOINTING BASED ON WORD COMPARISONS
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for speech endpointing based on word comparisons are described. In one aspect, a method includes the actions of obtaining a transcription of an utterance. The actions further include determining, as a first value, a quantity of text samples in a collection of text samples that (i) include terms that match the transcription, and (ii) do not include any additional terms. The actions further include determining, as a second value, a quantity of text samples in the collection of text samples that (i) include terms that match the transcription, and (ii) include one or more additional terms. The actions further include classifying the utterance as a likely incomplete utterance or not a likely incomplete utterance based at least on comparing the first value and the second value.
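The counting-and-comparison step described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the exact-prefix matching and the simple majority comparison are assumptions for the sake of the example.

```python
# Hypothetical sketch of endpointing by word comparison: count text samples
# that exactly match the transcription (first value) versus samples that
# match it but continue with additional terms (second value).
def classify_utterance(transcription, text_samples):
    words = transcription.lower().split()
    complete = 0    # samples that include the matching terms and nothing more
    continued = 0   # samples that include the matching terms plus extra terms
    for sample in text_samples:
        sample_words = sample.lower().split()
        if sample_words[:len(words)] == words:
            if len(sample_words) == len(words):
                complete += 1
            else:
                continued += 1
    # Classify as likely incomplete when continuations dominate.
    return "likely_incomplete" if continued > complete else "likely_complete"
```

For example, "what is" against a collection dominated by longer queries ("what is the weather", "what is the time") is classified as likely incomplete, so the endpointer would wait rather than cut off the speaker.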
Voice Filtering Other Speakers From Calls And Audio Messages
A method includes receiving a first instance of raw audio data corresponding to a voice-based command and receiving a second instance of the raw audio data corresponding to an utterance of audible contents for an audio-based communication spoken by a user. When a voice filtering recognition routine determines to activate voice filtering for at least the voice of the user, the method also includes obtaining a respective speaker embedding of the user and processing, using the respective speaker embedding, the second instance of the raw audio data to generate enhanced audio data for the audio-based communication that isolates the utterance of the audible contents spoken by the user and excludes at least a portion of one or more additional sounds that are not spoken by the user. The method also includes executing.
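One way to picture filtering with a speaker embedding is to keep only the audio frames whose frame-level embedding is close to the target speaker's embedding. The sketch below is an illustration of that idea only, not the patented method; the frame embeddings, the cosine-similarity gate, and the threshold are all assumptions.

```python
import numpy as np

# Illustrative sketch: attenuate frames whose embedding is dissimilar to the
# target speaker's embedding, keeping frames likely spoken by that user.
def filter_to_speaker(frames, frame_embeddings, speaker_embedding, threshold=0.5):
    enhanced = []
    for frame, emb in zip(frames, frame_embeddings):
        # Cosine similarity between the frame embedding and speaker embedding.
        sim = float(np.dot(emb, speaker_embedding) /
                    (np.linalg.norm(emb) * np.linalg.norm(speaker_embedding)))
        # Zero out frames unlikely to belong to the target speaker.
        enhanced.append(frame if sim >= threshold else np.zeros_like(frame))
    return enhanced
```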
Voice and speech recognition for call center feedback and quality assurance
A computer-implemented method for providing an objective evaluation to a customer service representative regarding his performance during an interaction with a customer may include receiving a digitized data stream corresponding to a spoken conversation between a customer and a representative; converting the data stream to a text stream; generating a representative transcript that includes the words from the text stream that are spoken by the representative; comparing the representative transcript with a plurality of positive words and a plurality of negative words; and generating a score that varies according to the occurrence of each word spoken by the representative that matches one of the positive words, and/or the occurrence of each word spoken by the representative that matches one of the negative words. Tone of voice, as well as response time, during the interaction may also be monitored and analyzed to adjust the score, or generate a separate score.
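The word-matching score described above can be sketched in a few lines. This is a simplified illustration, not the claimed method; the +1/−1 weighting and the punctuation stripping are assumptions.

```python
# Hedged sketch: score a representative's transcript against word lists,
# +1 per occurrence of a positive word, -1 per occurrence of a negative word.
def score_transcript(transcript, positive_words, negative_words):
    score = 0
    for word in transcript.lower().split():
        word = word.strip(".,!?")   # ignore trailing punctuation
        if word in positive_words:
            score += 1
        elif word in negative_words:
            score -= 1
    return score
```

In the described method, this lexical score could then be adjusted by separate tone-of-voice and response-time signals, or those signals could yield separate scores.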
Dynamic voice input detection for conversation assistants
A processor may receive data regarding a context for a first dialog turn. The processor may monitor a voice input from a user for the first dialog turn. The processor may detect a first pause in the voice input, the first pause having a duration that satisfies a time threshold. The processor may receive, based on the first pause, first voice input data. The processor may analyze the first voice input data. The processor may determine that additional time is recommended for the voice input to be provided by the user.
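The pause-detection step can be sketched as a scan over timestamped voice-activity samples. This is a minimal sketch under stated assumptions, not the described system; the `(timestamp, is_speech)` representation and the threshold semantics are illustrative.

```python
# Sketch: find the first pause whose duration satisfies a time threshold
# in a sequence of (timestamp, is_speech) samples.
def first_qualifying_pause(samples, time_threshold):
    pause_start = None
    for timestamp, is_speech in samples:
        if not is_speech:
            if pause_start is None:
                pause_start = timestamp          # pause begins here
            elif timestamp - pause_start >= time_threshold:
                return pause_start               # pause long enough to end the turn
        else:
            pause_start = None                   # speech resumed, reset
    return None
```

In a dynamic conversation assistant, detecting such a pause would trigger analysis of the voice input received so far, after which the system may decide that additional time is recommended rather than closing the turn.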
ALIGNING PARAMETER DATA WITH AUDIO RECORDINGS
Various techniques relate to aligning parameters and audio recordings obtained at a rescue scene. An example method includes receiving, from a first device, a first file including first measurements of a first parameter at first discrete times in a time interval. The first file further indicates a marker output by the first device during the time interval. The method also includes receiving, from a second device, a second file comprising second measurements of a second parameter at second discrete times in the time interval. The method includes detecting the marker output by the first device in the second measurements of the second parameter and, based on detecting the marker output by the first device in the second measurements, generating aligned data by time-aligning the first measurements of the first parameter and the second measurements of the second parameter. The method further includes outputting the aligned data.
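The time-alignment step can be sketched as finding the marker in each measurement stream and shifting one stream so the markers coincide. This is an illustration only, not the claimed technique; detecting the marker by an equality check on a known marker value is an assumption.

```python
# Sketch: align two (time, value) series by locating a shared marker value
# in each and shifting the second series so both markers coincide in time.
def align_by_marker(first, second, marker_value):
    t1 = next(t for t, v in first if v == marker_value)    # marker in stream 1
    t2 = next(t for t, v in second if v == marker_value)   # marker in stream 2
    offset = t1 - t2
    aligned_second = [(t + offset, v) for t, v in second]
    return first, aligned_second
```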
ENHANCING SIGNATURE WORD DETECTION IN VOICE ASSISTANTS
Systems and methods for detecting a spoken sentence in a speech recognition system are disclosed herein. Speech data is buffered based on an audio signal captured at a computing device operating in an active mode. The speech data is buffered irrespective of whether the speech data comprises a signature word. The buffered speech data is processed to detect a presence of a sentence comprising at least one command and a query for the computing device. Processing the buffered speech data includes detecting the signature word in the buffered speech data and, in response to detecting the signature word in the speech data, initiating detection of the sentence in the buffered speech data.
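The buffer-then-scan behavior can be sketched at the word level. This is a simplified illustration, not the disclosed system; operating on recognized words rather than raw audio, and taking everything after the signature word as the sentence, are assumptions.

```python
from collections import deque

# Sketch: buffer incoming words irrespective of the signature word, then scan
# the buffer for the signature word and treat what follows it as the sentence.
def detect_sentence(word_stream, signature_word, buffer_size=32):
    buffer = deque(maxlen=buffer_size)       # bounded buffer of recent words
    for word in word_stream:
        buffer.append(word.lower())
    words = list(buffer)
    if signature_word in words:
        start = words.index(signature_word)
        return " ".join(words[start + 1:]) or None
    return None
```

Because buffering happens before the signature word is detected, words spoken immediately around the signature word are not lost, which is the point of the approach described above.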
AUDIO ANALYSIS OF BODY WORN CAMERA
Apparatus, systems, and methods that use machine natural language processing to analyze language are provided. Audio from camera footage can be transcribed; one exemplary method includes extracting at least one audio segment from a body camera video track, detecting voice activity to identify starting and ending timestamps of voice, transcribing the at least one audio segment to identify and separate audio of at least a first speaker, and scoring the audio of the first speaker to identify interactions of interest. The audio can be analyzed and scored to record verbal performance, respectfulness, wellness, and similar qualities, and speakers can be detected from the audio.
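Once the audio has been transcribed and separated by speaker, the scoring step can be sketched as a keyword scan over one speaker's timestamped segments. This is an illustration only, not the disclosed method; the segment tuple layout and the keyword-count scoring are assumptions.

```python
# Sketch: given per-speaker transcript segments (start, end, speaker, text),
# score one speaker's segments against interest keywords and flag matches.
def score_speaker(segments, speaker_id, keywords):
    flagged = []
    for start, end, speaker, text in segments:
        if speaker != speaker_id:
            continue                              # only score the chosen speaker
        hits = sum(text.lower().count(k) for k in keywords)
        if hits > 0:
            flagged.append((start, end, hits))    # interaction of interest
    return flagged
```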