G10L17/16

Streaming self-attention in a neural network

An acoustic event detection system may employ one or more recurrent neural networks (RNNs) to extract features from audio data, and use the extracted features to determine the presence of an acoustic event. The system may use self-attention to emphasize portions of the audio data whose features are more useful for detecting acoustic events. The system may perform self-attention in an iterative manner to reduce the amount of memory used to store hidden states of the RNN while processing successive portions of the audio data. The system may process each portion of the audio data using the RNN to generate a corresponding hidden state, and may calculate an interim embedding for each hidden state. The interim embedding calculated for the last hidden state may be normalized to determine a final embedding representing the features extracted from the audio data by the RNN.
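
One way the iterative self-attention might be realized is to keep only a running attention-weighted sum and weight total instead of all hidden states. This is a minimal sketch, assuming an exponentiated dot-product score against a hypothetical learned vector w (the abstract does not specify the scoring function):

```python
import numpy as np

def attention_weight(h, w):
    # Scalar attention score for one hidden state; the exponential makes
    # the running weights behave like an incrementally computed softmax.
    return np.exp(np.dot(w, h))

def streaming_attention_embedding(hidden_states, w):
    """Iteratively accumulate an attention-weighted embedding so that only
    a running numerator and denominator are stored, not every hidden state."""
    numer, denom = None, 0.0
    interim = None
    for h in hidden_states:              # one hidden state per audio portion
        a = attention_weight(h, w)
        numer = a * h if numer is None else numer + a * h
        denom += a
        interim = numer / denom          # interim embedding after this portion
    # The interim embedding for the last hidden state is normalized to
    # produce the final embedding.
    return interim / np.linalg.norm(interim)
```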

Method for microphone selection and multi-talker segmentation with ambient automated speech recognition (ASR)

Disclosed methods and systems are directed to determining a best microphone pair and segmenting sound signals. The methods and systems may include receiving a collection of sound signals comprising speech from one or more audio sources (e.g., meeting participants) and/or background noise. The methods and systems may include calculating a time difference of arrival (TDOA) and determining, based on the TDOA and via robust statistics, the best pair of microphones. The methods and systems may also include segmenting sound signals from multiple sources.
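
The abstract names TDOA estimation and robust statistics but not the specific estimator. A common pairing, assumed here purely for illustration, is GCC-PHAT for per-frame TDOA with the median absolute deviation as the robust spread measure for ranking pairs:

```python
import numpy as np

def gcc_phat_tdoa(x, y, fs):
    """Estimate the TDOA (seconds) between two channels via GCC-PHAT."""
    n = len(x) + len(y)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    cross = X * np.conj(Y)
    cc = np.fft.irfft(cross / (np.abs(cross) + 1e-12), n)  # phase transform
    cc = np.concatenate((cc[-n // 2:], cc[:n // 2]))       # center zero lag
    return (np.argmax(np.abs(cc)) - n // 2) / fs

def best_microphone_pair(frames_by_channel, pairs, fs):
    """Pick the pair whose per-frame TDOA estimates are most consistent,
    using the median absolute deviation (MAD) as a robust statistic."""
    best, best_mad = None, np.inf
    for i, j in pairs:
        tdoas = np.array([gcc_phat_tdoa(fi, fj, fs)
                          for fi, fj in zip(frames_by_channel[i],
                                            frames_by_channel[j])])
        mad = np.median(np.abs(tdoas - np.median(tdoas)))
        if mad < best_mad:
            best, best_mad = (i, j), mad
    return best
```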

Method and device for transforming feature vector for user recognition

A method of converting a feature vector includes extracting a feature sequence from an audio signal including an utterance of a user; extracting a feature vector from the feature sequence; acquiring a conversion matrix for reducing a dimension of the feature vector, based on a probability value derived from different covariance values; and converting the feature vector by using the conversion matrix.
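
A conversion matrix built from different covariance values resembles a linear discriminant analysis (LDA) style projection; that is only one plausible reading of the abstract, sketched below with within- and between-class covariances:

```python
import numpy as np

def lda_conversion_matrix(vectors, labels, out_dim):
    """Sketch of an LDA-style conversion matrix: a projection that separates
    speakers given within-class (Sw) and between-class (Sb) covariances."""
    vectors, labels = np.asarray(vectors), np.asarray(labels)
    mu = vectors.mean(axis=0)
    d = vectors.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(labels):
        vc = vectors[labels == c]
        mc = vc.mean(axis=0)
        Sw += (vc - mc).T @ (vc - mc)              # scatter within each speaker
        Sb += len(vc) * np.outer(mc - mu, mc - mu)  # scatter between speakers
    # Solve the generalized eigenproblem Sb v = lambda Sw v.
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-eigvals.real)
    return eigvecs[:, order[:out_dim]].real        # columns span the reduced space

# Converting a feature vector: reduced = feature_vector @ W
```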

SYSTEM AND METHOD FOR SPEAKER AUTHENTICATION AND IDENTIFICATION
20190244622 · 2019-08-08

A system and method for enrolling a speaker in a speaker authentication and identification system (AIS), the method comprising: generating a user account comprising a user identifier based on one or more metadata elements associated with an audio input received from an end device; generating a first i-vector from an audio frame of the audio input using a trained T-matrix and a Universal Background Model (UBM), wherein generating the first i-vector comprises an optimized computation; and associating the user account with the first i-vector.
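
The abstract does not detail the optimized computation. The standard i-vector point estimate from Baum-Welch statistics under a diagonal-covariance UBM looks roughly as follows; the per-frame component posteriors gamma are assumed precomputed from the UBM, and precomputing the per-component matrix products is one common optimization:

```python
import numpy as np

def extract_ivector(frames, ubm_means, ubm_covs, T, gamma):
    """i-vector point estimate w = (I + T' S^-1 N T)^-1 T' S^-1 F.
    frames: (n_frames, F); gamma: UBM posteriors per frame (n_frames, C);
    T: total-variability matrix (C*F, R); ubm_covs: diagonal covariances (C, F)."""
    C, F = ubm_means.shape
    R = T.shape[1]                                   # i-vector dimension
    N = gamma.sum(axis=0)                            # zeroth-order statistics (C,)
    Fs = gamma.T @ frames - N[:, None] * ubm_means   # centered first-order stats
    L = np.eye(R)                                    # posterior precision of w
    b = np.zeros(R)
    for c in range(C):
        Tc = T[c * F:(c + 1) * F]                    # block of T for component c
        Tp = Tc / ubm_covs[c][:, None]               # Sigma_c^{-1} T_c (diagonal)
        L += N[c] * (Tp.T @ Tc)                      # (R, R) terms, often precomputed
        b += Tp.T @ Fs[c]
    return np.linalg.solve(L, b)
```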

Acoustic signature building for a speaker from multiple sessions

Disclosed herein are methods of diarizing audio data using a first-pass blind diarization and a second-pass blind diarization that generate speaker statistical models, wherein the first-pass blind diarization operates on a per-frame basis and the second-pass blind diarization operates on a per-word basis, and methods of creating an acoustic signature for a common speaker based only on the statistical models of that speaker in each audio session.
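
The abstract leaves the combination rule for cross-session signatures unspecified. A minimal sketch, assuming each per-session speaker model exposes a hypothetical mean supervector and frame count, is a duration-weighted average followed by length normalization:

```python
import numpy as np

def build_acoustic_signature(session_models):
    """Combine per-session statistical models of the same speaker into one
    signature, weighting each session's model by its frame count."""
    total_frames = sum(m["n_frames"] for m in session_models)
    signature = sum(m["n_frames"] * m["mean_supervector"] for m in session_models)
    signature /= total_frames                        # duration-weighted average
    return signature / np.linalg.norm(signature)     # length-normalize

def matches_signature(session_model, signature, threshold=0.7):
    """Cosine-similarity test for whether a new session's speaker model
    belongs to the speaker behind an existing signature."""
    v = session_model["mean_supervector"]
    return np.dot(v, signature) / np.linalg.norm(v) > threshold
```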

METHOD, APPARATUS AND SYSTEM FOR SPEAKER VERIFICATION

The present disclosure relates to a method, apparatus, and system for speaker verification. The method includes: acquiring an audio recording; extracting speech signals from the audio recording; extracting features from the extracted speech signals; and determining whether the extracted speech signals represent speech by a predetermined speaker based on the extracted features and a speaker model trained with reference voice data of the predetermined speaker.
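
The abstract does not name the model family. One classic instantiation, assumed here for illustration, is a GMM-UBM log-likelihood-ratio test against a decision threshold tuned on development data:

```python
from sklearn.mixture import GaussianMixture

def train_speaker_model(reference_features, n_components=64):
    """Fit a GMM to the predetermined speaker's reference voice data."""
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag")
    gmm.fit(reference_features)
    return gmm

def verify_speaker(features, speaker_gmm, ubm_gmm, threshold=0.0):
    """Accept if the mean per-frame log-likelihood ratio of the speaker model
    over a universal background model exceeds the tuned threshold."""
    llr = speaker_gmm.score(features) - ubm_gmm.score(features)
    return llr > threshold
```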

Automatic speaker identification using speech recognition features

Features are disclosed for automatically identifying a speaker. Artifacts of automatic speech recognition (ASR) and/or other automatically determined information may be processed against individual user profiles or models. Scores may be determined reflecting the likelihood that individual users made an utterance. The scores can be based on, e.g., individual components of Gaussian mixture models (GMMs) that score best for frames of audio data of an utterance. A user associated with the highest likelihood score for a particular utterance can be identified as the speaker of the utterance. Information regarding the identified user can be provided to components of a spoken language processing system, separate applications, etc.
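
One way the per-component scoring described above might look, with hypothetical per-user diagonal-covariance GMM parameters, is to keep only the best-scoring component for each frame and sum over frames:

```python
import numpy as np
from scipy.stats import multivariate_normal

def best_component_score(frames, gmm_means, gmm_covs, gmm_weights):
    """Score an utterance against one user's GMM using, for each frame,
    only the individual component that scores best."""
    total = 0.0
    for x in frames:
        comp_ll = [np.log(w) + multivariate_normal.logpdf(x, m, np.diag(c))
                   for w, m, c in zip(gmm_weights, gmm_means, gmm_covs)]
        total += max(comp_ll)            # keep only the best component per frame
    return total

def identify_speaker(frames, user_models):
    """Return the user whose model yields the highest likelihood score.
    user_models: dict mapping user id -> (means, covs, weights)."""
    return max(user_models,
               key=lambda u: best_component_score(frames, *user_models[u]))
```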