Patent classifications
G10L17/20
Speaker recognition using domain independent embedding
Receiving a raw speech signal from a human speaker; providing an acoustic representation of the raw speech signal if the raw speech signal is determined to be within one of a plurality of pre-defined acoustic domains; augmenting the raw speech signal with the acoustic representation to provide a plurality of augmented speech signals; determining a set of a plurality of Mel-frequency cepstral coefficients for each of the plurality of augmented speech signals, wherein each set of Mel-frequency cepstral coefficients is transformed using domain-dependent transformations to obtain an acoustic reference vector, such that there is a plurality of acoustic reference vectors for each one of the plurality of augmented speech signals; stacking the plurality of acoustic reference vectors corresponding to each augmented speech signal to form a super acoustic reference vector; and processing the super acoustic reference vector through a neural network, previously trained on data from a plurality of human speakers, to obtain domain-independent embeddings for speaker recognition.
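To make the transform/stack/embed steps concrete, here is a minimal numpy sketch for one augmented speech signal, assuming its MFCCs are already extracted (e.g. with librosa.feature.mfcc). The dimensions, linear domain transforms, and random weights are illustrative placeholders, not parameters from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRANSFORMS, N_MFCC = 3, 20                 # hypothetical sizes

# Frame-averaged MFCC set for one augmented speech signal.
mfcc_set = rng.standard_normal(N_MFCC)

# Domain-dependent transformations, each yielding an acoustic reference vector.
transforms = [rng.standard_normal((N_MFCC, N_MFCC)) for _ in range(N_TRANSFORMS)]
ref_vectors = [T @ mfcc_set for T in transforms]

# Stack the acoustic reference vectors into a super acoustic reference vector.
super_vector = np.concatenate(ref_vectors)   # shape: (N_TRANSFORMS * N_MFCC,)

# Toy two-layer network standing in for the pre-trained embedding model.
W1 = rng.standard_normal((128, super_vector.size))
W2 = rng.standard_normal((64, 128))
embedding = W2 @ np.tanh(W1 @ super_vector)  # domain-independent embedding
print(embedding.shape)                       # (64,)
```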
SPEAKER RECOGNITION USING ADAPTIVE THRESHOLDING
Techniques related to speaker recognition are discussed. Such techniques may include determining an adaptive speaker recognition threshold based on a speech-to-noise ratio and a noise type label corresponding to received audio, and performing speaker recognition based on the adaptive speaker recognition threshold and a speaker recognition score corresponding to the received audio.
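A hedged sketch of how such a threshold might be assembled: a per-noise-type base value adjusted by the measured speech-to-noise ratio. The noise-type table, the SNR mapping, and all constants are invented for illustration; the abstract specifies only the inputs, not the rule.

```python
# Hypothetical base thresholds per noise type; not from the patent.
BASE_THRESHOLD = {"babble": 0.62, "traffic": 0.58, "quiet": 0.50}

def adaptive_threshold(snr_db: float, noise_type: str) -> float:
    """Lower the acceptance threshold as the speech-to-noise ratio improves."""
    base = BASE_THRESHOLD.get(noise_type, 0.60)
    # Clamp SNR to a working range and map it onto a small +/- adjustment.
    snr = min(max(snr_db, 0.0), 30.0)
    return base + 0.05 * (1.0 - snr / 15.0)

def recognize(score: float, snr_db: float, noise_type: str) -> bool:
    """Accept the speaker if the recognition score clears the adaptive bar."""
    return score >= adaptive_threshold(snr_db, noise_type)

print(recognize(score=0.64, snr_db=20.0, noise_type="babble"))  # True
```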
Estimation of reliability in speaker recognition
A method for estimating the reliability of a result of a speaker recognition system concerning a testing audio and a speaker model, where the model is based on one or more model audios; the method uses a Bayesian network to estimate whether the result is reliable. In estimating the reliability of the result, one or more quality measures of the testing audio and one or more quality measures of the model audio(s) are used.
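A toy illustration with the smallest possible Bayesian network: one binary reliability node and one observed quality bin derived from two hypothetical quality measures (testing-audio SNR and model-audio duration). All probabilities below are placeholders, not values from the patent.

```python
P_RELIABLE = 0.9                                   # prior on a correct result

# Likelihood of observing each quality bin given (un)reliable results;
# hypothetical conditional-probability-table entries.
P_QUALITY = {
    "good": {"reliable": 0.7, "unreliable": 0.2},
    "poor": {"reliable": 0.3, "unreliable": 0.8},
}

def bin_quality(test_snr_db: float, model_duration_s: float) -> str:
    """Collapse the two quality measures into one discrete evidence node."""
    return "good" if test_snr_db > 15.0 and model_duration_s > 10.0 else "poor"

def p_reliable(test_snr_db: float, model_duration_s: float) -> float:
    """Posterior P(reliable | quality bin) by Bayes' rule."""
    q = bin_quality(test_snr_db, model_duration_s)
    num = P_QUALITY[q]["reliable"] * P_RELIABLE
    den = num + P_QUALITY[q]["unreliable"] * (1.0 - P_RELIABLE)
    return num / den

print(round(p_reliable(test_snr_db=20.0, model_duration_s=30.0), 3))  # 0.969
```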
REVERBERATION COMPENSATION FOR FAR-FIELD SPEAKER RECOGNITION
Techniques are provided for reverberation compensation for far-field speaker recognition. A methodology implementing the techniques according to an embodiment includes receiving an authentication audio signal associated with speech of a user and extracting features from the authentication audio signal. The method also includes scoring the results of applying one or more speaker models to the extracted features. Each of the speaker models is trained on a training audio signal processed by a reverberation simulator to simulate selected far-field environmental effects to be associated with that speaker model. The method further includes selecting one of the speaker models based on the score, and mapping the selected speaker model to a known speaker identification or label that is associated with the user.
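The training-side idea can be sketched in a few lines of numpy: enrol each speaker model on audio convolved with a simulated room impulse response (RIR), then score probes against every model. The exponential-decay RIR and the mean-log-spectrum "model" are crude stand-ins for the patent's reverberation simulator and speaker models.

```python
import numpy as np

rng = np.random.default_rng(1)
SR = 16000

def simulated_rir(rt60_s: float, length_s: float = 0.3) -> np.ndarray:
    """Exponentially decaying noise tail, a common cheap RIR approximation."""
    t = np.arange(int(length_s * SR)) / SR
    return rng.standard_normal(t.size) * np.exp(-6.9 * t / rt60_s)

def enroll(clean: np.ndarray, rt60_s: float) -> np.ndarray:
    """Speaker 'model' = log-magnitude spectrum of reverberated training audio."""
    reverbed = np.convolve(clean, simulated_rir(rt60_s))[: clean.size]
    return np.log1p(np.abs(np.fft.rfft(reverbed)))

def score(model: np.ndarray, probe: np.ndarray) -> float:
    """Cosine similarity between probe features and a speaker model."""
    feats = np.log1p(np.abs(np.fft.rfft(probe)))
    return float(feats @ model / (np.linalg.norm(feats) * np.linalg.norm(model)))

clean = rng.standard_normal(SR)                 # 1 s of placeholder audio
models = {"near": enroll(clean, 0.2), "far": enroll(clean, 0.8)}
probe = np.convolve(clean, simulated_rir(0.8))[: clean.size]
for name, model in models.items():
    print(name, round(score(model, probe), 3))
```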
Method and electronic device for separating mixed sound signal
This application provides a method and an electronic device for separating a mixed sound signal. The method includes: obtaining a first hidden variable representing a human voice feature and a second hidden variable representing an accompaniment sound feature by inputting feature data of a mixed sound, extracted from a mixed sound signal, into a coding model for the mixed sound; obtaining first feature data of a human voice and second feature data of an accompaniment sound by inputting the first hidden variable and the second hidden variable into a first decoding model for the human voice and a second decoding model for the accompaniment sound, respectively; and obtaining, based on the first feature data and the second feature data, the human voice and the accompaniment sound.
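Structurally, this is one encoder feeding two decoders. The numpy sketch below shows only that data flow; the layer sizes, activations, and random weights are illustrative, since the abstract does not pin down an architecture.

```python
import numpy as np

rng = np.random.default_rng(2)
FEAT, HID = 257, 64                                # e.g. magnitude-spectrum bins

def layer(out_dim: int, in_dim: int) -> np.ndarray:
    """Random weight matrix standing in for a trained layer."""
    return rng.standard_normal((out_dim, in_dim)) * 0.1

enc = layer(2 * HID, FEAT)                         # coding model for the mix
dec_vocal, dec_accomp = layer(FEAT, HID), layer(FEAT, HID)

mixed_features = rng.standard_normal(FEAT)         # placeholder mixed-sound features
hidden = np.tanh(enc @ mixed_features)
z_vocal, z_accomp = hidden[:HID], hidden[HID:]     # first and second hidden variables

vocal_features = dec_vocal @ z_vocal               # first feature data (human voice)
accomp_features = dec_accomp @ z_accomp            # second feature data (accompaniment)
print(vocal_features.shape, accomp_features.shape) # (257,) (257,)
```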
METHODS AND APPARATUS FOR OBTAINING BIOMETRIC DATA
A method of modelling speech of a user of a headset comprising a microphone, the method comprising: receiving a first sample, from a bone-conduction sensor, representing bone-conducted speech of the user; obtaining a measure of fundamental frequency of the bone-conducted speech in each of a plurality of speech frames of the first sample; obtaining a first distribution of the fundamental frequencies of the bone-conducted speech over the plurality of speech frames; receiving, from the microphone, a second sample; determining a first acoustic condition at the headset based on the second sample; and performing a biometric process based on the first distribution of fundamental frequencies and the first acoustic condition.
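A short numpy sketch of the distribution-building step: estimate a fundamental frequency per frame by autocorrelation, then histogram the estimates over frames. The frame length, hop, and 60-400 Hz pitch range are conventional choices rather than the patent's, and the microphone-based acoustic-condition step is omitted.

```python
import numpy as np

SR, FRAME, HOP = 16000, 1024, 512
F_MIN, F_MAX = 60.0, 400.0                          # typical speech F0 range

def frame_f0(frame: np.ndarray) -> float:
    """Crude autocorrelation pitch estimate for one speech frame."""
    ac = np.correlate(frame, frame, mode="full")[frame.size - 1:]
    lo, hi = int(SR / F_MAX), int(SR / F_MIN)
    lag = lo + int(np.argmax(ac[lo:hi]))
    return SR / lag

def f0_distribution(sample: np.ndarray, bins: int = 32) -> np.ndarray:
    """Histogram of per-frame F0 estimates: the 'first distribution'."""
    f0s = [frame_f0(sample[i:i + FRAME])
           for i in range(0, sample.size - FRAME, HOP)]
    hist, _ = np.histogram(f0s, bins=bins, range=(F_MIN, F_MAX), density=True)
    return hist

# Placeholder bone-conducted "speech": a 120 Hz tone plus noise.
t = np.arange(SR * 2) / SR
sample = (np.sin(2 * np.pi * 120.0 * t)
          + 0.1 * np.random.default_rng(3).standard_normal(t.size))
print(np.argmax(f0_distribution(sample)))           # bin containing ~120 Hz
```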
AUTHENTICATION USING A CONVERSATIONAL USER INTERFACE
A one-time passphrase is transmitted from an authentication system to a personal communication device of a user. The one-time passphrase includes common but incongruous words. The user is prompted to verbalize the one-time passphrase to a processor-implemented, conversational user interface. Utterances from the user are received by the conversational user interface and communicated to the authentication system via a trusted communication channel. The authentication system determines, using speech recognition, the presence or non-presence of the one-time passphrase within the received utterances, and authenticates the user in response to detecting the presence of the one-time passphrase within the received utterances.
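A minimal Python sketch of the passphrase flow: issue a few randomly paired common words, then accept if a transcript of the user's utterances contains them in order. The word list and the subsequence check are illustrative; real speech recognition and the trusted channel are out of scope here.

```python
import secrets

# Hypothetical pool of common words; incongruity comes from random pairing.
COMMON_WORDS = ["purple", "hammer", "ocean", "biscuit", "ladder",
                "cloud", "pepper", "window", "rocket", "carpet"]

def issue_passphrase(n_words: int = 4) -> list[str]:
    """Draw distinct common words to form the one-time passphrase."""
    pool = COMMON_WORDS.copy()
    return [pool.pop(secrets.randbelow(len(pool))) for _ in range(n_words)]

def authenticate(passphrase: list[str], transcript: str) -> bool:
    """Accept if the passphrase words appear in order in the transcript."""
    tokens = iter(transcript.lower().split())
    return all(word in tokens for word in passphrase)  # in-order subsequence

otp = issue_passphrase()
print(otp)
print(authenticate(otp, "uh " + " ".join(otp) + " please"))  # True
```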