G10L17/20

VOICE OR SPEECH RECOGNITION IN NOISY ENVIRONMENTS
20230197085 · 2023-06-22 ·

Embodiments include methods for voice and/or speech recognition in noisy environments, executed by a processor of a computing device. In various embodiments, the method may include determining a voice recognition model to use based on a location where an audio input is received, and performing voice and/or speech recognition on the audio input using the determined model. Some embodiments may receive, from a computing device, an audio input and location information associated with the location where the audio input was recorded. The received audio input may be used to generate a voice recognition model associated with that location for use in voice and/or speech recognition, and the generated model may be provided to the computing device.
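
The location-based model selection described above can be sketched as a simple registry that maps coarse location labels to location-tuned models and falls back to a generic model otherwise. All names here (`ModelRegistry`, `select_model`, the model identifiers) are illustrative, not taken from the patent.

```python
DEFAULT_MODEL = "generic"

class ModelRegistry:
    """Maps coarse location labels to location-tuned recognition models."""

    def __init__(self):
        self._models = {}  # location label -> model identifier

    def register(self, location, model_id):
        self._models[location] = model_id

    def select_model(self, location):
        # Fall back to a generic model when no location-specific
        # model has been generated for this environment.
        return self._models.get(location, DEFAULT_MODEL)

registry = ModelRegistry()
registry.register("airport", "airport_v2")
```

With this sketch, `registry.select_model("airport")` returns the location-specific `"airport_v2"` model, while an unknown location such as `"library"` falls back to `"generic"`.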

SPEAKER RECOGNITION METHOD AND APPARATUS

A speaker recognition method and apparatus receives a first voice signal of a speaker, generates a second voice signal by enhancing the first voice signal through speech enhancement, generates a multi-channel voice signal by associating the first voice signal with the second voice signal, and recognizes the speaker based on the multi-channel voice signal.
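
The pipeline in this abstract can be sketched in a few lines: enhance the raw signal, then pair raw and enhanced samples into a two-channel signal. The moving-average "enhancement" below is only a placeholder for a real speech-enhancement front end.

```python
def enhance(signal, window=3):
    """Crude denoising stand-in: centered moving average."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def to_multichannel(first, second):
    """Associate the raw and enhanced signals sample-by-sample."""
    return list(zip(first, second))

raw = [0.0, 1.0, 0.0, 1.0, 0.0]
multi = to_multichannel(raw, enhance(raw))
```

Each element of `multi` is a (raw, enhanced) pair, so a downstream recognizer can see both the original signal and its denoised version.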

Neural network device for speaker recognition and operating method of the same

Provided are a method of generating a trained third neural network to recognize a speaker of a noisy speech signal by combining a trained first neural network which is a skip connection-based neural network for removing noise from the noisy speech signal with a trained second neural network for recognizing the speaker of a speech signal, and a neural network device for operating the neural networks.
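
The composition described above can be sketched with toy stand-ins for the networks: a skip-connection denoiser (output = input + learned residual) feeding a speaker classifier, combined into a single "third network". Real embodiments would use trained neural networks; the functions and thresholds here are invented for illustration.

```python
def denoiser(x):
    # Skip connection: the block predicts a residual correction that is
    # added back to its input.
    residual = [-0.5 * v for v in x]  # toy "noise estimate"
    return [v + r for v, r in zip(x, residual)]

def speaker_recognizer(x):
    # Toy classifier: decide the speaker by the mean feature level.
    return "speaker_a" if sum(x) / len(x) > 0.2 else "speaker_b"

def combined(x):
    """The 'third network': the denoiser composed with the recognizer."""
    return speaker_recognizer(denoiser(x))
```

For example, `combined([1.0, 1.0])` denoises to `[0.5, 0.5]` and classifies the result as `"speaker_a"`.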

Estimating Clean Speech Features Using Manifold Modeling
20170316790 · 2017-11-02 ·

The technology described in this document can be embodied in a computer-implemented method that includes receiving, at one or more processing devices, a portion of an input signal representing noisy speech, and extracting, from the portion of the input signal, one or more frequency domain features of the noisy speech. The method also includes generating a set of projected features by projecting each of the one or more frequency domain features on a manifold that represents a model of frequency domain features for clean speech. The method further includes using the set of projected features for at least one of: a) generating synthesized speech that represents a noise-reduced version of the noisy speech, b) performing speaker recognition, or c) performing speech recognition.
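
A minimal sketch of the projection step, assuming the manifold is approximated by a finite set of clean-speech exemplar feature vectors: each noisy feature is projected to its nearest exemplar under Euclidean distance. Real manifold models are far richer; this only illustrates the "project onto a clean-speech model" idea.

```python
import math

def project(feature, clean_exemplars):
    """Map a noisy feature vector to the closest clean exemplar."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(clean_exemplars, key=lambda c: dist(feature, c))

clean = [(0.0, 0.0), (1.0, 1.0)]
projected = [project(f, clean) for f in [(0.1, -0.2), (0.9, 1.3)]]
```

The projected features can then feed synthesis, speaker recognition, or speech recognition, as the abstract enumerates.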

Electronic apparatus and control method thereof

An electronic apparatus and a control method thereof are disclosed. The electronic apparatus includes a voice input unit configured to receive a user voice; a storage unit configured to store a plurality of voice print feature models representing a plurality of user voices and a plurality of utterance environment models representing a plurality of environmental disturbances; and a controller. In response to a user voice being input through the voice input unit, the controller is configured to extract utterance environment information of an utterance environment model, among the plurality of utterance environment models, corresponding to a location where the user voice is input; compare a voice print feature of the input user voice with the plurality of voice print feature models; revise a result of the comparison based on the extracted utterance environment information; and recognize a user corresponding to the input user voice based on the revised result.
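
One way to read the revision step is as environment compensation before comparison: the expected environmental contribution for the input location is removed from the observed voice print feature, and the compensated feature is scored against each enrolled model. The scalar features, noise table, and scoring rule below are all hypothetical stand-ins.

```python
ENV_NOISE = {"car": 0.3, "office": 0.1}  # expected feature shift per environment

def recognize(voiceprint, user_models, location):
    # Remove the expected environment contribution from the observed
    # feature, then compare against each enrolled voice print model.
    noise = ENV_NOISE.get(location, 0.0)
    compensated = voiceprint - noise
    scores = {u: -abs(compensated - m) for u, m in user_models.items()}
    return max(scores, key=scores.get)

models = {"alice": 1.0, "bob": 2.0}
```

With this toy setup, an observed feature of 1.6 recorded in a car is recognized as `"alice"` (compensated to 1.3), whereas the same uncompensated feature in a quiet location would score closer to `"bob"`.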

Speaker verification using co-location information
09792914 · 2017-10-17 ·

Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying a user in a multi-user environment. One of the methods includes receiving, by a first user device, an audio signal encoding an utterance, obtaining, by the first user device, a first speaker model for a first user of the first user device, obtaining, by the first user device for a second user of a second user device that is co-located with the first user device, a second speaker model for the second user or a second score that indicates a respective likelihood that the utterance was spoken by the second user, and determining, by the first user device, that the utterance was spoken by the first user using (i) the first speaker model and the second speaker model or (ii) the first speaker model and the second score.
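
Option (ii) of the determination above can be sketched as follows: the first device scores the utterance with its own speaker model and attributes the utterance to the first user only if that score beats the second score received from the co-located device. The similarity function is a toy stand-in for a real speaker model.

```python
def score(utterance_feature, speaker_model):
    # Toy similarity: closer feature and model give a higher score.
    return -abs(utterance_feature - speaker_model)

def spoken_by_first_user(utterance_feature, first_model, second_score):
    """Compare the first user's score against the co-located second score."""
    first_score = score(utterance_feature, first_model)
    return first_score > second_score
```

For instance, an utterance feature near the first user's model outranks a weak second score, while a distant one does not.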

Dataset shift compensation in machine learning

A method for inter-dataset variability compensation, the method comprising using at least one hardware processor for: receiving a heterogeneous development dataset comprising multiple samples and metadata associated with at least some of the multiple samples; dividing the multiple samples into multiple homogeneous subsets, based on the metadata; averaging high-level features of each of the multiple homogeneous subsets, to produce multiple central high-level features for the multiple homogeneous subsets, respectively; computing an inter-dataset variability subspace spanned by the multiple central high-level features; removing the inter-dataset variability subspace from the high-level features of the multiple homogeneous subsets, to produce denoised samples; and training a machine learning system using the denoised samples.
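
The compensation steps can be sketched with plain vectors: average each homogeneous subset, span a variability subspace with the (orthonormalized) central features, and subtract each sample's projection onto that subspace. Real systems operate on high-level embeddings such as i-vectors; the tiny 2-D data here is purely illustrative.

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def mean(vectors):
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def orthonormal_basis(vectors, eps=1e-9):
    """Gram-Schmidt over the central high-level features."""
    basis = []
    for v in vectors:
        w = list(v)
        for u in basis:
            c = dot(w, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        norm = dot(w, w) ** 0.5
        if norm > eps:
            basis.append([wi / norm for wi in w])
    return basis

def remove_subspace(sample, basis):
    """Subtract the sample's projection onto the variability subspace."""
    out = list(sample)
    for u in basis:
        c = dot(out, u)
        out = [oi - c * ui for oi, ui in zip(out, u)]
    return out

# Two homogeneous subsets whose central features differ along one axis.
subset_a = [[1.0, 0.0], [1.2, 0.0]]
subset_b = [[-1.0, 0.1], [-1.2, -0.1]]
centrals = [mean(subset_a), mean(subset_b)]
basis = orthonormal_basis(centrals)
```

Applying `remove_subspace` to any sample then zeroes out its component along the inter-dataset axis while leaving the remaining (speaker-relevant) directions untouched.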