G10L17/04

MACHINE LEARNING FOR IMPROVING QUALITY OF VOICE BIOMETRICS

Methods and systems are disclosed herein for improving the quality of audio for use in a biometric. A biometric system may use machine learning to determine whether audio or a portion of the audio should be used as a biometric for a user. A sample of the user's voice may be used to generate a voice signature of the user. Portions of the audio that do not meet a similarity threshold when compared with the voice signature may be removed from the audio. Additionally or alternatively, interfering noises may be detected and removed from the audio to improve the quality of a voice biometric generated from the audio.
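
The filtering described above can be sketched as follows. This is a minimal illustration, not the patented method: each audio portion carries a hypothetical precomputed embedding standing in for the output of a real feature extractor, and cosine similarity against the enrolled voice signature decides which portions to keep.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def filter_audio_portions(portions, voice_signature, threshold=0.8):
    """Keep only portions whose embedding meets the similarity threshold
    when compared with the enrolled voice signature."""
    return [p for p in portions
            if cosine_similarity(p["embedding"], voice_signature) >= threshold]

# Toy example: two portions resemble the signature, one (another speaker) does not.
signature = [1.0, 0.0, 1.0]
portions = [
    {"id": "seg1", "embedding": [0.9, 0.1, 1.1]},  # target speaker
    {"id": "seg2", "embedding": [0.0, 1.0, 0.0]},  # interfering speaker
    {"id": "seg3", "embedding": [1.0, 0.0, 0.9]},  # target speaker
]
kept = filter_audio_portions(portions, signature)
```

The 0.8 threshold is an arbitrary illustration value; in practice it would be tuned per embedding model.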

Voiceprint recognition method, model training method, and server

Embodiments of this application disclose a voiceprint recognition method performed by a computer. After obtaining a target voice message to be recognized, the computer obtains target feature information of the target voice message by using a voice recognition model, the voice recognition model being obtained through training according to a first loss function and a second loss function. Next, the computer determines a voiceprint recognition result according to the target feature information and registration feature information, the registration feature information being obtained from a voice message of the object to be recognized using the voice recognition model. The normalized exponential (softmax) function and the centralization (center loss) function jointly optimize the voice recognition model and can reduce the intra-class variation between deep features from the same speaker. The two functions jointly supervise the training of the voice recognition model and give the deep features better discrimination, thereby improving recognition performance.
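
The joint supervision by the two loss functions can be sketched as below. The softmax cross-entropy term, the squared-distance center-loss term, and the weighting factor `lam` follow the standard joint-supervision formulation and are assumptions, not the patent's exact training recipe:

```python
import math

def softmax_cross_entropy(logits, label):
    # First loss: normalized exponential (softmax) cross-entropy.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def center_loss(feature, center):
    # Second loss: squared distance between a deep feature and its
    # speaker's class center, penalizing intra-class variation.
    return 0.5 * sum((f - c) ** 2 for f, c in zip(feature, center))

def joint_loss(logits, label, feature, center, lam=0.1):
    # Joint supervision: softmax loss plus weighted center loss.
    return softmax_cross_entropy(logits, label) + lam * center_loss(feature, center)
```

When the feature coincides with its class center, the center term vanishes and only the softmax term remains; as features drift from their speaker's center, the second term pulls them back.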

System and method for detecting fraud rings

A system and method may identify a fraud ring based on call or interaction data. A computer processor may analyze interaction data, including audio recordings, to identify clusters of interactions suspected of involving fraud, each cluster including the same speaker. The processor may then analyze the clusters, in combination with metadata associated with the interaction data, to identify fraud rings, each fraud ring describing a plurality of different speakers and defined by a set of speakers and a set of metadata corresponding to interactions including those speakers. For each fraud ring, a relevance value defining the relative relevance of the fraud ring may be created.
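
A simplified sketch of the ring-forming step, under stated assumptions: each single-speaker cluster is reduced to one metadata key (a hypothetical `device_id`), a ring is any key shared by multiple distinct speakers, and the relevance value is simply the speaker count. A real system would use richer metadata and scoring:

```python
from collections import defaultdict

def identify_fraud_rings(clusters):
    """Group single-speaker fraud clusters that share a metadata key into
    rings, and give each ring a relevance value for ranking."""
    by_device = defaultdict(set)
    for c in clusters:
        by_device[c["device_id"]].add(c["speaker"])
    rings = []
    for device_id, speakers in by_device.items():
        if len(speakers) > 1:  # a ring involves several different speakers
            rings.append({
                "device_id": device_id,
                "speakers": sorted(speakers),
                "relevance": len(speakers),  # crude relative-relevance score
            })
    return sorted(rings, key=lambda r: r["relevance"], reverse=True)

clusters = [
    {"speaker": "A", "device_id": "dev-1"},
    {"speaker": "B", "device_id": "dev-1"},
    {"speaker": "C", "device_id": "dev-2"},
]
rings = identify_fraud_rings(clusters)
```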

Personalized voices for text messaging
11508380 · 2022-11-22

Systems and processes for operating an intelligent automated assistant are provided. In one example, a plurality of speech inputs is received from a first user. A voice model is obtained based on the plurality of speech inputs. A user input is received from the first user, the user input corresponding to a request to provide access to the voice model. The voice model is provided to a second electronic device.
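
The access-control flow described above can be sketched as a small registry. The class name, the string stand-in for a trained voice model, and the grant bookkeeping are all illustrative assumptions; the point is only that the model is provided to a second device solely after the owner's explicit request:

```python
class VoiceModelRegistry:
    """Sketch: a voice model built from a user's speech inputs is released
    to a second device only after the owner grants access."""
    def __init__(self):
        self.models = {}     # user -> voice model (here, just a label)
        self.grants = set()  # (user, device) pairs allowed access

    def enroll(self, user, speech_inputs):
        # Stand-in for actually training a voice model on the inputs.
        self.models[user] = f"voice-model({user}, n={len(speech_inputs)})"

    def grant_access(self, user, device):
        self.grants.add((user, device))

    def provide_model(self, user, device):
        if (user, device) not in self.grants:
            return None  # no user request to share: withhold the model
        return self.models.get(user)

reg = VoiceModelRegistry()
reg.enroll("alice", ["hi", "hello"])
before = reg.provide_model("alice", "phone-2")  # withheld: no grant yet
reg.grant_access("alice", "phone-2")
after = reg.provide_model("alice", "phone-2")   # released after the grant
```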

SELECTIVELY STORING, WITH MULTIPLE USER ACCOUNTS AND/OR TO A SHARED ASSISTANT DEVICE: SPEECH RECOGNITION BIASING, NLU BIASING, AND/OR OTHER DATA
20230055608 · 2023-02-23

Some implementations relate to performing speech biasing, NLU biasing, and/or other biasing based on historical assistant interaction(s). It can be determined, for one or more given historical interactions of a given user, whether to affect future biasing for (1) the given user account, (2) additional user account(s), and/or (3) the shared assistant device as a whole. Some implementations disclosed herein additionally and/or alternatively relate to: determining, based on utterance(s) of a given user to a shared assistant device, an association of first data and second data; storing the association as accessible to a given user account of the given user; and determining whether to store the association as also accessible by additional user account(s) and/or the shared assistant device.
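
A minimal sketch of the scoped storage described above, assuming a simplified two-level scheme (per-account plus device-wide) rather than the full three-way decision in the text; the class and field names are hypothetical:

```python
class AssistantBiasStore:
    """Sketch: an association learned from one user's utterances can be
    stored for that account only, or also for the shared device as a
    whole, depending on a scope decision."""
    def __init__(self):
        self.per_account = {}  # account -> {first_data: second_data}
        self.device_wide = {}  # associations visible to every account

    def store_association(self, account, first, second, share_with_device=False):
        self.per_account.setdefault(account, {})[first] = second
        if share_with_device:
            self.device_wide[first] = second

    def lookup(self, account, first):
        # Account-specific data wins; fall back to device-wide data.
        scoped = self.per_account.get(account, {})
        return scoped.get(first, self.device_wide.get(first))

store = AssistantBiasStore()
store.store_association("user-a", "turn on the lamp", "light-1")
store.store_association("user-a", "what is the thermostat", "hvac-1",
                        share_with_device=True)
miss = store.lookup("user-b", "turn on the lamp")        # private to user-a
hit = store.lookup("user-b", "what is the thermostat")   # shared device-wide
own = store.lookup("user-a", "turn on the lamp")
```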

AUDIO DEVICE AND OPERATION METHOD THEREOF

An audio device capable of inhibiting malfunction of an information terminal is provided. The audio device includes a sound sensor portion, a sound separation portion, a sound determination portion, and a processing portion. The sound sensor portion has a function of sensing sound. The sound separation portion has a function of separating the sound sensed by the sound sensor portion into a voice and sound other than a voice. The sound determination portion has a function of storing the feature quantity of a sound. The sound determination portion also has a function of determining, with a machine learning model such as a neural network model, whether the feature quantity of the voice separated by the sound separation portion matches the stored feature quantity. The processing portion has a function of analyzing an instruction contained in the voice and generating an instruction signal representing the content of the instruction in the case where the feature quantity of the voice matches the stored feature quantity. The processing portion also has a function of performing, on the sound other than a voice separated by the sound separation portion, processing for canceling that sound. Specifically, the processing portion has a function of performing, on the sound other than a voice, processing for inverting its phase.
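
The phase-inversion cancellation at the end can be illustrated on raw samples. This is a toy sketch under strong assumptions: separation is modeled as simple subtraction of a known voice estimate from the mixture, and "inverting the phase" of a time-domain signal amounts to negating each sample, so the non-voice sound and its inverted copy sum to silence:

```python
def separate(mixture, voice):
    # Stand-in for the sound separation portion: subtract the voice
    # estimate from the mixture to obtain the sound other than a voice.
    return [m - v for m, v in zip(mixture, voice)]

def antiphase(samples):
    # Phase inversion: negate each sample of the non-voice sound.
    return [-s for s in samples]

voice = [0.2, -0.1, 0.4]
noise = [0.05, 0.3, -0.2]
mixture = [v + n for v, n in zip(voice, noise)]

non_voice = separate(mixture, voice)
cancel = antiphase(non_voice)
# Emitting the inverted signal alongside the noise cancels it exactly.
residual = [n + c for n, c in zip(non_voice, cancel)]
```

Real active noise cancellation must also account for latency and acoustic paths, which this sketch ignores.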