Patent classifications
G10L17/08
User Identification with Voiceprints on Online Social Networks
In one embodiment, a method includes, by one or more computing devices of an online social network, receiving, from a client system at a first location, an audio input from an unknown user, identifying a first user who is proximate to the first location, identifying the unknown user as a second user based on a comparison of the audio input to one or more voiceprints of one or more candidate users accessible by the client system, wherein each voiceprint comprises audio data for auditory identification of a unique user, and wherein each candidate user is a contact of the first user, and sending customized content to one or more of the first user or the second user, wherein the content is customized using interest information associated with the first or second user.
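The identification step above amounts to comparing an embedding of the audio input against the stored voiceprints of the first user's contacts. A minimal sketch, assuming voiceprints are fixed-length embedding vectors and using cosine similarity with an illustrative threshold (the patent does not specify the comparison metric):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify_speaker(input_embedding, candidate_voiceprints, threshold=0.75):
    """Return the candidate whose voiceprint best matches the input,
    or None if no candidate clears the similarity threshold."""
    best_user, best_score = None, threshold
    for user_id, voiceprint in candidate_voiceprints.items():
        score = cosine_similarity(input_embedding, voiceprint)
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user

# Per the claim, candidates are contacts of the first user.
candidates = {"alice": [0.9, 0.1, 0.0], "bob": [0.1, 0.9, 0.2]}
print(identify_speaker([0.88, 0.12, 0.05], candidates))  # "alice"
```

The threshold and three-dimensional embeddings are placeholders; real systems use speaker embeddings of hundreds of dimensions.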
Authentication method, authentication device, electronic device and storage medium
The present disclosure provides an authentication method, an authentication device, an electronic device and a storage medium. The authentication method includes: receiving target voice data; obtaining a first voiceprint feature parameter corresponding to the target voice data from a device voiceprint model library; performing a first encryption process on the first voiceprint feature parameter with a locally stored private key to generate to-be-verified data; transmitting the to-be-verified data to a server, so that the server uses a public key which matches the private key to decrypt the to-be-verified data to obtain the first voiceprint feature parameter, and performs authentication on the first voiceprint feature parameter to obtain an authentication result; receiving the authentication result returned by the server.
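The flow above is effectively sign-then-verify: the device protects the voiceprint feature with its private key, and the server checks it with the matching public key before authenticating the feature. A minimal sketch of that two-sided check, using an HMAC with a shared key as a stand-in for the asymmetric private/public pair the patent describes:

```python
import hashlib
import hmac

# Stand-in: the patent describes asymmetric keys (private key on the
# device, matching public key on the server); here a single shared HMAC
# key plays both roles for brevity.
SHARED_KEY = b"demo-key"

def client_prepare(voiceprint_feature: bytes):
    """Device side: attach a keyed MAC so the server can verify origin."""
    tag = hmac.new(SHARED_KEY, voiceprint_feature, hashlib.sha256).digest()
    return voiceprint_feature, tag

def server_authenticate(feature: bytes, tag: bytes, enrolled: bytes) -> bool:
    """Server side: verify the MAC, then match the feature against enrollment."""
    expected = hmac.new(SHARED_KEY, feature, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False                # to-be-verified data was tampered with
    return feature == enrolled      # authentication on the feature itself

feat, tag = client_prepare(b"voiceprint-123")
print(server_authenticate(feat, tag, b"voiceprint-123"))  # True
```

Exact-bytes matching of the feature is a simplification; a real system would score similarity against the enrolled voiceprint rather than compare for equality.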
Voice vector framework for authenticating user interactions
There are provided systems and methods for a voice vector framework that authenticates user interactions. A service provider server receives user interaction data having audio data that is associated with an interaction between a user device and the service provider server. The server extracts user attributes from the audio data and obtains user account information associated with the user device. The server selects a classifier that corresponds to a select combination of features based on the user account information and applies the classifier to the user attributes. The server generates a voice vector that includes multiple scores indicating likelihoods that a respective user attribute corresponds to an attribute of the select combination of features. The server compares the voice vector to a baseline vector corresponding to a predetermined combination of features and sends a notification to an agent device with an indication of whether the user device is verified.
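The final comparison step, matching the voice vector of per-feature likelihood scores against the account's baseline vector, can be sketched as a per-component tolerance check. The feature names and tolerance below are illustrative, not from the patent:

```python
def voice_vector_match(voice_vector, baseline_vector, tolerance=0.2):
    """Verify the interaction if every per-feature likelihood score is
    within `tolerance` of the baseline for the claimed account."""
    return all(abs(s - b) <= tolerance
               for s, b in zip(voice_vector, baseline_vector))

# Hypothetical baseline scores for e.g. pitch, accent, cadence.
baseline = [0.9, 0.8, 0.95]
print(voice_vector_match([0.85, 0.75, 0.90], baseline))  # True  -> verified
print(voice_vector_match([0.30, 0.80, 0.90], baseline))  # False -> flag agent
```

A production classifier would likely combine the scores into a single calibrated decision rather than thresholding each component independently.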
System and method for real-time fraud detection in voice biometric systems using phonemes in fraudster voice prints
A system and method for real-time fraud detection with a social engineering phoneme (SEP) watchlist of phoneme sequences may perform real-time fraud prevention operations including receiving incoming call interactions and grouping the call interactions into one or more clusters, each cluster associated with a speaker's voice based on voiceprints. For a pair of voiceprints in a cluster, a phoneme sequence is extracted for each voiceprint. A similarity score is then calculated from the extracted phoneme sequences to determine, based on a threshold, whether they match. If a match is determined to exist, the phoneme sequence may be added to a SEP watchlist.
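The similarity scoring between two extracted phoneme sequences can be sketched as a normalized edit distance; the ARPAbet-style phonemes, the 0.6 threshold, and the scoring formula below are illustrative choices, as the abstract does not specify them:

```python
def levenshtein(a, b):
    """Edit distance between two phoneme sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        curr = [i]
        for j, pb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,            # deletion
                            curr[j - 1] + 1,        # insertion
                            prev[j - 1] + (pa != pb)))  # substitution
        prev = curr
    return prev[-1]

def similarity(seq_a, seq_b):
    """1.0 for identical sequences, 0.0 for maximally different ones."""
    if not seq_a and not seq_b:
        return 1.0
    return 1 - levenshtein(seq_a, seq_b) / max(len(seq_a), len(seq_b))

watchlist = []
a = ["DH", "IH", "S", "IH", "Z"]       # e.g. "this is"
b = ["DH", "IH", "S", "W", "AH", "Z"]  # e.g. "this was"
if similarity(a, b) >= 0.6:            # match threshold is a free parameter
    watchlist.append(a)                # add the repeated phrase to the SEP watchlist
print(len(watchlist))  # 1
```

The intuition is that fraudsters reuse social-engineering phrases, so near-identical phoneme sequences recurring across a cluster of same-voice calls are a watchlist signal.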
Electronic apparatus, method for controlling mobile apparatus by electronic apparatus and computer readable recording medium
An electronic apparatus is provided. The electronic apparatus includes a voice receiver, a communication interface, and a processor configured to, based on a user voice being obtained through the voice receiver, identify a mobile apparatus having a user account corresponding to the user voice from among at least one mobile apparatus communicably connected to the electronic apparatus through the communication interface, and transmit a control signal corresponding to the user voice to the identified mobile apparatus through the communication interface.
Automatic speech-based longitudinal emotion and mood recognition for mental health treatment
A method of predicting a mood state of a user may include recording an audio sample via a microphone of a mobile computing device of the user based on the occurrence of an event, extracting a set of acoustic features from the audio sample, generating one or more emotion values by analyzing the set of acoustic features using a trained machine learning model, and determining the mood state of the user, based on the one or more emotion values. In some embodiments, the audio sample may be ambient audio recorded periodically, and/or call data of the user recorded during clinical calls or personal calls.
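The pipeline above (audio sample, acoustic features, emotion values, mood state) can be sketched end to end. The features, the feature-to-emotion mapping, and the mood rule below are toy stand-ins for the trained machine learning model, which the abstract does not detail:

```python
def extract_features(samples):
    """Toy acoustic features: mean energy and zero-crossing rate."""
    energy = sum(s * s for s in samples) / len(samples)
    zcr = sum(1 for x, y in zip(samples, samples[1:]) if x * y < 0) / (len(samples) - 1)
    return energy, zcr

def emotion_values(features):
    """Stand-in for the trained model: map features to arousal/valence in [0, 1]."""
    energy, zcr = features
    return {"arousal": min(1.0, energy * 10), "valence": 1.0 - zcr}

def mood_state(values, low=0.3):
    """Flag a low-mood state when both valence and arousal are depressed."""
    return "low" if values["valence"] < low and values["arousal"] < low else "typical"

# Quiet, noisy-sounding toy sample -> low energy, high zero-crossing rate.
samples = [0.05, -0.04, 0.06, -0.05, 0.04, -0.06]
print(mood_state(emotion_values(extract_features(samples))))  # "low"
```

In the described system these predictions would be aggregated longitudinally across periodic ambient recordings and calls, not taken from a single clip.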
Method and apparatus with registration for speaker recognition
Disclosed is a method and apparatus with registration for speaker recognition. The method includes: determining whether an input feature vector corresponding to a voice signal of a speaker meets a candidate similarity criterion with at least one piece of registered data included in a registration database; selectively constructing a candidate list based on the input feature vector, based on the result of that determination; determining whether a candidate input feature vector, among one or more candidate input feature vectors in the constructed candidate list, meets a registration update similarity criterion with the registered data; and selectively updating the registration database based on the candidate input feature vector, based on the result of that determination.
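The two-stage logic above, a looser candidate criterion followed by a stricter registration-update criterion, can be sketched as follows. Cosine similarity and the two thresholds are illustrative assumptions; the patent does not name the similarity measure:

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)

class Registry:
    """Two-stage enrollment update: vectors first enter a candidate list,
    then must meet a stricter criterion to join the registration database."""
    def __init__(self, registered, cand_thr=0.6, update_thr=0.8):
        self.registered = registered          # enrolled feature vectors
        self.candidates = []
        self.cand_thr, self.update_thr = cand_thr, update_thr

    def observe(self, vec):
        best = max(cosine(vec, r) for r in self.registered)
        if best >= self.cand_thr:             # candidate similarity criterion
            self.candidates.append(vec)
        # Registration-update criterion: promote qualifying candidates.
        promoted = [c for c in self.candidates
                    if max(cosine(c, r) for r in self.registered) >= self.update_thr]
        self.registered.extend(promoted)
        self.candidates = [c for c in self.candidates if c not in promoted]

reg = Registry([[1.0, 0.0]])
reg.observe([0.9, 0.3])   # cosine ~0.95: enters candidate list, then promoted
print(len(reg.registered))  # 2
```

Keeping the update threshold above the candidate threshold lets the database adapt to a speaker's drifting voice without admitting marginal matches directly.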
Display device and method for controlling the same
A display device and a method for controlling the same are provided. The display device includes a rollable display screen, a voice acquisition unit, an identification control unit, a drive control unit and a display control unit. The voice acquisition unit is configured to acquire a first voice command. The identification control unit is configured to identify the first voice command acquired by the voice acquisition unit as a voice process command, and the voice process command includes a rolling operation command and a display drive command. The drive control unit is configured to perform an operation corresponding to the rolling operation command on the rollable display screen according to the rolling operation command. The display control unit is configured to control a display state of the rollable display screen according to the display drive command.
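The identification control unit's split of a recognized voice process command into a rolling operation command and a display drive command amounts to routing, which can be sketched as a small dispatcher. The command phrases and actions below are illustrative, not from the patent:

```python
def handle_voice_command(command: str):
    """Route a recognized voice process command to the drive control unit
    (rolling operations) or the display control unit (display state)."""
    rolling = {"roll up": "retract screen", "roll down": "extend screen"}
    display = {"brighten": "increase backlight", "dim": "decrease backlight"}
    if command in rolling:
        return ("drive_control", rolling[command])
    if command in display:
        return ("display_control", display[command])
    return ("ignored", None)

print(handle_voice_command("roll down"))  # ('drive_control', 'extend screen')
print(handle_voice_command("dim"))        # ('display_control', 'decrease backlight')
```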