G10L15/07

System and method for data augmentation for multi-microphone signal processing

A method, computer program product, and computing system for receiving a signal from each microphone of a plurality of microphones, thus defining a plurality of signals. One or more inter-microphone gain-based augmentations may be performed on the plurality of signals, thus defining one or more inter-microphone gain-augmented signals.
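
A minimal sketch of what an inter-microphone gain-based augmentation could look like, assuming the signals are stacked as a channels-by-samples array; the function name, gain range, and dB parameterization are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def gain_augment(signals, max_gain_db=3.0, rng=None):
    """Apply an independent random gain to each microphone channel.

    signals: array of shape (n_mics, n_samples).
    max_gain_db: assumed bound on the per-channel gain perturbation, in dB.
    """
    rng = np.random.default_rng() if rng is None else rng
    gains_db = rng.uniform(-max_gain_db, max_gain_db, size=signals.shape[0])
    gains = 10.0 ** (gains_db / 20.0)   # dB -> linear amplitude scale
    return signals * gains[:, None]     # one gain per microphone channel

```

Perturbing per-channel gains this way simulates microphone sensitivity mismatch, so a downstream multi-microphone model sees more varied inter-channel level differences during training.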

Method for generating acoustic model
11551672 · 2023-01-10

A method for generating an acoustic model is disclosed. The method can generate the acoustic model with high accuracy from learning data that includes various dialects, by training the acoustic model on text data tagged with regional information and changing a parameter of the acoustic model based on the tagged regional information. The acoustic model can be associated with an artificial intelligence module, an unmanned aerial vehicle (UAV), a robot, an augmented reality (AR) device, a virtual reality (VR) device, devices related to 5G services, and the like.
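
One plausible way to condition a model on a regional tag, sketched below, is to append a learned per-region embedding to the acoustic features so that region-specific parameters can adapt during training; the class, dimensions, and linear layer are assumptions for illustration only:

```python
import numpy as np

class RegionConditionedModel:
    """Toy acoustic model conditioned on a tagged region ID (illustrative)."""

    def __init__(self, n_regions, feat_dim, emb_dim=8, n_out=40, seed=0):
        rng = np.random.default_rng(seed)
        self.region_emb = rng.normal(size=(n_regions, emb_dim)) * 0.1
        self.W = rng.normal(size=(feat_dim + emb_dim, n_out)) * 0.1

    def forward(self, feats, region_id):
        # feats: (T, feat_dim) acoustic frames; region_id indexes the tag
        emb = np.broadcast_to(self.region_emb[region_id],
                              (feats.shape[0], self.region_emb.shape[1]))
        x = np.concatenate([feats, emb], axis=1)
        return x @ self.W  # per-frame logits over acoustic units

```

During training, the region embedding acts as the "parameter changed based on the tagged regional information": gradients through it specialize the model per dialect region.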

Information processing apparatus, information processing system, and information processing method

Provided is an apparatus that includes a voice recognition section that executes a voice recognition process on a user speech and a learning processing section that executes a process of updating a degree of confidence on the basis of an interaction made between the user and the information processing apparatus after the user speech. The degree of confidence is an evaluation value indicating the reliability of a voice recognition result of the user speech. The voice recognition section generates confidence data in which plural user speech candidates, derived from the voice recognition result, are each associated with a degree of confidence indicating the reliability of the corresponding candidate.
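
A minimal sketch of the confidence-update idea, assuming the interaction yields a signal about which candidate the user accepted; the dictionary representation and the exponential-moving-average update rule are assumptions, not the patent's specified mechanism:

```python
def update_confidence(candidates, accepted_text, lr=0.2):
    """Move each candidate's confidence toward the observed outcome.

    candidates: dict mapping candidate text -> confidence in [0, 1].
    accepted_text: the candidate the subsequent interaction confirmed.
    lr: assumed learning rate for the update.
    """
    updated = {}
    for text, conf in candidates.items():
        target = 1.0 if text == accepted_text else 0.0
        updated[text] = conf + lr * (target - conf)  # nudge toward outcome
    return updated

```

Under this rule, a candidate the user confirms gains confidence over repeated interactions, while rejected candidates decay, which matches the abstract's framing of confidence as an evaluation value refined by post-speech interaction.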

Auto-completion for gesture-input in assistant systems

In one embodiment, a method includes receiving an initial input in a first modality from a first user via a client system associated with the first user, determining one or more intents corresponding to the initial input with an intent-understanding module, generating one or more candidate continuation-inputs based on the one or more intents, wherein the one or more candidate continuation-inputs are in one or more respective candidate modalities different from the first modality, and sending, to the client system, instructions for presenting one or more suggested inputs corresponding to one or more of the candidate continuation-inputs.
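
The intent-to-continuation step might be sketched as a lookup that proposes follow-up inputs in a modality different from the initial one; the suggestion table and function below are purely illustrative assumptions:

```python
# Illustrative intent -> (modality, prompt) table; not from the patent.
SUGGESTIONS = {
    "play_media":   [("voice", "Say the name of a song or artist")],
    "send_message": [("voice", "Say the contact you want to message")],
}

def candidate_continuations(intents, first_modality="gesture"):
    """Propose continuation inputs whose modality differs from the first."""
    out = []
    for intent in intents:
        for modality, prompt in SUGGESTIONS.get(intent, []):
            if modality != first_modality:  # must differ from first modality
                out.append({"intent": intent, "modality": modality,
                            "prompt": prompt})
    return out

```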

Method and apparatus for implementing speaker identification neural network

A method and apparatus for generating a speaker identification neural network include generating a first neural network that is trained to identify a first speaker with respect to a first voice signal in a first environment, generating a second neural network for identifying a second speaker with respect to a second voice signal in a second environment, and generating the speaker identification neural network by training the second neural network based on a teacher-student training model in which the first neural network is set to a teacher neural network and the second neural network is set to a student neural network.
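
A common way to realize a teacher-student setup like this is knowledge distillation: the student (second network, target environment) is trained to match the teacher's (first network's) softened speaker posteriors. The temperature-scaled cross-entropy below is a standard sketch of that loss, with names and the temperature value assumed:

```python
import numpy as np

def softmax(z, t=1.0):
    z = np.asarray(z, dtype=float) / t
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy between teacher and student speaker distributions."""
    p_teacher = softmax(teacher_logits, temperature)
    log_p_student = np.log(softmax(student_logits, temperature) + 1e-12)
    return -np.sum(p_teacher * log_p_student, axis=-1).mean()

```

The loss is minimized when the student reproduces the teacher's distribution, which is how knowledge learned in the first environment transfers to the second-environment network.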

Personalized conversational recommendations by assistant systems

In one embodiment, a method includes receiving a user request from a client system associated with a user, generating a response to the user request which references one or more entities, generating a personalized recommendation based on the user request and the response, wherein the personalized recommendation references one or more of the entities of the response, and sending instructions for presenting the response and the personalized recommendation to the client system.

System and method of automated model adaptation

Methods, systems, and computer readable media for automated transcription model adaptation include obtaining audio data from a plurality of audio files. The audio data is transcribed to produce at least one audio file transcription representing a plurality of transcription alternatives for each audio file. Speech analytics are applied to each audio file transcription. A best transcription is selected from the plurality of transcription alternatives for each audio file. Statistics are calculated from the selected best transcriptions, and an adapted model is created from the calculated statistics.
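
The select-then-adapt loop could look roughly like the sketch below, which picks the highest-scoring alternative per file, accumulates word counts, and treats the relative frequencies as an adapted unigram model; the data layout, scoring, and unigram choice are assumptions for illustration:

```python
from collections import Counter

def adapt_model(alternatives_per_file):
    """Build an adapted unigram model from the best transcription per file.

    alternatives_per_file: list of lists of (text, score) pairs, one inner
    list of transcription alternatives per audio file.
    """
    counts = Counter()
    for alternatives in alternatives_per_file:
        best_text, _ = max(alternatives, key=lambda a: a[1])  # best alternative
        counts.update(best_text.split())
    total = sum(counts.values())
    # adapted model: relative word frequencies from selected transcriptions
    return {word: n / total for word, n in counts.items()}

```

A real system would compute richer statistics (e.g. n-grams) and interpolate them with a base model, but the same select-best, count, renormalize shape applies.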