Patent classifications
G10L2015/227
Voice command system and voice command method
A voice command system according to a first disclosure comprises a gateway apparatus having an interface configured to receive a voice command, and a controller configured to perform a registration process of registering a speaker permitted to issue the voice command. The controller is configured to perform an authentication process of rejecting reception of the voice command when the speaker of the voice command is not registered, and permitting reception of the voice command when the speaker is registered. The controller performs the authentication process for each voice command.
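A minimal sketch of the per-command authentication flow described above, assuming a simple registry of speaker identifiers. Speaker identification itself is stubbed out (a real gateway would match against voiceprints), and all class and method names are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the gateway's registration and per-command
# authentication processes; speaker identification is assumed solved.

class VoiceCommandGateway:
    def __init__(self):
        self._registered = set()  # speaker IDs permitted to issue commands

    def register_speaker(self, speaker_id: str) -> None:
        """Registration process: permit a speaker to issue voice commands."""
        self._registered.add(speaker_id)

    def receive_command(self, speaker_id: str, command: str) -> bool:
        """Authentication process, performed for each voice command:
        reject reception if the speaker is not registered."""
        if speaker_id not in self._registered:
            return False  # reception rejected
        # ... forward the command to the controlled device here ...
        return True  # reception permitted

gw = VoiceCommandGateway()
gw.register_speaker("alice")
print(gw.receive_command("alice", "lights on"))      # True
print(gw.receive_command("mallory", "unlock door"))  # False
```

Because the check runs on every command rather than once per session, a registered speaker cannot hand off an authenticated session to an unregistered one.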
INTENT RECOGNITION METHOD AND INTENT RECOGNITION SYSTEM HAVING SELF-LEARNING CAPABILITY
An intent recognition method having a self-learning capability includes the following steps: acquiring a user expression and recognizing the voice as corresponding text; performing preliminary intent recognition on the user expression and outputting candidate intents; acquiring historical data feature parameters of the candidate intents; deciding, on the basis of a pre-set rule strategy, whether to directly output a final recognized intent or, on the basis of the feature parameters of each intent, to perform rule computation and output a final recognized intent; and submitting prediction data of the final recognized intent and the candidate intents from the intent recognition process to a self-learning system, which performs self-learning and updates indicator parameter data. The present disclosure is able to perform self-learning on the basis of the feature distribution in historical intent recognition data and to dynamically adjust intent recognition strategies.
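The decision step could look like the following sketch: either output the top candidate directly when its historical accuracy clears a preset threshold, or fall back to a rule computation that weights recognizer confidence by historical accuracy. The threshold value, the field names, and the weighting rule are all assumptions for illustration; the patent does not specify them.

```python
# Illustrative rule strategy over candidate intents and their historical
# feature parameters (here reduced to a single "historical accuracy" value).

def decide_intent(candidates, history, direct_threshold=0.9):
    """candidates: list of (intent, recognizer_confidence) pairs;
    history: dict mapping intent -> historical accuracy in [0, 1]."""
    top_intent, top_conf = max(candidates, key=lambda c: c[1])
    if history.get(top_intent, 0.0) >= direct_threshold:
        return top_intent  # pre-set rule strategy: output directly
    # otherwise combine each candidate's confidence with its history
    scored = [(i, conf * history.get(i, 0.5)) for i, conf in candidates]
    return max(scored, key=lambda s: s[1])[0]

history = {"play_music": 0.95, "set_alarm": 0.60}
print(decide_intent([("play_music", 0.8), ("set_alarm", 0.7)], history))
# play_music (output directly: historical accuracy 0.95 >= 0.9)
```

The self-learning step would then update `history` from logged prediction data, which is what lets the strategy drift with the observed feature distribution.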
LANGUAGE PROCESSOR, LANGUAGE PROCESSING METHOD AND LANGUAGE PROCESSING PROGRAM
The present disclosure is directed to enabling acquisition of information on an argument corresponding to a case. The disclosed language processing apparatus refers to an argument emergence history database 14, which stores argument emergence patterns associating the cases and arguments of verbs with each word meaning or verb usage. The apparatus acquires, from the argument emergence history database 14, an argument emergence pattern matching a verb, and a case of that verb, included in a request from a user, and generates a response to the user using an argument included in the acquired argument emergence pattern.
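A minimal sketch of that lookup, with the database reduced to a dictionary keyed by (verb, case). The sample entries, the Japanese case-particle names ("o" accusative, "ni" dative/locative), and the response template are illustrative assumptions only.

```python
# Stand-in for the argument emergence history database: each (verb, case)
# pair maps to a stored pattern and the argument it contains.

ARGUMENT_EMERGENCE_DB = {
    ("eat", "o"): {"argument": "lunch", "pattern": "eat <o:food>"},
    ("go", "ni"): {"argument": "the station", "pattern": "go <ni:place>"},
}

def respond(verb: str, case: str) -> str:
    """Look up the pattern matching the verb and case from the user's
    request, and build a response from the pattern's argument."""
    entry = ARGUMENT_EMERGENCE_DB.get((verb, case))
    if entry is None:
        return "No matching argument emergence pattern."
    return f"Did you mean to {verb} {entry['argument']}?"

print(respond("eat", "o"))  # Did you mean to eat lunch?
```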
Third party account linking for voice user interface
Methods and systems for adding functionality to an account of a language processing system, where the functionality is associated with a second account of a first application system, are described herein. In a non-limiting embodiment, an individual may log into a first account of a language processing system and into a second account of a first application system. While the individual is logged into both accounts, a button included within a webpage provided by the first application may be invoked. A request capable of being serviced using the first functionality may then be received by the language processing system from a device associated with the first account. The language processing system may send first account data and second account data to the first application system to facilitate an action associated with the request, thereby enabling the first functionality for the first account.
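The linking flow might be sketched as follows: invoking the button while logged into both accounts stores an association, and later requests cause the language processing system to forward both accounts' data to the application system. Class names, identifiers, and the payload shape are assumptions; a real deployment would use token-based linking (e.g. OAuth) rather than raw account identifiers.

```python
# Hypothetical sketch of third-party account linking for a voice interface.

class LanguageProcessingSystem:
    def __init__(self):
        self._links = {}  # first account id -> (app name, second account id)

    def link(self, first_account, app_name, second_account):
        """Invoked via the button on the application's webpage while the
        user is logged into both the first and second accounts."""
        self._links[first_account] = (app_name, second_account)

    def service_request(self, first_account, request):
        """Send first and second account data to the application system so
        it can perform the action associated with the request."""
        if first_account not in self._links:
            return None  # functionality not enabled for this account
        app_name, second_account = self._links[first_account]
        return {"app": app_name,
                "first_account_data": first_account,
                "second_account_data": second_account,
                "request": request}

lps = LanguageProcessingSystem()
lps.link("assistant-user-1", "music-app", "music-user-9")
print(lps.service_request("assistant-user-1", "play my playlist"))
```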
Multi-user devices in a connected home environment
A device implementing a system for responding to a voice request includes a processor configured to receive a voice request, the device being associated with a user account, and determine, based on the voice request, a confidence score that the voice request corresponds to a voice profile associated with the user account. The processor is further configured to select, based at least in part on a content of the voice request and the confidence score, a request domain from among plural request domains for responding to the voice request, and provide for a response to the voice request based on the selected request domain.
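One way to read the selection step is that the voice-profile confidence score gates personal request domains (which expose account data) while general domains remain available regardless. The domain lists, the threshold, and the "denied"/"fallback" outcomes below are illustrative assumptions, not details from the patent.

```python
# Sketch: select a request domain from the request content and the
# confidence that the voice matches the account owner's voice profile.

PERSONAL_DOMAINS = {"messaging", "calendar"}
GENERAL_DOMAINS = {"weather", "music", "knowledge"}

def select_domain(requested_domain: str, confidence: float,
                  personal_threshold: float = 0.8) -> str:
    if requested_domain in PERSONAL_DOMAINS and confidence < personal_threshold:
        return "denied"  # not confident enough to expose account data
    if requested_domain in PERSONAL_DOMAINS | GENERAL_DOMAINS:
        return requested_domain
    return "fallback"

print(select_domain("calendar", 0.95))  # calendar
print(select_domain("calendar", 0.40))  # denied
print(select_domain("weather", 0.40))   # weather
```

This is what makes the device usable by multiple members of a household: a low-confidence match still gets weather or music, but not the account owner's messages.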
Voice command recognition device and method thereof
A voice command recognition device and a method thereof are provided. The voice command recognition device includes a processor that registers one or more voice commands, selected by analyzing voice commands repeatedly used by a user or the user's voice command utterance pattern, to generate one package command, and a storage that stores data or an algorithm used by the processor for speech recognition.
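A sketch of the packaging idea: command sequences the user repeatedly issues together are registered under a single name, so one utterance replays the whole sequence. The frequency threshold, storage layout, and names are assumptions for illustration.

```python
# Hypothetical registry that turns repeatedly co-occurring voice commands
# into a single package command.

from collections import Counter

class PackageCommandRegistry:
    def __init__(self, min_repeats: int = 3):
        self._min_repeats = min_repeats
        self._usage = Counter()  # command sequence -> times observed
        self._packages = {}      # package name -> command sequence

    def observe(self, commands):
        """Record a sequence of voice commands issued together."""
        self._usage[tuple(commands)] += 1

    def register_if_frequent(self, name, commands):
        """Register the sequence as one package command once the usage
        analysis shows it has been repeated often enough."""
        seq = tuple(commands)
        if self._usage[seq] >= self._min_repeats:
            self._packages[name] = seq
            return True
        return False

    def run(self, name):
        """Expand a package command back into its member commands."""
        return list(self._packages.get(name, ()))

reg = PackageCommandRegistry()
for _ in range(3):
    reg.observe(["open windows", "play radio"])
reg.register_if_frequent("drive mode", ["open windows", "play radio"])
print(reg.run("drive mode"))  # ['open windows', 'play radio']
```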
ELECTRONIC DEVICE FOR TRANSLATING VOICE OR TEXT AND METHOD THEREOF
An electronic device is provided. The electronic device includes an input unit configured to receive a voice or text, an output unit, and a processor. The processor is configured to determine context information, translate the received voice or text based on the context information, convert the translated voice or text, and output the converted voice or text using the output unit.
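The role of the context information can be sketched minimally: the same input translates differently depending on the determined context (here, register). The translation table, the language pair, and the context labels are purely illustrative assumptions.

```python
# Toy sketch: context-dependent translation lookup (formal vs. casual
# Korean greetings, romanized). A real device would use an MT model
# conditioned on the context information.

TRANSLATIONS = {
    ("hello", "formal"): "annyeonghaseyo",
    ("hello", "casual"): "annyeong",
}

def translate(text: str, context: str) -> str:
    """Translate the received text based on the determined context;
    fall back to the input when no translation is known."""
    return TRANSLATIONS.get((text.lower(), context), text)

print(translate("Hello", "formal"))  # annyeonghaseyo
print(translate("Hello", "casual"))  # annyeong
```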
Assisting Users with Efficient Information Sharing among Social Connections
In one embodiment, a method includes receiving a user input from a first user at a first client system and determining that the user input is a sharing request to share content. The method determines multiple second users the sharing request is directed to, and determines, for each second user, modalities associated with the respective second user based on the content, a user profile associated with the respective second user, and the modalities supported by a second client system the respective second user is currently engaged with, the respective second user being associated with two or more second client systems. The method then sends, to one or more second client systems currently associated with the second users, instructions for accessing the content based on the determined modalities for each second user.
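The per-recipient modality decision could be sketched as follows: honor the recipient's profile preference when their current device supports it, otherwise fall back to a modality suited to the content type. The preference fields, content types, and fallback table are assumptions for illustration.

```python
# Hypothetical per-recipient modality selection combining content type,
# profile preference, and the current device's supported modalities.

def choose_modality(content_type, profile_preference, supported_modalities):
    # prefer the recipient's stated preference when the device supports it
    if profile_preference in supported_modalities:
        return profile_preference
    # otherwise fall back to a modality that suits the content
    fallback = {"photo": "visual", "voice_note": "audio", "text": "text"}
    wanted = fallback.get(content_type, "text")
    if wanted in supported_modalities:
        return wanted
    return next(iter(supported_modalities))  # last resort: anything supported

print(choose_modality("photo", "audio", {"visual", "text"}))  # visual
```

Run once per second user, this yields the per-recipient instructions the method sends to each recipient's currently engaged client system.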
Handling calls on a shared speech-enabled device
In some implementations, a determination is made that a first party has spoken a query for a voice-enabled virtual assistant during a voice call between the first party and a second party. In response to that determination, the voice call between the first party and the second party is placed on hold. A determination is then made that the voice-enabled virtual assistant has resolved the query, and in response, the voice call between the first party and the second party is resumed from hold.
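The hold/resume sequence can be sketched as a tiny state machine. Hotword detection and query resolution are stubbed out, and all names are illustrative assumptions.

```python
# Sketch of the call flow: detect an assistant query mid-call, hold the
# call, resolve the query, then resume the call.

class SharedDeviceCall:
    def __init__(self):
        self.state = "active"  # active | on_hold

    def on_assistant_query(self, query: str) -> str:
        # first party spoke a query for the assistant: place call on hold
        self.state = "on_hold"
        answer = self._resolve(query)  # voice-enabled virtual assistant
        self.state = "active"          # resume the call once resolved
        return answer

    def _resolve(self, query: str) -> str:
        return f"answer to: {query}"   # stub for the assistant backend

call = SharedDeviceCall()
print(call.on_assistant_query("what's on my calendar?"))
print(call.state)  # active
```

Holding the call before the assistant speaks keeps the second party from overhearing the query's answer, which may be personal to the first party.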
Whispering voice recovery method, apparatus and device, and readable storage medium
A method, an apparatus and a device for converting whispered speech, and a readable storage medium, are provided. The method is based on a whispered speech conversion model trained in advance using the recognition results and acoustic features of whispered speech training data as samples, and the acoustic features of normal speech data parallel to the whispered speech training data as sample labels. A whispered speech acoustic feature and a preliminary recognition result of the whispered speech data are acquired and input into the pre-trained conversion model, which outputs a normal speech acoustic feature. In this way, the whispered speech can be converted to normal speech.
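The inference path has the following structure: extract acoustic features, obtain a preliminary recognition result, and feed both to the pre-trained conversion model, which outputs normal-speech acoustic features. Every function below is a stub (the "model" merely rescales feature energy, since whispered speech is low-energy); this shows the data flow only, not the trained network the abstract describes.

```python
# Structural sketch of whispered-to-normal speech conversion inference.

def extract_features(whispered_audio):
    # stand-in for an acoustic front end (e.g. spectral features)
    return [abs(x) for x in whispered_audio]

def preliminary_asr(whispered_audio):
    # stand-in for the preliminary recognition of the whispered speech
    return "<preliminary transcript>"

def conversion_model(features, transcript, gain=10.0):
    # stub for the pre-trained conversion model: takes both the whispered
    # acoustic features and the preliminary recognition result, outputs
    # normal-speech acoustic features (here just an energy rescale)
    return [f * gain for f in features]

whispered = [0.5, -0.25, 0.125]
normal_features = conversion_model(extract_features(whispered),
                                   preliminary_asr(whispered))
print(normal_features)  # [5.0, 2.5, 1.25]
```

A vocoder would then synthesize audible normal speech from the output features; that final step is outside this sketch.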