Patent classifications
G10L15/22
MULTI-USER VOICE ASSISTANT WITH DISAMBIGUATION
Disambiguating question answering responses by receiving voice command data associated with a first user, determining a first user identity according to the first user voice command data, determining a first user activity context according to the first user voice command data, determining a first response for the first user, receiving voice command data associated with a second user, determining a second user identity according to the second user voice command data, determining a second user activity context according to the second user voice command data, determining a second response for the second user, determining a predicted ambiguity between the first response and the second response, altering the first response according to the predicted ambiguity, and providing the first response and the second response.
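As a rough illustration of the claimed flow, the sketch below pairs each resolved user identity with a response and, when the two responses are similar enough to be confused, alters the first response by addressing it to its user. The dataclass fields, the text-similarity heuristic, and the name-prefix strategy are assumptions for illustration only, not the patented implementation.

```python
from dataclasses import dataclass
from difflib import SequenceMatcher


@dataclass
class VoiceCommand:
    user_id: str           # identity determined from the voice command data
    activity_context: str  # e.g., "cooking" or "driving"
    text: str              # transcribed command


def answer(command: VoiceCommand) -> str:
    """Assumed helper standing in for per-user question answering."""
    return f"answer for '{command.text}' given context '{command.activity_context}'"


def predicted_ambiguity(resp_a: str, resp_b: str, threshold: float = 0.6) -> bool:
    # Assumption: treat textually similar responses as potentially ambiguous,
    # since users could not tell which spoken answer is meant for whom.
    return SequenceMatcher(None, resp_a, resp_b).ratio() >= threshold


def respond_to_both(first: VoiceCommand, second: VoiceCommand) -> tuple[str, str]:
    first_response = answer(first)
    second_response = answer(second)
    if predicted_ambiguity(first_response, second_response):
        # Alter the first response so each user can tell which reply is theirs.
        first_response = f"{first.user_id}, {first_response}"
    return first_response, second_response
```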
Query rephrasing using encoder neural network and decoder neural network
A method comprising receiving first data representative of a query. A representation of the query is generated using an encoder neural network and the first data. Words for a rephrased version of the query are selected from a set of words comprising a first subset of words comprising words of the query and a second subset of words comprising words absent from the query. Second data representative of the rephrased version of the query is generated.
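One way to picture the word-selection step is as a distribution over an extended vocabulary: the union of words copied from the query and words generated from a vocabulary of words absent from the query (a pointer-generator-style formulation). The mixing weight and the score inputs below are placeholder assumptions; the patent's encoder and decoder networks are not reproduced here.

```python
import numpy as np


def softmax(scores: np.ndarray) -> np.ndarray:
    z = np.exp(scores - scores.max())
    return z / z.sum()


def next_word_distribution(query_words: list[str],
                           other_words: list[str],
                           copy_scores: np.ndarray,
                           gen_scores: np.ndarray,
                           p_copy: float = 0.5) -> dict[str, float]:
    """Mix a copy distribution over the query's words with a generation
    distribution over words absent from the query (assumed formulation)."""
    copy_dist = softmax(copy_scores)  # one score per query word
    gen_dist = softmax(gen_scores)    # one score per out-of-query word
    dist: dict[str, float] = {}
    for word, prob in zip(query_words, copy_dist):
        dist[word] = dist.get(word, 0.0) + p_copy * prob
    for word, prob in zip(other_words, gen_dist):
        dist[word] = dist.get(word, 0.0) + (1.0 - p_copy) * prob
    return dist


# Example: rephrasing "cheap flights nyc" might copy "flights" and "nyc" from
# the query while generating "inexpensive" from the out-of-query vocabulary.
query = ["cheap", "flights", "nyc"]
other = ["inexpensive", "tickets", "to"]
dist = next_word_distribution(query, other,
                              copy_scores=np.array([0.2, 1.5, 1.0]),
                              gen_scores=np.array([1.4, 0.3, 0.1]))
print(max(dist, key=dist.get))
```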
Task resumption in a natural understanding system
A speech-processing system may provide access to one or more skills via spoken commands and/or responses in the form of synthesized speech. The system may be capable of keeping one or more skills active in the background while a user interacts with (e.g., provides inputs to and/or receives outputs from) a skill running in the foreground. A background skill may receive some trigger data and determine to request that the system return it to the foreground, for example to request a user input regarding an action previously requested by the user. In some cases, the user may invoke a background skill to continue a previous interaction. The system may return the background skill to the foreground. The resumed skill may continue a previous interaction to, for example, query the user for instructions, provide an update or alert, or continue a previous output.
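A minimal sketch of the foreground/background handling described above is shown here. The class and method names, and the policy of always granting a foreground request, are illustrative assumptions; the abstract does not specify the system's interfaces.

```python
from collections import deque
from typing import Optional


class Skill:
    def __init__(self, name: str):
        self.name = name
        self.pending_prompt: Optional[str] = None

    def on_trigger(self, trigger_data: str) -> bool:
        """Background skill receives trigger data and decides whether to ask
        the system to bring it back to the foreground (assumed policy)."""
        self.pending_prompt = f"{self.name}: about your earlier request, {trigger_data}"
        return True

    def resume(self) -> str:
        # On resumption, continue the previous interaction, e.g. query the
        # user for instructions, provide an update, or continue an output.
        return self.pending_prompt or f"{self.name}: resuming previous interaction"


class SkillManager:
    def __init__(self, foreground: Skill):
        self.foreground = foreground
        self.background: deque[Skill] = deque()

    def send_to_background(self, skill: Skill) -> None:
        self.background.append(skill)

    def handle_trigger(self, skill: Skill, trigger_data: str) -> Optional[str]:
        # A background skill may request a return to the foreground; if the
        # request is granted, swap it with the current foreground skill.
        if skill in self.background and skill.on_trigger(trigger_data):
            self.background.remove(skill)
            self.background.append(self.foreground)
            self.foreground = skill
            return skill.resume()
        return None
```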
Contextual natural language understanding for conversational agents
Techniques are described for a contextual natural language understanding (cNLU) framework that is able to incorporate contextual signals of variable history length to perform joint intent classification (IC) and slot labeling (SL) tasks. A user utterance provided by a user within a multi-turn chat dialog between the user and a conversational agent is received. The user utterance and contextual information associated with one or more previous turns of the multi-turn chat dialog is provided to a machine learning (ML) model. An intent classification and one or more slot labels for the user utterance are then obtained from the ML model. The cNLU framework described herein thus uses, in addition to a current utterance itself, various contextual signals as input to a model to generate IC and SL predictions for each utterance of a multi-turn chat dialog.
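The sketch below condenses the idea of feeding contextual signals of variable history length alongside the current utterance to a joint IC/SL model. The turn-separator encoding, the history window, and the stub model output are assumptions for illustration; the cNLU framework's actual architecture is not reproduced.

```python
from typing import NamedTuple


class NluResult(NamedTuple):
    intent: str
    slot_labels: list[str]  # one label per token of the current utterance


def build_model_input(utterance: str, history: list[str], max_turns: int = 3) -> str:
    # Variable history length: keep only the most recent turns as context.
    context = history[-max_turns:]
    return " [SEP] ".join(context + [utterance])


def joint_ic_sl(model_input: str, utterance: str) -> NluResult:
    """Stub standing in for the ML model that jointly predicts an intent
    classification and per-token slot labels from the contextual input."""
    tokens = utterance.split()
    # Placeholder output; a real model scores intents and slot labels jointly.
    return NluResult(intent="BookFlight", slot_labels=["O"] * len(tokens))


history = ["I need a flight to Boston", "Which day would you like to travel?"]
utterance = "next Friday morning"
result = joint_ic_sl(build_model_input(utterance, history), utterance)
print(result.intent, result.slot_labels)
```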
Receiving voice samples from listeners of media programs
Listeners to media programs provide feedback to creators or other entities associated with the media programs in the form of one or more spoken utterances. When a listener to a media program speaks one or more words into a microphone or other system, the words are captured and processed to determine an emotion of the listener, or to determine whether the words include any objectionable content. Data including the spoken words is captured, stored, and presented to the creator of the media program. Notifications of the utterances are provided to the creator, who may identify one of the utterances and include it in the media program, or invite the listener who provided it to participate in the media program.
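The following is a schematic sketch of that listener-feedback flow, assuming a keyword-based content screen and a trivial emotion stub as stand-ins; the abstract does not specify how either determination is made.

```python
from dataclasses import dataclass, field

OBJECTIONABLE_TERMS = {"badword"}  # placeholder list; real screening is unspecified


def estimate_emotion(text: str) -> str:
    """Assumed stand-in; a real system would run an emotion classifier."""
    return "excited" if text.endswith("!") else "neutral"


@dataclass
class Utterance:
    listener_id: str
    text: str
    emotion: str = "neutral"
    objectionable: bool = False


@dataclass
class MediaProgram:
    creator: str
    stored_utterances: list[Utterance] = field(default_factory=list)
    notifications: list[str] = field(default_factory=list)

    def receive_utterance(self, utt: Utterance) -> None:
        # Process the spoken words: estimate emotion and screen the content.
        utt.emotion = estimate_emotion(utt.text)
        utt.objectionable = any(t in utt.text.lower() for t in OBJECTIONABLE_TERMS)
        # Capture and store the data, then notify the creator.
        self.stored_utterances.append(utt)
        self.notifications.append(
            f"listener {utt.listener_id} sent feedback ({utt.emotion})")

    def include_in_program(self, utt: Utterance) -> str:
        # The creator may air the utterance or invite the listener on air.
        return f"feedback from listener {utt.listener_id}: {utt.text}"
```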
Electronic device configured to perform action using speech recognition function and method for providing notification related to action using same
A method includes receiving a designated event related to a second application while an execution screen of a first application is displayed on a display. The method also includes executing an artificial intelligent application in response to the designated event. The method further includes transmitting data related to the designated event to an external server, based on the executed artificial intelligent application. Additionally, the method includes sensing a user utterance related to the designated event for a designated period of time. The method also includes transmitting the user utterance to the external server. The method further includes receiving an action order for performing a function related to the user utterance from the external server. The method also includes executing the second application at least based on the received action order. The method further includes outputting a result of performing the function by using the second application.
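A compressed sketch of that notification flow follows: a designated event arrives for a second application while a first application is on screen, the assistant forwards the event and a sensed follow-up utterance to an external server, and the returned action order drives the second application. The client class, method names, and the shape of the action order are illustrative assumptions.

```python
from typing import Optional


class ExternalServerClient:
    """Stand-in for the external server; method names are assumptions."""

    def send_event(self, event: dict) -> None:
        pass  # placeholder for a network call carrying the event data

    def send_utterance(self, utterance: str) -> dict:
        # Placeholder: a real server would map the utterance to an action
        # order for the second application.
        return {"app": "second_app", "action": "reply", "text": utterance}


def sense_utterance(window_seconds: float) -> Optional[str]:
    """Assumed helper that senses a user utterance for a designated period
    of time; no real audio capture happens here."""
    return "tell them I'm on my way"


def handle_designated_event(event: dict, server: ExternalServerClient) -> Optional[str]:
    # 1. The AI assistant app is executed in response to the designated event,
    #    and data related to the event is sent to the external server.
    server.send_event(event)
    # 2. Sense a user utterance related to the event within the time window.
    utterance = sense_utterance(window_seconds=5.0)
    if utterance is None:
        return None
    # 3. Send the utterance and receive an action order from the server.
    action_order = server.send_utterance(utterance)
    # 4. Execute the second application per the action order and output the
    #    result of performing the function.
    return f"{action_order['app']} -> {action_order['action']}: {action_order['text']}"
```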