Patent classifications
G10L2015/228
REDUCING THE NEED FOR MANUAL START/END-POINTING AND TRIGGER PHRASES
Systems and processes for selectively processing and responding to a spoken user input are provided. In one example, audio input containing a spoken user input can be received at a user device. The spoken user input can be identified from the audio input by identifying start and end-points of the spoken user input. It can be determined whether or not the spoken user input was intended for a virtual assistant based on contextual information. The determination can be made using a rule-based system or a probabilistic system. If it is determined that the spoken user input was intended for the virtual assistant, the spoken user input can be processed and an appropriate response can be generated. If it is instead determined that the spoken user input was not intended for the virtual assistant, the spoken user input can be ignored and/or no response can be generated.
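The rule-based/probabilistic determination described above can be sketched as follows. The contextual feature names, weights, and threshold are illustrative assumptions, not taken from the patent; decisive rules short-circuit first, and weaker signals fall through to a weighted score.

```python
# Hypothetical sketch of the intent check: decisive contextual rules first,
# then a probabilistic score over weaker signals. All feature names and
# weights are invented for illustration.

def intended_for_assistant(context, threshold=0.5):
    """Return True if the spoken input is likely addressed to the assistant."""
    # Rule-based short-circuits: some contextual cues are decisive on their own.
    if context.get("gaze_at_device"):        # user is looking at the device
        return True
    if context.get("ongoing_phone_call"):    # speech belongs to the call, not us
        return False

    # Probabilistic fallback: combine weaker contextual signals into a score.
    weights = {
        "recent_assistant_interaction": 0.4,  # assistant replied moments ago
        "device_held_to_mouth": 0.3,
        "speech_matches_known_command": 0.3,
    }
    score = sum(w for feat, w in weights.items() if context.get(feat))
    return score >= threshold

print(intended_for_assistant({"gaze_at_device": True}))                # True
print(intended_for_assistant({"recent_assistant_interaction": True}))  # False
```

If the input is judged unintended, the spoken user input is simply ignored and no response is generated.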
Contextual assistant using mouse pointing or touch cues
A method for a contextual assistant to use mouse pointing or touch cues includes receiving audio data corresponding to a query spoken by a user, receiving, in a graphical user interface displayed on a screen, a user input indication indicating a spatial input applied at a first location on the screen, and processing the audio data to determine a transcription of the query. The method also includes performing query interpretation on the transcription to determine that the query is referring to an object displayed on the screen without uniquely identifying the object, and requesting information about the object. The method further includes disambiguating, using the user input indication indicating the spatial input applied at the first location on the screen, the query to uniquely identify the object that the query is referring to, obtaining the information about the object requested by the query, and providing a response to the query.
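A minimal sketch of the disambiguation step: when the transcription matches several on-screen objects, the spatial input location picks the one the user meant. The object records and coordinates are made-up examples.

```python
import math

# Illustrative sketch: resolve an ambiguous on-screen reference ("what is
# that?") using the location of a touch/pointer input. Object names and
# coordinates are invented for the example.

def disambiguate(candidates, input_location):
    """Pick the candidate object whose bounding-box center is closest
    to where the spatial input was applied."""
    def distance(obj):
        cx = (obj["x1"] + obj["x2"]) / 2
        cy = (obj["y1"] + obj["y2"]) / 2
        return math.hypot(cx - input_location[0], cy - input_location[1])
    return min(candidates, key=distance)

# Two objects match the query "that building"; the pointer decides which.
objects = [
    {"name": "city_hall", "x1": 0,   "y1": 0, "x2": 100, "y2": 80},
    {"name": "museum",    "x1": 300, "y1": 0, "x2": 420, "y2": 90},
]
print(disambiguate(objects, (350, 40))["name"])  # museum
```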
DIALOG MANAGEMENT WITH MULTIPLE APPLICATIONS
Features are disclosed for performing functions in response to user requests based on contextual data regarding prior user requests. Users may engage in conversations with a computing device in order to initiate some function or obtain some information. A dialog manager may manage the conversations and store contextual data regarding one or more of the conversations. Processing and responding to subsequent conversations may benefit from the previously stored contextual data by, e.g., reducing the amount of information that a user must provide if the user has already provided the information in the context of a prior conversation. Additional information associated with performing functions responsive to user requests may be shared among applications, further improving efficiency and enhancing the user experience.
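The context-reuse idea can be sketched as slot filling that consults data stored from prior conversations before prompting the user again. The slot names and in-memory store are illustrative assumptions.

```python
# Sketch of reusing stored conversation context to avoid re-prompting.
# Slot names and the dict-based store are assumptions for illustration.

class DialogManager:
    def __init__(self):
        self.context = {}  # contextual data persisted across conversations

    def handle(self, request_slots, required):
        # Fill missing slots from prior conversations before prompting.
        filled = {**{k: v for k, v in self.context.items() if k in required},
                  **request_slots}
        self.context.update(request_slots)  # remember for later requests
        missing = [s for s in required if s not in filled]
        if missing:
            return f"Please provide: {', '.join(missing)}"
        return f"Booking flight to {filled['destination']} on {filled['date']}"

dm = DialogManager()
print(dm.handle({"destination": "Boston"}, ["destination", "date"]))
# -> "Please provide: date"
print(dm.handle({"date": "Friday"}, ["destination", "date"]))
# -> "Booking flight to Boston on Friday" (destination reused from context)
```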
VEHICLE FUNCTION CONTROL WITH SENSOR BASED VALIDATION
The present disclosure is generally related to a data processing system to validate vehicular functions in a voice activated computer network environment. The data processing system can improve the efficiency of the network by discarding action data structures and requests that are invalid prior to their transmission across the network. The system can invalidate requests by comparing attributes of a vehicular state to attributes of a request state.
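The attribute comparison can be sketched as a simple predicate: a request is transmitted only if every vehicle-state attribute it requires matches the sensed state. The attribute names are assumptions for the example.

```python
# Illustrative sketch of sensor-based validation: a request is discarded
# before transmission unless its required vehicle-state attributes match
# the sensed vehicular state. Attribute names are invented.

def validate_request(request_state, vehicle_state):
    """Return True only if every attribute the request requires matches
    the current vehicular state."""
    return all(vehicle_state.get(attr) == value
               for attr, value in request_state.items())

vehicle = {"speed_mph": 0, "gear": "park", "trunk_locked": True}

# "Open the trunk" is only valid while the vehicle is parked.
open_trunk = {"speed_mph": 0, "gear": "park"}
print(validate_request(open_trunk, vehicle))   # True -> transmit the action

vehicle["speed_mph"] = 45
vehicle["gear"] = "drive"
print(validate_request(open_trunk, vehicle))   # False -> discard, saving bandwidth
```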
CONVERSION TABLE GENERATION DEVICE, CONVERSION TABLE GENERATION METHOD, AND RECORDING MEDIUM
A conversion table generation device includes a similar word extraction unit and a conversion table generation unit. The similar word extraction unit is configured to extract, for each first word included in a word group used in a dialogue, similar words that resemble that first word. The conversion table generation unit is configured to associate each extracted similar word that resembles a plurality of the first words with only one of those first words, as a second word, on the basis of a priority, and to generate a conversion table for voice recognition with the second word as a conversion source and the first word as a conversion destination.
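One way to read the generation step: when a similar (often misrecognized) word resembles several first words, the highest-priority first word claims it as the conversion destination. The similarity lists and priority ordering below are invented for illustration.

```python
# Sketch of building a voice-recognition conversion table: each similar
# word maps to exactly one canonical first word, chosen by priority when
# it resembles several. Words and priorities are made up.

def build_conversion_table(similar_words, priority):
    """similar_words: {first_word: [words similar to it]}
    priority: ranking of first words (earlier = higher priority).
    Returns {second_word: first_word} used to rewrite recognizer output."""
    table = {}
    # Visit first words in priority order so a second word similar to
    # several first words is claimed by the highest-priority one.
    for first in sorted(similar_words, key=priority.index):
        for second in similar_words[first]:
            table.setdefault(second, first)
    return table

similar = {
    "meeting": ["meating", "meting"],
    "meting":  ["meting", "metting"],   # "meting" resembles both first words
}
table = build_conversion_table(similar, priority=["meeting", "meting"])
print(table["meting"])   # meeting  (higher-priority destination wins)
```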
Operating modes that designate an interface modality for interacting with an automated assistant
Implementations described herein relate to transitioning a computing device between operating modes according to whether the computing device is suitably oriented for received non-audio related gestures. For instance, the user can attach a portable computing device to a docking station of a vehicle and, while in transit, wave their hand near the portable computing device in order to invoke the automated assistant. Such action by the user can be detected by a proximity sensor and/or any other device capable of determining a context of the portable computing device and/or an interest of the user in invoking the automated assistant. In some implementations, location, orientation, and/or motion of the portable computing device can be detected and used in combination with an output of the proximity sensor to determine whether to invoke the automated assistant in response to an input gesture from the user.
Systems and methods to provide audible output based on section of content being presented
A device provides audible output pertaining to audio video (AV) content such as a video game based on a section of the AV content that is being presented.
Operation method of dialog agent and apparatus thereof
An operation method of a dialog agent includes obtaining an utterance history including at least one outgoing utterance to be transmitted to request a service or at least one incoming utterance received while requesting the service, updating a requirement specification including items requested for the service based on the utterance history, generating utterance information to be used to request the service based on the updated requirement specification, and outputting the generated utterance information.
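The loop described above can be sketched as: items still missing from the requirement specification after processing the utterance history drive the next outgoing utterance. The item names and the crude keyword match (standing in for real language understanding) are illustrative assumptions.

```python
# Sketch of the dialog agent's requirement-specification loop. Item names
# are invented; substring matching is a stand-in for real NLU.

def update_requirements(required_items, utterance_history):
    """Mark items satisfied when any utterance in the history mentions them."""
    spec = dict.fromkeys(required_items)
    for utterance in utterance_history:
        for item in required_items:
            if item in utterance:
                spec[item] = utterance
    return spec

def next_utterance(spec):
    """Generate utterance information asking for the first missing item."""
    missing = [item for item, value in spec.items() if value is None]
    if missing:
        return f"Could you tell me the {missing[0]}?"
    return "All details confirmed, placing the request."

history = ["I'd like a table, my name is Kim", "time: 7 pm"]
spec = update_requirements(["name", "time", "party size"], history)
print(next_utterance(spec))   # Could you tell me the party size?
```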
Always listening and active voice assistant and vehicle operation
A vehicle includes a controller configured to select one of a group of topics for generating an answer to a question embedded within the group based on an operating parameter of the vehicle and a syntax of a phrase. The selection is responsive to input originating from utterances including a preceding topic and a following topic having a moniker therebetween that is associated with only one of the topics through the syntax. The vehicle may operate an interface to output the answer.
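One reading of the topic selection: the utterance contains a preceding topic and a following topic with a moniker between them, and the moniker is associated with only one of the two, which resolves the question's subject. The moniker-to-topic map and topic names below are invented for illustration.

```python
# Sketch of moniker-based topic selection. The association table is an
# assumption; a real system would derive it from syntax and vehicle state.

MONIKER_TO_TOPIC = {
    "range": "battery",      # "range" refers to the battery topic
    "pressure": "tires",
}

def select_topic(preceding, following, moniker):
    """Return whichever surrounding topic the moniker is associated with."""
    target = MONIKER_TO_TOPIC.get(moniker)
    if target == preceding:
        return preceding
    if target == following:
        return following
    return None   # moniker tied to neither topic: remain ambiguous

# "...about the battery, what's my range, and then the tires..."
print(select_topic("battery", "tires", "range"))   # battery
```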
Robotic interactions for observable signs of intent
Described herein are assistant robots that anticipate the needs of one or more people (or animals). The assistant robots may recognize a current activity and draw on knowledge of the person's routines and contextual information. As such, the assistant robots can provide, or offer to provide, appropriate robotic assistance. The assistant robots can learn users' habits or be provided with knowledge regarding humans in their environment. The assistant robots develop a schedule and a contextual understanding of people's behavior and needs. The assistant robots may interact, understand, and communicate with people before, during, or after providing assistance. A robot can combine gesture, clothing, emotional aspect, time, pose recognition, action recognition, and other observational data to understand people's medical conditions, current activities, and future intended activities and intents.
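The cue-combination step can be sketched as a weighted fusion of observed signals into a score for a hypothesized intent, above which the robot offers assistance. The cue names, weights, and intent are illustrative assumptions.

```python
# Sketch of fusing observational cues (gesture, pose, time of day, etc.)
# into a score for a candidate intent. All cues and weights are invented.

def score_intent(observations, cue_weights):
    """Weighted sum of the observed cues that support a hypothesized activity."""
    return sum(cue_weights[cue] for cue in observations if cue in cue_weights)

# Cues suggesting the person is about to leave the house.
LEAVING_HOME = {
    "holding_keys": 0.4,
    "wearing_coat": 0.3,
    "walking_toward_door": 0.3,
    "usual_departure_time": 0.2,
}

observed = {"holding_keys", "wearing_coat", "usual_departure_time"}
score = score_intent(observed, LEAVING_HOME)
print(round(score, 1))   # 0.9 -> offer help, e.g. "Shall I open the garage?"
```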