Patent classifications
G10L2015/227
MITIGATING FALSE POSITIVES AND/OR FALSE NEGATIVES IN HOT WORD FREE ADAPTATION OF AUTOMATED ASSISTANT
Hot word free adaptation of one or more function(s) of an automated assistant, responsive to determining, based on gaze measure(s) and/or active speech measure(s), that a user is engaging with the automated assistant. Implementations relate to various techniques for mitigating false positive and/or false negative occurrences of hot word free adaptation through utilization of personalized parameter(s) for at least some user(s) of an assistant device. The personalized parameter(s) are utilized in determining whether condition(s) are satisfied, where those condition(s), if satisfied, indicate that the user is engaging in hot word free interaction with the automated assistant and result in adaptation of function(s) of the automated assistant.
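The gating logic described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the threshold names, default values, and the simple conjunction of conditions are all assumptions.

```python
# Hypothetical sketch: gate hot-word-free adaptation on gaze and active-speech
# measures compared against per-user (personalized) thresholds.

DEFAULT_THRESHOLDS = {"gaze": 0.8, "active_speech": 0.7}

def should_adapt(gaze_measure, speech_measure, user_thresholds=None):
    """Adapt assistant function(s) only when both measures clear the
    (possibly personalized) thresholds for this user."""
    t = {**DEFAULT_THRESHOLDS, **(user_thresholds or {})}
    return gaze_measure >= t["gaze"] and speech_measure >= t["active_speech"]

# A user prone to false positives could carry a stricter personalized gaze threshold.
print(should_adapt(0.85, 0.75))                 # default thresholds
print(should_adapt(0.85, 0.75, {"gaze": 0.9}))  # stricter personalized gaze
```

Raising a personalized threshold trades false positives for false negatives; lowering it does the reverse, which is the trade-off the abstract's personalization targets.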
PROVIDING AUDIO INFORMATION WITH A DIGITAL ASSISTANT
In an exemplary technique, speech input including one or more instructions is received. After the speech input has stopped, if it is determined that one or more visual characteristics indicate that further speech input is not expected, a response to the one or more instructions is provided. If it is determined that one or more visual characteristics indicate that further speech input is expected, a response to the one or more instructions is not provided.
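As a rough illustration of the flow above, the sketch below holds the response while visual cues suggest more speech is coming. The specific cues (`gaze_on_device`, `mouth_open`) are invented for illustration; the abstract does not enumerate which visual characteristics are used.

```python
# Hypothetical sketch: withhold the assistant's response while visual
# characteristics indicate that further speech input is expected.

def respond_when_done(instructions, visual_characteristics, generate_response):
    """Provide a response only once visual cues suggest the user is finished."""
    if expects_more_speech(visual_characteristics):
        return None  # hold the response; the user appears to be mid-request
    return generate_response(instructions)

def expects_more_speech(characteristics):
    # Illustrative cues only; real systems would use richer visual features.
    return bool(characteristics.get("gaze_on_device")
                and characteristics.get("mouth_open"))

print(respond_when_done(["set a timer"],
                        {"gaze_on_device": True, "mouth_open": False},
                        lambda ins: "Timer set."))
```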
Speech-to-text transcription with multiple languages
One embodiment provides a method that includes obtaining a default language corpus. A second language corpus is obtained based on a second language preference. A first transcription of an utterance is received using the default language corpus and natural language processing (NLP). At least one problem word in the first transcription is determined based on an associated grammatical relevance to neighboring words in the first transcription. Upon determining that a first probability score is below a first threshold, an acoustic lookup is performed for an audible match for the problem word in the first transcription based on an associated acoustical relevance. Upon determining that a second probability score is below a second threshold, it is determined whether a match for the problem word exists in the second language corpus. Upon determining that the match exists in the second language corpus, a second transcription for the utterance is provided.
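The two-threshold fallback chain can be sketched in miniature. Everything here is an assumption for illustration: the corpora are plain dictionaries, the scores are supplied directly rather than computed, and the example words are invented.

```python
# Hypothetical sketch: keep the first transcription when grammar fits; otherwise
# try an acoustic match in the default corpus; otherwise fall back to the
# second-language corpus.

def resolve_problem_word(word, grammar_score, default_corpus, second_corpus,
                         first_threshold=0.6, second_threshold=0.6):
    """Resolve a word flagged as a 'problem word' by low grammatical relevance."""
    if grammar_score >= first_threshold:
        return word  # fits its neighbours; keep the first transcription
    # Acoustic lookup in the default corpus (match, score) -- illustrative values.
    acoustic_match, acoustic_score = default_corpus.get(word, (None, 0.0))
    if acoustic_score >= second_threshold:
        return acoustic_match
    # Both scores low: consult the second-language corpus.
    return second_corpus.get(word, word)

default_corpus = {"grassy ass": ("glassy", 0.2)}  # poor acoustic match
second_corpus = {"grassy ass": "gracias"}         # Spanish match
print(resolve_problem_word("grassy ass", 0.1, default_corpus, second_corpus))
# prints "gracias"
```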
PERSONALIZED SPEECH QUERY ENDPOINTING BASED ON PRIOR INTERACTION(S)
A personalized endpointing measure can be used to determine whether a user has finished speaking a spoken utterance. Various implementations include using the personalized endpointing measure to determine whether a candidate endpoint indicates a user has finished speaking the spoken utterance or whether the user has paused and has not finished speaking the spoken utterance. Various implementations include determining the personalized endpointing measure based on a portion of a text representation of the spoken utterance immediately preceding the candidate endpoint and a user-specific measure. Additionally or alternatively, the user-specific measure can be based on the text representation immediately preceding the candidate endpoint and one or more historical interactions with the user. In various implementations, each of the historical interactions are specific to the text representation and the user, and indicate whether a previous instance of the text representation was a previous endpoint for the user.
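A toy version of this personalization might blend a generic endpoint score with a user-specific measure computed from historical outcomes for the same preceding text. The equal weighting and the 0.5 threshold are assumptions, not values from the patent.

```python
# Hypothetical sketch: a personalized endpointing measure. `history` maps the
# text immediately preceding a candidate endpoint to past outcomes (True if a
# previous instance of that text was an endpoint for this user).

def is_endpoint(generic_score, preceding_text, history, threshold=0.5):
    outcomes = history.get(preceding_text, [])
    if outcomes:
        user_measure = sum(outcomes) / len(outcomes)  # fraction of past endpoints
        score = 0.5 * generic_score + 0.5 * user_measure
    else:
        score = generic_score  # no user-specific history; generic measure only
    return score >= threshold

history = {"call mom": [True, True, True],      # user always stops here
           "set a timer for": [False, False]}   # user always keeps speaking
print(is_endpoint(0.4, "call mom", history))         # history boosts the score
print(is_endpoint(0.4, "set a timer for", history))  # history suppresses it
```

The point of the blend is visible in the example: an identical generic score of 0.4 yields opposite decisions depending on this user's history for that exact preceding text.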
Voice-Based Menu Personalization
A natural-language voice chatbot engages a consumer in a voice dialogue. The chatbot is customized for engaging the specific consumer based on features and characteristics of that specific consumer’s speech and a lexicon associated with terms, words, and commands for item ordering. The consumer can perform voice queries for specific items and/or specific establishments for placing a pre-staged order with the chatbot. Once the consumer selects options with a specific establishment, a pre-staged order is provided to the corresponding establishment on behalf of the consumer. Location data for a consumer-operated device is monitored, and when it is determined that the consumer will arrive at the establishment within a time period required by the establishment to prepare the pre-staged order, a message is sent to the establishment to begin preparing the pre-staged order.
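The arrival-time trigger reduces to a simple ETA check. This sketch assumes straight-line travel at a constant speed; a real system would use routing data, and the function name and parameters are invented.

```python
# Hypothetical sketch: fire the "begin preparing" message once the consumer's
# estimated arrival time falls within the establishment's required prep time.

def should_start_preparing(distance_km, speed_kmh, prep_minutes):
    if speed_kmh <= 0:
        return False  # device not moving toward the establishment
    eta_minutes = (distance_km / speed_kmh) * 60
    return eta_minutes <= prep_minutes

print(should_start_preparing(5.0, 30.0, 15))   # ETA 10 min <= 15 min prep
print(should_start_preparing(20.0, 30.0, 15))  # ETA 40 min, too early
```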
Electronic device and controlling the electronic device
An electronic device and a method for controlling the same are provided. The electronic device includes a communicator comprising circuitry, a microphone, at least one memory configured to store at least one instruction and dialogue history information, and a processor configured to execute the at least one instruction. By executing the at least one instruction, the processor is further configured to: determine whether to transmit a user speech, input through the microphone, to a server storing a first dialogue system; based on determining that the user speech is to be transmitted to the server, control the communicator to transmit the user speech and at least a part of the stored dialogue history information to the server; receive, from the server, dialogue history information associated with the user speech through the communicator; and control the received dialogue history information to be stored in the memory.
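The transmit-and-store loop might look like the following. The routing criterion, the mock server, and the five-turn history slice are all stand-ins; the abstract leaves these details to the claimed processor.

```python
# Hypothetical sketch: decide whether to send speech to the server-hosted
# dialogue system, transmit recent history with it, and store what comes back.

def needs_server(user_speech):
    # Illustrative criterion only; the real decision logic is not specified.
    return "weather" in user_speech

def mock_server(user_speech, history_slice):
    # Stand-in for the first dialogue system stored on the server.
    return {"speech": user_speech, "prior_turns": len(history_slice)}

def handle_speech(user_speech, history, max_turns=5):
    if not needs_server(user_speech):
        return None  # handled by an on-device dialogue system instead
    entry = mock_server(user_speech, history[-max_turns:])  # speech + partial history
    history.append(entry)  # store the returned dialogue history in memory
    return entry

history = []
print(handle_speech("what's the weather", history))
```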
DEVICE-DIRECTED UTTERANCE DETECTION
A speech interface device is configured to detect an interrupt event and process a voice command without detecting a wakeword. The device includes on-device interrupt architecture configured to detect when device-directed speech is present and send audio data to a remote system for speech processing. This architecture includes an interrupt detector that detects an interrupt event (e.g., device-directed speech) with low latency, enabling the device to quickly lower a volume of output audio and/or perform other actions in response to a potential voice command. In addition, the architecture includes a device directed classifier that processes an entire utterance and corresponding semantic information and detects device-directed speech with high accuracy. Using the device directed classifier, the device may reject the interrupt event and increase a volume of the output audio or may accept the interrupt event, causing the output audio to end and performing speech processing on the audio data.
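The two-stage accept/reject flow can be sketched with toy scores in place of the detector and classifier models. The thresholds, the volume-halving duck, and the class shape are assumptions for illustration.

```python
# Hypothetical sketch: a fast interrupt detector ducks the output volume with
# low latency; a slower device-directed classifier then accepts or rejects.

class Playback:
    def __init__(self, volume=10):
        self.volume = volume
        self.playing = True

def handle_utterance(playback, interrupt_score, directed_score,
                     interrupt_threshold=0.5, directed_threshold=0.8):
    if interrupt_score < interrupt_threshold:
        return "ignored"
    playback.volume //= 2                 # quick duck on a potential voice command
    if directed_score >= directed_threshold:
        playback.playing = False          # accept: end output, process the speech
        return "accepted"
    playback.volume *= 2                  # reject: restore the output volume
    return "rejected"

p = Playback()
print(handle_utterance(p, 0.9, 0.3), p.volume)  # detector fired, classifier rejected
```

The asymmetry mirrors the abstract: the cheap detector acts first to keep latency low, and the accurate classifier can undo that action when the full utterance turns out not to be device-directed.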
Hotword-based speaker recognition
Systems, methods performed by data processing apparatus and computer storage media encoded with computer programs for receiving an utterance from a user in a multi-user environment, each user having an associated set of available resources, determining that the received utterance includes at least one predetermined word, comparing speaker identification features of the uttered predetermined word with speaker identification features of each of a plurality of previous utterances of the predetermined word, the plurality of previous predetermined word utterances corresponding to different known users in the multi-user environment, attempting to identify the user associated with the uttered predetermined word as matching one of the known users in the multi-user environment, and based on a result of the attempt to identify, selectively providing the user with access to one or more resources associated with a corresponding known user.
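A minimal version of the comparison step might score the uttered hotword's features against each enrolled user's prior hotword features and grant access only above a confidence threshold. Cosine similarity, the 0.9 threshold, and the tiny feature vectors are assumptions; real speaker-identification features would be model-derived embeddings.

```python
# Hypothetical sketch: identify the hotword speaker by comparing feature
# vectors against prior hotword utterances from known users.

import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def identify_speaker(features, enrolled, threshold=0.9):
    """enrolled maps each known user to features from prior hotword utterances."""
    best_user, best_score = None, 0.0
    for user, ref in enrolled.items():
        score = cosine(features, ref)
        if score > best_score:
            best_user, best_score = user, score
    # None means no match: no user-specific resources are unlocked.
    return best_user if best_score >= threshold else None

enrolled = {"alice": [1.0, 0.1, 0.0], "bob": [0.0, 1.0, 0.2]}
print(identify_speaker([0.95, 0.12, 0.01], enrolled))  # prints "alice"
```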
Techniques for dialog processing using contextual data
Techniques are described for using data stored for a user in association with context levels to improve the efficiency and accuracy of dialog processing tasks. A dialog system stores historical dialog data in association with a plurality of configured context levels. The dialog system receives an utterance and identifies a term for disambiguation from the utterance. Based on a determined context level, the dialog system identifies relevant historical data stored to a database. The historical data may be used to perform tasks such as resolving an ambiguity based on user preferences, disambiguating named entities based on a prior dialog, and identifying previously generated answers to queries. Based on the context level, the dialog system can efficiently identify the relevant information and use the identified information to provide a response.
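The efficiency claim comes from scoping the lookup to one context level rather than scanning all stored history. This sketch uses plain dictionaries and invented level names ("preferences", "prior_dialog", "answers") purely to illustrate that scoping.

```python
# Hypothetical sketch: historical dialogue data stored under context levels,
# searched only at the level determined for the current disambiguation task.

history = {
    "preferences": {"coffee": "oat-milk latte"},      # user preference resolution
    "prior_dialog": {"him": "Dr. Smith"},             # named-entity disambiguation
    "answers": {"capital of france": "Paris"},        # previously generated answers
}

def resolve(term, context_level):
    """Look up a term only within the determined context level."""
    return history.get(context_level, {}).get(term.lower())

print(resolve("coffee", "preferences"))  # prints "oat-milk latte"
print(resolve("him", "prior_dialog"))    # prints "Dr. Smith"
```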