Patent classifications
G10L2015/227
Personalized conversational recommendations by assistant systems
In one embodiment, a method includes receiving a user request from a client system associated with a user, generating a response to the user request which references one or more entities, generating a personalized recommendation based on the user request and the response, wherein the personalized recommendation references one or more of the entities of the response, and sending instructions for presenting the response and the personalized recommendation to the client system.
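The claimed flow (request → entity-bearing response → recommendation that reuses a response entity → instructions sent to the client) can be sketched as below. All function names and the toy entity lookup are illustrative assumptions, not taken from the patent; a real assistant would use NLU and a knowledge graph.

```python
# Illustrative sketch of the claimed flow; the entity lookup is a stub.

def generate_response(request: str) -> tuple[str, list[str]]:
    """Return response text plus the entities it references (stubbed)."""
    entities = ["Central Park"] if "park" in request.lower() else []
    text = f"The nearest park is {entities[0]}." if entities else "I could not find anything."
    return text, entities

def generate_recommendation(request: str, entities: list[str]) -> str:
    """Build a personalized recommendation that references a response entity."""
    if not entities:
        return ""
    return f"You might also enjoy a walking tour of {entities[0]}."

def handle_request(request: str) -> dict:
    """Bundle response and recommendation for presentation on the client."""
    response, entities = generate_response(request)
    return {"response": response,
            "recommendation": generate_recommendation(request, entities)}
```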
Method and system providing service based on user voice
A method for providing a service based on a user's voice includes the steps of: extracting a voice of a first user; generating text information or voice waveform information based on the voice of the first user; analyzing a disposition of the first user based on the text information and the voice waveform information; selecting a second user corresponding to the disposition of the first user based on the analysis result; providing the first user with a conversation connection service with the second user; acquiring information on a change in an emotional state of the first user based on conversation information between the first user and the second user; and re-selecting the second user corresponding to the disposition of the first user based on the acquired information on the change in the emotional state.
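The matching and re-selection logic can be sketched as follows; dispositions, candidate names, and the emotion-delta signal are toy stand-ins for the text/waveform analysis described above, not details from the patent.

```python
# Hedged sketch of disposition-based matching with emotion-driven re-selection.

def match_partner(disposition: str, candidates: dict) -> str:
    """Pick the candidate whose disposition best matches (exact match first)."""
    exact = [name for name, d in candidates.items() if d == disposition]
    return exact[0] if exact else next(iter(candidates))

def reselect_if_needed(current: str, emotion_delta: float,
                       disposition: str, candidates: dict) -> str:
    """Keep the current partner if the first user's emotional state improved;
    otherwise re-select among the remaining candidates."""
    if emotion_delta >= 0:
        return current
    remaining = {n: d for n, d in candidates.items() if n != current}
    return match_partner(disposition, remaining) if remaining else current
```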
Systems and methods for controlling a fuel pump
A method of controlling a fuel pump includes receiving a first set of data characterizing an audible activation word including a first voice pattern. Control of the fuel pump is authorized in response to the first voice pattern matching a stored voice pattern within a database. A second set of data characterizing an audible command word is received, where the audible command word includes a second voice pattern. The fuel pump is controlled based on the audible command word in response to the second voice pattern matching the stored voice pattern within the database.
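The two-stage gating described above (activation word authorizes, command word acts, both verified against the stored pattern) can be sketched as below. Voice "patterns" are stubbed as feature vectors compared by cosine similarity; real systems would use speaker-verification embeddings, and the command vocabulary here is an invented example.

```python
import math

def pattern_match(pattern, stored, threshold=0.9):
    """Cosine similarity as a stand-in for voice-pattern matching."""
    dot = sum(a * b for a, b in zip(pattern, stored))
    norm = math.hypot(*pattern) * math.hypot(*stored)
    return norm > 0 and dot / norm >= threshold

class FuelPumpController:
    """Two-stage control: an activation word authorizes, then command
    words drive the pump, each gated on the stored voice pattern."""

    def __init__(self, stored_pattern):
        self.stored = stored_pattern
        self.authorized = False
        self.state = "idle"

    def activation(self, voice_pattern) -> bool:
        # First set of data: audible activation word.
        self.authorized = pattern_match(voice_pattern, self.stored)
        return self.authorized

    def command(self, word: str, voice_pattern) -> str:
        # Second set of data: audible command word, independently verified.
        if self.authorized and pattern_match(voice_pattern, self.stored):
            self.state = {"start": "pumping", "stop": "idle"}.get(word, self.state)
        return self.state
```

Verifying the pattern on both the activation and the command means a bystander's voice cannot drive an already-authorized pump.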
Intelligent automated order-based customer dialogue system
Based on a detection that a customer has arrived at an enterprise location to pick up a previously-placed order, an intelligent automated customer dialogue system generates an interface via which an intelligent customer dialogue application dialogues with the customer. The application generates and initially offers, at the interface using natural language, content which is contextual to one or more items of the order, e.g., by using a specially trained intelligent dialogue machine learning model. The application may intelligently respond to the customer's natural language responses and/or requests to refine, augment, or redirect subsequently-offered content and/or dialogue, e.g., by using the model. Offered content (e.g., product information, services, coupons, suggestions, recommendations, etc.) generally provides value to the customer and maintains customer engagement. The system may be implemented at least partially by using a chatbot upon curbside pick-up, for example, as well as through other electronic customer-facing channels.
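A minimal sketch of generating the initial contextual offer from order items is shown below; the catalog of item pairings is an invented placeholder standing in for the trained dialogue model described above.

```python
# Invented pairing catalog standing in for the ML dialogue model.
PAIRINGS = {
    "coffee": "a pastry to go with your coffee",
    "sandwich": "a drink to go with your sandwich",
}

def initial_offer(order_items):
    """Offer content contextual to the first recognized order item."""
    for item in order_items:
        if item in PAIRINGS:
            return f"Welcome! Your order is ready. Would you like {PAIRINGS[item]}?"
    return "Welcome! Your order is ready."
```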
SYSTEM AND METHOD FOR SPEECH PROCESSING BASED ON RESPONSE CONTENT
A system for determining intent in a voice signal receives a first voice signal that indicates to perform a task. The system sends a first response that comprises a hyperlink associated with a particular webpage used to perform the task. The system receives a second voice signal that indicates whether to access the hyperlink. The system determines intent of the second voice signal by comparing keywords of the second voice signal with keywords of the first response. The system activates the hyperlink in response to determining that the keywords of the second voice signal correspond to the keywords of the first response.
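The keyword-comparison step can be sketched as a simple set-overlap heuristic, as below; the stopword list, threshold, and example phrases are illustrative assumptions, not from the patent.

```python
STOPWORDS = {"the", "a", "an", "to", "is", "yes", "please", "it", "that"}

def keywords(text: str) -> set:
    """Normalize an utterance to a set of content keywords."""
    return {w.strip(".,?!").lower() for w in text.split()} - STOPWORDS

def should_activate_link(first_response: str, second_utterance: str,
                         min_overlap: int = 1) -> bool:
    """Activate the hyperlink when keywords of the second voice signal
    correspond to keywords of the first response (overlap heuristic)."""
    return len(keywords(first_response) & keywords(second_utterance)) >= min_overlap
```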
Dynamically delaying execution of automated assistant actions and/or background application requests
Implementations set forth herein allow a user to access a first application in a foreground of a graphical interface, and simultaneously employ an automated assistant to respond to notifications arising from a second application. The user can provide an input, such as a spoken utterance, while viewing the first application in the foreground in order to respond to notifications from the second application without performing certain intervening steps that can arise under certain circumstances. Such intervening steps can include providing a user confirmation, which can be bypassed, and/or time-limited according to a timer, which can be displayed in response to the user providing a responsive input directed at the notification. A period for the timer can be set according to one or more characteristics that are associated with the notification, the user, and/or any other information that can be associated with the user receiving the notification.
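Setting the timer period from notification characteristics, and bypassing the confirmation step when the user explicitly confirms, can be sketched as below; the notification types, priority rule, and durations are illustrative assumptions only.

```python
# Hedged sketch: confirmation-timer period chosen from notification
# characteristics; values are invented for the example.

def timer_period(notification: dict) -> float:
    base = {"chat_message": 5.0, "calendar_reminder": 3.0}.get(
        notification.get("type"), 10.0)
    if notification.get("priority") == "high":
        base /= 2  # urgent notifications get a shorter window to intervene
    return base

def respond_to_notification(notification: dict, confirmed: bool) -> str:
    """Execute immediately on explicit confirmation; otherwise delay
    execution until the timer elapses (bypassing a separate confirm step)."""
    if confirmed:
        return "execute_now"
    return f"execute_after_{timer_period(notification)}s"
```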
Electronic apparatus, method for controlling mobile apparatus by electronic apparatus and computer readable recording medium
An electronic apparatus is provided. The electronic apparatus includes a voice receiver, a communication interface, and a processor configured to, based on a user voice being obtained through the voice receiver, identify a mobile apparatus having a user account corresponding to the user voice from among at least one mobile apparatus communicably connected to the electronic apparatus through the communication interface, and transmit a control signal corresponding to the user voice to the identified mobile apparatus through the communication interface.
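The routing logic (voice → matching user account → control signal to that account's device) can be sketched as below. Speaker identification is stubbed as an exact voiceprint-label match; class and attribute names are assumptions for the example.

```python
# Sketch of the hub: find the connected mobile apparatus whose account
# matches the speaker, then forward the control signal to it.

class VoiceHub:
    def __init__(self):
        self.account_to_device = {}   # account id -> device id
        self.voiceprints = {}         # account id -> enrolled voiceprint

    def connect(self, account: str, device: str, voiceprint: str):
        self.account_to_device[account] = device
        self.voiceprints[account] = voiceprint

    def route(self, voiceprint: str, control: str):
        """Return (device, control) for the matching account, or None."""
        for account, enrolled in self.voiceprints.items():
            if enrolled == voiceprint:  # stand-in for speaker verification
                return self.account_to_device[account], control
        return None
```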
Information processing apparatus and non-transitory computer readable medium storing program
An information processing apparatus includes a processor configured to acquire a voice of a user, authenticate the user by using the voice, recognize the voice, and display, on a display unit, operation screens that differ depending on the authentication result and the voice recognition result and that are used for an operation of executing processing.
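The two-result screen dispatch can be sketched in a few lines; the screen names and recognized commands are illustrative assumptions.

```python
# Screen selection keyed on (authentication result, recognition result).

def select_screen(authenticated: bool, recognized_command: str) -> str:
    if not authenticated:
        return "guest_screen"  # unauthenticated users get a restricted screen
    return {
        "print": "print_screen",
        "scan": "scan_screen",
    }.get(recognized_command, "home_screen")
```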
ON-DEVICE GENERATION AND PERSONALIZATION OF ZERO-PREFIX SUGGESTION(S) AND USE THEREOF
Implementations described herein relate to generating, locally at a client device, corresponding subset(s) of zero-prefix suggestions, for a user of the client device, and for suggestion state(s) associated with the client device, and subsequently causing the client device to utilize the corresponding subset(s) of zero-prefix suggestions. The suggestion state(s) and a superset of candidate zero-prefix suggestions can be processed, using machine learning model(s), to generate a corresponding score for each of the candidate zero-prefix suggestions and with respect to the suggestion state(s). Further, zero-prefix suggestions can be selected for inclusion in the corresponding subset(s) of zero-prefix suggestions, and for the suggestion state(s), based on the corresponding scores. Accordingly, when a given suggestion state is subsequently detected at the client device, a given corresponding subset of zero-prefix suggestions that is stored in association with the given suggestion state can be obtained and provided for presentation to the user.
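The pipeline above (score candidates per state, cache the top-k subset per state, serve the cached subset when the state is detected) can be sketched as below; word overlap stands in for the on-device machine learning model, and all suggestion/state strings are invented examples.

```python
# Hedged sketch of per-state zero-prefix suggestion subsets.

def score(candidate: str, state: str) -> int:
    """Word overlap as a stand-in for the on-device ML model's score."""
    return len(set(candidate.lower().split()) & set(state.lower().split()))

def build_subsets(candidates, states, k=2):
    """Precompute the top-k zero-prefix suggestions for each suggestion state."""
    return {
        state: sorted(candidates, key=lambda c: score(c, state), reverse=True)[:k]
        for state in states
    }

def on_state_detected(state, cached_subsets):
    """Retrieve the stored subset for the detected suggestion state."""
    return cached_subsets.get(state, [])
```

Precomputing and caching the subsets keeps the state-detection path cheap: at detection time only a dictionary lookup runs, not the model.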
EARLY INVOCATION FOR CONTEXTUAL DATA PROCESSING
A speech processing system uses contextual data to determine the specific domains, subdomains, and applications appropriate for taking action in response to spoken commands and other utterances. The system can use signals and other contextual data associated with an utterance, such as location signals, content catalog data, data regarding historical usage patterns, data regarding content visually presented on a display screen of a computing device when an utterance was made, other data, or some combination thereof.
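Combining lexical evidence from the utterance with contextual signals can be sketched as a simple additive scorer, as below; the domains, keyword lists, and boost values are assumptions for the example, not details from the patent.

```python
# Illustrative contextual domain selection: utterance keywords plus
# boosts from contextual signals (on-screen content, location).

def select_domain(utterance: str, context: dict) -> str:
    words = set(utterance.lower().split())
    scores = {"music": 0.0, "shopping": 0.0, "navigation": 0.0}
    if words & {"play", "song", "album"}:
        scores["music"] += 2
    if words & {"buy", "order", "cart"}:
        scores["shopping"] += 2
    if words & {"navigate", "directions", "route"}:
        scores["navigation"] += 2
    # contextual boosts resolve otherwise ambiguous utterances
    if context.get("on_screen") == "music_app":
        scores["music"] += 1
    if context.get("location") == "in_car":
        scores["navigation"] += 1
    return max(scores, key=scores.get)
```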