Patent classifications
H04M2250/74
SELECTIVELY RENDERING A KEYBOARD INTERFACE IN RESPONSE TO AN ASSISTANT INVOCATION IN CERTAIN CIRCUMSTANCES
Implementations set forth herein relate to an automated assistant that can adapt to circumstances in which a user invokes the automated assistant with the intention of interacting with it via a non-default interface. For example, in some instances, a user may invoke the automated assistant by selecting a selectable GUI element. In response, the automated assistant can determine that, in the current context, spoken utterances may not be suitable for providing to the automated assistant. Based on this determination, the automated assistant can cause a keyboard interface to be rendered and/or initialized for receiving typed inputs from the user. Should the user subsequently change contexts, the automated assistant can determine that voice input is now suitable and can initialize an audio interface in response to the user providing an invocation input in the subsequent context.
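The context-dependent interface selection described above can be sketched as follows. This is a minimal illustration, assuming hypothetical context signals (ambient noise level, a calendar-derived meeting flag) and an arbitrary noise threshold; none of these names or values come from the patent itself.

```python
from dataclasses import dataclass

@dataclass
class InvocationContext:
    """Hypothetical signals an assistant might inspect at invocation time."""
    ambient_noise_db: float   # measured background noise level
    in_meeting: bool          # e.g. inferred from the user's calendar

def select_input_interface(ctx: InvocationContext,
                           noise_threshold_db: float = 65.0) -> str:
    """Pick a non-default interface when speech seems unsuitable."""
    if ctx.in_meeting or ctx.ambient_noise_db >= noise_threshold_db:
        return "keyboard"   # render/initialize the keyboard interface
    return "audio"          # default: initialize the audio interface
```

On a later invocation in a quieter context, the same check selects the audio interface again, matching the abstract's description of re-evaluating suitability per invocation.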
SYSTEM AND METHOD FOR HANDS-FREE MULTI-LINGUAL ONLINE COMMUNICATION
According to various embodiments, a method for hands-free multi-lingual online communication between a first mobile device and a second mobile device is disclosed. The method comprises receiving, at the first mobile device, a text input message in a second language from the second mobile device associated with a second user. Further, the method comprises determining whether a preferred language selection for communicating with the second user on the first mobile device is associated with a first language different from the second language. Further, the method comprises translating the received text input message into the first language for a first user of the first mobile device. Furthermore, the method comprises displaying the translated text input message in the first language on the first mobile device.
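The receive/determine/translate/display flow above might look like the sketch below. The function names, the per-sender preference map, and the stub translator are all assumptions for illustration; a real implementation would call an actual translation service.

```python
def translate_stub(text: str, source: str, target: str) -> str:
    # Placeholder for a real translation-service call.
    return f"[{source}->{target}] {text}"

def handle_incoming_message(text: str, message_language: str,
                            sender_id: str, preferred_languages: dict,
                            translate=translate_stub) -> str:
    """Display the message in the recipient's preferred language for this
    sender, translating only when that language differs from the message's."""
    target = preferred_languages.get(sender_id, message_language)
    if target != message_language:
        return translate(text, message_language, target)
    return text  # same language: display as received
```

For example, with `preferred_languages = {"alice": "en"}`, a Spanish message from `"alice"` is routed through the translator, while an English one is displayed unchanged.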
Systems and methods for optimizing voice verification from multiple sources against a common voiceprint
Systems and methods for authenticating a user using a voice activated device. The method includes receiving first data representing a user identifier corresponding to a user and second data representing a device identifier corresponding to the voice activated device. The method further includes determining user metadata corresponding to the user identifier and a device audio type corresponding to the device identifier. The method also includes calculating a risk score based on the user metadata. The method further includes calculating a length of a spoken voice utterance based on the calculated risk score. The method also includes receiving and processing third data representing a spoken voice utterance, having the calculated length, provided by the user via the voice activated device. The method further includes validating the user in response to determining that the processed third data substantially matches a voiceprint associated with the user.
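The risk-to-length mapping is the distinctive step here: a riskier authentication attempt demands a longer spoken challenge, giving the verifier more audio to match against the stored voiceprint. The abstract does not specify the mapping, so the linear interpolation and the bounds below are purely illustrative assumptions.

```python
def required_utterance_length(risk_score: float,
                              min_seconds: float = 2.0,
                              max_seconds: float = 8.0) -> float:
    """Scale the spoken-challenge length with risk (assumed linear mapping):
    risk 0.0 yields the minimum length, risk 1.0 the maximum."""
    risk = min(max(risk_score, 0.0), 1.0)  # clamp to [0, 1]
    return min_seconds + risk * (max_seconds - min_seconds)
```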
Portable terminal device and information processing system
In an information processing system and method, a portable terminal device includes a camera and a microphone. Data of obtained images and voice are transmitted to a server that identifies operations to be executed based on the received voice and image data. The server transmits an identification of one or more results of the plurality of operations to the portable terminal device. When the portable terminal device receives only one result from the server, an operation corresponding to the one result is executed, and when a plurality of results is received, the portable terminal device displays information corresponding to the plurality of results as candidates. Additional voice is captured for selecting one of the plurality of results during the displaying of the information. A determination of one result from the plurality of results is made based on the captured voice, and an operation corresponding to the determined result is executed.
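The one-result/many-results branch described above can be sketched as follows. Matching the additional spoken input by substring is an assumed stand-in for real speech recognition, and the function name is hypothetical.

```python
from typing import List, Optional

def resolve_result(results: List[str],
                   spoken_selection: Optional[str] = None) -> Optional[str]:
    """One result from the server: execute it directly. Several results:
    display them as candidates and match a further voice input against them."""
    if len(results) == 1:
        return results[0]
    if spoken_selection is None:
        return None  # candidates stay displayed until a selecting utterance arrives
    for candidate in results:
        if spoken_selection.lower() in candidate.lower():
            return candidate
    return None  # no candidate matched the captured voice
```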
ELECTRONIC DEVICE AND METHOD FOR SHARING EXECUTION INFORMATION ON USER INPUT HAVING CONTINUITY
An electronic device and a method for sharing execution information on a user input having continuity are provided. The electronic device includes a processor configured to recognize a user intent by analyzing the user input, execute a function and/or action corresponding to the user intent, provide an execution result through the display, and store the user input in the memory. In response to a sharing request for the user input, the processor classifies a type of the user input requested to be shared. If the classification result shows that the user input requested to be shared is related to at least one previous user input pre-stored in the memory, the processor generates execution information based on the at least one previous user input and the user input requested to be shared, and transmits the generated execution information to another electronic device through the communication module. Various other embodiments are also possible.
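The classification step can be sketched as below: if the input requested for sharing continues an earlier stored input, the related history is bundled so the receiving device can reproduce the full execution. The `conversation_id` field and the payload shape are hypothetical assumptions, not details from the abstract.

```python
def build_share_payload(stored_inputs: list, requested: dict) -> dict:
    """Classify a sharing request: bundle pre-stored inputs from the same
    conversation when the requested input continues them."""
    related = [entry for entry in stored_inputs
               if entry["conversation_id"] == requested["conversation_id"]
               and entry is not requested]
    if related:
        return {"type": "continuation", "inputs": related + [requested]}
    return {"type": "standalone", "inputs": [requested]}
```

For example, sharing "only ones open now" after a stored "find cafes nearby" in the same conversation yields a continuation payload carrying both inputs.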
Joining users to communications via voice commands
Techniques for joining a device of a third user to a communication between a device of a first user and a device of a second user are described herein. For instance, two or more users may utilize respective computing devices to engage in a telephone call, a video call, an instant-messaging session, or any other type of communication in which the users communicate with each other audibly and/or visually. In some instances, a first user of the two users may issue a voice command requesting to join a device of a third user to the communication. One or more computing devices may recognize this voice command and may attempt to join a device of a third user to the communication.
VOICE APPLICATION NETWORK PLATFORM
A distributed voice applications system includes a voice applications rendering agent and at least one voice applications agent that is configured to provide voice applications to an individual user. A management system may control and direct the voice applications rendering agent to create voice applications that are personalized for individual users based on user characteristics, information about the environment in which the voice applications will be performed, prior user interactions and other information. The voice applications agent and components of customized voice applications may be resident on a local user device which includes a voice browser and speech recognition capabilities. The local device, voice applications rendering agent and management system may be interconnected via a communications network.
Systems and methods for computerized interactive skill training
The present invention is directed to interactive training, and in particular, to methods and systems for computerized interactive skill training. An example embodiment provides a method and system for providing skill training using a computerized system. The computerized system receives a selection of a first training subject. Several related training components can be invoked, such as reading, watching, performing, and/or reviewing components. In addition, a scored challenge session is provided, wherein a training challenge is provided to a user via a terminal, optionally in video form.
COMMUNICATION TERMINAL, INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING SYSTEM, INFORMATION PROCESSING METHOD, AND PROGRAM
In an information processing system, a storage unit stores data related to a voice of a user, for each item of user identification information that identifies the user, during transmission or reception of a call. An acquisition unit acquires the user identification information on a calling side and data related to the voice of the user on the calling side after a call is started. A determination unit determines, during the call, whether the data related to the voice of the user on the calling side and the data related to the voice stored by the storage unit in association with the user identification information acquired by the acquisition unit are based on the voice of the same person.
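The abstract does not specify how the same-person determination is made; a common concrete mechanism, assumed here purely for illustration, is to compare voice embeddings with cosine similarity against a fixed threshold.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_same_speaker(stored_embedding, live_embedding, threshold=0.8):
    """Treat above-threshold similarity between the stored per-user voice
    data and the in-call voice as the same person (threshold is assumed)."""
    return cosine_similarity(stored_embedding, live_embedding) >= threshold
```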