Patent classifications
H04M3/42204
REUSABLE MULTIMODAL APPLICATION
A method and system are disclosed herein for accepting multimodal inputs and deriving synchronized and processed information. A reusable multimodal application is provided on the mobile device. A user transmits a multimodal command to the multimodal platform via the mobile network. The one or more input modes of communication are transmitted to the multimodal platform(s) via the mobile network(s) and thereafter synchronized and processed at the multimodal platform. The synchronized and processed information is transmitted back to the multimodal application. If required, the user verifies and appropriately modifies this information. The verified and modified information is then transferred from the multimodal application to the visual application. The final result(s) are derived by inputting the verified and modified results into the visual application.
Information processing apparatus, information processing method, and computer program
Provided is an information processing apparatus capable of reliably delivering a message to a third party desired by a user. The information processing apparatus includes an acquisition unit configured to acquire information including a sound message, and a recognition unit configured to recognize, from the information acquired by the acquisition unit, a sender of the sound message, a destination of a message included in the sound message, and content of the message. The recognition unit generates information for inputting the destination of the message in a case where the destination cannot be uniquely specified.
VIRTUAL PRIVATE AGENT FOR MACHINE-BASED INTERACTIONS WITH A CONTACT CENTER
Contact centers often utilize automated agents to converse with customers of the contact center. As provided herein, a user may utilize a virtual private agent to converse with a contact center. A user device is configured to converse with human and/or automated agents to exchange information on behalf of the user. The user may issue tasks to the user device, which may gather any required additional information and initiate a call. The call comprises a number of prompts, which are then analyzed and responded to in a manner determined to perform the task. If the remote system is discovered to also be automated, the speech-based communications used for human agents may be discontinued and non-speech tones used instead for more efficient machine-to-machine communications.
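The speech-versus-tone fallback described in this abstract can be sketched as a simple channel-selection step: once the remote endpoint is detected to be automated, the agent switches from synthesized speech to DTMF-style tone symbols. All names below (`respond`, `encode_as_dtmf`, the payload format) are illustrative assumptions, not terms from the patent.

```python
def encode_as_dtmf(data: str) -> str:
    """Keep only symbols that can be sent as DTMF tones (digits, * and #)."""
    allowed = set("0123456789*#")
    return "".join(ch for ch in data if ch in allowed)

def respond(remote_is_automated: bool, payload: str):
    """Choose the channel: synthesized speech for humans, tones for machines.

    Returns a (channel, message) pair.
    """
    if remote_is_automated:
        # Machine-to-machine: drop speech and emit the payload as tone symbols.
        return ("dtmf", encode_as_dtmf(payload))
    # Human agent: wrap the payload in a spoken sentence.
    return ("speech", f"My account number is {payload}.")
```

The key design point is that the same task payload drives both channels; only the encoding changes when automation is detected mid-call.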
Battery case power system
A battery case is operable with an electronic device such as a cellular telephone. The battery case has a battery that can be used to supply power to the electronic device. The battery case is configured to receive power from a power supply that is coupled to a mains power supply using a wired path or to receive power from a wireless charging mat or other wireless power transmitting device. Circuitry in the battery case may include direct-current-to-direct-current power converter circuitry, current sensor circuitry, switching circuitry, and other circuitry for controlling currents and voltages in the battery case and communicating with other electronic devices.
AUTOMATIC NAVIGATION OF AN INTERACTIVE VOICE RESPONSE (IVR) TREE ON BEHALF OF HUMAN USER(S)
Implementations are directed to utilizing an assistant to automatically navigate an interactive voice response (IVR) tree to arrive at a target state during an assisted telephone call. The assistant can receive input to initiate the assisted telephone call, identify an entity to engage with, on behalf of the user, and during the assisted telephone call, based on the input, and identify an IVR tree stored in association with the entity. In various implementations, navigation of the IVR tree can be modified based on interaction(s) detected at a client device subsequent to initiating the assisted telephone call. In various implementations, the assisted telephone call can be initiated from a search interface, and the target state can be associated with a given search result. In various implementations, the IVR tree can be dynamic in that only a subset of candidate state(s) of the IVR tree may be available as the target state.
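An IVR tree of the kind described above can be modeled as states with digit-labeled transitions, where navigating to a target state amounts to finding the digit sequence that reaches it. The sketch below uses breadth-first search over a hypothetical tree; the state names and structure are illustrative assumptions, not taken from the patent.

```python
from collections import deque

# Illustrative IVR tree: each state maps a DTMF digit to a child state.
IVR_TREE = {
    "root":    {"1": "billing", "2": "support"},
    "billing": {"1": "balance", "2": "agent"},
    "support": {"1": "outage",  "2": "agent"},
}

def navigate(tree, start, target):
    """Breadth-first search for the shortest digit sequence reaching `target`.

    Returns None when the target is unreachable, e.g. when a dynamic tree
    only exposes a subset of candidate states.
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, digits = queue.popleft()
        if state == target:
            return digits
        for digit, child in tree.get(state, {}).items():
            if child not in seen:
                seen.add(child)
                queue.append((child, digits + [digit]))
    return None
```

A stored tree like this lets the assistant plan the full key-press sequence before or during the call, and re-plan if a detected interaction changes the target state.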
Selective performance of automated telephone calls to reduce latency and/or duration of assistant interaction
Implementations are directed to using an assistant to initiate automated telephone calls with entities. Some implementations identify an item of interest, identify a group of entities associated with the item, and initiate the calls with the entities. During a given call with a given entity, the assistant can request a status update regarding the item, and determine a temporal delay before initiating another call with the given entity to request a further status update regarding the item based on information received responsive to the request. Other implementations receive a request to perform an action on behalf of a user, identify a group of entities that can perform the action, and initiate a given call with a given entity. During the given call, the assistant can initiate an additional call with an additional entity, and generate notification(s), for the user, based on result(s) of the given call and/or the additional call.
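The temporal-delay idea in this abstract can be illustrated as a mapping from the status reported during a call to the delay before the next status-update call. The status labels and delay values below are assumptions for the sketch, not the patent's actual policy.

```python
def next_call_delay_hours(status: str) -> float:
    """Return the delay, in hours, before re-calling for another status update.

    The less specific the reported estimate, the longer the back-off.
    """
    delays = {
        "in_stock": 0.0,            # item available; no further call needed
        "arriving_today": 4.0,      # check back within the day
        "arriving_this_week": 24.0, # check back daily
        "unknown": 72.0,            # no estimate given; back off
    }
    return delays.get(status, 72.0)
```

In practice such a table would be derived from the information received responsive to the request, but the core mechanism is the same: the response determines when the assistant calls the entity again.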
VOICE-CONTROLLED COMMUNICATION REQUESTS AND RESPONSES
Systems and methods for establishing communication connections using speech, such as establishing calls between speech-controlled devices, are described. A first speech-controlled device receives a communication request in the form of audio and sends audio data corresponding to the captured audio to a server. The server performs speech processing on the audio data to determine a recipient, a subject for the call, and a device associated with the recipient. The server then sends a message indicating the communication request, along with audio data corresponding to the call subject, to the recipient's speech-controlled device. The recipient device outputs audio asking whether the recipient accepts the communication request. The recipient audibly refuses or accepts the communication request, and the recipient's speech-controlled device sends an indication of the recipient's audible decision to the server. If the recipient accepted the communication request, the server causes a communication connection to be established between the two speech-controlled devices.
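The accept/refuse exchange described above can be sketched as a small server-side handler that interprets the recipient's spoken decision and either connects the two devices or reports a refusal. The dataclass and function names are illustrative assumptions, not the patent's terminology.

```python
from dataclasses import dataclass

@dataclass
class CallRequest:
    caller: str     # identifier of the requesting speech-controlled device
    recipient: str  # identifier of the recipient's device
    topic: str      # call subject extracted by speech processing

def handle_recipient_reply(request: CallRequest, reply_text: str) -> str:
    """Interpret the recipient's audible decision and report the outcome."""
    reply = reply_text.strip().lower()
    if reply in {"yes", "accept", "sure"}:
        # Accepted: the server would establish the connection here.
        return f"connect:{request.caller}<->{request.recipient}"
    return "declined"
```

A real system would interpret the reply via speech recognition rather than exact string matching, but the control flow (prompt, audible decision, conditional connection) is the same.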
Automatic upload of pictures from a camera
A system and method are disclosed for enabling user-friendly interaction with a camera system. Specifically, the inventive system and method have several aspects that improve interaction with a camera system, including voice recognition, gaze tracking, touch-sensitive inputs, and others. The voice recognition unit is operable for, among other things, receiving multiple different voice commands, recognizing the voice commands, associating the different voice commands with one camera command, and controlling at least some aspect of the digital camera operation in response to these voice commands. The gaze tracking unit is operable for, among other things, determining the location on the viewfinder image that the user is gazing upon. One aspect of the touch-sensitive inputs provides that the touch-sensitive pad is mouse-like and is operable for, among other things, receiving user touch inputs to control at least some aspect of the camera operation. Another aspect of the disclosed invention provides for gesture recognition to be used to interface with and control the camera system.
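The many-phrases-to-one-command association described for the voice recognition unit can be sketched as a simple alias table that normalizes several recognized phrases to a single camera command. The alias phrases and command names below are illustrative assumptions.

```python
# Several distinct voice phrases map to one camera command.
VOICE_ALIASES = {
    "take a picture": "shutter",
    "shoot": "shutter",
    "snap it": "shutter",
    "zoom in": "zoom_in",
    "closer": "zoom_in",
}

def resolve_command(utterance: str):
    """Map a recognized phrase to a single camera command, or None if unknown."""
    return VOICE_ALIASES.get(utterance.strip().lower())
```

This keeps the camera-control layer independent of phrasing: new synonyms are added to the table without touching the command handlers.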