Patent classifications
H04M2250/74
SYSTEMS AND METHODS FOR MULTI-AGENT CONVERSATIONS
A first input is received from a user input device. Based on the first input, a list of candidate intents is generated, and a plurality of agents is initialized. Each agent of the plurality of agents corresponds to a respective candidate intent. Each agent then provides a different response to the first input in accordance with its respective corresponding intent. A second input is then received that responds to one or more of the agents. Based on the agents to which the second input is responsive, the list of candidate intents is refined and, based on the refined list, one or more agents are deactivated.
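The intent-refinement loop described in this abstract can be sketched in ordinary code. The following is an illustrative sketch only, not the patented implementation; the class and function names, the example intents, and the shape of the "responsive intents" set are all assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    intent: str
    active: bool = True

    def respond(self, user_input: str) -> str:
        # Each agent answers the same input from the perspective of
        # its own candidate intent.
        return f"[{self.intent}] response to: {user_input}"

def initialize_agents(candidate_intents):
    # One agent is initialized per candidate intent.
    return [Agent(intent) for intent in candidate_intents]

def refine(agents, responsive_intents):
    # Deactivate agents whose intent the second input did not engage;
    # return the refined list of candidate intents.
    for agent in agents:
        if agent.intent not in responsive_intents:
            agent.active = False
    return [a.intent for a in agents if a.active]

agents = initialize_agents(["play_music", "set_alarm", "send_message"])
responses = [a.respond("remind me at 8") for a in agents]
# The user's second input engages only the alarm agent.
remaining = refine(agents, {"set_alarm"})
```

After refinement, only the agent whose intent the user responded to remains active; the others are deactivated, matching the flow the abstract describes.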
Electronic device and method of executing function of electronic device
An artificial intelligence system and method are disclosed herein. The system includes a processor which implements the method, including: receiving by an input unit a first user input including a request to execute a task using at least one of the electronic device or an external device, transmitting by a wireless communication unit first data associated with the first user input to an external server, receiving a first response from the external server including information associated with at least one of the first user input and a sequence of electronic device states for performing at least a portion of the task, receiving a second user input assigning at least one of a voice command and a touch operation received by a touch screen display as the request to perform the task, and transmitting second data associated with the second user input to the external server.
Electronic apparatus and control method thereof
Disclosed is an electronic apparatus. The electronic apparatus includes a first communicator, a second communicator, and a processor configured to determine, based on input speech, whether or not an external electronic apparatus outputting the input speech is connectable to a network reachable through the first communicator, and to transmit a signal for controlling the external electronic apparatus to the external electronic apparatus through the first communicator or the second communicator, depending on whether or not the external electronic apparatus is connectable to the network.
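The routing decision in this abstract reduces to a single branch. The sketch below is purely illustrative (the function name and the boolean input are assumptions): if the external apparatus can join the network behind the first communicator, the control signal goes that way; otherwise it falls back to the second communicator.

```python
def select_communicator(apparatus_network_connectable: bool) -> str:
    # Route the control signal through the first communicator when the
    # external apparatus is connectable to its network, else the second.
    if apparatus_network_connectable:
        return "first_communicator"
    return "second_communicator"
```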
WEARABLE HEADSET WITH SELF-CONTAINED VOCAL FEEDBACK AND VOCAL COMMAND
A headset includes a wearable body, first and second earphones extending from the wearable body, controls for controlling an external communication/multimedia device wirelessly, a microphone for picking up vocal data from a user of the headset system and a signal processing unit. The signal processing unit includes circuitry for processing the vocal data into a distinctly audible vocal feedback signal, circuitry for enhancing the vocal feedback signal thereby producing an enhanced vocal feedback signal and circuitry for mixing the enhanced vocal feedback signal with audio signals originating from the external communication/multimedia device, thereby producing a mixed output signal and then sending the mixed output signal to the user via the earphones. The external communication/multimedia device comprises a vocal command application and the headset further comprises a vocal command control for sending vocal commands to the external communication/multimedia device and to the vocal command application.
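The enhance-then-mix signal chain above can be illustrated with a minimal sketch. The gains, the clipping behavior, and the sample values are assumptions for the example and stand in for the patent's enhancement and mixing circuitry; a real headset would do this in DSP hardware, not per-sample Python.

```python
def enhance(vocal_samples, gain=2.0):
    # Stand-in for the enhancement circuitry: apply gain and clip to [-1, 1].
    return [max(-1.0, min(1.0, s * gain)) for s in vocal_samples]

def mix(feedback, device_audio, feedback_level=0.3):
    # Weighted sum of the enhanced vocal feedback and the audio signal
    # originating from the external communication/multimedia device.
    return [feedback_level * f + (1 - feedback_level) * d
            for f, d in zip(feedback, device_audio)]

vocal = [0.1, -0.2, 0.4]   # picked up by the microphone
music = [0.5, 0.5, -0.5]   # from the external device
output = mix(enhance(vocal), music)  # sent to the user via the earphones
```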
Voice detection using ear-based devices
This disclosure describes techniques for detecting voice commands from a user of an ear-based device. The ear-based device may include an in-ear facing microphone to capture sound emitted in an ear of the user, and an exterior facing microphone to capture sound emitted in an exterior environment of the user. The in-ear microphone may generate an inner audio signal representing the sound emitted in the ear, and the exterior microphone may generate an outer audio signal representing sound from the exterior environment. The ear-based device may compute a ratio of a power of the inner audio signal to the outer audio signal and may compare this ratio to a threshold. If the ratio is larger than the threshold, the ear-based device may detect the voice of the user. Further, the ear-based device may set a value of the threshold based on a level of acoustic seal of the ear-based device.
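The ratio test described above is concrete enough to sketch frame by frame. The function names, the power definition (mean square), and the seal-to-threshold mapping below are illustrative assumptions; the patent specifies only that the threshold is set based on the level of acoustic seal.

```python
def power(samples):
    # Mean-square power of one audio frame.
    return sum(s * s for s in samples) / len(samples)

def threshold_for_seal(seal_level):
    # A tighter seal occludes more of the wearer's own voice into the ear
    # canal, so a higher ratio threshold can be used. The linear mapping
    # here is an assumption; seal_level is taken to lie in [0, 1].
    return 1.0 + 4.0 * seal_level

def user_voice_detected(inner_frame, outer_frame, seal_level):
    # Ratio of inner-mic power to outer-mic power, guarded against
    # division by zero, compared to the seal-dependent threshold.
    ratio = power(inner_frame) / max(power(outer_frame), 1e-12)
    return ratio > threshold_for_seal(seal_level)

# Inner signal much louder than outer -> attributed to the wearer's voice.
detected = user_voice_detected([0.5, -0.5, 0.5], [0.1, -0.1, 0.1], seal_level=0.5)
```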
VOICE APPLICATION NETWORK PLATFORM
A distributed voice applications system includes a voice applications rendering agent and at least one voice applications agent that is configured to provide voice applications to an individual user. A management system may control and direct the voice applications rendering agent to create voice applications that are personalized for individual users based on user characteristics, information about the environment in which the voice applications will be performed, prior user interactions and other information. The voice applications agent and components of customized voice applications may be resident on a local user device which includes a voice browser and speech recognition capabilities. The local device, voice applications rendering agent and management system may be interconnected via a communications network.
UNIFIED MESSAGE SEARCH
The disclosed embodiments include computerized methods, systems, and devices, including computer programs encoded on a computer storage medium, for generating terms of a search query based on a user's spoken utterances, identifying multiple cross-platform messages based on the generated terms, and generating, via a presentation device, a single interface that enables the user to interact with the identified messages. Based on a spoken utterance, the disclosed embodiments may determine user-specified search terms and/or criteria, and based on the user-specified search terms and/or criteria, may obtain cross-platform message data that corresponds to the search query. The communications device may generate one or more interface elements that describe corresponding ones of the cross-platform messages, which may be presented within a unified graphical user interface or voice-user interface by the communications device.
Wearable Multimedia Device and Cloud Computing Platform with Application Ecosystem
Systems, methods, devices and non-transitory, computer-readable storage mediums are disclosed for a wearable multimedia device and cloud computing platform with an application ecosystem for processing multimedia data captured by the wearable multimedia device. In an embodiment, a method comprises: receiving, by one or more processors of a cloud computing platform, context data from a wearable multimedia device, the wearable multimedia device including at least one data capture device for capturing the context data; creating a data processing pipeline with one or more applications based on one or more characteristics of the context data and a user request; processing the context data through the data processing pipeline; and sending output of the data processing pipeline to the wearable multimedia device or other device for presentation of the output.
ATTENTION AWARE VIRTUAL ASSISTANT DISMISSAL
Systems and processes for operating an intelligent automated assistant are provided. An example process includes initiating a virtual assistant session responsive to receiving user input. In accordance with initiating the virtual assistant session, the process includes determining, based on data obtained using one or more sensors of an electronic device, whether one or more criteria representing expressed user disinterest are satisfied. In accordance with determining that the one or more criteria representing expressed user disinterest are satisfied prior to a first time, the process includes automatically deactivating the virtual assistant session prior to the first time. The first time is defined by a setting of the electronic device. In accordance with determining that the one or more criteria representing expressed user disinterest are not satisfied prior to the first time, the process includes automatically deactivating the virtual assistant session at the first time.
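The dismissal timing described in this abstract can be condensed into one function. This is a minimal sketch under stated assumptions: times are represented as plain numbers, and "criteria satisfied" events are given as a precomputed list rather than derived from live sensor data.

```python
def deactivation_time(disinterest_events, timeout):
    """Return when the virtual assistant session deactivates.

    disinterest_events: times at which sensor data satisfied the criteria
        representing expressed user disinterest.
    timeout: the "first time," defined by a setting of the device.
    """
    # Events at or after the timeout cannot trigger early dismissal.
    early = [t for t in disinterest_events if t < timeout]
    # Dismiss at the earliest disinterest event, else at the timeout.
    return min(early) if early else timeout

assert deactivation_time([2.5, 4.0], timeout=5.0) == 2.5  # early dismissal
assert deactivation_time([7.0], timeout=5.0) == 5.0       # dismiss at timeout
```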
Providing hands-free service to multiple devices
Techniques for providing audio services to multiple devices are described. For instance, connections between a hands-free unit and multiple wireless devices are established. The connections are themselves used to establish active communication channels, such as active audio communication channels, between the hands-free unit and the wireless devices, such as during a phone call. Upon establishment of an active communication channel with one of the wireless devices, the connections to the other wireless devices are disconnected, and/or additional connections are refused, for the duration of the active communication channel. Furthermore, a routing module in various embodiments permits multiple hands-free units to route active communication channels to each other depending on user location.
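The connection-management policy above can be sketched as a small state machine. The class, method names, and device identifiers below are assumptions for illustration; a real hands-free unit would implement this over a wireless protocol stack rather than in-memory sets.

```python
class HandsFreeUnit:
    def __init__(self):
        self.connected = set()
        self.active_channel = None

    def connect(self, device):
        # Refuse additional connections while a channel is active.
        if self.active_channel is not None:
            return False
        self.connected.add(device)
        return True

    def start_call(self, device):
        # Establish an active communication channel with one device and
        # disconnect the others for the duration of the channel.
        assert device in self.connected
        self.active_channel = device
        self.connected = {device}

    def end_call(self):
        self.active_channel = None

unit = HandsFreeUnit()
unit.connect("phone_a")
unit.connect("phone_b")
unit.start_call("phone_a")
# unit.connected == {"phone_a"}; unit.connect("phone_c") now returns False
```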