Patent classifications
G10L15/08 — Speech classification or search
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
An information processing apparatus of the present disclosure includes an acquisition unit that acquires respiration information indicating respiration of a user, and a determination unit that determines an operation amount regarding an operation by the user on the basis of the respiration of the user indicated by the respiration information acquired by the acquisition unit.
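As a rough illustration (not taken from the patent — the function name, baseline, and gain are all hypothetical), determining an operation amount from respiration samples could be sketched as mapping the peak deviation of breathing intensity from a resting baseline to a bounded control value:

```python
def operation_amount(resp_samples, baseline=0.5, gain=2.0, max_amount=1.0):
    """Map the peak deviation of respiration intensity from a resting
    baseline to a bounded operation amount in [0, max_amount].
    All parameters are illustrative assumptions, not from the patent."""
    if not resp_samples:
        return 0.0
    peak = max(abs(s - baseline) for s in resp_samples)
    return min(gain * peak, max_amount)
```

A stronger exhale (larger deviation from baseline) then produces a proportionally larger operation amount, saturating at `max_amount`.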
DEVICE PAIRING USING WIRELESS COMMUNICATION BASED ON VOICE COMMAND CONTEXT
Pairing of multiple devices is initiated using a computer in an artificial intelligence (AI) ecosystem. A command is received at a computer to perform a user activity at a location which includes pairing a user device to a selectable device at the location. The context of the command is analyzed including a historical corpus regarding previous pairings and connection preferences. A device at the location is selected based on the analysis and the determined user activity. Pairing is automatically initiated for the user device to the selected device at the location based on the analysis of the context of the command. The automatic initiation includes adjusting settings on the user device based on the analysis of the context of the command. The user device is automatically paired to the selected device at the location to perform the user activity.
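As a hedged sketch of the selection step only (the scoring weights and data shapes below are illustrative assumptions, not the patent's method), a candidate device could be ranked by whether it supports the requested activity plus how often it appears in the historical corpus of previous pairings:

```python
from collections import Counter

def select_device(command_activity, candidates, pairing_history):
    """Score each candidate device at the location: devices that support the
    requested activity get a base score, plus a bonus proportional to how
    often the user previously paired with them (the 'historical corpus')."""
    history_counts = Counter(pairing_history)

    def score(device):
        supports = command_activity in device["activities"]
        return (10 if supports else 0) + history_counts[device["name"]]

    return max(candidates, key=score)
```

For example, a command implying "watch a movie" would favor a TV that supports that activity and has been paired with before, over a speaker that supports neither.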
HANDSFREE INFORMATION SYSTEM AND METHOD
A method, computer program product, and computing system for monitoring a work environment in which a technician is working on a mechanical asset; detecting the issuance of a textless input concerning the mechanical asset; processing the textless input to define a response; and effectuating the response.
Device, method and computer program for acoustic monitoring of a monitoring area
A device for acoustic monitoring of a monitoring area includes first and second sensor systems which have first and second acoustic sensors, processors, and transmitters, respectively, and which may be mounted at different locations of the monitoring area. The first and second processors may be configured to classify first and second audio signals detected by the first and second acoustic sensors so as to obtain first and second classification results, respectively. The first and second transmitters may be configured to transmit the first and second classification results to a central evaluator, respectively. In addition, the device may include the central evaluator, which may be configured to receive the first classification result and to receive the second classification result, and to generate a monitoring output for the monitoring area as a function of the first classification result and the second classification result.
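One plausible (purely illustrative, not the patent's specified logic) fusion rule for the central evaluator: accept a label when both sensor systems agree with reasonable confidence, or when either one reports it with very high confidence:

```python
def central_evaluation(result_a, result_b, agree_threshold=0.5, solo_threshold=0.9):
    """Fuse two (label, confidence) classification results from distributed
    sensor systems into a single monitoring output. Thresholds are
    illustrative assumptions."""
    label_a, conf_a = result_a
    label_b, conf_b = result_b
    # Agreement between both classifiers at moderate confidence wins.
    if label_a == label_b and min(conf_a, conf_b) >= agree_threshold:
        return label_a
    # Otherwise a single very confident classifier can decide.
    if conf_a >= solo_threshold:
        return label_a
    if conf_b >= solo_threshold:
        return label_b
    return "unknown"
```

Mounting the two sensor systems at different locations means the evaluator sees partially independent observations, which is what makes this kind of agreement-based fusion useful.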
Systems and methods of operating media playback systems having multiple voice assistant services
Systems and methods for managing multiple voice assistants are disclosed. Audio input is received via one or more microphones of a playback device. A first activation word is detected in the audio input via the playback device. After detecting the first activation word, the playback device transmits a voice utterance of the audio input to a first voice assistant service (VAS). The playback device receives, from the first VAS, first content to be played back via the playback device. The playback device also receives, from a second VAS, second content to be played back via the playback device. The playback device plays back the first content while suppressing the second content. Such suppression can include delaying or canceling playback of the second content.
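A minimal sketch of the suppression behavior (class and method names are hypothetical; the abstract does not specify an implementation): content from the VAS whose activation word was detected plays immediately, while content arriving from any other VAS is deferred:

```python
class PlaybackDevice:
    """Toy model of a playback device managing content from multiple voice
    assistant services (VASes)."""

    def __init__(self):
        self.now_playing = None
        self.deferred = []        # suppressed content: delayed or later canceled

    def receive_content(self, vas_id, content, active_vas):
        """Play content from the VAS whose activation word was detected;
        suppress (defer) content arriving from any other VAS."""
        if vas_id == active_vas:
            self.now_playing = content
            return "played"
        self.deferred.append((vas_id, content))
        return "suppressed"
```

The deferred list captures the abstract's point that suppression can mean either delaying playback (drain the list later) or canceling it (discard the list).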
End-to-end streaming keyword spotting
A method for detecting a hotword includes receiving a sequence of input frames that characterize streaming audio captured by a user device and generating a probability score indicating a presence of a hotword in the streaming audio using a memorized neural network. The network includes sequentially-stacked single value decomposition filter (SVDF) layers and each SVDF layer includes at least one neuron. Each neuron includes a respective memory component, a first stage configured to perform filtering on audio features of each input frame individually and output to the memory component, and a second stage configured to perform filtering on all the filtered audio features residing in the respective memory component. The method also includes determining whether the probability score satisfies a hotword detection threshold and, when it does, initiating a wake-up process on the user device for processing additional terms.
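The two-stage SVDF neuron can be sketched roughly as follows (a simplified toy, assuming learned weight vectors and omitting nonlinearities and bias terms the real network would have): stage one filters each frame's features down to a scalar pushed into a fixed-size memory, and stage two filters the scalars residing in that memory over time:

```python
from collections import deque

class SVDFNeuron:
    """Simplified sketch of one SVDF neuron with its memory component."""

    def __init__(self, feature_weights, time_weights):
        self.feature_weights = feature_weights        # stage-1 (feature) filter
        self.time_weights = time_weights              # stage-2 (time) filter
        # Per-neuron memory, sized to the time filter, initially zero.
        self.memory = deque([0.0] * len(time_weights), maxlen=len(time_weights))

    def step(self, frame):
        # Stage 1: filter the audio features of this input frame individually.
        projected = sum(w * x for w, x in zip(self.feature_weights, frame))
        self.memory.append(projected)                 # output to the memory component
        # Stage 2: filter all filtered features residing in the memory.
        return sum(w * m for w, m in zip(self.time_weights, self.memory))
```

Because each step only touches the new frame and a small ring buffer, the layer processes streaming audio frame by frame instead of re-reading a whole window.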
Speaker dependent follow up actions and warm words
A method includes receiving audio data corresponding to an utterance spoken by a user that includes a command for a digital assistant to perform a long-standing operation, activating a set of one or more warm words associated with a respective action for controlling the long-standing operation, and associating the activated set of one or more warm words with only the user. While the digital assistant is performing the long-standing operation, the method includes receiving additional audio data corresponding to an additional utterance, identifying one of the warm words from the activated set of warm words, and performing speaker verification on the additional audio data. The method further includes performing the respective action associated with the identified one of the warm words for controlling the long-standing operation when the additional utterance was spoken by the same user that is associated with the activated set of one or more warm words.
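The gating described above can be sketched in a few lines (the function, the similarity callback, and the threshold are all illustrative assumptions, not the patent's implementation): a warm word triggers its bound action only when it is in the activated set and speaker verification confirms the same user:

```python
def handle_warm_word(word, speaker_embedding, active_warm_words,
                     enrolled_embedding, similarity, threshold=0.8):
    """Return the action bound to a warm word, but only when the word is in
    the activated set AND speaker verification confirms the additional
    utterance came from the same user who activated the set."""
    if word not in active_warm_words:
        return None                     # not an active warm word
    if similarity(speaker_embedding, enrolled_embedding) < threshold:
        return None                     # different speaker: ignore
    return active_warm_words[word]      # perform the respective action
```

So during a long-standing operation like a timer, "stop" spoken by the enrolling user controls it, while the same word from anyone else is ignored.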