Patent classifications
G10L17/00
LOCATING INDIVIDUALS USING MICROPHONE ARRAYS AND VOICE PATTERN MATCHING
Examples disclosed herein provide the ability to identify the location of an individual within a room by using a combination of microphone arrays and voice pattern matching. In one example, a computing device may extract a voice detected by microphones of a microphone array located in a room, perform voice pattern matching to identify an individual associated with the extracted voice, and determine a location of the individual in the room based on an intensity of the voice detected individually by the microphones of the microphone array.
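The intensity-based localization described above can be sketched as an intensity-weighted centroid over known microphone positions. This is a hypothetical illustration, not the patent's actual method; the room geometry, microphone positions, and weighting scheme are assumptions.

```python
# Hypothetical sketch: estimate a speaker's position from per-microphone
# voice intensities, assuming louder microphones are closer to the speaker.
# Microphone layout and weighting are illustrative, not from the patent.

def estimate_location(mic_positions, intensities):
    """Return an intensity-weighted centroid of the microphone positions."""
    total = sum(intensities)
    if total == 0:
        raise ValueError("no voice energy detected")
    x = sum(p[0] * w for p, w in zip(mic_positions, intensities)) / total
    y = sum(p[1] * w for p, w in zip(mic_positions, intensities)) / total
    return (x, y)

# Four microphones in the corners of a 4 m x 4 m room; the voice is
# loudest near the (0, 0) corner.
mics = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0), (4.0, 4.0)]
levels = [0.9, 0.3, 0.3, 0.1]
print(estimate_location(mics, levels))  # → (1.0, 1.0)
```

A production system would more likely use time-difference-of-arrival or beamforming alongside raw intensity, but the centroid conveys the core idea of fusing per-microphone levels into a position estimate.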
SPEAKER VERIFICATION USING CO-LOCATION INFORMATION
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for identifying a user in a multi-user environment. One of the methods includes receiving, by a first user device, an audio signal encoding an utterance, obtaining, by the first user device, a first speaker model for a first user of the first user device, obtaining, by the first user device for a second user of a second user device that is co-located with the first user device, a second speaker model for the second user or a second score that indicates a respective likelihood that the utterance was spoken by the second user, and determining, by the first user device, that the utterance was spoken by the first user using (i) the first speaker model and the second speaker model or (ii) the first speaker model and the second score.
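The decision step in the method above, attributing the utterance to the first user by weighing the first speaker model's score against the co-located second user's score, can be sketched as follows. The scoring scale, threshold, and decision rule here are assumptions for illustration only.

```python
# Hypothetical sketch of the co-location decision: the first device scores
# an utterance against its own user's speaker model and compares that score
# with a score obtained for a co-located second user. The threshold and
# comparison rule are illustrative, not the patent's actual criteria.

def spoken_by_first_user(first_score, second_score, threshold=0.5):
    """Attribute the utterance to the first user only if their model's
    score both clears a minimum threshold and beats the co-located user's."""
    return first_score >= threshold and first_score > second_score

print(spoken_by_first_user(0.82, 0.41))  # → True: first user's model wins
print(spoken_by_first_user(0.47, 0.91))  # → False: co-located user is likelier
```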
Voice detection using ear-based devices
This disclosure describes techniques for detecting voice commands from a user of an ear-based device. The ear-based device may include an in-ear facing microphone to capture sound emitted in an ear of the user, and an exterior facing microphone to capture sound emitted in an exterior environment of the user. The in-ear microphone may generate an inner audio signal representing the sound emitted in the ear, and the exterior microphone may generate an outer audio signal representing sound from the exterior environment. The ear-based device may compute a ratio of a power of the inner audio signal to the outer audio signal and may compare this ratio to a threshold. If the ratio is larger than the threshold, the ear-based device may detect the voice of the user. Further, the ear-based device may set a value of the threshold based on a level of acoustic seal of the ear-based device.
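The inner/outer power-ratio check described above can be sketched as below. The power computation, the mapping from acoustic-seal level to threshold, and the sample values are all illustrative assumptions.

```python
# Hypothetical sketch of the power-ratio voice check: compare the energy of
# the in-ear signal to that of the exterior signal, with a threshold that
# rises as the acoustic seal weakens. The seal-to-threshold mapping and
# sample data are illustrative, not from the disclosure.

def signal_power(samples):
    """Mean squared amplitude of an audio frame."""
    return sum(s * s for s in samples) / len(samples)

def user_is_speaking(inner, outer, seal_level):
    """Detect the wearer's voice when the inner/outer power ratio exceeds a
    seal-dependent threshold (weaker seal -> higher threshold)."""
    threshold = 2.0 / max(seal_level, 0.1)  # assumed mapping
    return signal_power(inner) / max(signal_power(outer), 1e-12) > threshold

inner = [0.5, -0.6, 0.55, -0.5]    # occlusion-boosted voice in the ear canal
outer = [0.1, -0.1, 0.12, -0.09]   # quieter exterior pickup
print(user_is_speaking(inner, outer, seal_level=1.0))  # → True
```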
REDUCING THE NEED FOR MANUAL START/END-POINTING AND TRIGGER PHRASES
Systems and processes for selectively processing and responding to a spoken user input are provided. In one example, audio input containing a spoken user input can be received at a user device. The spoken user input can be identified from the audio input by identifying start and end-points of the spoken user input. It can be determined whether or not the spoken user input was intended for a virtual assistant based on contextual information. The determination can be made using a rule-based system or a probabilistic system. If it is determined that the spoken user input was intended for the virtual assistant, the spoken user input can be processed and an appropriate response can be generated. If it is instead determined that the spoken user input was not intended for the virtual assistant, the spoken user input can be ignored and/or no response can be generated.
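The rule-based variant of the intent determination above might look like the sketch below. The specific contextual signals and rules are invented for illustration; the patent does not disclose this rule set.

```python
# Hypothetical rule-based sketch of deciding whether an endpointed utterance
# was meant for the virtual assistant, using contextual signals. The signal
# names and rules are illustrative stand-ins.

def intended_for_assistant(utterance, context):
    """Apply simple contextual rules; return True to process, False to ignore."""
    # Follow-up shortly after the assistant's last response
    if context.get("user_facing_device") and context.get("seconds_since_last_response", 999) < 10:
        return True
    # Imperative or question phrasing typical of assistant requests
    if utterance.lower().startswith(("what", "how", "set", "play")):
        return True
    # Likely a side conversation with other people in the room
    if context.get("other_people_present") and not context.get("user_facing_device"):
        return False
    return False

print(intended_for_assistant("Set a timer for ten minutes",
                             {"other_people_present": True}))  # → True
```

A probabilistic system, which the abstract also contemplates, would replace these hard rules with a model that weighs the same signals and compares a combined score against a confidence threshold.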
Autonomous material evaluation system and method
A system and method to determine a remaining useful life estimation of a material under evaluation. The system comprises at least one computer and a material features acquisition system operable to detect a plurality of material features. The features are then evaluated according to rules captured from experts and inputted into the computer. The computer iterations are processed until an acceptable conclusion is made regarding the condition of the material under evaluation. The remaining useful life estimation capability may also be retrofitted into conventional inspection systems by extracting pertinent features through spectral frequency analysis and sensor normalization and utilizing those features in the autonomous remaining useful life estimation system.
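The expert-rule evaluation described above can be sketched as a set of feature conditions that each discount a baseline life estimate. The features, rules, baseline, and penalty factors below are illustrative assumptions, not the patent's actual rule base.

```python
# Hypothetical sketch of rule-based remaining-useful-life (RUL) estimation:
# features extracted from a material scan are checked against expert rules,
# each matched rule discounting a baseline life figure. All values are
# illustrative.

def estimate_rul(features, rules, base_life_hours=10000.0):
    """Multiply a baseline life by each matching expert rule's penalty factor."""
    life = base_life_hours
    for condition, penalty in rules:
        if condition(features):
            life *= penalty
    return life

# Expert-style rules: (condition over features, life-reduction factor)
rules = [
    (lambda f: f["crack_density"] > 0.3, 0.5),
    (lambda f: f["spectral_peak_shift_hz"] > 50, 0.7),
]
features = {"crack_density": 0.4, "spectral_peak_shift_hz": 60}
print(estimate_rul(features, rules))  # → 3500.0
```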
Machine learning dataset generation using a natural language processing technique
A server can receive a plurality of records at a database such that each record is associated with a phone call and includes at least one request generated based on a transcript of the phone call. The server can generate a training dataset based on the plurality of records. The server can further train a binary classification model using the training dataset. Next, the server can receive a live transcript of a phone call in progress. The server can generate at least one live request based on the live transcript using a natural language processing module of the server. The server can provide the at least one live request to the binary classification model as input to generate a prediction. Lastly, the server can transmit the prediction to an entity receiving the phone call in progress. The prediction can cause a transfer of the call to a chatbot.
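The pipeline above, training on requests derived from past call transcripts and then scoring a live request, can be sketched with a trivial keyword-count classifier. The records, labels, and classifier are illustrative stand-ins; the patent does not specify this model.

```python
# Hypothetical sketch of the pipeline: build a training set from call
# records, fit a trivial keyword-count binary classifier, then score a
# request from a live transcript to decide whether to route the call to a
# chatbot (label 1) or keep a human agent (label 0). All data and the
# classifier are illustrative.

def train(records):
    """Count word frequencies per label from (request_text, label) pairs."""
    counts = {0: {}, 1: {}}
    for text, label in records:
        for word in text.lower().split():
            counts[label][word] = counts[label].get(word, 0) + 1
    return counts

def predict(counts, live_request):
    """The label whose vocabulary overlaps the live request most wins."""
    scores = {label: sum(vocab.get(w, 0) for w in live_request.lower().split())
              for label, vocab in counts.items()}
    return max(scores, key=scores.get)

records = [
    ("reset my password", 1),           # routine -> chatbot
    ("check my balance", 1),            # routine -> chatbot
    ("dispute a fraudulent charge", 0), # complex -> human agent
]
model = train(records)
print(predict(model, "please reset my password"))  # → 1
```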
DIALOG MANAGEMENT WITH MULTIPLE APPLICATIONS
Features are disclosed for performing functions in response to user requests based on contextual data regarding prior user requests. Users may engage in conversations with a computing device in order to initiate some function or obtain some information. A dialog manager may manage the conversations and store contextual data regarding one or more of the conversations. Processing and responding to subsequent conversations may benefit from the previously stored contextual data by, e.g., reducing the amount of information that a user must provide if the user has already provided the information in the context of a prior conversation. Additional information associated with performing functions responsive to user requests may be shared among applications, further improving efficiency and enhancing the user experience.
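The context reuse described above can be sketched with a dialog manager that carries slots across conversations, so a follow-up request need not restate information the user already provided. The slot names and merge policy are illustrative assumptions.

```python
# Hypothetical sketch of a dialog manager that stores contextual slots from
# prior conversations and fills gaps in later requests from that stored
# context. Slot names and the merge policy are illustrative.

class DialogManager:
    def __init__(self):
        self.context = {}  # slots carried across conversations

    def handle(self, request_slots):
        """Fill missing slots from stored context, then remember new ones."""
        merged = {**self.context, **request_slots}
        self.context.update(request_slots)
        return merged

dm = DialogManager()
dm.handle({"intent": "book_flight", "destination": "Boston"})
follow_up = dm.handle({"intent": "book_hotel"})  # destination reused
print(follow_up["destination"])  # → Boston
```

The same mechanism supports sharing across applications: the flight-booking and hotel-booking handlers both read from the shared context rather than each re-prompting the user.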