Patent classifications
G10L15/24
DYNAMIC ADAPTATION OF PARAMETER SET USED IN HOT WORD FREE ADAPTATION OF AUTOMATED ASSISTANT
Hot word free adaptation of function(s) of an automated assistant, responsive to determining, based on gaze measure(s) and/or active speech measure(s), that a user is engaging with the automated assistant. Implementations relate to techniques for mitigating false positive and/or false negative occurrences of hot word free adaptation through utilization of a permissive parameter set in some situation(s) and a restrictive parameter set in other situation(s), for example, utilizing the restrictive parameter set when it is determined that a user is engaged in conversation with additional user(s). The permissive parameter set includes permissive parameter(s) that are more permissive than counterpart(s) in the restrictive parameter set. A parameter set is utilized in determining whether condition(s) are satisfied, where those condition(s), if satisfied, indicate that the user is engaging in hot word free interaction with the automated assistant and result in adaptation of function(s) of the automated assistant.
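A minimal sketch of the parameter-set logic described in the abstract, assuming hypothetical gaze and active-speech measures normalized to [0, 1]; the field names and threshold values below are illustrative inventions, not values from the patent.

```python
from dataclasses import dataclass

@dataclass
class ParameterSet:
    """Thresholds used to decide whether a user is engaging with the assistant."""
    gaze_threshold: float    # minimum gaze measure (0..1) toward the device
    speech_threshold: float  # minimum active-speech measure (0..1)
    min_duration_s: float    # how long both measures must hold

# Illustrative values only: the permissive set triggers adaptation more
# easily, the restrictive set demands stronger evidence of engagement.
PERMISSIVE = ParameterSet(gaze_threshold=0.5, speech_threshold=0.4, min_duration_s=0.3)
RESTRICTIVE = ParameterSet(gaze_threshold=0.8, speech_threshold=0.7, min_duration_s=1.0)

def select_parameter_set(in_conversation_with_others: bool) -> ParameterSet:
    # Use the restrictive set when the user appears to be talking to other
    # people, reducing false-positive adaptations of the assistant.
    return RESTRICTIVE if in_conversation_with_others else PERMISSIVE

def should_adapt(gaze: float, speech: float, duration_s: float,
                 params: ParameterSet) -> bool:
    # Function(s) of the assistant are adapted only when every condition in
    # the active parameter set is satisfied.
    return (gaze >= params.gaze_threshold
            and speech >= params.speech_threshold
            and duration_s >= params.min_duration_s)
```

For instance, `should_adapt(0.6, 0.5, 0.4, PERMISSIVE)` returns True, while the same measures fail under `RESTRICTIVE`, capturing the false-positive/false-negative trade-off the abstract describes.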
Detection of replay attack
In order to detect a replay attack in a speaker recognition system, at least one feature is identified in a detected magnetic field. It is then determined whether the at least one identified feature of the detected magnetic field is indicative of playback of speech through a loudspeaker. If so, it is determined that a replay attack may have taken place.
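As a toy illustration of the idea, the sketch below assumes a magnetometer stream and a single hypothetical feature, magnetic-field energy in the audio band, as the loudspeaker indicator; the patent's actual feature set and decision rule are not given in the abstract.

```python
import numpy as np

def band_energy(samples: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Energy of the magnetometer signal in the band [lo, hi] Hz."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[mask].sum())

def replay_attack_suspected(mag_samples: np.ndarray, fs: float,
                            threshold: float = 1e3) -> bool:
    # Hypothetical feature: strong magnetic-field energy in the audio band is
    # taken as indicative of a nearby loudspeaker playing back speech, rather
    # than a live talker. The threshold is illustrative only.
    feature = band_energy(mag_samples, fs, lo=100.0, hi=4000.0)
    return feature > threshold
```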
Automated sign language translation and communication using multiple input and output modalities
Methods, apparatus and systems for recognizing sign language movements using multiple input and output modalities. One example method includes capturing a movement associated with the sign language using a set of visual sensing devices, the set of visual sensing devices comprising multiple apertures oriented with respect to the subject to receive optical signals corresponding to the movement from multiple angles, generating digital information corresponding to the movement based on the optical signals from the multiple angles, collecting depth information corresponding to the movement in one or more planes perpendicular to an image plane captured by the set of visual sensing devices, producing a reduced set of digital information by removing at least some of the digital information based on the depth information, generating a composite digital representation by aligning at least a portion of the reduced set of digital information, and recognizing the movement based on the composite digital representation.
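The abstract reads as a pipeline: capture from multiple apertures, filter by depth, align into a composite, then recognize. The sketch below mirrors those stages with NumPy arrays standing in for the optical and depth signals; every function body is a placeholder assumption rather than the patented method.

```python
import numpy as np

def depth_filter(frames: list[np.ndarray], depths: list[np.ndarray],
                 near: float, far: float) -> list[np.ndarray]:
    """Produce the 'reduced set of digital information': zero out pixels
    whose depth falls outside the band where the signer is expected."""
    reduced = []
    for frame, depth in zip(frames, depths):
        mask = (depth >= near) & (depth <= far)
        reduced.append(frame * mask)
    return reduced

def composite(frames: list[np.ndarray]) -> np.ndarray:
    # Naive alignment stand-in: average the per-aperture views into a single
    # composite digital representation.
    return np.mean(np.stack(frames), axis=0)

def recognize_movement(frames_per_aperture: list[np.ndarray],
                       depths_per_aperture: list[np.ndarray],
                       classifier) -> str:
    # 'classifier' is assumed to be a trained sign-language model.
    reduced = depth_filter(frames_per_aperture, depths_per_aperture,
                           near=0.3, far=1.5)  # metres; illustrative band
    representation = composite(reduced)
    return classifier(representation)
```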
Methods and systems for speech detection
Methods and systems for processing user input to a computing system are disclosed. The computing system has access to an audio input and a visual input such as a camera. Face detection is performed on an image from the visual input; if a face is detected, this triggers the recording of audio and makes the audio available to a speech processing function. Further verification steps can be combined with the face detection step for multi-factor verification of user intent to interact with the system.
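A compact sketch of the face-gated capture, using OpenCV's stock Haar cascade as the face detector and the sounddevice library for recording; both library choices and all parameter values are assumptions, and the further multi-factor verification steps are reduced here to the single face check.

```python
import cv2
import sounddevice as sd

# Stock frontal-face Haar cascade shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_present(frame) -> bool:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0

def capture_if_face(camera_index: int = 0, seconds: float = 2.0,
                    fs: int = 16000):
    """Return an audio clip only when a face is visible, else None."""
    cap = cv2.VideoCapture(camera_index)
    try:
        ok, frame = cap.read()
        if ok and face_present(frame):
            # A detected face is taken as intent to interact: only now is
            # audio recorded and made available to speech processing.
            audio = sd.rec(int(seconds * fs), samplerate=fs, channels=1)
            sd.wait()
            return audio
        return None
    finally:
        cap.release()
```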
SPEECH TRANSCRIPTION FROM FACIAL SKIN MOVEMENTS
Systems and methods are disclosed for determining textual transcription from minute facial skin movements. In one implementation, a system may include at least one coherent light source, at least one sensor configured to receive light reflections from the at least one coherent light source, and a processor configured to control the at least one coherent light source to illuminate a region of a face of a user. The processor may receive, from the at least one sensor, reflection signals indicative of coherent light reflected from the face in a time interval. The reflection signals may be analyzed to determine minute facial skin movements in the time interval. Then, based on the determined minute facial skin movements in the time interval, the processor may determine a sequence of words associated with the minute facial skin movements and output a textual transcription corresponding to the determined sequence of words.
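The abstract describes a sense-analyze-decode loop. The sketch below keeps that shape, with a crude frame-difference stand-in for analyzing the coherent-light reflections and a hypothetical trained decoder mapping motion features to words; neither reflects the actual analysis in the patent.

```python
import numpy as np

def estimate_skin_motion(reflections: np.ndarray) -> np.ndarray:
    """Stand-in analysis: treat frame-to-frame change in the coherent-light
    reflection signal as a proxy for minute facial skin movement."""
    return np.abs(np.diff(reflections, axis=0))

def transcribe(reflections: np.ndarray, decoder) -> str:
    # 'decoder' is assumed to be a trained sequence model mapping motion
    # features over a time interval to a sequence of words.
    motion = estimate_skin_motion(reflections)
    words = decoder(motion)
    return " ".join(words)
```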
SPEECH DETECTION USING IMAGE CLASSIFICATION
Speech detection can be achieved by identifying a speech segment within an audio segment using image classification. An audio segment of radio communications is obtained. An audio sub-segment within the audio segment is extracted. A sampled histogram is generated of a plurality of sampled values across a sampled time window of the audio sub-segment. A two-dimensional image is generated that represents a two-dimensional mapping of the sampled histogram along a first dimension and a predefined histogram along a second dimension that is orthogonal to the first dimension. The two-dimensional image is provided to an image classifier previously trained using the predefined histogram. An output is received from the image classifier based on the two-dimensional image. The output indicates whether the audio sub-segment contains speech.
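One plausible reading of the two-dimensional mapping is an outer product: the sampled histogram varies along one axis and the predefined reference histogram along the orthogonal axis. The sketch below implements that reading with amplitude histograms; the bin layout, the outer-product construction, and the classifier input shape are all assumptions.

```python
import numpy as np

def amplitude_histogram(samples: np.ndarray, bins: int) -> np.ndarray:
    """Histogram of sampled amplitude values across the sampled time window."""
    hist, _ = np.histogram(samples, bins=bins, range=(-1.0, 1.0), density=True)
    return hist

def histogram_image(sub_segment: np.ndarray,
                    predefined: np.ndarray) -> np.ndarray:
    # Assumed construction: the sampled histogram along the first dimension,
    # the predefined histogram along the orthogonal second dimension, i.e.
    # an outer product forming a (bins x bins) image.
    sampled = amplitude_histogram(sub_segment, bins=len(predefined))
    return np.outer(sampled, predefined)

def contains_speech(sub_segment: np.ndarray, predefined: np.ndarray,
                    classifier) -> bool:
    # 'classifier' is assumed to be an image model previously trained with
    # the same predefined histogram; input shape (batch, H, W, channel).
    image = histogram_image(sub_segment, predefined)
    return bool(classifier(image[np.newaxis, ..., np.newaxis]))
```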