Patent classifications
G10L25/09
Speech signal cascade processing method, terminal, and computer-readable storage medium
A method for improving speech signal intelligibility is performed at a device. A speech signal is obtained. A correspondence between the speech signal and a respective user group among different user groups having distinct voice characteristics is identified. Pre-encoding signal augmentation is performed on the speech signal with a respective pre-augmentation filtering coefficient that corresponds to the respective user group to obtain a group-specific pre-augmented speech signal. The device encodes the pre-augmented speech signal for subsequent transmission through a voice communication channel. An encoded version of the pre-augmented speech signal has reduced loss of signal quality as compared to an encoded version of the speech signal that is obtained without the pre-encoding signal augmentation.
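The group-specific pre-augmentation step can be sketched as a first-order pre-emphasis filter selected by user group. The group names, coefficient values, and filter form below are illustrative assumptions, not the patent's actual coefficients.

```python
# Hypothetical group-to-coefficient table; a real system would derive
# these from the voice characteristics of each user group.
GROUP_COEFFS = {"low_pitch": 0.90, "high_pitch": 0.97}

def pre_augment(samples, group):
    """Apply a first-order pre-emphasis filter y[n] = x[n] - a*x[n-1],
    where a is the pre-augmentation coefficient for the user group."""
    a = GROUP_COEFFS[group]
    out = [samples[0]]
    for n in range(1, len(samples)):
        out.append(samples[n] - a * samples[n - 1])
    return out
```

The augmented signal would then be handed to the codec; the point of the claim is that filtering before encoding, rather than after decoding, limits quality loss through the channel.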
Time-based frequency tuning of analog-to-information feature extraction
A sound recognition system including time-dependent analog filtered feature extraction and sequencing. An analog front end (AFE) in the system receives input analog signals, such as signals representing an audio input to a microphone. Features in the input signal are extracted by measuring such attributes as zero crossing events and total energy in filtered versions of the signal with different frequency characteristics at different times during an audio event. In one embodiment, a tunable analog filter is controlled to change its frequency characteristics at different times during the event. In another embodiment, multiple analog filters with different filter characteristics filter the input signal in parallel, and signal features are extracted from each filtered signal; a multiplexer selects the desired features at different times during the event.
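A minimal sketch of the named features and of the parallel-filter embodiment, assuming frames of already-filtered samples; the filter names and selection schedule are hypothetical.

```python
def extract_features(frame):
    """Per-frame features named in the abstract: zero-crossing
    events and total energy of one filtered frame."""
    zero_crossings = sum(1 for a, b in zip(frame, frame[1:])
                         if (a < 0) != (b < 0))
    energy = sum(x * x for x in frame)
    return zero_crossings, energy

def features_over_event(filtered_versions, schedule):
    """Parallel-filter embodiment: at each time step the multiplexer
    selects features from the filter named in the schedule."""
    return [extract_features(filtered_versions[name][t])
            for t, name in enumerate(schedule)]
```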
Speech Identification and Extraction from Noise Using Extended High Frequency Information
Improved systems and methods are provided herein for extracting target speech from audio signals that can contain masking speech or other unwanted noise content. These systems and methods include detection of target speech in an input signal by detecting elevated frequency content in the signal above a threshold frequency. Portions of the signal determined to contain such elevated high frequency content are then used to generate audio filters to extract target speech from subsequently-obtained audio signals. This can include performing non-negative matrix factorization to determine a set of basis vectors to represent noise content in the spectral domain and then using the set of basis vectors to decompose subsequently-obtained audio signals into noise signals that can then be removed from the audio signals.
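The first step, flagging portions of the signal with elevated content above a threshold frequency, might look like the following naive-DFT sketch; the threshold bin is an assumed parameter, and the NMF decomposition itself is not shown.

```python
import cmath

def high_band_ratio(frame, threshold_bin):
    """Fraction of spectral energy at or above threshold_bin, computed
    with a naive DFT over the lower half of the spectrum. Frames with
    a high ratio are the ones used to build the extraction filters."""
    n = len(frame)
    spectrum = [abs(sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                        for t in range(n))) ** 2
                for k in range(n // 2)]
    total = sum(spectrum) or 1.0
    return sum(spectrum[threshold_bin:]) / total
```

A constant (DC) frame yields a ratio near zero, while a frame dominated by a higher-frequency component yields a ratio near one.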
DETERMINING WHEN A SUBJECT IS SPEAKING BY ANALYZING A RESPIRATORY SIGNAL OBTAINED FROM A VIDEO
What is disclosed is a system and method for determining when a subject is speaking from a respiratory signal obtained from a video of that subject. A video of a subject is received and a respiratory signal is extracted from a time-series signal obtained by processing pixels in image frames of the video. The respiratory signal comprises an inspiratory signal and an expiratory signal. Cycle-level features are extracted from the respiratory signal and used to identify expiratory signals during which speech is likely to have occurred. The identified expiratory signals are divided into time intervals. Frame-level features are determined for each time interval and an amount of distortion in the expiratory signal for that time interval is quantified. The amount of distortion is compared to a threshold and, in response to the comparison, a determination is made that speech occurred during the interval. The process repeats for all time intervals.
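The per-interval decision can be sketched as follows, assuming distortion is quantified as RMS deviation of the expiratory segment from its moving average; the abstract does not specify the actual frame-level features, so this measure is a stand-in.

```python
def distortion(interval, win=3):
    """Quantify distortion of one expiratory time interval as the RMS
    deviation of its samples from their moving average."""
    n = len(interval)
    smooth = [sum(interval[max(0, i - win):i + win + 1]) /
              len(interval[max(0, i - win):i + win + 1]) for i in range(n)]
    return (sum((a - b) ** 2 for a, b in zip(interval, smooth)) / n) ** 0.5

def is_speech(interval, threshold=0.1):
    """Compare the quantified distortion to a threshold; exceeding it
    marks the interval as one during which speech occurred."""
    return distortion(interval) > threshold
```

A smooth expiratory curve scores near zero, while speech superimposes irregularities that raise the residual above the threshold.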
METHODS AND APPARATUSES FOR NOISE REDUCTION BASED ON TIME AND FREQUENCY ANALYSIS USING DEEP LEARNING
A noise cancellation method including: generating a first voice signal by canceling a first portion of noise included in an input voice signal using a first network, the first network being a trained u-net structure and the first portion of the noise being in the time domain; applying a first window to the first voice signal; performing a fast Fourier transform on the first windowed voice signal to acquire a magnitude signal and a phase signal; acquiring a mask using a second network based on the magnitude signal, the second network being another trained u-net structure; applying the mask to the magnitude signal; generating a second voice signal by canceling a second portion of the noise by performing an inverse fast Fourier transform based on the masked magnitude signal and the phase signal; and applying a second window to the second voice signal.
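The middle of the pipeline, scaling the magnitude by the mask while keeping the phase and inverse-transforming, can be sketched with a naive DFT; here the mask is simply a given list of per-bin gains rather than the output of the second u-net.

```python
import cmath

def apply_spectral_mask(frame, mask):
    """DFT the windowed frame, scale each bin's magnitude by the mask
    value while preserving its phase, then inverse-DFT back to the
    time domain."""
    n = len(frame)
    spectrum = [sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    masked = [m * abs(s) * cmath.exp(1j * cmath.phase(s))
              for s, m in zip(spectrum, mask)]
    return [(sum(masked[k] * cmath.exp(2j * cmath.pi * k * t / n)
                 for k in range(n)) / n).real for t in range(n)]
```

With an all-ones mask the frame passes through unchanged; a trained network would instead output per-bin values near zero for noise-dominated bins.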
Method, terminal, system for audio encoding/decoding/codec
Audio encoding methods/terminals, audio decoding methods/terminals, and audio codec systems are provided. A plurality of audio signals that are continuous is obtained. It is determined whether each audio signal of the plurality of audio signals includes a designated signal type, according to an audio parameter of each audio signal. A marked audio encoding stream is obtained by marking each audio signal as having or not having the designated signal type. The marking is used, at a decoding terminal, to perform an enhancement-process on one or more audio signals having the designated signal type. The enhancement-process is not performed on audio signals that do not have the designated signal type.
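The encoder-side marking and decoder-side selective enhancement reduce to a one-bit flag per signal; the type predicate and enhancement function below are placeholders for the patent's unspecified audio-parameter test and enhancement-process.

```python
def mark(signals, has_designated_type):
    """Encoder side: attach a marker recording whether each audio
    signal contains the designated signal type."""
    return [(s, has_designated_type(s)) for s in signals]

def decode_with_enhancement(marked_stream, enhance):
    """Decoder side: the enhancement-process is applied only to
    signals whose marker is set."""
    return [enhance(s) if flag else s for s, flag in marked_stream]
```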
Method and apparatus for exemplary segment classification
Method and apparatus for segmenting speech by detecting pauses between words and/or phrases, and for determining whether a particular time interval contains speech or non-speech, such as a pause.
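A minimal stand-in for the claimed classifier, assuming short fixed-length intervals and a simple energy threshold for distinguishing speech from pauses; the threshold value is an assumption.

```python
def label_intervals(intervals, energy_threshold):
    """Label each time interval as speech or non-speech (a pause) by
    comparing its energy to a threshold; the pause intervals then mark
    the segment boundaries between words or phrases."""
    return ["speech" if sum(x * x for x in f) > energy_threshold
            else "pause" for f in intervals]
```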