Patent classifications
G10L25/12
Low energy deep-learning networks for generating auditory features for audio processing pipelines
Low-energy deep-learning networks for generating auditory features, such as mel-frequency cepstral coefficients, in audio processing pipelines are provided. In various embodiments, a first neural network is trained to output auditory features (e.g., mel-frequency cepstral coefficients, linear predictive coding coefficients, perceptual linear predictive coefficients, spectral coefficients, filter bank coefficients, and/or spectro-temporal receptive fields) based on input audio samples. A second neural network is trained to output a classification based on input auditory features. An input audio sample is provided to the first neural network, and auditory features are received from it. Those auditory features are provided to the second neural network, and a classification of the input audio sample is received from it.
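The two-stage pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration, not the patented implementation: both networks are small untrained MLPs with random weights standing in for the trained feature and classifier networks, and the layer sizes (400-sample frame, 13 MFCC-like features, 4 classes) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, layers):
    # Apply a stack of (W, b) layers with ReLU between them;
    # a stand-in for a trained low-energy network.
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)
    return x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical shapes: 400-sample audio frame -> 13 features -> 4 classes.
feature_net = [(rng.standard_normal((400, 64)) * 0.05, np.zeros(64)),
               (rng.standard_normal((64, 13)) * 0.05, np.zeros(13))]
classifier_net = [(rng.standard_normal((13, 32)) * 0.05, np.zeros(32)),
                  (rng.standard_normal((32, 4)) * 0.05, np.zeros(4))]

audio_frame = rng.standard_normal(400)          # input audio sample
features = mlp(audio_frame, feature_net)        # first network: auditory features
probs = softmax(mlp(features, classifier_net))  # second network: classification
```

In a real deployment the first network would be trained to regress MFCC-like targets and the second trained on those features, so the classifier never touches raw audio.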
Audio Signal Classification Method and Apparatus
An audio signal classification method includes: determining, according to the voice activity of a current audio frame, whether to obtain the frequency spectrum fluctuation of the current audio frame and store it in a frequency spectrum fluctuation memory; updating the frequency spectrum fluctuations stored in the memory according to whether the audio frame is percussive music or according to the activity of a historical audio frame; and classifying the current audio frame as a speech frame or a music frame according to statistics of some or all of the effective data stored in the frequency spectrum fluctuation memory.
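The core idea, classifying speech versus music from statistics of stored spectrum fluctuations, can be sketched as follows. This is a simplified stand-in for the patented method: the fluctuation statistic (mean absolute frame-to-frame log-spectrum difference), the threshold value, and the omission of the voice-activity and percussive-music update logic are all assumptions for illustration.

```python
import numpy as np

def spectrum_fluctuation(frames):
    # Log-magnitude spectrum per frame; fluctuation = mean absolute
    # frame-to-frame difference (a toy stand-in for the patent's statistic).
    spec = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-12)
    return np.abs(np.diff(spec, axis=0)).mean(axis=1)

def classify(fluctuation_memory, threshold=0.5):
    # Speech tends to fluctuate more between frames than sustained music.
    return "speech" if np.mean(fluctuation_memory) > threshold else "music"

rng = np.random.default_rng(1)
n = np.arange(256 * 20)
tone = np.sin(2 * np.pi * 8 * n / 256).reshape(20, 256)  # sustained tone
noise = rng.standard_normal((20, 256))                   # speech-like noise
tone_label = classify(spectrum_fluctuation(tone))
noise_label = classify(spectrum_fluctuation(noise))
```

The steady tone yields near-zero fluctuation and is labeled music, while broadband noise fluctuates frame to frame and is labeled speech, mirroring the discriminative cue the patent exploits.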
MDCT-BASED COMPLEX PREDICTION STEREO CODING
The invention provides methods and devices for stereo encoding and decoding using complex prediction in the frequency domain. In one embodiment, a decoding method, for obtaining an output stereo signal from an input stereo signal encoded by complex prediction coding and comprising first frequency-domain representations of two input channels, comprises the upmixing steps of: (i) computing a second frequency-domain representation of a first input channel; and (ii) computing an output channel on the basis of the first and second frequency-domain representations of the first input channel, the first frequency-domain representation of the second input channel and a complex prediction coefficient. The upmixing can be suspended responsive to control data.
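The upmixing steps (i) and (ii) can be illustrated with a small numpy sketch. This is a schematic under assumptions: the second frequency-domain representation `mid_imag` (in practice an MDST estimate derived from the MDCT) is simply supplied as an input, and the mid/side convention and variable names are hypothetical.

```python
import numpy as np

def upmix(mid, mid_imag, residual, alpha, prediction_active=True):
    """Reconstruct left/right spectra from complex-prediction stereo data.

    mid        : first frequency-domain representation of the first channel
    mid_imag   : second representation of the same channel (e.g. MDST estimate)
    residual   : first frequency-domain representation of the second channel
    alpha      : complex prediction coefficient
    prediction_active : control data; when False, prediction upmixing is
                        suspended and plain mid/side decoding is used.
    """
    if prediction_active:
        side = residual + alpha.real * mid + alpha.imag * mid_imag
    else:
        side = residual
    return mid + side, mid - side  # left, right

# Round trip: encode left/right into (mid, residual), then upmix back.
rng = np.random.default_rng(2)
left = rng.standard_normal(64)
right = rng.standard_normal(64)
alpha = complex(0.3, -0.2)

mid = 0.5 * (left + right)
side = 0.5 * (left - right)
mid_imag = rng.standard_normal(64)  # stand-in for the MDST estimate
residual = side - alpha.real * mid - alpha.imag * mid_imag  # encoder side

l_hat, r_hat = upmix(mid, mid_imag, residual, alpha)
```

Because the encoder removes exactly the prediction term that the decoder adds back, the round trip is lossless regardless of what stands in for `mid_imag`, which is why the scheme only needs the prediction coefficient and the residual in the bitstream.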
SYSTEM FOR CONVERTING VIBRATION TO VOICE FREQUENCY WIRELESSLY
The present application discloses a system and method for wirelessly converting vibration to voice frequency. First vibration variation data and voice frequency variation data of a vocal vibration part are sensed during a first sensing period, and voice frequency reference data is obtained from the voice frequency variation data and the first vibration result. A second vibration result, obtained during a second sensing period, is converted into a voice frequency output signal, which is output as a voice signal corresponding to the voice frequency variation result. The present application thus provides a voice signal close to a human voice.
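The calibrate-then-convert flow can be sketched as a toy mapping. This is purely illustrative and not the patented hardware method: the assumption here is that the "reference data" reduces to a linear fit from vibration amplitude to voice frequency, and all variable names and values are hypothetical.

```python
import numpy as np

def calibrate(vibration_ref, frequency_ref):
    # First sensing period: fit a linear map from vibration readings to
    # voice frequency (a toy stand-in for the voice frequency reference data).
    slope, intercept = np.polyfit(vibration_ref, frequency_ref, 1)
    return slope, intercept

def convert(vibration, calibration):
    # Second sensing period: map new vibration readings to a
    # voice-frequency output signal using the stored reference.
    slope, intercept = calibration
    return slope * np.asarray(vibration) + intercept

vib_ref = np.array([1.0, 2.0, 3.0, 4.0])            # first-period vibration
freq_ref = np.array([100.0, 200.0, 300.0, 400.0])   # matching voice frequencies
cal = calibrate(vib_ref, freq_ref)
voice_freq = convert([2.5, 3.5], cal)               # second-period conversion
```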
METHOD AND SYSTEM FOR PROCESSING SPEECH SIGNAL
Embodiments of the present disclosure provide methods and systems for processing a speech signal. The method can include: processing the speech signal to generate a plurality of speech frames; generating a first number of acoustic features based on the plurality of speech frames using a frame shift at a given frequency; and generating a second number of posteriori probability vectors based on the first number of acoustic features using an acoustic model, wherein each of the posteriori probability vectors comprises probabilities of the acoustic features corresponding to a plurality of modeling units, respectively.
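The framing and posterior-generation steps can be sketched as follows. The feature choice (log-magnitude spectrum), the single linear layer standing in for a trained acoustic model, and the frame sizes (25 ms window, 10 ms shift at 16 kHz) and 40 modeling units are all assumptions for illustration.

```python
import numpy as np

def frame_signal(signal, frame_len, frame_shift):
    # Slice the signal into overlapping frames at the given frame shift.
    starts = range(0, len(signal) - frame_len + 1, frame_shift)
    return np.stack([signal[s:s + frame_len] for s in starts])

def acoustic_model(features, W, b):
    # Stand-in for a trained model: one linear layer plus a softmax over
    # hypothetical modeling units (e.g. phones or senones), so each row
    # is a posteriori probability vector.
    logits = features @ W + b
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(3)
signal = rng.standard_normal(16000)                             # 1 s at 16 kHz
frames = frame_signal(signal, frame_len=400, frame_shift=160)   # 25 ms / 10 ms
feats = np.log(np.abs(np.fft.rfft(frames, axis=1)) + 1e-12)     # toy features

num_units = 40                                                  # hypothetical
W = rng.standard_normal((feats.shape[1], num_units)) * 0.01
b = np.zeros(num_units)
posteriors = acoustic_model(feats, W, b)
```

Note that the "first number" of acoustic features is fixed by the frame shift: here (16000 - 400) // 160 + 1 = 98 frames, and the model emits one 40-way probability vector per frame.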
Linear prediction coefficient conversion device and linear prediction coefficient conversion method
The purpose of the present invention is to estimate, with a small amount of computation, a linear prediction synthesis filter after conversion of an internal sampling frequency. A linear prediction coefficient conversion device is a device that converts first linear prediction coefficients calculated at a first sampling frequency to second linear prediction coefficients at a second sampling frequency different from the first sampling frequency, which includes a means for calculating, on the real axis of the unit circle, a power spectrum corresponding to the second linear prediction coefficients at the second sampling frequency based on the first linear prediction coefficients or an equivalent parameter, a means for calculating, on the real axis of the unit circle, autocorrelation coefficients from the power spectrum, and a means for converting the autocorrelation coefficients to the second linear prediction coefficients at the second sampling frequency.
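The three means described, power spectrum from LPC coefficients, autocorrelation from the power spectrum on a cosine grid, and conversion back to LPC coefficients, can be sketched end to end. This is a simplified illustration, not the patented low-complexity procedure: it demonstrates the pipeline as a round trip at a single sampling rate with a first-order filter, and the grid size is arbitrary. For an actual rate conversion, the grid would cover the bandwidth of the second sampling frequency.

```python
import numpy as np

def lpc_power_spectrum(a, omegas):
    # P(w) = 1 / |A(e^{jw})|^2 for the all-pole synthesis filter 1/A(z).
    k = np.arange(len(a))
    A = np.exp(-1j * np.outer(omegas, k)) @ a
    return 1.0 / np.abs(A) ** 2

def autocorr_from_spectrum(P, omegas, max_lag):
    # r(k) = mean_i P(w_i) cos(k w_i); cosine terms suffice since P is even.
    return np.array([np.mean(P * np.cos(k * omegas))
                     for k in range(max_lag + 1)])

def levinson(r, order):
    # Levinson-Durbin recursion: autocorrelation -> LPC coefficients.
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err
        new_a = a.copy()
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= 1.0 - k * k
    return a

omegas = 2 * np.pi * np.arange(1024) / 1024
a_in = np.array([1.0, -0.5])          # first-order filter, pole at z = 0.5
P = lpc_power_spectrum(a_in, omegas)
r = autocorr_from_spectrum(P, omegas, max_lag=1)
a_out = levinson(r, order=1)
```

The round trip recovers the input coefficients, confirming that the spectrum-to-autocorrelation-to-LPC chain is consistent; the patent's contribution is making the middle step cheap by evaluating on the real axis.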
Audio upmixer operable in prediction or non-prediction mode
The invention provides methods and devices for outputting a stereo audio signal having a left channel and a right channel. The apparatus includes a demultiplexer, a decoder, and an upmixer. The upmixer is configured to operate in either a prediction mode or a non-prediction mode, based on a parameter encoded in the audio bitstream.
Method and system for dereverberation of speech signals
A system and method for reverberation reduction is disclosed. A first deep neural network (DNN) produces a first estimate of a target direct-path signal from a mixture of acoustic signals that includes the target direct-path signal and its reverberation. A filter modeling a room impulse response (RIR) for the first estimate is estimated, such that the filter, when applied to the first estimate of the target direct-path signal, generates a result closest, according to a distance function, to the residual between the mixture and the first estimate. A mixture with reduced reverberation of the target direct-path signal is obtained by removing the result of applying the filter to the first estimate from the received mixture. A second DNN then produces a second estimate of the target direct-path signal from the mixture with reduced reverberation.
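The filter-estimation and subtraction steps can be sketched with a least-squares FIR fit. This is a simplified illustration under stated assumptions: the squared error stands in for the distance function, an oracle direct-path signal stands in for the first DNN's estimate, the toy RIR tail is synthetic, and the second DNN is omitted.

```python
import numpy as np

def estimate_reverb(estimate, residual, taps, delay=1):
    # Least-squares fit of an FIR filter h so that h applied to the first
    # estimate is closest to the residual (mixture - estimate) under a
    # squared-error distance; returns the filtered result.
    N = len(residual)
    X = np.zeros((N, taps))
    for t in range(taps):
        d = delay + t
        X[d:, t] = estimate[:N - d]   # estimate delayed by d samples
    h, *_ = np.linalg.lstsq(X, residual, rcond=None)
    return X @ h

def reduce_reverberation(mixture, first_estimate, taps, delay=1):
    residual = mixture - first_estimate
    reverb_estimate = estimate_reverb(first_estimate, residual, taps, delay)
    return mixture - reverb_estimate   # fed to the second DNN in the patent

rng = np.random.default_rng(4)
direct = rng.standard_normal(2000)                 # target direct-path signal
rir_tail = 0.5 * 0.7 ** np.arange(8)               # toy late reflections
reverb = np.convolve(direct, np.concatenate(([0.0], rir_tail)))[:2000]
mixture = direct + reverb
# Oracle stand-in for the first DNN's estimate:
cleaned = reduce_reverberation(mixture, direct, taps=8)
```

With a perfect first estimate the least-squares filter recovers the RIR tail exactly, so the output equals the direct-path signal; in practice the first DNN's estimate is imperfect, which is why the second DNN refines the result.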