Patent classifications
G10L19/028
Encoding device, decoding device, encoding method, decoding method, and non-transitory computer-readable recording medium
An encoding device according to the disclosure includes a first encoding unit that encodes a low-band signal, having a frequency lower than or equal to a predetermined frequency, from a voice or audio input signal to generate a first encoded signal and a low-band decoded signal; a second encoding unit that encodes, on the basis of the low-band decoded signal, a high-band signal having a band higher than that of the low-band signal to generate a high-band encoded signal; and a first multiplexing unit that multiplexes the first encoded signal and the high-band encoded signal to generate and output an encoded signal. The second encoding unit calculates an energy ratio between a high-band noise component, which is a noise component of the high-band signal, and a high-band non-tonal component of a high-band decoded signal generated from the low-band decoded signal, and outputs the ratio as the high-band encoded signal.
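The core parameter transmitted by the second encoding unit is an energy ratio between two high-band components. A minimal sketch of such a ratio computation, assuming simple sum-of-squares energies and a small stabilizing constant (the patent does not fix the exact formula, and the function name and `eps` parameter are hypothetical):

```python
import numpy as np

def high_band_energy_ratio(high_band_noise, high_band_nontonal, eps=1e-12):
    """Energy ratio between the high-band noise component and the
    non-tonal component of the high-band decoded signal.
    Illustrative parameterization only."""
    e_noise = np.sum(np.asarray(high_band_noise) ** 2)
    e_nontonal = np.sum(np.asarray(high_band_nontonal) ** 2)
    # eps guards against division by zero for silent non-tonal content.
    return e_noise / (e_nontonal + eps)
```

The decoder would use this ratio to scale a locally generated high-band noise component so that its energy matches that of the original high band.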
Concept for audio encoding and decoding for audio channels and audio objects
Audio encoder for encoding audio input data to obtain audio output data includes an input interface for receiving a plurality of audio channels, a plurality of audio objects and metadata related to one or more of the plurality of audio objects; a mixer for mixing the plurality of objects and the plurality of channels to obtain a plurality of pre-mixed channels, each pre-mixed channel including audio data of a channel and audio data of at least one object; a core encoder for core encoding core encoder input data; and a metadata compressor for compressing the metadata related to the one or more of the plurality of audio objects, wherein the audio encoder is configured to operate in at least one mode of the group of two modes.
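The mixer described above combines each audio object into the channel bed so that every pre-mixed channel carries channel data plus object data. A minimal sketch, assuming the object metadata reduces to a per-(object, channel) gain matrix (the `gains` rendering matrix is a hypothetical stand-in for whatever the metadata actually describes):

```python
import numpy as np

def pre_mix(channels, objects, gains):
    """Mix audio objects into audio channels to form pre-mixed channels.

    channels: (n_ch, n_samples) channel bed
    objects:  (n_obj, n_samples) object signals
    gains:    (n_obj, n_ch) rendering gains derived from object metadata
    """
    # Each pre-mixed channel = original channel + gain-weighted object sum.
    return channels + gains.T @ objects
```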
NOISE FILLING IN MULTICHANNEL AUDIO CODING
In multichannel audio coding, improved coding efficiency is achieved by the following measure: the noise filling of zero-quantized scale factor bands is performed using noise filling sources other than artificially generated noise or a spectral replica. In particular, coding efficiency may be increased by performing the noise filling based on noise generated using spectral lines from a previous frame of the multichannel audio signal, or from a different channel of the current frame.
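The measure above can be sketched as copying spectral lines from a noise-filling source (a previous frame or another channel) into the zero-quantized bands, scaled by a transmitted noise level. Band boundaries and the scaling scheme here are illustrative assumptions, not the codec's actual syntax:

```python
import numpy as np

def noise_fill(current_spectrum, source_spectrum, zero_bands, noise_level):
    """Fill zero-quantized scale-factor bands of the current spectrum
    using spectral lines from a noise-filling source spectrum
    (previous frame or other channel), scaled by noise_level."""
    filled = np.asarray(current_spectrum, dtype=float).copy()
    for start, stop in zero_bands:
        # Replace the zeroed band with scaled source lines instead of
        # artificial random noise or a spectral replica of this channel.
        filled[start:stop] = noise_level * np.asarray(source_spectrum)[start:stop]
    return filled
```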
BURST FRAME ERROR HANDLING
Mechanisms for frame loss concealment are provided. A method is performed by a receiving entity. The method comprises adding, in association with constructing a substitution frame for a lost frame, a noise component to the substitution frame. The noise component has a frequency characteristic corresponding to a low-resolution spectral representation of a signal in a previously received frame.
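One way to realize such a noise component is to shape random-phase noise with a coarse magnitude envelope taken from a previously received frame. The band count and shaping method below are illustrative assumptions; the abstract only requires that the noise follow a low-resolution spectral representation:

```python
import numpy as np

def shaped_noise(prev_frame, n_bands=8, seed=0):
    """Generate noise whose coarse spectral shape follows a
    low-resolution magnitude envelope of a previously received frame."""
    n = len(prev_frame)
    spec = np.abs(np.fft.rfft(prev_frame))
    # Low-resolution envelope: one average magnitude per band.
    bands = np.array_split(spec, n_bands)
    env = np.concatenate([np.full(len(b), b.mean()) for b in bands])
    # Random phase, envelope-shaped magnitude.
    rng = np.random.default_rng(seed)
    phase = rng.uniform(0.0, 2.0 * np.pi, len(env))
    noise_spec = env * np.exp(1j * phase)
    return np.fft.irfft(noise_spec, n)
```

Adding this component to the substitution frame masks the spectral holes a bare waveform substitution would leave during burst losses.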
Method and apparatus for processing audio signal
A method and apparatus for processing an audio signal are disclosed. According to an example embodiment, a method of processing an audio signal may include acquiring a final audio signal for an initial audio signal using a plurality of neural network models that generate output audio signals by encoding and decoding input audio signals, calculating a difference between the initial audio signal and the final audio signal in a time domain, converting the initial audio signal and the final audio signal into Mel-spectra, calculating a difference between the Mel-spectra of the initial audio signal and the final audio signal in a frequency domain, training the plurality of neural network models based on the results calculated in the time domain and the frequency domain, and generating a new final audio signal, distinguished from the final audio signal, from the initial audio signal using the trained neural network models.
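The training signal described above combines a time-domain difference with a frequency-domain difference. A minimal sketch of such a combined objective, using a magnitude FFT as a simplified stand-in for the Mel-spectrum (the `alpha`/`beta` weights and the spectral proxy are assumptions, not the patent's exact loss):

```python
import numpy as np

def combined_loss(initial, final, alpha=1.0, beta=1.0):
    """Combined training objective: time-domain MSE plus a
    frequency-domain MSE between magnitude spectra of the
    initial and final signals."""
    initial = np.asarray(initial, dtype=float)
    final = np.asarray(final, dtype=float)
    time_loss = np.mean((initial - final) ** 2)
    spec_i = np.abs(np.fft.rfft(initial))
    spec_f = np.abs(np.fft.rfft(final))
    freq_loss = np.mean((spec_i - spec_f) ** 2)
    return alpha * time_loss + beta * freq_loss
```

In the described method this scalar would drive gradient updates of the cascaded encode/decode networks; a real implementation would compute the frequency term on Mel-filtered spectrograms rather than a raw FFT.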
Context-based entropy coding of sample values of a spectral envelope
An improved concept for coding sample values of a spectral envelope is obtained by combining spectrotemporal prediction on the one hand with context-based entropy coding of the residuals on the other hand, while determining the context for a current sample value depending on a measure of the deviation between a pair of already coded/decoded sample values of the spectral envelope in a spectrotemporal neighborhood of the current sample value. This combination of spectrotemporal prediction and context-based entropy coding of the prediction residuals, with the context selected according to the deviation measure, harmonizes with the nature of spectral envelopes.
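The prediction-plus-context step can be sketched as follows: predict the current envelope sample from its spectral and temporal neighbours, and pick an entropy-coding context from the deviation between those two already-coded neighbours. The averaging predictor and the deviation quantization step are illustrative assumptions; the actual scheme would fix both precisely:

```python
def predict_and_context(env, t, f, n_contexts=4, step=2):
    """Spectrotemporal prediction plus context selection for the
    envelope sample env[t][f] (frame t, scale-factor band f).

    Returns (prediction, context_index); the residual
    env[t][f] - prediction would then be entropy coded using the
    probability model of the selected context."""
    left = env[t][f - 1]   # already-coded neighbour: previous band, same frame
    up = env[t - 1][f]     # already-coded neighbour: same band, previous frame
    prediction = (left + up) // 2
    # Context depends on the deviation between the two neighbours:
    # smooth regions and transient regions get different models.
    deviation = abs(left - up)
    context = min(deviation // step, n_contexts - 1)
    return prediction, context
```

Selecting the context from the neighbour deviation lets the entropy coder use sharper probability models where the envelope is locally smooth and broader ones near transients, which is what makes the combination fit the statistics of spectral envelopes.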