G10L19/032

Correlating scene-based audio data for psychoacoustic audio coding

In general, techniques are described by which to correlate scene-based audio data for psychoacoustic audio coding. A device comprising a memory and one or more processors may be configured to perform the techniques. The memory may store a bitstream including a plurality of encoded correlated components of a soundfield represented by scene-based audio data. The one or more processors may perform psychoacoustic audio decoding with respect to one or more of the plurality of encoded correlated components to obtain a plurality of correlated components, and obtain, from the bitstream, an indication representative of how the one or more of the plurality of correlated components were reordered in the bitstream. The one or more processors may reorder, based on the indication, the plurality of correlated components to obtain a plurality of reordered components, and reconstruct, based on the plurality of reordered components, the scene-based audio data.
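The decoder-side reordering step described above can be sketched as follows. This is a minimal illustration, assuming the bitstream indication is a plain permutation list (entry p gives the original position of the component found at bitstream position p); the abstract does not fix the indication's actual syntax, and all names here are hypothetical.

```python
def reorder_components(decoded, order_indication):
    """Undo the encoder-side reordering of soundfield components.

    `order_indication` is modeled as a permutation: entry p gives the
    original position of the component found at bitstream position p.
    """
    reordered = [None] * len(decoded)
    for bitstream_pos, original_pos in enumerate(order_indication):
        reordered[original_pos] = decoded[bitstream_pos]
    return reordered
```

Reconstruction of the scene-based audio data would then operate on the reordered component list.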

BITRATE DISTRIBUTION IN IMMERSIVE VOICE AND AUDIO SERVICES

Embodiments are disclosed for bitrate distribution in immersive voice and audio services. In an embodiment, a method of encoding an IVAS bitstream comprises: receiving an input audio signal; downmixing the input audio signal into one or more downmix channels and spatial metadata; reading a set of one or more bitrates for the downmix channels and a set of quantization levels for the spatial metadata from a bitrate distribution control table; determining a combination of the one or more bitrates for the downmix channels; determining a metadata quantization level from the set of quantization levels using a bitrate distribution process; quantizing and coding the spatial metadata using the metadata quantization level; generating, using the combination of one or more bitrates, a downmix bitstream for the one or more downmix channels; and combining the downmix bitstream, the quantized and coded spatial metadata, and the set of quantization levels into the IVAS bitstream.
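The table-driven distribution step can be illustrated with a toy sketch. The control-table contents, the selection rule (prefer the finest metadata quantization level that still fits alongside a listed downmix-bitrate combination within the total budget), and all names below are assumptions for illustration; the abstract does not specify them.

```python
# Hypothetical bitrate distribution control table: for each total IVAS
# bitrate (bps), the candidate downmix-channel bitrate combinations and
# the set of metadata quantization levels the encoder may choose from.
CONTROL_TABLE = {
    32000: {"downmix_bitrates": [(24000,), (22000,)], "md_quant_levels": [8, 16]},
    64000: {"downmix_bitrates": [(48000,), (44000,)], "md_quant_levels": [16, 32]},
}

def distribute_bitrate(total_bitrate, metadata_cost_per_level):
    """Pick downmix bitrates and a metadata quantization level from the table."""
    entry = CONTROL_TABLE[total_bitrate]
    # Try the finest metadata quantization level first; accept the first
    # (combination, level) pair whose total cost fits the budget.
    for level in sorted(entry["md_quant_levels"], reverse=True):
        for combo in entry["downmix_bitrates"]:
            if sum(combo) + metadata_cost_per_level * level <= total_bitrate:
                return combo, level
    # Fall back to the smallest combination and coarsest level.
    return entry["downmix_bitrates"][-1], entry["md_quant_levels"][0]
```

The selected level would then drive quantization and coding of the spatial metadata, and the combination would drive the downmix bitstream generation.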

Linear prediction analysis device, method, program, and storage medium

An autocorrelation calculation unit 21 calculates an autocorrelation R.sub.O(i) from an input signal. A prediction coefficient calculation unit 23 performs linear prediction analysis by using a modified autocorrelation R′.sub.O(i) obtained by multiplying a coefficient w.sub.O(i) by the autocorrelation R.sub.O(i). It is assumed here that, for at least some orders i, the coefficient w.sub.O(i) corresponding to the order i increases monotonically with a value that is negatively correlated with a fundamental frequency of the input signal of the current frame or a past frame.
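A sketch of this pitch-adaptive lag windowing followed by Levinson-Durbin analysis is given below. The Gaussian window shape is an illustrative assumption; what matters for the abstract is only that, for each lag i ≥ 1, w(i) grows with the pitch period (a value negatively correlated with the fundamental frequency). Function names and the sampling setup are hypothetical.

```python
import numpy as np

def autocorrelation(x, max_order):
    """Autocorrelation R(i) of x for lags 0..max_order."""
    return np.array([np.dot(x[:len(x) - i], x[i:]) for i in range(max_order + 1)])

def levinson(r, order):
    """Levinson-Durbin recursion: LPC coefficients a and residual energy."""
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for m in range(1, order + 1):
        k = -np.dot(a[:m], r[m:0:-1]) / err
        a[:m + 1] += k * a[:m + 1][::-1]
        err *= (1.0 - k * k)
    return a, err

def modified_lpc(x, order, pitch_period):
    """LPC analysis on a pitch-adaptively windowed autocorrelation.

    The window width grows with the pitch period, so each w(i), i >= 1,
    increases monotonically as the fundamental frequency falls.
    """
    r = autocorrelation(x, order)
    w = np.exp(-0.5 * (np.arange(order + 1) / (pitch_period + 1.0)) ** 2)
    return levinson(r * w, order)
```

A longer pitch period widens the lag window, leaving more of the autocorrelation tail intact before the prediction coefficients are computed.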

Methods and apparatus systems for unified speech and audio decoding improvements

The present disclosure relates to an apparatus for decoding an encoded Unified Audio and Speech stream. The apparatus comprises a core decoder for decoding the encoded Unified Audio and Speech stream. The core decoder includes a fast Fourier transform, FFT, module implementation based on a Cooley-Tukey algorithm. The FFT module is configured to determine a discrete Fourier transform, DFT. Determining the DFT involves recursively breaking down the DFT into small FFTs based on the Cooley-Tukey algorithm and using radix-4 if a number of points of the FFT is a power of 4 and using mixed radix if the number is not a power of 4. Performing the small FFTs involves applying twiddle factors. Applying the twiddle factors involves referring to pre-computed values for the twiddle factors. The present disclosure further relates to an apparatus for decoding an encoded Unified Audio and Speech stream, in which the core decoder is configured to decode an LPC filter that has been quantized using a line spectral frequency, LSF, representation from the Unified Audio and Speech stream. Decoding the LPC filter from the Unified Audio and Speech stream comprises computing a first-stage approximation of an LSF vector, reconstructing a residual LSF vector, if an absolute quantization mode has been used for quantizing the LPC filter, determining inverse LSF weights for inverse weighting of the residual LSF vector by referring to pre-computed values for the inverse LSF weights or their respective corresponding LSF weights, inverse weighting the residual LSF vector by the determined inverse LSF weights, and calculating the LPC filter based on the inversely-weighted residual LSF vector and the first-stage approximation of the LSF vector. The present disclosure further relates to corresponding methods and storage media.
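The FFT strategy described above (radix-4 when the transform length is a power of 4, mixed radix otherwise, with twiddle factors looked up from precomputed tables) can be sketched recursively. This is a reference-style illustration, not the disclosed implementation; a production decoder would use an iterative, in-place kernel, and all names here are hypothetical.

```python
import cmath
from functools import lru_cache

@lru_cache(maxsize=None)
def twiddles(n):
    # Pre-computed twiddle factors W_n^k = exp(-2*pi*j*k/n), cached so the
    # recursion refers to stored values instead of recomputing them.
    return tuple(cmath.exp(-2j * cmath.pi * k / n) for k in range(n))

def is_power_of_4(n):
    return n > 0 and (n & (n - 1)) == 0 and (n & 0x55555555) != 0

def smallest_prime_factor(n):
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

def fft(x):
    """Cooley-Tukey DFT: radix-4 for power-of-4 lengths, else mixed radix."""
    n = len(x)
    if n == 1:
        return list(x)
    radix = 4 if is_power_of_4(n) else smallest_prime_factor(n)
    # Split into `radix` interleaved sub-DFTs and recurse.
    subs = [fft(x[r::radix]) for r in range(radix)]
    w, wr = twiddles(n), twiddles(radix)
    m = n // radix
    out = [0j] * n
    for q in range(radix):
        for k in range(m):
            # X[q*m + k] = sum_r W_radix^{q r} * W_n^{r k} * X_r[k]
            out[q * m + k] = sum(
                wr[(q * r) % radix] * w[(r * k) % n] * subs[r][k]
                for r in range(radix)
            )
    return out
```

For a length such as 12, the recursion falls back to mixed radix (factors 2 and 3); for 16 or 64 it stays radix-4 throughout.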

Stereo encoding method and stereo encoder

In a stereo encoding method, a channel combination encoding solution of a current frame is first obtained, and then a quantized channel combination ratio factor of the current frame and an encoding index of the quantized channel combination ratio factor are obtained based on the obtained channel combination encoding solution, so that the resulting primary channel signal and secondary channel signal of the current frame match the characteristics of the current frame.
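A toy sketch of ratio-factor quantization and the primary/secondary downmix follows. The uniform quantizer and the particular combination formulas are illustrative assumptions only; the abstract does not disclose the actual codebook or the exact primary/secondary equations, and all names are hypothetical.

```python
import numpy as np

def quantize_ratio(ratio, num_levels=16):
    """Uniformly quantize a channel-combination ratio in [0, 1].

    Returns the encoding index transmitted in the bitstream and the
    quantized ratio value it decodes to.
    """
    index = int(round(ratio * (num_levels - 1)))
    return index, index / (num_levels - 1)

def downmix(left, right, ratio_q):
    # Hypothetical combination: the quantized ratio weights how much of
    # each input channel enters the primary vs. secondary channel.
    primary = ratio_q * left + (1.0 - ratio_q) * right
    secondary = (1.0 - ratio_q) * left - ratio_q * right
    return primary, secondary
```

Transmitting only the index (rather than the ratio itself) keeps encoder and decoder on the same quantized value, so the decoder can invert the combination exactly.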
