G10L19/087

Downmixer and Method of Downmixing

A downmixer for downmixing a multi-channel signal having at least two channels includes: a weighting value estimator for estimating band-wise weighting values for the at least two channels; a spectral weighter for weighting spectral domain representations of the at least two channels using the band-wise weighting values; a converter for converting weighted spectral domain representations of the at least two channels into time representations of the at least two channels; and a mixer for mixing the time representations of the at least two channels to obtain a downmix signal.
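The processing chain above (band-wise weighting in the spectral domain, conversion back to the time domain, then mixing) can be sketched as follows. This is a minimal illustration under assumptions: the weighting rule (inverse band RMS), the number of bands, and the use of an FFT/IFFT pair are all assumed for the example and are not taken from the abstract.

```python
import numpy as np

def downmix(channels, n_bands=4):
    """Illustrative sketch: weight each channel per frequency band,
    convert back to a time representation, then mix to one signal.
    The weighting rule (inverse band RMS) is an assumption, not the
    patented estimator."""
    n = len(channels[0])
    specs = [np.fft.rfft(ch) for ch in channels]
    n_bins = specs[0].shape[0]
    edges = np.linspace(0, n_bins, n_bands + 1, dtype=int)
    times = []
    for spec in specs:
        w_spec = spec.copy()
        for b in range(n_bands):
            lo, hi = edges[b], edges[b + 1]
            band = spec[lo:hi]
            # Band-wise weighting value: inverse RMS so each band
            # contributes comparable energy (assumed rule).
            rms = np.sqrt(np.mean(np.abs(band) ** 2)) + 1e-12
            w_spec[lo:hi] = band / rms
        # Convert the weighted spectral representation back to time.
        times.append(np.fft.irfft(w_spec, n=n))
    # Mix the time representations to obtain the downmix signal.
    return np.mean(times, axis=0)
```

In this sketch the mixer is a simple average; the abstract does not specify the mixing rule, so any normalized linear combination would fit the same structure.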

Apparatus and method for synthesizing an audio signal, decoder, encoder, system and computer program

A method and an apparatus for synthesizing an audio signal are described. A spectral tilt is applied to the code of a codebook used for synthesizing a current frame of the audio signal. The applied spectral tilt is derived from the spectral tilt of the current frame of the audio signal. Further, an audio decoder operating in accordance with the inventive approach is described.
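One common way to realize a frame-dependent spectral tilt is a first-order filter whose coefficient tracks the frame's tilt. The sketch below assumes a lag-1 normalized autocorrelation as the tilt estimate and a first-order filter applied to the code vector; both choices are illustrative, not the method claimed in the abstract.

```python
import numpy as np

def apply_tilt(code, signal_frame):
    """Sketch: estimate the spectral tilt of the current frame and
    apply a matching first-order tilt filter to a codebook code vector.
    The estimator (lag-1 autocorrelation) and the filter form are
    assumptions for illustration."""
    # Tilt estimate: normalized lag-1 autocorrelation of the frame,
    # in [-1, 1]; positive values indicate a low-pass character.
    r0 = np.dot(signal_frame, signal_frame) + 1e-12
    r1 = np.dot(signal_frame[1:], signal_frame[:-1])
    tilt = r1 / r0
    # First-order tilt filter: y[n] = x[n] + tilt * x[n-1],
    # which emphasizes low frequencies when tilt > 0.
    tilted = np.empty_like(code, dtype=float)
    tilted[0] = code[0]
    tilted[1:] = code[1:] + tilt * code[:-1]
    return tilted
```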

HIGH-BAND SIGNAL GENERATION

A device for signal processing includes a memory and a processor. The memory is configured to store a parameter associated with a bandwidth-extended audio stream. The processor is configured to select a plurality of non-linear processing functions based at least in part on a value of the parameter. The processor is also configured to generate a high-band excitation signal based on the plurality of non-linear processing functions.
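The selection-then-generation structure in this abstract can be sketched as follows. The particular non-linear functions (rectification, squaring, clipping), the parameter threshold used for selection, and the way the outputs are combined are all assumptions made for the example; the abstract specifies only that a plurality of non-linear processing functions is selected based on the parameter value.

```python
import numpy as np

def high_band_excitation(low_band_exc, param):
    """Sketch: select several non-linear processing functions based on
    a stored parameter, then combine their outputs into a high-band
    excitation signal. Function set, selection rule, and combination
    are illustrative assumptions."""
    functions = {
        "abs": np.abs,                             # full-wave rectification
        "square": np.square,                       # squaring non-linearity
        "clip": lambda x: np.clip(x, -0.5, 0.5),   # hard clipping
    }
    # Selection rule (assumed): the parameter value picks a subset
    # of the available non-linearities.
    if param < 0.5:
        selected = [functions["abs"], functions["clip"]]
    else:
        selected = [functions["square"], functions["clip"]]
    outputs = [f(low_band_exc) for f in selected]
    exc = np.mean(outputs, axis=0)
    # Remove the DC component the non-linearities introduce.
    return exc - np.mean(exc)
```

In a real bandwidth-extension system the result would typically be spectrally shaped and gain-scaled before use; that stage is omitted here.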

Speech model parameter estimation and quantization

Quantizing speech model parameters includes, for each of multiple vectors of quantized excitation strength parameters, determining first and second errors between the first and second elements of a vector of excitation strength parameters and, respectively, the first and second elements of the vector of quantized excitation strength parameters, and determining a first energy and a second energy associated with, respectively, the first and second errors. First and second weights for, respectively, the first error and the second error are determined and used to produce first and second weighted errors, which are combined to produce a total error. The total errors of the multiple vectors of quantized excitation strength parameters are compared, and the vector of quantized excitation strength parameters that produces the smallest total error is selected to represent the vector of excitation strength parameters.
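The weighted codebook search described above can be sketched as a loop over candidate quantized vectors. The squared-error energy and the energy-dependent weighting rule below are illustrative assumptions; the abstract specifies only that per-element errors are weighted and combined into a total error, and that the candidate with the smallest total error is selected.

```python
import numpy as np

def select_quantized_vector(target, codebook):
    """Sketch of the weighted search: for each candidate quantized
    vector, compute element-wise errors, weight them, combine into a
    total error, and keep the candidate with the smallest total.
    The energy measure (squared error) and the weights are assumed."""
    best_idx, best_total = None, np.inf
    for idx, cand in enumerate(codebook):
        e1 = target[0] - cand[0]        # first-element error
        e2 = target[1] - cand[1]        # second-element error
        # Energies associated with the errors (assumed: squared error).
        en1, en2 = e1 * e1, e2 * e2
        # Weights (assumed rule): emphasize higher-energy target elements.
        w1 = 1.0 + target[0] ** 2
        w2 = 1.0 + target[1] ** 2
        # Combine the weighted errors into a total error.
        total = w1 * en1 + w2 * en2
        if total < best_total:
            best_idx, best_total = idx, total
    return best_idx
```

A candidate identical to the target yields a total error of zero and is therefore always selected, which matches the minimum-total-error criterion in the abstract.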
