Audio encoding/decoding based on an efficient representation of auto-regressive coefficients
11011181 · 2021-05-18
Assignee
Inventors
CPC classification
G10L19/06
PHYSICS
International classification
G10L19/06
PHYSICS
Abstract
An encoder for encoding a parametric spectral representation (ƒ) of auto-regressive coefficients that partially represent an audio signal. The encoder includes a low-frequency encoder configured to quantize elements of a part of the parametric spectral representation that correspond to a low-frequency part of the audio signal. It also includes a high-frequency encoder configured to encode a high-frequency part (ƒ.sup.H) of the parametric spectral representation (ƒ) by weighted averaging based on the quantized elements ({circumflex over (ƒ)}.sup.L) flipped around a quantized mirroring frequency ({circumflex over (ƒ)}.sub.m), which separates the low-frequency part from the high-frequency part, and a frequency grid determined from a frequency grid codebook in a closed-loop search procedure. Also described are a corresponding decoder, corresponding encoding/decoding methods, and UEs including such an encoder/decoder.
Claims
1. A method, comprising: encoding an audio signal, wherein encoding the audio signal comprises obtaining a parametric spectral representation (ƒ) of auto-regressive coefficients (a) that partially represent the audio signal, encoding a low-frequency part (ƒ.sup.L) of the parametric spectral representation (ƒ) by quantizing coefficients of the parametric spectral representation that correspond to a low-frequency part of the audio signal, and encoding a high-frequency part (ƒ.sup.H) of the parametric spectral representation (ƒ) by weighted averaging based on the quantized coefficients ({circumflex over (ƒ)}.sup.L) flipped around a quantized mirroring frequency ({circumflex over (ƒ)}.sub.m), which separates the low-frequency part from the high-frequency part, and a frequency grid obtained from a frequency grid codebook in a closed-loop search procedure; and outputting, for transmission to a decoder, at least one quantization index (I.sub.ƒL) representing the quantized coefficients ({circumflex over (ƒ)}.sup.L), a quantization index (I.sub.m) representing the quantized mirroring frequency ({circumflex over (ƒ)}.sub.m) and a quantization index (I.sub.g) representing a frequency grid (g.sup.opt).
2. The method of claim 1, further comprising transmitting encoded audio to a decoder, the encoded audio comprising the at least one quantization index (I.sub.ƒL), the quantization index (I.sub.m), and the quantization index (I.sub.g).
3. The method of claim 1, wherein encoding the audio signal further comprises quantizing the mirroring frequency {circumflex over (ƒ)}.sub.m in accordance with:
{circumflex over (ƒ)}.sub.m=Q(ƒ(M/2)−{circumflex over (ƒ)}(M/2−1))+{circumflex over (ƒ)}(M/2−1), where Q denotes quantization of the expression in the adjacent parenthesis, M denotes the total number of coefficients in the parametric spectral representation, ƒ(M/2) denotes the first coefficient in the high-frequency part, and {circumflex over (ƒ)}(M/2−1) denotes the last quantized coefficient in the low-frequency part.
4. The method of claim 3, wherein encoding the audio signal further comprises flipping the quantized coefficients of the low frequency part (ƒ.sup.L) of the parametric spectral representation (ƒ) around the quantized mirroring frequency {circumflex over (ƒ)}.sub.m in accordance with:
ƒ.sub.flip(k)=2{circumflex over (ƒ)}.sub.m−{circumflex over (ƒ)}(M/2−1−k), 0≤k≤M/2−1, where {circumflex over (ƒ)}(M/2−1−k) denotes quantized coefficient M/2−1−k.
5. The method of claim 4, wherein encoding the audio signal further comprises rescaling the flipped coefficients ƒ.sub.flip(k) in accordance with:
6. The method of claim 5, wherein encoding the audio signal further comprises rescaling the frequency grids g.sup.i from the frequency grid codebook to fit into the interval between the last quantized coefficient {circumflex over (ƒ)}(M/2−1) in the low-frequency part and a maximum grid point value g.sub.max in accordance with:
{tilde over (g)}.sup.i(k)=g.sup.i(k)·(g.sub.max−{circumflex over (ƒ)}(M/2−1))+{circumflex over (ƒ)}(M/2−1).
7. The method of claim 6, wherein encoding the audio signal further comprises weighted averaging of the flipped and rescaled coefficients {tilde over (ƒ)}.sub.flip(k) and the rescaled frequency grids {tilde over (g)}.sup.i(k) in accordance with:
ƒ.sub.smooth.sup.i(k)=[1−λ(k)]{tilde over (ƒ)}.sub.flip(k)+λ(k){tilde over (g)}.sup.i(k) where λ(k) and [1−λ(k)] are predefined weights.
8. The method of claim 7, wherein encoding the audio signal further comprises selecting a frequency grid g.sup.opt, where the index opt satisfies the criterion:
9. The method of claim 8, wherein M=10, g.sub.max=0.5, and the weights λ(k) are defined as λ={0.2, 0.35, 0.5, 0.75, 0.8}.
10. The method of claim 1, wherein the encoding of the parametric spectral representation (ƒ) of auto-regressive coefficients is performed on a line spectral frequencies representation of the auto-regressive coefficients.
11. An encoding apparatus, comprising: an audio encoding circuit configured to: encode an audio signal by obtaining a parametric spectral representation (ƒ) of auto-regressive coefficients (a) that partially represent the audio signal, encoding a low-frequency part (ƒ.sup.L) of the parametric spectral representation (ƒ) by quantizing coefficients of the parametric spectral representation that correspond to a low-frequency part of the audio signal, and encoding a high-frequency part (ƒ.sup.H) of the parametric spectral representation (ƒ) by weighted averaging based on the quantized coefficients ({circumflex over (ƒ)}.sup.L) flipped around a quantized mirroring frequency ({circumflex over (ƒ)}.sub.m), which separates the low-frequency part from the high-frequency part, and a frequency grid obtained from a frequency grid codebook in a closed-loop search procedure; and output, for transmission to a decoder, at least one quantization index (I.sub.ƒL) representing the quantized coefficients ({circumflex over (ƒ)}.sup.L), a quantization index (I.sub.m) representing the quantized mirroring frequency ({circumflex over (ƒ)}.sub.m), and a quantization index (I.sub.g) representing a frequency grid (g.sup.opt).
12. The encoding apparatus of claim 11, further comprising output circuitry configured to transmit encoded audio to a decoder, the encoded audio comprising the at least one quantization index (I.sub.ƒL), the quantization index (I.sub.m), and the quantization index (I.sub.g).
13. The encoding apparatus of claim 11, wherein the audio encoding circuit is further configured to quantize the mirroring frequency {circumflex over (ƒ)}.sub.m in accordance with:
{circumflex over (ƒ)}.sub.m=Q(ƒ(M/2)−{circumflex over (ƒ)}(M/2−1))+{circumflex over (ƒ)}(M/2−1), where Q denotes quantization of the expression in the adjacent parenthesis, M denotes the total number of coefficients in the parametric spectral representation, ƒ(M/2) denotes the first coefficient in the high-frequency part, and {circumflex over (ƒ)}(M/2−1) denotes the last quantized coefficient in the low-frequency part.
14. The encoding apparatus of claim 13, wherein the audio encoding circuit is further configured to flip the quantized coefficients of the low frequency part (ƒ.sup.L) of the parametric spectral representation (ƒ) around the quantized mirroring frequency {circumflex over (ƒ)}.sub.m, in accordance with:
ƒ.sub.flip(k)=2{circumflex over (ƒ)}.sub.m−{circumflex over (ƒ)}(M/2−1−k), 0≤k≤M/2−1 where {circumflex over (ƒ)}(M/2−1−k) denotes the quantized coefficient M/2−1−k.
15. The encoding apparatus of claim 14, wherein the audio encoding circuit is further configured to rescale the flipped coefficients ƒ.sub.flip(k) in accordance with:
16. The encoding apparatus of claim 15, wherein the audio encoding circuit is further configured to rescale the frequency grids g.sup.i from the frequency grid codebook to fit into the interval between the last quantized coefficient {circumflex over (ƒ)}(M/2−1) in the low-frequency part and a maximum grid point value g.sub.max in accordance with:
{tilde over (g)}.sup.i(k)=g.sup.i(k)·(g.sub.max−{circumflex over (ƒ)}(M/2−1))+{circumflex over (ƒ)}(M/2−1).
17. The encoding apparatus of claim 16, wherein the audio encoding circuit is further configured to perform weighted averaging of the flipped and rescaled coefficients {tilde over (ƒ)}.sub.flip(k) and the rescaled frequency grids {tilde over (g)}.sup.i(k) in accordance with:
ƒ.sub.smooth.sup.i(k)=[1−λ(k)]{tilde over (ƒ)}.sub.flip(k)+λ(k){tilde over (g)}.sup.i(k) where λ(k) and [1−λ(k)] are predefined weights.
18. The encoding apparatus of claim 17, wherein the audio encoding circuit is further configured to select a frequency grid g.sup.opt, where the index opt satisfies the criterion:
19. The encoding apparatus of claim 18, wherein M=10, g.sub.max=0.5, and the weights λ(k) are defined as λ={0.2, 0.35, 0.5, 0.75, 0.8}.
20. The encoding apparatus of claim 11, wherein the audio encoding circuit is configured to perform encoding of the parametric spectral representation (ƒ) of auto-regressive coefficients on a line spectral frequencies representation of the auto-regressive coefficients.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The disclosed technology, together with further objects and advantages thereof, may best be understood by making reference to the following description taken together with the accompanying drawings, in which:
DETAILED DESCRIPTION
(16) The disclosed technology requires as input a vector a of AR coefficients (another commonly used name is linear prediction (LP) coefficients). These are typically obtained by first computing the autocorrelations r(j) of the windowed audio segment s(n), n=1, . . . , N, i.e.:
r(j)=Σ.sub.n=1.sup.N−js(n)s(n+j), 0≤j≤M (1)
where M is the pre-defined model order. The AR coefficients a are then obtained from the autocorrelation sequence r(j) through the Levinson-Durbin algorithm [3].
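To make this computation concrete, a minimal Python sketch follows (illustrative only, not part of the disclosure; the function names are hypothetical). It computes the autocorrelations r(j) of a windowed segment and solves for the AR coefficients a with the Levinson-Durbin recursion referenced in [3]:

```python
def autocorrelation(s, M):
    """r(j) = sum_n s(n) s(n+j) for j = 0..M."""
    N = len(s)
    return [sum(s[n] * s[n + j] for n in range(N - j)) for j in range(M + 1)]

def levinson_durbin(r):
    """Solve the normal equations for AR coefficients a_1..a_M from r(0..M)."""
    M = len(r) - 1
    a = [0.0] * (M + 1)
    a[0] = 1.0
    err = r[0]                               # prediction error energy
    for i in range(1, M + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / err                       # reflection coefficient
        new_a = a[:]
        for j in range(1, i):
            new_a[j] = a[j] + k * a[i - j]
        new_a[i] = k
        a = new_a
        err *= (1.0 - k * k)
    return a[1:], err                        # a_1..a_M and residual energy
```

For the exact autocorrelation sequence of a first-order AR process, the recursion recovers the single coefficient and leaves the higher-order terms at zero.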
(18) In an audio communication system AR coefficients have to be efficiently transmitted from the encoder to the decoder part of the system. In the disclosed technology this is achieved by quantizing only certain coefficients, and representing the remaining coefficients with only a small number of bits.
Encoder
(21) Although the disclosed technology will be described with reference to an LSF representation, the general concepts may also be applied to an alternative implementation in which the AR vector is converted to another parametric spectral representation, such as Line Spectral Pair (LSP) or Immitance Spectral Pairs (ISP) instead of LSF.
(22) Only the low-frequency LSF subvector ƒ.sup.L is quantized in step S5, and its quantization indices I.sub.ƒL are transmitted to the decoder. The high-frequency LSFs of the subvector ƒ.sup.H are not quantized, but only used in the quantization of a mirroring frequency ƒ.sub.m (to {circumflex over (ƒ)}.sub.m), and the closed loop search for an optimal frequency grid g.sup.opt from a set of frequency grids g.sup.i forming a frequency grid codebook, as described with reference to equations (2)-(13) below. The quantization indices I.sub.m and I.sub.g for the mirroring frequency and optimal frequency grid, respectively, represent the coded high-frequency LSF vector ƒ.sup.H and are transmitted to the decoder. The encoding of the high-frequency subvector ƒ.sup.H will occasionally be referred to as “extrapolation” in the following description.
(23) In the disclosed embodiment quantization is based on a set of scalar quantizers (SQs) individually optimized on the statistical properties of the above parameters. In an alternative implementation the LSF elements could be sent to a vector quantizer (VQ) or one can even train a VQ for the combined set of parameters (LSFs, mirroring frequency, and optimal grid).
(24) The low-frequency LSFs of subvector ƒ.sup.L are in step S6 flipped into the space spanned by the high-frequency LSFs of subvector ƒ.sup.H. This operation is illustrated in the accompanying drawings. First, the mirroring frequency ƒ.sub.m is quantized in accordance with:
{circumflex over (ƒ)}.sub.m=Q(ƒ(M/2)−{circumflex over (ƒ)}(M/2−1))+{circumflex over (ƒ)}(M/2−1) (2)
where ƒ denotes the entire LSF vector, and Q(⋅) is the quantization of the difference between the first element in ƒ.sup.H (namely ƒ(M/2)) and the last quantized element in ƒ.sup.L (namely {circumflex over (ƒ)}(M/2−1)), and where M denotes the total number of elements in the parametric spectral representation.
(25) Next the flipped LSFs ƒ.sub.flip(k) are calculated in accordance with:
ƒ.sub.flip(k)=2{circumflex over (ƒ)}.sub.m−{circumflex over (ƒ)}(M/2−1−k), 0≤k≤M/2−1 (3)
Then the flipped LSFs are rescaled so that they will be bound within the range [0 . . . 0.5] (as an alternative the range can be represented in radians as [0 . . . π]) in accordance with:
(26)
(27) The frequency grids g.sup.i are rescaled to fit into the interval between the last quantized LSF element {circumflex over (ƒ)}(M/2−1) and a maximum grid point value g.sub.max, i.e.:
{tilde over (g)}.sup.i(k)=g.sup.i(k)·(g.sub.max−{circumflex over (ƒ)}(M/2−1))+{circumflex over (ƒ)}(M/2−1) (5)
These flipped and rescaled coefficients {tilde over (ƒ)}.sub.flip(k) (collectively denoted {tilde over (ƒ)}.sup.H in the accompanying drawings) are then smoothed by weighted averaging with the rescaled frequency grids {tilde over (g)}.sup.i(k) in accordance with:
ƒ.sub.smooth(k)=[1−λ(k)]{tilde over (ƒ)}.sub.flip(k)+λ(k){tilde over (g)}.sup.i(k) (6)
where λ(k) and [1−λ(k)] are predefined weights.
(28) Since equation (6) includes a free index i, this means that a vector ƒ.sub.smooth(k) will be generated for each {tilde over (g)}.sup.i(k). Thus, equation (6) may be expressed as:
ƒ.sub.smooth.sup.i(k)=[1−λ(k)]{tilde over (ƒ)}.sub.flip(k)+λ(k){tilde over (g)}.sup.i(k) (7)
(29) The smoothing is performed in step S7 in a closed-loop search over all frequency grids g.sup.i, to find the one that minimizes a pre-defined criterion (described after equation (12) below).
(30) For M/2=5 the weights λ(k) in equation (7) can be chosen as:
λ={0.2,0.35,0.5,0.75,0.8} (8)
(31) In an embodiment these constants are perceptually optimized (different sets of values are tested, and the set that maximizes quality, as reported by a panel of listeners, is finally selected). Generally the values of the elements in λ increase as the index k increases. Since a higher index corresponds to a higher frequency, the higher frequencies of the resulting spectrum are more influenced by {tilde over (g)}.sup.i(k) than by {tilde over (ƒ)}.sub.flip (see equation (7)). The result of this smoothing or weighted averaging is a flatter spectrum towards the high frequencies (the spectrum structure potentially introduced by ƒ.sub.flip is progressively removed towards high frequencies).
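The flipping, grid rescaling, and weighted averaging of equations (3), (5) and (7) can be sketched as follows (an illustrative Python sketch with hypothetical function names; the rescaling of the flipped LSFs in equation (4) is not reproduced in this text and is therefore omitted here):

```python
def flip_lsf(f_hat_low, f_m_hat):
    """Equation (3): mirror the quantized low-band LSFs around f_m."""
    half = len(f_hat_low)  # M/2 elements
    return [2.0 * f_m_hat - f_hat_low[half - 1 - k] for k in range(half)]

def rescale_grid(grid, f_last, g_max):
    """Equation (5): map a template grid on [0, 1] into [f_last, g_max]."""
    return [g * (g_max - f_last) + f_last for g in grid]

def smooth(f_flip_tilde, grid_tilde, lam):
    """Equation (7): weighted average of flipped LSFs and grid points."""
    return [(1.0 - l) * f + l * g
            for f, g, l in zip(f_flip_tilde, grid_tilde, lam)]
```

With the example weights λ={0.2, 0.35, 0.5, 0.75, 0.8}, the grid dominates at the upper end of the band, which is what flattens the extrapolated spectrum towards high frequencies.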
(32) Here g.sub.max is selected close to but less than 0.5. In this example g.sub.max is selected equal to 0.49.
(33) The method in this example uses 4 trained grids g.sup.i (fewer or more grids are possible). Template grid vectors on the range [0 . . . 1], pre-stored in memory, are of the form:
(34)
(35) If we assume that the position of the last quantized LSF coefficient {circumflex over (ƒ)}(M/2−1) is 0.25, the rescaled grid vectors take the form:
(36)
(37) An example of the effect of smoothing the flipped and rescaled LSF coefficients to the grid points is illustrated in the accompanying drawings.
(38) If g.sub.max=0.5 instead of 0.49, the frequency grid codebook may instead be formed by:
(39)
(40) If we again assume that the position of the last quantized LSF coefficient {circumflex over (ƒ)}(M/2−1) is 0.25, the rescaled grid vectors take the form:
(41)
(42) It is noted that the rescaled grids {tilde over (g)}.sup.i may be different from frame to frame, since {circumflex over (ƒ)}(M/2−1) in rescaling equation (5) may not be constant but vary with time. However, the codebook formed by the template grids g.sup.i is constant. In this sense the rescaled grids {tilde over (g)}.sup.i may be considered as an adaptive codebook formed from a fixed codebook of template grids g.sup.i.
(43) The LSF vectors ƒ.sup.i.sub.smooth created by the weighted sum in (7) are compared to the target LSF vector ƒ.sup.H, and the optimal grid g.sup.opt is selected as the one that minimizes the mean-squared error (MSE) between these two vectors. The index opt of this optimal grid may mathematically be expressed as:
opt=argmin.sub.iΣ.sub.k=0.sup.M/2−1(ƒ.sub.smooth.sup.i(k)−ƒ.sup.H(k)).sup.2 (13)
(45) where ƒ.sup.H(k) is a target vector formed by the elements of the high-frequency part of the parametric spectral representation.
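The closed-loop selection of the optimal grid by this MSE criterion can be sketched as follows (illustrative Python with a hypothetical function name):

```python
def select_grid(f_smooth_all, f_high):
    """Return the index i minimizing sum_k (f_smooth_i(k) - f_H(k))^2."""
    def mse(f_i):
        return sum((a - b) ** 2 for a, b in zip(f_i, f_high))
    return min(range(len(f_smooth_all)), key=lambda i: mse(f_smooth_all[i]))
```

Only the winning index is transmitted, so the decoder never needs to repeat this search.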
(46) In an alternative implementation one can use more advanced error measures that mimic spectral distortion (SD), e.g., inverse harmonic mean or other weighting on the LSF domain.
(47) In an embodiment the frequency grid codebook is obtained with a K-means clustering algorithm on a large set of LSF vectors, which have been extracted from a speech database. The grid vectors in equations (9) and (11) are selected as the ones that, after rescaling in accordance with equation (5) and weighted averaging with {tilde over (ƒ)}.sub.flip in accordance with equation (7), minimize the squared distance to ƒ.sup.H. In other words, these grid vectors, when used in equation (7), give the best representation of the high-frequency LSF coefficients.
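A minimal K-means sketch of the kind of codebook training described here (illustrative Python; the initialization strategy, data, and names are assumptions, and a real training run would use normalized grid vectors derived from a speech database):

```python
def kmeans(vectors, k, iters=20):
    """Cluster training vectors into k centroids (the codebook entries)."""
    centroids = vectors[:k]                  # simple initialization
    for _ in range(iters):
        # Assign each vector to its nearest centroid (squared distance).
        clusters = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(v, centroids[c])))
            clusters[i].append(v)
        # Recompute each centroid as the mean of its cluster.
        for c in range(k):
            if clusters[c]:
                n = len(clusters[c])
                centroids[c] = [sum(col) / n for col in zip(*clusters[c])]
    return centroids
```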
(50) The quantized low-frequency subvector {circumflex over (ƒ)}.sup.L and the not yet encoded high-frequency subvector ƒ.sup.H are forwarded to the high-frequency encoder 12. A mirroring frequency calculator 18 is configured to calculate the quantized mirroring frequency {circumflex over (ƒ)}.sub.m in accordance with equation (2). The dashed lines indicate that only the last quantized element {circumflex over (ƒ)}(M/2−1) in {circumflex over (ƒ)}.sup.L and the first element ƒ(M/2) in ƒ.sup.H are required for this. The quantization index I.sub.m representing the quantized mirroring frequency {circumflex over (ƒ)}.sub.m is outputted for transmission to the decoder.
(51) The quantized mirroring frequency {circumflex over (ƒ)}.sub.m is forwarded to a quantized low-frequency subvector flipping unit 20 configured to flip the elements of the quantized low-frequency subvector {circumflex over (ƒ)}.sup.L around the quantized mirroring frequency {circumflex over (ƒ)}.sub.m in accordance with equation (3). The flipped elements ƒ.sub.flip(k) and the quantized mirroring frequency {circumflex over (ƒ)}.sub.m are forwarded to a flipped element rescaler 22 configured to rescale the flipped elements in accordance with equation (4).
(52) The frequency grids g.sup.i(k) are forwarded from frequency grid codebook 24 to a frequency grid rescaler 26, which also receives the last quantized element {circumflex over (ƒ)}(M/2−1) in {circumflex over (ƒ)}.sup.L. The rescaler 26 is configured to perform rescaling in accordance with equation (5).
(53) The flipped and rescaled LSFs {tilde over (ƒ)}.sub.flip(k) from flipped element rescaler 22 and the rescaled frequency grids {tilde over (g)}.sup.i(k) from frequency grid rescaler 26 are forwarded to a weighting unit 28, which is configured to perform a weighted averaging in accordance with equation (7). The resulting smoothed elements ƒ.sub.smooth.sup.i(k) and the high-frequency target vector ƒ.sup.H are forwarded to a frequency grid search unit 30 configured to select a frequency grid g.sup.opt in accordance with equation (13). The corresponding index I.sub.g is transmitted to the decoder.
Decoder
(55) The method steps performed at the decoder are illustrated by the embodiment in the accompanying drawings.
(56) In step S13 the quantized low-frequency part {circumflex over (ƒ)}.sup.L is reconstructed from a low-frequency codebook by using the received index I.sub.ƒL.
(57) The method steps performed at the decoder for reconstructing the high-frequency part {circumflex over (ƒ)}.sup.H are very similar to the encoder processing steps already described in equations (3)-(7).
(58) The flipping and rescaling steps performed at the decoder (at S14) are identical to the encoder operations, and therefore described exactly by equations (3)-(4).
(59) The steps (at S15) of rescaling the grid (equation (5)) and smoothing with it (equation (6)) require only slight modification in the decoder, because the closed-loop search over i is not performed; the decoder instead receives the optimal index opt from the bit stream. These equations take the following form:
{tilde over (g)}.sup.opt(k)=g.sup.opt(k)·(g.sub.max−{circumflex over (ƒ)}(M/2−1))+{circumflex over (ƒ)}(M/2−1) (14)
and
ƒ.sub.smooth(k)=[1−λ(k)]{tilde over (ƒ)}.sub.flip(k)+λ(k){tilde over (g)}.sup.opt(k) (15)
respectively. The vector ƒ.sub.smooth represents the high-frequency part {circumflex over (ƒ)}.sup.H of the decoded signal.
(60) Finally the low- and high-frequency parts {circumflex over (ƒ)}.sup.L, {circumflex over (ƒ)}.sup.H of the LSF vector are combined in step S16, and the resulting vector {circumflex over (ƒ)} is transformed to AR coefficients â in step S17.
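The decoder-side reconstruction in equations (3), (14), (15) and step S16 can be sketched as follows (illustrative Python with hypothetical names; as before, the rescaling of the flipped LSFs in equation (4) is not reproduced in this text and is omitted):

```python
def decode_lsf(f_hat_low, f_m_hat, grid_opt, lam, g_max=0.5):
    """Rebuild the full LSF vector from the decoded low band, the decoded
    mirroring frequency, and the grid selected by the received index."""
    half = len(f_hat_low)
    # Equation (3): flip the decoded low-band LSFs around f_m.
    f_flip = [2.0 * f_m_hat - f_hat_low[half - 1 - k] for k in range(half)]
    # Equation (14): rescale the selected template grid.
    f_last = f_hat_low[-1]
    g_tilde = [g * (g_max - f_last) + f_last for g in grid_opt]
    # Equation (15): weighted averaging with the predefined weights lam.
    f_high = [(1.0 - l) * f + l * g
              for f, g, l in zip(f_flip, g_tilde, lam)]
    # Step S16: concatenate low- and high-frequency parts.
    return f_hat_low + f_high
```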
(63) The steps, functions, procedures and/or blocks described herein may be implemented in hardware using any conventional technology, such as discrete circuit or integrated circuit technology, including both general-purpose electronic circuitry and application-specific circuitry.
(64) Alternatively, at least some of the steps, functions, procedures and/or blocks described herein may be implemented in software for execution by suitable processing equipment. This equipment may include, for example, one or several microprocessors, one or several Digital Signal Processors (DSPs), one or several Application Specific Integrated Circuits (ASICs), video-accelerated hardware, or one or several suitable programmable logic devices, such as Field Programmable Gate Arrays (FPGAs). Combinations of such processing elements are also feasible.
(65) It should also be understood that it may be possible to reuse the general processing capabilities already present in a UE. This may, for example, be done by reprogramming of the existing software or by adding new software components.
(70) In one example application the disclosed AR quantization-extrapolation scheme is used in a BWE context. In this case AR analysis is performed on a certain high frequency band, and AR coefficients are used only for the synthesis filter. Instead of being obtained with the corresponding analysis filter, the excitation signal for this high band is extrapolated from an independently coded low band excitation.
(71) In another example application the disclosed AR quantization-extrapolation scheme is used in an ACELP type coding scheme. ACELP coders model a speaker's vocal tract with an AR model. An excitation signal e(n) is generated by passing a waveform s(n) through a whitening filter e(n)=A(z)s(n), where A(z)=1+a.sub.1z.sup.−1+ . . . +a.sub.Mz.sup.−M is the AR model of order M. On a frame-by-frame basis a set of AR coefficients a=[a.sub.1a.sub.2 . . . a.sub.M].sup.T and the excitation signal are quantized, and quantization indices are transmitted over the network. At the decoder, synthesized speech is generated on a frame-by-frame basis by sending the reconstructed excitation signal through the reconstructed synthesis filter A(z).sup.−1.
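The whitening operation e(n)=A(z)s(n) can be sketched directly from the filter definition (illustrative Python, hypothetical function name):

```python
def whiten(s, a):
    """e(n) = s(n) + a_1 s(n-1) + ... + a_M s(n-M), i.e. e(n) = A(z) s(n)."""
    M = len(a)
    return [s[n] + sum(a[j] * s[n - 1 - j]
                       for j in range(M) if n - 1 - j >= 0)
            for n in range(len(s))]
```

The decoder applies the inverse operation, filtering the reconstructed excitation through the synthesis filter 1/A(z) to recover the waveform.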
(72) In a further example application the disclosed AR quantization-extrapolation scheme is used as an efficient way to parameterize the spectrum envelope of a transform audio codec. On a short-time basis the waveform is transformed to the frequency domain, and the frequency response of the AR coefficients is used to approximate the spectrum envelope and normalize the transformed vector (to create a residual vector). Next the AR coefficients and the residual vector are coded and transmitted to the decoder.
(73) It will be understood by those skilled in the art that various modifications and changes may be made to the disclosed technology without departure from the scope thereof, which is defined by the appended claims.
ABBREVIATIONS
(74) ACELP Algebraic Code Excited Linear Prediction
ASIC Application Specific Integrated Circuit
AR Auto Regression
BWE Bandwidth Extension
DSP Digital Signal Processor
FPGA Field Programmable Gate Array
ISP Immitance Spectral Pairs
LP Linear Prediction
LSF Line Spectral Frequencies
LSP Line Spectral Pair
MSE Mean Squared Error
SD Spectral Distortion
SQ Scalar Quantizer
UE User Equipment
VQ Vector Quantization
REFERENCES
(75) [1] 3GPP TS 26.090, “Adaptive Multi-Rate (AMR) speech codec; Transcoding functions”, p. 13, 2007 [2] N. Iwakami, et al., High-quality audio-coding at less than 64 kbit/s by using transform-domain weighted interleave vector quantization (TWINVQ), IEEE ICASSP, vol. 5, pp. 3095-3098, 1995 [3] J. Makhoul, “Linear prediction: A tutorial review”, Proc. IEEE, vol 63, p. 566, 1975 [4] P. Kabal and R. P. Ramachandran, “The computation of line spectral frequencies using Chebyshev polynomials”, IEEE Trans. on ASSP, vol. 34, no. 6, pp. 1419-1426, 1986