Abstract
Methods and systems for advanced stereo processing of an audio signal are disclosed. The methods and systems include selecting a coding mode of either transform coding or linear predictive coding and performing advanced stereo processing when in the selected coding mode. Both encoding and decoding operations are provided.
Claims
1. A method for encoding a stereo input signal comprising a left channel and a right channel, and having a perceptual stereo image, the method comprising: selecting either a transform coding mode or a linear predictive coding mode as a selected coding mode; encoding the stereo input signal using only the selected coding mode to produce an encoded output signal; and generating a bitstream signal including the encoded output signal, wherein, if the linear predictive coding mode is selected, the encoding comprises: downmixing the stereo input signal to a mono signal, the mono signal being a sum of the left channel and the right channel, estimating stereo image parameters, for reconstructing a stereo signal that approximates the perceptual stereo image of the stereo input signal from the mono signal, generating a residual signal that indicates an error associated with representing the stereo signal by the mono signal and the estimated stereo image parameters, encoding the mono signal using linear predictive coding to produce an encoded mono signal, and outputting the encoded mono signal, the residual signal and the stereo image parameters as the encoded output signal, wherein, if the transform coding mode is selected, the encoding comprises: analyzing the stereo input signal by applying both mid/side stereo coding and left/right stereo coding and selecting either a mid/side stereo coding mode or a left/right stereo coding mode based on an estimated entropy for each stereo coding mode, encoding the stereo input signal using the selected stereo coding mode in a first frequency band to produce an encoded stereo signal in the first frequency band, downmixing the stereo input signal to a mono signal in a second frequency band, encoding the mono signal in the second frequency band using transform coding to produce an encoded mono signal in the second frequency band, and outputting the encoded stereo signal in the first frequency band and the encoded mono signal in the second
frequency band as the encoded output signal.
2. The method of claim 1 wherein the analyzing includes selecting which stereo coding mode would more efficiently code the stereo input signal.
3. The method of claim 1 wherein the selecting of either the transform coding mode or the linear predictive coding mode is dependent upon characteristics of the stereo input signal.
4. The method of claim 1 wherein the transform coding further comprises not encoding one or more subbands and generating side information for reconstruction of the one or more subbands.
5. The method of claim 4 wherein the side information includes a parameter used to determine a spectral envelope of the one or more subbands not encoded.
6. The method of claim 1 wherein the transform coding includes a psychoacoustic model.
7. The method of claim 1 wherein the estimating includes estimating the stereo image parameters in a plurality of frequency bands.
8. The method of claim 1 wherein a bandwidth of the first frequency band and a bandwidth of the second frequency band are determined based at least in part on a desired target bitrate.
9. The method of claim 1 wherein the linear predictive coding mode is selected when the stereo input signal is speech.
10. A non-transitory computer readable medium containing instructions that when executed by a processor perform the method of claim 1.
11. A device for encoding a stereo input signal comprising a left channel and a right channel, and having a perceptual stereo image, to produce an encoded output signal, the device comprising: a mode selector for selecting either a transform coding mode or a linear predictive coding mode; a transform encoder for encoding the stereo input signal if the selected coding mode is the transform coding mode but not if the selected coding mode is the linear predictive coding mode; a linear predictive encoder for encoding the stereo input signal if the selected coding mode is the linear predictive coding mode but not if the selected coding mode is the transform coding mode; and a bitstream generator for generating a bitstream signal including the encoded output signal, wherein the linear predictive encoder is configured to: downmix the stereo input signal to a mono signal, the mono signal being a sum of the left channel and the right channel, estimate stereo image parameters, for reconstructing a stereo signal that approximates the perceptual stereo image of the stereo input signal, from the mono signal, generate a residual signal that indicates an error associated with representing the stereo signal by the mono signal and the estimated stereo image parameters, encode the mono signal using linear predictive coding to produce an encoded mono signal, and output the encoded mono signal, the residual signal and the estimated stereo image parameters as the encoded output signal, wherein the transform encoder is configured to: analyze the stereo input signal by applying both mid/side stereo coding and left/right stereo coding and selecting either a mid/side stereo coding mode or a left/right stereo coding mode based on an estimated entropy for each stereo coding mode, encode the stereo input signal using the selected stereo coding mode in a first frequency band to produce an encoded stereo signal in the first frequency band, downmix the stereo input signal to a mono signal
in a second frequency band, encode the mono signal in the second frequency band using transform coding to produce an encoded mono signal in the second frequency band, and output the encoded stereo signal in the first frequency band and the encoded mono signal in the second frequency band as the encoded output signal.
12. A method for decoding a bitstream signal to produce a decoded output signal having a left channel and a right channel, the method comprising: extracting an encoded audio signal from the bitstream signal, the encoded audio signal generated by encoding an input stereo audio signal having a left input channel and a right input channel using a selected coding mode, wherein the selected coding mode is one of a transform coding mode or a linear predictive coding mode; decoding the encoded audio signal using only the selected coding mode to produce a decoded signal; and outputting the decoded signal as the decoded output signal, wherein, if the selected coding mode is the linear predictive coding mode, the decoding comprises: receiving an encoded mono signal, the encoded mono signal being a sum of the left input channel and the right input channel of the input stereo audio signal, decoding the encoded mono signal using linear predictive decoding to produce a decoded mono signal, extracting stereo image parameters and a residual signal from the bitstream signal for reconstructing a stereo audio signal that approximates a perceptual stereo image of the input stereo audio signal, wherein the residual signal indicates an error associated with representing the stereo audio signal by the mono signal and the stereo image parameters, reconstructing the stereo audio signal using the decoded mono signal, the residual signal and the stereo image parameters to produce a reconstructed stereo audio signal that approximates the perceptual stereo image of the input stereo audio signal, and outputting the reconstructed stereo audio signal as the decoded signal, wherein, if the selected coding mode is the transform coding mode, the decoding comprises: receiving a stereo signal in a first frequency band, the stereo signal generated using a selected stereo coding mode, the selected stereo coding mode including either mid/side stereo coding or left/right stereo coding, receiving an 
encoded mono signal in a second frequency band, decoding the stereo signal in the first frequency band using the selected stereo coding mode to produce a decoded stereo signal in the first frequency band, decoding the encoded mono signal in the second frequency band using transform decoding to produce a decoded mono signal in the second frequency band, and outputting the decoded stereo signal in the first frequency band and the decoded mono signal in the second frequency band as the decoded signal.
13. The method of claim 12 wherein the transform decoding further comprises extracting side information from the bitstream signal for reconstruction of one or more subbands not encoded.
14. The method of claim 13 wherein the side information includes a parameter used to determine a spectral envelope of the one or more subbands not encoded.
15. The method of claim 12 wherein the transform coding includes a psychoacoustic model.
16. The method of claim 12 wherein the stereo image parameters comprise stereo image parameters for each of a plurality of frequency bands.
17. The method of claim 12 wherein a bandwidth of the first frequency band and a bandwidth of the second frequency band are determined based at least in part on a desired target bitrate.
18. A device for decoding a bitstream signal to produce a decoded output signal having a left channel and a right channel, the device comprising: a demultiplexer for extracting an encoded audio signal from the bitstream signal, the encoded audio signal generated by encoding an input stereo audio signal having a left input channel and a right input channel using a selected coding mode, wherein the selected coding mode is one of a transform coding mode or a linear predictive coding mode; a transform decoder for decoding the encoded audio signal if the selected coding mode is the transform coding mode but not if the selected coding mode is the linear predictive coding mode; and a linear predictive decoder for decoding the encoded audio signal if the selected coding mode is the linear predictive coding mode but not if the selected coding mode is the transform coding mode, wherein the linear predictive decoder is configured to: receive an encoded mono signal, the encoded mono signal being a sum of the left input channel and the right input channel of the input stereo audio signal, decode the encoded mono signal using linear predictive decoding to produce a decoded mono signal, extract stereo image parameters and a residual signal from the bitstream signal for reconstructing a stereo audio signal that approximates a perceptual stereo image of the input stereo audio signal, wherein the residual signal indicates an error associated with representing the stereo audio signal by the mono signal and the stereo image parameters, reconstruct the stereo audio signal using the decoded mono signal, the residual signal and the stereo image parameters to produce a reconstructed stereo audio signal that approximates the perceptual stereo image of the input stereo audio signal, and output the reconstructed stereo audio signal as the decoded output signal, wherein the transform decoder is configured to: receive a stereo signal in a first frequency band, the stereo signal generated using a
selected stereo coding mode, the selected stereo coding mode including either a mid/side stereo coding mode or a left/right stereo coding mode, receive an encoded mono signal in a second frequency band, decode the stereo signal in the first frequency band using the selected stereo coding mode to produce a decoded stereo signal in the first frequency band, decode the encoded mono signal in the second frequency band using transform decoding to produce a decoded mono signal in the second frequency band, and output the decoded stereo signal in the first frequency band and the decoded mono signal in the second frequency band as the decoded output signal.
19. The device of claim 18 wherein the transform decoding further comprises extracting side information from the bitstream signal for reconstruction of one or more subbands not encoded.
20. The device of claim 19 wherein the side information includes a parameter used to determine a spectral envelope of the one or more subbands not encoded.
21. The device of claim 18 wherein the transform coding includes a psychoacoustic model.
22. The device of claim 18 wherein the stereo image parameters comprise parameters for each of a plurality of frequency bands.
23. The device of claim 18 wherein a bandwidth of the first frequency band and a bandwidth of the second frequency band are determined based at least in part on a desired target bitrate.
Description
(1) The invention is explained below by way of illustrative examples with reference to the accompanying drawings, wherein
(2) FIG. 1 illustrates an embodiment of an encoder system, where optionally the PS parameters assist the psycho-acoustic control in the perceptual stereo encoder;
(3) FIG. 2 illustrates an embodiment of the PS encoder;
(4) FIG. 3 illustrates an embodiment of a decoder system;
(5) FIG. 4 illustrates a further embodiment of the PS encoder including a detector to deactivate PS encoding if L/R encoding is beneficial;
(6) FIG. 5 illustrates an embodiment of a conventional PS encoder system having an additional SBR encoder for the downmix;
(7) FIG. 6 illustrates an embodiment of an encoder system having an additional SBR encoder for the downmix signal;
(8) FIG. 7 illustrates an embodiment of an encoder system having an additional SBR encoder in the stereo domain;
(9) FIGS. 8a-8d illustrate various time-frequency representations of one of the two output channels at the decoder output;
(10) FIG. 9a illustrates an embodiment of the core encoder;
(11) FIG. 9b illustrates an embodiment of an encoder that permits switching between coding in a linear predictive domain (typically for mono signals only) and coding in a transform domain (typically for both mono and stereo signals);
(12) FIG. 10 illustrates an embodiment of an encoder system;
(13) FIG. 11a illustrates a part of an embodiment of an encoder system;
(14) FIG. 11b illustrates an exemplary implementation of the embodiment in FIG. 11a;
(15) FIG. 11c illustrates an alternative to the embodiment in FIG. 11a;
(16) FIG. 12 illustrates an embodiment of an encoder system;
(17) FIG. 13 illustrates an embodiment of the stereo coder as part of the encoder system of FIG. 12;
(18) FIG. 14 illustrates an embodiment of a decoder system for decoding the bitstream signal as generated by the encoder system of FIG. 6;
(19) FIG. 15 illustrates an embodiment of a decoder system for decoding the bitstream signal as generated by the encoder system of FIG. 7;
(20) FIG. 16a illustrates a part of an embodiment of a decoder system;
(21) FIG. 16b illustrates an exemplary implementation of the embodiment in FIG. 16a;
(22) FIG. 16c illustrates an alternative to the embodiment in FIG. 16a;
(23) FIG. 17 illustrates an embodiment of an encoder system; and
(24) FIG. 18 illustrates an embodiment of a decoder system.
(25) FIG. 1 shows an embodiment of an encoder system which combines PS encoding using a residual with adaptive L/R or M/S perceptual stereo encoding. This embodiment is merely illustrative of the principles of the present application. It is understood that modifications and variations of the embodiment will be apparent to those skilled in the art. The encoder system comprises a PS encoder 1 receiving a stereo signal L, R. The PS encoder 1 has a downmix stage for generating downmix DMX and residual RES signals based on the stereo signal L, R. This operation can be described by means of a 2×2 downmix matrix H⁻¹ that converts the L and R signals to the downmix signal DMX and residual signal RES:
(26) (DMX, RES)ᵀ = H⁻¹ · (L, R)ᵀ
(27) Typically, the matrix H⁻¹ is frequency-variant and time-variant, i.e. the elements of the matrix H⁻¹ vary over frequency and from time slot to time slot. The matrix H⁻¹ may be updated every frame (e.g. every 21 or 42 ms) and may have a frequency resolution of a plurality of bands, e.g. 28, 20, or 10 bands (named “parameter bands”) on a perceptually oriented (Bark-like) frequency scale.
(28) The elements of the matrix H⁻¹ depend on the time- and frequency-variant PS parameters IID (inter-channel intensity difference; also called CLD, channel level difference) and ICC (inter-channel cross-correlation). For determining the PS parameters 5, e.g. IID and ICC, the PS encoder 1 comprises a parameter determining stage. An example of computing the matrix elements of the upmix matrix H (the inverse of H⁻¹) is given by the following and described in the MPEG Surround specification document ISO/IEC 23003-1, subclause 6.5.3.2, which is hereby incorporated by reference:
(29)

  H = ( c1·cos(α+β)   c1·sin(α+β)
        c2·cos(−α+β)  c2·sin(−α+β) )

(30) and where

  c = 10^(IID/20), c1 = √(2c²/(1+c²)), c2 = √(2/(1+c²))

(31) and where

  α = ½·arccos(ρ), β = arctan(tan(α)·(c2 − c1)/(c2 + c1))

(32) and where ρ=ICC.
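As a rough, non-normative illustration of the parametric-stereo mixing rules of MPEG Surround cited above (the function name and this exact formula variant are our assumptions, not part of the specification text), the matrix elements could be computed as follows:

```python
import math

def upmix_matrix(iid_db: float, icc: float):
    """Hypothetical helper: 2x2 upmix matrix H from the PS parameters
    IID (in dB) and ICC (rho), following the parametric-stereo mixing
    rules of MPEG Surround (ISO/IEC 23003-1)."""
    c = 10.0 ** (iid_db / 20.0)                  # linear inter-channel level ratio
    c1 = math.sqrt(2.0 * c * c / (1.0 + c * c))  # left-channel gain
    c2 = math.sqrt(2.0 / (1.0 + c * c))          # right-channel gain
    alpha = 0.5 * math.acos(icc)                 # rotation angle from the correlation
    beta = math.atan(math.tan(alpha) * (c2 - c1) / (c2 + c1))
    return [
        [c1 * math.cos(alpha + beta), c1 * math.sin(alpha + beta)],
        [c2 * math.cos(-alpha + beta), c2 * math.sin(-alpha + beta)],
    ]

# For IID = 0 dB and ICC = 0, H reduces to (1/sqrt(2)) * [[1, 1], [1, -1]]
H = upmix_matrix(0.0, 0.0)
```

Note that the special case IID=0 dB and ICC=0 yields the sum/difference matrix discussed further below, which is the property exploited for seamless switching to L/R coding.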
(33) Moreover, the encoder system comprises a transform stage 2 that converts the downmix signal DMX and residual signal RES from the PS encoder 1 into a pseudo stereo signal L_p, R_p, e.g. according to the following equations:
L_p = g·(DMX + RES)
R_p = g·(DMX − RES)
(34) In the above equations the gain normalization factor g has e.g. a value of g=√(1/2). For g=√(1/2), the two equations for the pseudo stereo signal L_p, R_p can be rewritten as:
(35) (L_p, R_p)ᵀ = (1/√2) · ( 1  1 ; 1  −1 ) · (DMX, RES)ᵀ
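For g = √(1/2), the conversion and its decoder-side inverse can be sketched per sample as follows (illustrative Python; the function names are ours, not from the application):

```python
import math

G = math.sqrt(0.5)  # gain normalization factor g = sqrt(1/2)

def transform_stage_2(dmx: float, res: float):
    """Transform stage 2: convert DMX/RES to the pseudo stereo pair (L_p, R_p)."""
    return G * (dmx + res), G * (dmx - res)

def transform_stage_12(lp: float, rp: float):
    """Decoder-side inverse (transform stage 12): recover DMX and RES.
    For g = sqrt(1/2) the sum/difference operation is self-inverse."""
    return G * (lp + rp), G * (lp - rp)
```

Applying transform_stage_12 to the output of transform_stage_2 returns the original DMX/RES pair, since 2g² = 1 for this choice of g.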
(36) The pseudo stereo signal L_p, R_p is then fed to a perceptual stereo encoder 3, which adaptively selects either L/R or M/S stereo encoding. M/S encoding is a form of joint stereo coding. L/R encoding may also be based on joint encoding aspects, e.g. bits may be allocated jointly for the L and R channels from a common bit reservoir.
(37) The selection between L/R or M/S stereo encoding is preferably frequency-variant, i.e. some frequency bands may be L/R encoded, whereas other frequency bands may be M/S encoded. An embodiment for implementing the selection between L/R or M/S stereo encoding is described in the document “Sum-Difference Stereo Transform Coding”, J. D. Johnston et al., IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) 1992, pages 569-572. The discussion of the selection between L/R or M/S stereo encoding therein, in particular sections 5.1 and 5.2, is hereby incorporated by reference.
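The per-band decision can be illustrated by the following sketch (Python). This is a crude, energy-based stand-in for the perceptual-entropy comparison of Johnston et al.; the threshold value and function name are illustrative assumptions, not from the application:

```python
def choose_stereo_mode(l_band, r_band, threshold=0.25):
    """Per-band L/R vs M/S decision. Crude energy-ratio stand-in for a
    perceptual-entropy comparison: M/S pays off when almost all band
    energy sits in one of the mid/side signals, i.e. when the channels
    are strongly correlated or anti-correlated."""
    m = [(l + r) / 2.0 for l, r in zip(l_band, r_band)]  # mid signal
    s = [(l - r) / 2.0 for l, r in zip(l_band, r_band)]  # side signal
    e_m = sum(x * x for x in m)
    e_s = sum(x * x for x in s)
    if e_s <= threshold * e_m or e_m <= threshold * e_s:
        return "MS"
    return "LR"
```

A real encoder would compare perceptual-entropy (bit-demand) estimates computed with a psycho-acoustic model per band instead of raw energies.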
(38) Based on the pseudo stereo signal L_p, R_p, the perceptual encoder 3 can internally compute (pseudo) mid/side signals M_p, S_p. Such signals basically correspond to the downmix signal DMX and residual signal RES (except for a possibly different gain factor). Hence, if the perceptual encoder 3 selects M/S encoding for a frequency band, the perceptual encoder 3 basically encodes the downmix signal DMX and residual signal RES for that frequency band (except for a possibly different gain factor), as would also be done in a conventional perceptual encoder system using conventional PS coding with residual. The PS parameters 5 and the output bitstream 4 of the perceptual encoder 3 are multiplexed into a single bitstream 6 by a multiplexer 7.
(39) In addition to PS encoding of the stereo signal, the encoder system in FIG. 1 allows L/R coding of the stereo signal, as will be explained in the following: As discussed above, the elements of the downmix matrix H⁻¹ of the encoder (and also of the upmix matrix H used in the decoder) depend on the time- and frequency-variant PS parameters IID (inter-channel intensity difference; also called CLD, channel level difference) and ICC (inter-channel cross-correlation). An example for computing the matrix elements of the upmix matrix H is described above. In case of using residual coding, the right column of the 2×2 upmix matrix H is given as
(40) (1, −1)ᵀ
(41) However, preferably, the right column of the 2×2 matrix H should instead be modified to
(42) (1/√2)·(1, −1)ᵀ
(43) The left column is preferably computed as given in the MPEG Surround specification.
(44) Modifying the right column of the upmix matrix H ensures that for IID=0 dB and ICC=0 (i.e. the case where for the respective band the stereo channels L and R are independent and have the same level) the following upmix matrix H is obtained for the band:
(45) H = (1/√2) · ( 1  1 ; 1  −1 )
(46) Please note that the upmix matrix H and also the downmix matrix H⁻¹ are typically frequency-variant and time-variant. Thus, the values of the matrices are different for different time/frequency tiles (a tile corresponds to the intersection of a particular frequency band and a particular time period). In the above case the downmix matrix H⁻¹ is identical to the upmix matrix H. Thus, for this band the pseudo stereo signal L_p, R_p can be computed by the following equation:
(47) (L_p, R_p)ᵀ = (1/√2) · ( 1  1 ; 1  −1 ) · H⁻¹ · (L, R)ᵀ = (L, R)ᵀ
(48) Hence, in this case the PS encoding with residual using the downmix matrix H⁻¹ followed by the generation of the pseudo L/R signal in the transform stage 2 corresponds to the unity matrix and does not change the stereo signal for the respective frequency band at all, i.e.
L_p = L
R_p = R
(49) In other words: the transform stage 2 compensates for the downmix matrix H⁻¹ such that the pseudo stereo signal L_p, R_p corresponds to the input stereo signal L, R. This allows the perceptual encoder 3 to encode the original input stereo signal L, R for the particular band. When L/R encoding is selected by the perceptual encoder 3 for encoding the particular band, the encoder system behaves like an L/R perceptual encoder for encoding the band of the stereo input signal L, R.
(50) The encoder system in FIG. 1 allows seamless and adaptive switching between L/R coding and PS coding with residual in a frequency- and time-variant manner. The encoder system avoids discontinuities in the waveform when switching the coding scheme. This prevents artifacts. In order to achieve smooth transitions, linear interpolation may be applied to the elements of the matrix H⁻¹ in the encoder and the matrix H in the decoder for samples between two stereo parameter updates.
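The element-wise interpolation between two parameter updates can be sketched as follows (illustrative Python; the function name and per-sample update grid are our assumptions):

```python
def interpolate_matrices(h_prev, h_next, n_samples):
    """Linearly interpolate each element of a 2x2 mixing matrix between two
    stereo-parameter updates, yielding one matrix per sample in between."""
    out = []
    for i in range(n_samples):
        t = (i + 1) / n_samples  # reaches h_next exactly at the next update
        out.append([[(1.0 - t) * h_prev[r][c] + t * h_next[r][c]
                     for c in range(2)] for r in range(2)])
    return out
```

Because each sample is processed with a slightly different matrix, the waveform changes gradually between updates instead of jumping at frame boundaries.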
(51) FIG. 2 shows an embodiment of the PS encoder 1. The PS encoder 1 comprises a downmix stage 8 which generates the downmix signal DMX and residual signal RES based on the stereo signal L, R. Further, the PS encoder 1 comprises a parameter estimating stage 9 for estimating the PS parameters 5 based on the stereo signal L, R.
(52) FIG. 3 illustrates an embodiment of a corresponding decoder system configured to decode the bitstream 6 as generated by the encoder system of FIG. 1. This embodiment is merely illustrative of the principles of the present application. It is understood that modifications and variations of the embodiment will be apparent to those skilled in the art. The decoder system comprises a demultiplexer 10 for separating the PS parameters 5 and the audio bitstream 4 as generated by the perceptual encoder 3. The audio bitstream 4 is fed to a perceptual stereo decoder 11, which can selectively decode an L/R encoded or an M/S encoded audio bitstream. The operation of the decoder 11 is inverse to the operation of the encoder 3. Analogously to the perceptual encoder 3, the perceptual decoder 11 preferably allows for a frequency-variant and time-variant decoding scheme. Some frequency bands which are L/R encoded by the encoder 3 are L/R decoded by the decoder 11, whereas other frequency bands which are M/S encoded by the encoder 3 are M/S decoded by the decoder 11. The decoder 11 outputs the pseudo stereo signal L_p, R_p that was input to the perceptual encoder 3. The pseudo stereo signal L_p, R_p as obtained from the perceptual decoder 11 is converted back to the downmix signal DMX and residual signal RES by an L/R to M/S transform stage 12. The operation of the L/R to M/S transform stage 12 at the decoder side is inverse to the operation of the transform stage 2 at the encoder side. Preferably, the transform stage 12 determines the downmix signal DMX and residual signal RES according to the following equations:
(53)
DMX = g·(L_p + R_p)
RES = g·(L_p − R_p)
(54) In the above equations, the gain normalization factor g is identical to the gain normalization factor g at the encoder side and has e.g. a value of g=√(1/2).
(55) The downmix signal DMX and residual signal RES are then processed by the PS decoder 13 to obtain the final L and R output signals. The upmix step in the decoding process for PS coding with a residual can be described by means of the 2×2 upmix matrix H that converts the downmix signal DMX and residual signal RES back to the L and R channels:
(56) (L, R)ᵀ = H · (DMX, RES)ᵀ
(57) The computation of the elements of the upmix matrix H was already discussed above.
(58) The PS encoding and PS decoding process in the PS encoder 1 and the PS decoder 13 is preferably carried out in an oversampled frequency domain. For the time-to-frequency transform, e.g. a complex-valued hybrid filter bank comprising a QMF (quadrature mirror filter) and a Nyquist filter may be used upstream of the PS encoder, such as the filter bank described in the MPEG Surround standard (see document ISO/IEC 23003-1). The complex QMF representation of the signal is oversampled by a factor of 2 since it is complex-valued and not real-valued. This allows for time- and frequency-adaptive signal processing without audible aliasing artifacts. Such a hybrid filter bank typically provides high frequency resolution (narrow bands) at low frequencies, while at high frequencies several QMF bands are grouped into a wider band. The paper “Low Complexity Parametric Stereo Coding in MPEG-4”, H. Purnhagen, Proc. of the 7th Int. Conference on Digital Audio Effects (DAFx'04), Naples, Italy, Oct. 5-8, 2004, pages 163-168, describes an embodiment of a hybrid filter bank (see section 3.2 and FIG. 4). This disclosure is hereby incorporated by reference. In this document a 48 kHz sampling rate is assumed, with the (nominal) bandwidth of a band from a 64-band QMF bank being 375 Hz. The perceptual Bark frequency scale, however, calls for a bandwidth of approximately 100 Hz for frequencies below 500 Hz. Hence, the first 3 QMF bands may be split into narrower subbands by means of a Nyquist filter bank. The first QMF band may be split into 4 bands (plus two more for negative frequencies), and the 2nd and 3rd QMF bands may be split into two bands each.
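The nominal band widths implied by the numbers above can be checked with a small sketch (Python; the helper name and the (4, 2, 2) split tuple merely restate the example figures in the text, ignoring the extra negative-frequency bands):

```python
def hybrid_subband_widths(fs: float = 48000.0, m: int = 64, splits=(4, 2, 2)):
    """Nominal frequency resolution of a hybrid filter bank: a 64-band QMF
    at 48 kHz gives fs / (2 * m) = 375 Hz per band; the first three QMF
    bands are further split (into 4, 2 and 2 sub-bands) by a Nyquist filter
    bank to approach the ~100 Hz Bark resolution below 500 Hz. Returns the
    QMF band width and the widths of the resulting low-frequency sub-bands."""
    qmf_bw = fs / (2 * m)
    widths = []
    for n_split in splits:
        widths += [qmf_bw / n_split] * n_split  # sub-band widths per split QMF band
    return qmf_bw, widths
```

With the default values the first QMF band yields four sub-bands of 93.75 Hz each, close to the ~100 Hz Bark-scale target.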
(59) Preferably, the adaptive L/R or M/S encoding, on the other hand, is carried out in the critically sampled MDCT domain (e.g. as described for AAC) in order to ensure an efficient quantized signal representation. The conversion of the downmix signal DMX and residual signal RES to the pseudo stereo signal L_p, R_p in the transform stage 2 may be carried out in the time domain, since the PS encoder 1 and the perceptual encoder 3 may be connected in the time domain anyway. Also in the decoding system, the perceptual stereo decoder 11 and the PS decoder 13 are preferably connected in the time domain. Thus, the conversion of the pseudo stereo signal L_p, R_p to the downmix signal DMX and residual signal RES in the transform stage 12 may also be carried out in the time domain.
(60) An adaptive L/R or M/S stereo coder such as the encoder 3 shown in FIG. 1 is typically a perceptual audio coder that incorporates a psychoacoustic model to enable high coding efficiency at low bitrates. An example of such an encoder is an AAC encoder, which employs transform coding in a critically sampled MDCT domain in combination with time- and frequency-variant quantization controlled by a psycho-acoustic model. Also, the time- and frequency-variant decision between L/R and M/S coding is typically controlled with the help of perceptual entropy measures that are calculated using a psycho-acoustic model.
(61) The perceptual stereo encoder (such as the encoder 3 in FIG. 1) operates on a pseudo L/R stereo signal (see L_p, R_p in FIG. 1). For optimizing the coding efficiency of the stereo encoder (in particular for making the right decision between L/R encoding and M/S encoding), it is advantageous to modify the psycho-acoustic control mechanisms (including the control mechanism which decides between L/R and M/S stereo encoding and the control mechanism which controls the time- and frequency-variant quantization) in the perceptual stereo encoder in order to account for the signal modifications (pseudo L/R to DMX and RES conversion, followed by PS decoding) that are applied in the decoder when generating the final stereo output signal L, R. These signal modifications can affect binaural masking phenomena that are exploited in the psycho-acoustic control mechanisms. Therefore, these psycho-acoustic control mechanisms should preferably be adapted accordingly. For this purpose, it can be beneficial if the psycho-acoustic control mechanisms have access not only to the pseudo L/R signal (see L_p, R_p in FIG. 1) but also to the PS parameters (see 5 in FIG. 1) and/or to the original stereo signal L, R. The access of the psycho-acoustic control mechanisms to the PS parameters and to the stereo signal L, R is indicated in FIG. 1 by the dashed lines. Based on this information, e.g. the masking threshold(s) may be adapted.
(62) An alternative approach to optimizing psycho-acoustic control is to augment the encoder system with a detector forming a deactivation stage that is able to effectively deactivate PS encoding when appropriate, preferably in a time- and frequency-variant manner. Deactivating PS encoding is appropriate e.g. when L/R stereo coding is expected to be beneficial or when the psychoacoustic control would have problems encoding the pseudo L/R signal efficiently. PS encoding may be effectively deactivated by setting the downmix matrix H⁻¹ in such a way that the downmix matrix H⁻¹ followed by the transform (see stage 2 in FIG. 1) corresponds to the unity matrix (i.e. to an identity operation) or to the unity matrix times a factor. For example, PS encoding may be effectively deactivated by forcing the PS parameters IID and/or ICC to IID=0 dB and ICC=0. In this case the pseudo stereo signal L_p, R_p corresponds to the stereo signal L, R, as discussed above.
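The identity behavior obtained by forcing IID=0 dB and ICC=0 can be verified with a small per-band sketch (illustrative Python; the function name is ours and the downmix matrix used is the sum/difference matrix that results from these forced parameter values):

```python
import math

G = math.sqrt(0.5)  # gain normalization factor g = sqrt(1/2)

def encode_band_ps_deactivated(l: float, r: float):
    """Per-band encoder path with PS effectively deactivated: for IID = 0 dB
    and ICC = 0 the downmix matrix becomes (1/sqrt(2)) * [[1, 1], [1, -1]],
    and transform stage 2 then undoes it, so (L_p, R_p) equals (L, R)."""
    dmx = G * (l + r)     # downmix matrix, first row
    res = G * (l - r)     # downmix matrix, second row
    lp = G * (dmx + res)  # transform stage 2
    rp = G * (dmx - res)
    return lp, rp
```

The downmix followed by the transform thus acts as the unity matrix, and the perceptual encoder sees the original L/R signal for the deactivated bands.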
(63) Such a detector controlling a PS parameter modification is shown in FIG. 4. Here, the detector 20 receives the PS parameters 5 determined by the parameter estimating stage 9. When the detector does not deactivate the PS encoding, the detector 20 passes the PS parameters through to the downmix stage 8 and to the multiplexer 7, i.e. in this case the PS parameters 5 correspond to the PS parameters 5′ fed to the downmix stage 8. In case the detector detects that PS encoding is disadvantageous and should be deactivated (for one or more frequency bands), the detector modifies the affected PS parameters 5 (e.g. by setting the PS parameters IID and/or ICC to IID=0 dB and ICC=0) and feeds the modified PS parameters 5′ to the downmix stage 8. The detector can optionally also consider the left and right signals L, R when deciding on a PS parameter modification (see dashed lines in FIG. 4).
(64) In the following figures, the term QMF (quadrature mirror filter or filter bank) also includes a QMF subband filter bank in combination with a Nyquist filter bank, i.e. a hybrid filter bank structure. Furthermore, all values in the description below may be frequency dependent, e.g. different downmix and upmix matrices may be extracted for different frequency ranges. Furthermore, the residual coding may cover only part of the used audio frequency range (i.e. the residual signal is only coded for a part of the used audio frequency range). Aspects of the downmix as outlined below may, for some frequency ranges, be carried out in the QMF domain (e.g. according to the prior art), while for other frequency ranges only phase aspects are dealt with in the complex QMF domain, whereas amplitude transformation is dealt with in the real-valued MDCT domain.
(65) In FIG. 5, a conventional PS encoder system is depicted. Each of the stereo channels L, R is first analyzed by a complex QMF 30 with M subbands, e.g. a QMF with M=64 subbands. The subband signals are used to estimate PS parameters 5 and a downmix signal DMX in a PS encoder 31. The downmix signal DMX is used to estimate SBR (Spectral Band Replication) parameters 33 in an SBR encoder 32. The SBR encoder 32 extracts the SBR parameters 33 representing the spectral envelope of the original high band signal, possibly in combination with noise and tonality measures. As opposed to the PS encoder 31, the SBR encoder 32 does not affect the signal passed on to the core coder 34. The downmix signal DMX of the PS encoder 31 is synthesized using an inverse QMF 35 with N subbands. E.g. a complex QMF with N=32 may be used, where only the 32 lowest of the 64 subbands used by the PS encoder 31 and the SBR encoder 32 are synthesized. Thus, by using half the number of subbands for the same frame size, a time domain signal of half the bandwidth compared to the input is obtained and passed into the core coder 34. Due to the reduced bandwidth, the sampling rate can be reduced by half (not shown). The core coder 34 performs perceptual encoding of the mono input signal to generate a bitstream 36. The PS parameters 5 are embedded in the bitstream 36 by a multiplexer (not shown).
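The bandwidth and sampling-rate bookkeeping of this reduced-rate core-coder path can be summarized with a small sketch (illustrative Python; the helper name is ours):

```python
def core_coder_rates(fs_in: int = 48000, m: int = 64, n: int = 32):
    """Bookkeeping for the reduced-rate core-coder path: synthesizing only
    the n lowest of m analysis subbands keeps n/m of the bandwidth, so the
    core coder can run at fs_in * n / m (half the rate for n=32, m=64)."""
    bandwidth_hz = (fs_in / 2) * n / m  # audio bandwidth of the synthesized signal
    fs_core = fs_in * n // m            # sampling rate sufficient for the core coder
    return bandwidth_hz, fs_core
```

For the default values this yields a 12 kHz audio bandwidth, which a 24 kHz core-coder sampling rate can represent.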
(66) FIG. 6 shows a further embodiment of an encoder system which combines PS coding using a residual with a stereo core coder 48, the stereo core coder 48 being capable of adaptive L/R or M/S perceptual stereo coding. This embodiment is merely illustrative of the principles of the present application. It is understood that modifications and variations of the embodiment will be apparent to those skilled in the art. The input channels L, R representing the left and right original channels are analyzed by a complex QMF 30, in a similar way as discussed in connection with FIG. 5. In contrast to the PS encoder 31 in FIG. 5, the PS encoder 41 in FIG. 6 outputs not only a downmix signal DMX but also a residual signal RES. The downmix signal DMX is used by an SBR encoder 32 to determine SBR parameters 33 of the downmix signal DMX. A fixed DMX/RES to pseudo L/R transform (i.e. an M/S to L/R transform) is applied to the downmix signal DMX and the residual signal RES in a transform stage 2. The transform stage 2 in FIG. 6 corresponds to the transform stage 2 in FIG. 1. The transform stage 2 creates “pseudo” left and right channel signals L.sub.p, R.sub.p for the core encoder 48 to operate on. In this embodiment, the inverse L/R to M/S transform is applied in the QMF domain, prior to the subband synthesis by filter banks 35. Preferably, the number N (e.g. N=32) of subbands for the synthesis corresponds to half the number M (e.g. M=64) of subbands used for the analysis, and the core coder 48 operates at half the sampling rate. It should be noted that there is no restriction to use 64 subband channels for the QMF analysis in the encoder and 32 subbands for the synthesis; other values are possible as well, depending on which sampling rate is desired for the signal received by the core coder 48. The core stereo encoder 48 performs perceptual encoding of the signal of the filter banks 35 to generate a bitstream signal 46.
The PS parameters 5 are embedded in the bitstream signal 46 by a multiplexer (not shown). Optionally, the PS parameters and/or the original L/R input signal may be used by the core encoder 48. Such information indicates to the core encoder 48 how the PS encoder 41 rotated the stereo space. The information may guide the core encoder 48 how to control quantization in a perceptually optimal way. This is indicated in FIG. 6 by the dashed lines.
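For illustration only, the fixed DMX/RES to pseudo L/R transform of stage 2 and its inverse (used at the decoder side, see stage 12 below) may be sketched in Python as follows; the gain factor c=1/sqrt(2) is an assumed, energy-preserving choice, since the description only states that the gain factor may differ between transform stages:

```python
import math

C = 1.0 / math.sqrt(2.0)  # assumed gain factor; the description only
                          # states that c may vary between stages

def dmx_res_to_pseudo_lr(dmx, res, c=C):
    """Fixed DMX/RES -> pseudo L/R transform (cf. stage 2)."""
    lp = [c * (d + r) for d, r in zip(dmx, res)]
    rp = [c * (d - r) for d, r in zip(dmx, res)]
    return lp, rp

def pseudo_lr_to_dmx_res(lp, rp, c=C):
    """Inverse pseudo L/R -> DMX/RES transform (cf. decoder stage 12)."""
    dmx = [l_ * c + r_ * c for l_, r_ in zip(lp, rp)]
    res = [l_ * c - r_ * c for l_, r_ in zip(lp, rp)]
    return dmx, res
```

With c=1/sqrt(2), applying the transform and then its inverse recovers the DMX/RES pair exactly, since 2c.sup.2=1.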
(67) FIG. 7 illustrates a further embodiment of an encoder system which is similar to the embodiment in FIG. 6. In comparison to the embodiment of FIG. 6, the SBR encoder 42 in FIG. 7 is connected upstream of the PS encoder 41, thus operating on the left and right channels (here: in the QMF domain) instead of on the downmix signal DMX as in FIG. 6.
(68) Due to the re-arrangement of the SBR encoder 42, the PS encoder 41 may be configured to operate not on the full bandwidth of the input signal but e.g. only on the frequency range below the SBR crossover frequency. In FIG. 7, the SBR parameters 43 are in stereo for the SBR range, and the output from the corresponding PS decoder, as will be discussed later on in connection with FIG. 15, produces a stereo source frequency range for the SBR decoder to operate on. This modification, i.e. connecting the SBR encoder module 42 upstream of the PS encoder module 41 in the encoder system and correspondingly placing the SBR decoder module after the PS decoder module in the decoder system (see FIG. 15), has the benefit that the use of a decorrelated signal for generating the stereo output can be reduced. Please note that in case no residual signal exists, either at all or for a particular frequency band, a decorrelated version of the downmix signal DMX is used instead in the PS decoder. However, a reconstruction based on a decorrelated signal reduces the audio quality. Thus, reducing the use of the decorrelated signal increases the audio quality.
(69) This advantage of the embodiment in FIG. 7 in comparison to the embodiment in FIG. 6 will now be explained in more detail with reference to FIGS. 8a to 8d.
(70) In FIG. 8a, a time frequency representation of one of the two output channels L, R (at the decoder side) is visualized. In the case of FIG. 8a, an encoder is used where the PS encoding module is placed in front of the SBR encoding module, such as the encoder in FIG. 5 or FIG. 6 (in the decoder, the PS decoder is placed after the SBR decoder, see FIG. 14). Moreover, the residual is coded only in a low bandwidth frequency range 50, which is smaller than the frequency range 51 of the core coder. As evident from the spectrogram visualization in FIG. 8a, the frequency range 52 where a decorrelated signal is to be used by the PS decoder covers all of the frequency range apart from the lower frequency range 50 covered by the use of the residual signal. Moreover, the SBR covers a frequency range 53 starting significantly higher than that of the decorrelated signal. Thus, the entire frequency range separates into the following frequency ranges: in the lower frequency range (see range 50 in FIG. 8a), waveform coding is used; in the middle frequency range (see the intersection of frequency ranges 51 and 52), waveform coding in combination with a decorrelated signal is used; and in the higher frequency range (see frequency range 53), an SBR regenerated signal which is regenerated from the lower frequencies is used in combination with the decorrelated signal produced by the PS decoder.
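The three-way partition of the frequency axis described above can be made concrete with a small classifier; the band edges used here (an 8 kHz residual edge and a 12 kHz SBR crossover) are purely illustrative assumptions, not values taken from the description:

```python
def reconstruction_regime(freq_hz, res_edge_hz=8000.0, sbr_xover_hz=12000.0):
    """Classify how a given frequency is reconstructed at the decoder
    side in the FIG. 8a layout. The band edges are assumptions chosen
    only to make the three regimes of the description concrete."""
    if freq_hz < res_edge_hz:
        # range 50: residual signal available, pure waveform coding
        return "waveform (residual) coding"
    if freq_hz < sbr_xover_hz:
        # intersection of ranges 51 and 52: waveform-coded downmix
        # plus decorrelated signal for the stereo reconstruction
        return "waveform coding + decorrelated signal"
    # range 53: SBR regenerates the high band from lower frequencies
    return "SBR regeneration + decorrelated signal"
```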
(71) In FIG. 8b, a time frequency representation of one of the two output channels L, R (at the decoder side) is visualized for the case when the SBR encoder is connected upstream of the PS encoder in the encoder system (and the SBR decoder is located after the PS decoder in the decoder system). In FIG. 8b a low bitrate scenario is illustrated, with the residual signal bandwidth 60 (where residual coding is performed) being lower than the bandwidth of the core coder 61. Since the SBR decoding process operates on the decoder side after the PS decoder (see FIG. 15), the residual signal used for the low frequencies is also used for the reconstruction of at least a part (see frequency range 64) of the higher frequencies in the SBR range 63.
(72) The advantage becomes even more apparent when operating on intermediate bitrates where the residual signal bandwidth approaches or is equal to the core coder bandwidth. In this case, the time frequency representation of FIG. 8a (where the order of PS encoding and SBR encoding as shown in FIG. 6 is used) results in the time frequency representation shown in FIG. 8c. In FIG. 8c, the residual signal essentially covers the entire lowband range 51 of the core coder; in the SBR frequency range 53 the decorrelated signal is used by the PS decoder. In FIG. 8d, the time frequency representation in case of the preferred order of the encoding/decoding modules (i.e. SBR encoding operating on a stereo signal before PS encoding, as shown in FIG. 7) is visualized. Here, the PS decoding module operates prior to the SBR decoding module in the decoder, as shown in FIG. 15. Thus, the residual signal is part of the low band used for high frequency reconstruction. When the residual signal bandwidth equals the mono downmix signal bandwidth, no decorrelated signal information will be needed to decode the output signal (see the full frequency range being hatched in FIG. 8d).
(73) In FIG. 9a, an embodiment of the stereo core encoder 48 with adaptively selectable L/R or M/S stereo encoding in the MDCT transform domain is illustrated. Such stereo encoder 48 may be used in FIGS. 6 and 7. A mono core encoder 34 as shown in FIG. 5 can be considered as a special case of the stereo core encoder 48 in FIG. 9a, where only a single mono input channel is processed (i.e. where the second input channel, shown as dashed line in FIG. 9a, is not present).
(74) In FIG. 9b, an embodiment of a more generalized encoder is illustrated. For mono signals, encoding can be switched between coding in a linear predictive domain (see block 71) and coding in a transform domain (see block 48). This type of core coder provides several coding methods which can be used adaptively, dependent upon the characteristics of the input signal. Here, the coder can choose to code the signal using either an AAC style transform coder 48 (available for mono and stereo signals, with adaptively selectable L/R or M/S coding in case of stereo signals) or an AMR-WB+ (Adaptive Multi Rate-WideBand Plus) style core coder 71 (only available for mono signals). The AMR-WB+ core coder 71 evaluates the residual of a linear predictor 72, and in turn chooses between a transform coding approach and a classic ACELP (Algebraic Code Excited Linear Prediction) speech coding approach for coding the linear prediction residual. For deciding between the AAC style transform coder 48 and the AMR-WB+ style core coder 71, a mode decision stage 73 is used which decides between both coders 48 and 71 based on the input signal.
(75) The encoder 48 is a stereo AAC style MDCT based coder. When the mode decision 73 steers the input signal to use MDCT based coding, the mono input signal or the stereo input signals are coded by the AAC based MDCT coder 48. The MDCT coder 48 performs an MDCT analysis of the one or two signals in MDCT stages 74. In case of a stereo signal, an M/S or L/R decision is further performed on a frequency band basis in a stage 75 prior to quantization and coding. L/R stereo encoding or M/S stereo encoding is selectable in a frequency-variant manner. The stage 75 also performs an L/R to M/S transform. If M/S encoding is decided for a particular frequency band, the stage 75 outputs an M/S signal for this frequency band. Otherwise, the stage 75 outputs an L/R signal for this frequency band.
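The per-band L/R versus M/S selection of stage 75 can be sketched as follows. The perceptual entropy estimate of a real coder's psychoacoustic model is replaced here by a crude stand-in based on band energy, and the 1/sqrt(2) gain of the L/R to M/S transform is an assumed normalization; both are illustrative assumptions:

```python
import math

def band_bits(samples):
    """Crude stand-in for a perceptual entropy estimate: bits grow
    with the log of the band energy. A real coder would query its
    psychoacoustic model here."""
    energy = sum(x * x for x in samples)
    return math.log2(1.0 + energy)

def choose_stereo_mode(l_band, r_band):
    """Per-band L/R vs M/S decision (cf. stage 75): pick whichever
    representation is estimated to be cheaper to code."""
    c = 1.0 / math.sqrt(2.0)  # assumed energy-preserving gain
    m = [c * (l + r) for l, r in zip(l_band, r_band)]
    s = [c * (l - r) for l, r in zip(l_band, r_band)]
    lr_cost = band_bits(l_band) + band_bits(r_band)
    ms_cost = band_bits(m) + band_bits(s)
    if ms_cost < lr_cost:
        return "M/S", m, s
    return "L/R", l_band, r_band
```

For strongly correlated channels the side signal carries almost no energy, so M/S is selected; for channels with unequal energies and weak correlation, L/R is retained.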
(76) Hence, when the transform coding mode is used, the full efficiency of the stereo coding functionality of the underlying core coder can be used for stereo coding.
(77) When the mode decision 73 steers the mono signal to the linear predictive domain coder 71, the mono signal is subsequently analyzed by means of linear predictive analysis in block 72. Subsequently, a decision is made on whether to code the LP residual by means of a time-domain ACELP style coder 76 or a TCX (Transform Coded eXcitation) style coder 77 operating in the MDCT domain. The linear predictive domain coder 71 does not have any inherent stereo coding capability. Hence, to allow coding of a stereo signal with the linear predictive domain coder 71, an encoder configuration similar to that shown in FIG. 5 can be used. In this configuration, a PS encoder generates PS parameters 5 and a mono downmix signal DMX, which is then encoded by the linear predictive domain coder.
(78) FIG. 10 illustrates a further embodiment of an encoder system, wherein parts of FIG. 7 and FIG. 9 are combined in a new fashion. The DMX/RES to pseudo L/R block 2, as outlined in FIG. 7, is arranged within the AAC style downmix coder 70 prior to the stereo MDCT analysis 74. This embodiment has the advantage that the DMX/RES to pseudo L/R transform 2 is applied only when the stereo MDCT core coder is used. Hence, when the transform coding mode is used, the full efficiency of the stereo coding functionality of the underlying core coder can be used for stereo coding of the frequency range covered by the residual signal.
(79) While the mode decision 73 in FIG. 9b operates either on the mono input signal or on the input stereo signal, the mode decision 73′ in FIG. 10 operates on the downmix signal DMX and the residual signal RES. In case of a mono input signal, the mono signal can directly be used as the DMX signal, the RES signal is set to zero, and the PS parameters can default to IID=0 dB and ICC=1.
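The mono default described above can be sketched directly; the dictionary layout for the PS parameters is an illustrative choice, not bitstream syntax:

```python
def mono_to_ps_input(mono):
    """Treat a mono input as a degenerate stereo case (cf. FIG. 10):
    the mono signal is used directly as DMX, RES is set to zero, and
    the PS parameters default to IID = 0 dB and ICC = 1."""
    dmx = list(mono)
    res = [0.0] * len(mono)
    ps_params = {"IID_dB": 0.0, "ICC": 1.0}  # illustrative container
    return dmx, res, ps_params
```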
(80) When the mode decision 73′ steers the downmix signal DMX to the linear predictive domain coder 71, the downmix signal DMX is subsequently analyzed by means of linear predictive analysis in block 72. Subsequently, a decision is made on whether to code the LP residual by means of a time-domain ACELP style coder 76 or a TCX (Transform Coded eXcitation) style coder 77 operating in the MDCT domain. The linear predictive domain coder 71 does not have any inherent stereo coding capability that could be used for coding the residual signal in addition to the downmix signal DMX. Hence, a dedicated residual coder 78 is employed for encoding the residual signal RES when the downmix signal DMX is encoded by the linear predictive domain coder 71. E.g. such a coder 78 may be a mono AAC coder.
(81) It should be noted that the coders 71 and 78 in FIG. 10 may be omitted (in this case, the mode decision stage 73′ is no longer necessary).
(82) FIG. 11a illustrates a detail of an alternative further embodiment of an encoder system which achieves the same advantage as the embodiment in FIG. 10. In contrast to the embodiment of FIG. 10, in FIG. 11a the DMX/RES to pseudo L/R transform 2 is placed after the MDCT analysis 74 of the core coder 70, i.e. the transform operates in the MDCT domain. The transform in block 2 is linear and time-invariant and thus can be placed after the MDCT analysis 74. The remaining blocks of FIG. 10 which are not shown in FIG. 11a can optionally be added in the same way in FIG. 11a. The MDCT analysis blocks 74 may alternatively be placed after the transform block 2.
(83) FIG. 11b illustrates an implementation of the embodiment in FIG. 11a. In FIG. 11b, an exemplary implementation of the stage 75 for selecting between M/S or L/R encoding is shown. The stage 75 comprises a sum and difference transform stage 98 (more precisely a L/R to M/S transform stage) which receives the pseudo stereo signal L.sub.p, R.sub.p. The transform stage 98 generates a pseudo mid/side signal M.sub.p, S.sub.p by performing an L/R to M/S transform. Except for a possible gain factor, the following applies: M.sub.p=DMX and S.sub.p=RES.
(84) The stage 75 decides between L/R or M/S encoding. Based on the decision, either the pseudo stereo signal L.sub.p, R.sub.p or the pseudo mid/side signal M.sub.p, S.sub.p is selected (see selection switch) and encoded in AAC block 97. It should be noted that, alternatively, two AAC blocks 97 may be used (not shown in FIG. 11b), with the first AAC block 97 assigned to the pseudo stereo signal L.sub.p, R.sub.p and the second AAC block 97 assigned to the pseudo mid/side signal M.sub.p, S.sub.p. In this case, the L/R or M/S selection is performed by selecting either the output of the first AAC block 97 or the output of the second AAC block 97.
(85) FIG. 11c shows an alternative to the embodiment in FIG. 11a. Here, no explicit transform stage 2 is used. Rather, the transform stage 2 and the stage 75 are combined in a single stage 75′. The downmix signal DMX and the residual signal RES are fed to a sum and difference transform stage 99 (more precisely a DMX/RES to pseudo L/R transform stage) as part of stage 75′. The transform stage 99 generates a pseudo stereo signal L.sub.p, R.sub.p. The DMX/RES to pseudo L/R transform stage 99 in FIG. 11c is similar to the L/R to M/S transform stage 98 in FIG. 11b (except for a possibly different gain factor). Nevertheless, in FIG. 11c the selection between M/S and L/R coding needs to be inverted in comparison to FIG. 11b. Note that in both FIG. 11b and FIG. 11c, the position of the switch for the L/R or M/S selection is shown in the L.sub.p/R.sub.p position, which is the upper one in FIG. 11b and the lower one in FIG. 11c. This visualizes the inverted meaning of the L/R or M/S selection.
(86) It should be noted that the switch in FIGS. 11b and 11c preferably exists individually for each frequency band in the MDCT domain such that the selection between L/R and M/S can be both time- and frequency-variant. In other words: the position of the switch is preferably frequency-variant. The transform stages 98 and 99 may transform the whole used frequency range or may only transform a single frequency band.
(87) Moreover, it should be noted that all blocks 2, 98 and 99 can be called “sum and difference transform blocks” since all blocks implement a transform matrix in the form of
(88)

        ( 1    1 )
    c · ( 1   −1 )
(89) Only the gain factor c may differ between the blocks 2, 98 and 99.
(90) In FIG. 12, a further embodiment of an encoder system is outlined. It uses an extended set of PS parameters which, in addition to IID and ICC (described above), includes two further parameters IPD (inter channel phase difference, see φ.sub.ipd below) and OPD (overall phase difference, see φ.sub.opd below) that characterize the phase relationship between the two channels L and R of a stereo signal. An example for these phase parameters is given in ISO/IEC 14496-3 subclause 8.6.4.6.3, which is hereby incorporated by reference. When phase parameters are used, the resulting upmix matrix H.sub.COMPLEX (and its inverse H.sub.COMPLEX.sup.−1) becomes complex-valued, according to:
(91)

    H.sub.COMPLEX = diag(e^(jφ.sub.1), e^(jφ.sub.2)) · H

where H is the real-valued upmix matrix described above,
(92) and where
φ.sub.1=φ.sub.opd
φ.sub.2=φ.sub.opd−φ.sub.ipd.
(93) The stage 80 of the PS encoder which operates in the complex QMF domain only takes care of phase dependencies between the channels L, R. The downmix rotation (i.e. the transformation from the L/R domain to the DMX/RES domain which was described by the matrix H.sup.−1 above) is taken care of in the MDCT domain as part of the stereo core coder 81. Hence, the phase dependencies between the two channels are extracted in the complex QMF domain, while other, real-valued, waveform dependencies are extracted in the real-valued critically sampled MDCT domain as part of the stereo coding mechanism of the core coder used. This has the advantage that the extraction of linear dependencies between the channels can be tightly integrated in the stereo coding of the core coder (though, to prevent aliasing in the critically sampled MDCT domain, only for the frequency range that is covered by residual coding, possibly minus a “guard band” on the frequency axis).
(94) The phase adjustment stage 80 of the PS encoder in FIG. 12 extracts phase related PS parameters, e.g. the parameters IPD (inter channel phase difference) and OPD (overall phase difference). Hence, the phase adjustment matrix H.sub.φ.sup.−1 that it produces may be according to the following:
(95)

    H.sub.φ.sup.−1 = diag(e^(−jφ.sub.1), e^(−jφ.sub.2))
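Assuming the complex upmix factors into a diagonal phase term times the real-valued matrix H (consistent with the definitions of φ.sub.1 and φ.sub.2 in paragraph (92) above), the per-channel phase rotation may be sketched as follows; the function names and signal layout are illustrative:

```python
import cmath

def phase_angles(opd, ipd):
    """Channel phase angles from the OPD/IPD parameters:
    phi1 = opd and phi2 = opd - ipd (cf. paragraph (92))."""
    return opd, opd - ipd

def apply_phase_upmix(l_in, r_in, opd, ipd):
    """Apply the per-channel phase rotation of the complex upmix.

    Assumes the complex matrix factors as diag(e^{j*phi1}, e^{j*phi2})
    times the real-valued matrix H; H itself is applied elsewhere
    (in the MDCT domain, cf. stage 82)."""
    phi1, phi2 = phase_angles(opd, ipd)
    l_out = [cmath.exp(1j * phi1) * x for x in l_in]
    r_out = [cmath.exp(1j * phi2) * x for x in r_in]
    return l_out, r_out
```

With OPD = 0 and IPD = π, the right channel is rotated by −π, i.e. inverted, while the left channel is untouched.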
(96) As discussed before, the downmix rotation part of the PS module is dealt with in the stereo coding module 81 of the core coder in FIG. 12. The stereo coding module 81 operates in the MDCT domain and is shown in FIG. 13. The stereo coding module 81 receives the phase adjusted stereo signal L.sub.φ, R.sub.φ in the MDCT domain. This signal is downmixed in a downmix stage 82 by a downmix rotation matrix H.sup.−1, which is the real-valued part of a complex downmix matrix H.sub.COMPLEX.sup.−1 as discussed above, thereby generating the downmix signal DMX and the residual signal RES. The downmix operation is followed by the inverse L/R to M/S transform according to the present application (see transform stage 2), thereby generating a pseudo stereo signal L.sub.p, R.sub.p. The pseudo stereo signal L.sub.p, R.sub.p is processed by the stereo coding algorithm (see adaptive M/S or L/R stereo encoder 83), in this particular embodiment a stereo coding mechanism that, depending on perceptual entropy criteria, decides to code either an L/R representation or an M/S representation of the signal. This decision is preferably time- and frequency-variant.
(97) In FIG. 14 an embodiment of a decoder system is shown which is suitable to decode a bitstream 46 as generated by the encoder system shown in FIG. 6. This embodiment is merely illustrative of the principles of the present application. It is understood that modifications and variations of the embodiment will be apparent to those skilled in the art. A core decoder 90 decodes the bitstream 46 into pseudo left and right channels, which are transformed into the QMF domain by filter banks 91. Subsequently, a fixed pseudo L/R to DMX/RES transform of the resulting pseudo stereo signal L.sub.p, R.sub.p is performed in transform stage 12, thus creating a downmix signal DMX and a residual signal RES. When using SBR coding, these signals are low band signals, e.g. the downmix signal DMX and residual signal RES may only contain audio information for the low frequency band up to approximately 8 kHz. The downmix signal DMX is used by an SBR decoder 93 to reconstruct the high frequency band based on received SBR parameters (not shown). Both the output signal (including the low and reconstructed high frequency bands of the downmix signal DMX) from the SBR decoder 93 and the residual signal RES are input to a PS decoder 94 operating in the QMF domain (in particular in the hybrid QMF+Nyquist filter domain). The downmix signal DMX at the input of the PS decoder 94 also contains audio information in the high frequency band (e.g. up to 20 kHz), whereas the residual signal RES at the input of the PS decoder 94 is a low band signal (e.g. limited up to 8 kHz). Thus, for the high frequency band (e.g. for the band from 8 kHz to 20 kHz), the PS decoder 94 uses a decorrelated version of the downmix signal DMX instead of using the band limited residual signal RES. The decoded signals at the output of the PS decoder 94 are therefore based on a residual signal only up to 8 kHz.
After PS decoding, the two output channels of the PS decoder 94 are transformed into the time domain by filter banks 95, thereby generating the output stereo signal L, R.
(98) In FIG. 15 an embodiment of a decoder system is shown which is suitable to decode the bitstream 46 as generated by the encoder system shown in FIG. 7. This embodiment is merely illustrative of the principles of the present application. It is understood that modifications and variations of the embodiment will be apparent to those skilled in the art. The principal operation of the embodiment in FIG. 15 is similar to that of the decoder system outlined in FIG. 14. In contrast to FIG. 14, the SBR decoder 96 in FIG. 15 is located at the output of the PS decoder 94. Moreover, the SBR decoder makes use of SBR parameters (not shown) forming stereo envelope data, in contrast to the mono SBR parameters in FIG. 14. The downmix and residual signals at the input of the PS decoder 94 are typically low band signals, e.g. the downmix signal DMX and residual signal RES may contain audio information only for the low frequency band, e.g. up to approximately 8 kHz. Based on the low band downmix signal DMX and residual signal RES, the PS decoder 94 determines a low band stereo signal, e.g. up to approximately 8 kHz. Based on the low band stereo signal and stereo SBR parameters, the SBR decoder 96 reconstructs the high frequency part of the stereo signal. In comparison to the embodiment in FIG. 14, the embodiment in FIG. 15 offers the advantage that no decorrelated signal is needed (see also FIG. 8d) and thus an enhanced audio quality is achieved, whereas in FIG. 14 a decorrelated signal is needed for the high frequency part (see also FIG. 8c), thereby reducing the audio quality.
(99) FIG. 16a shows an embodiment of a decoding system which is inverse to the encoding system shown in FIG. 11a. The incoming bitstream signal is fed to a decoder block 100, which generates a first decoded signal 102 and a second decoded signal 103. At the encoder, either M/S coding or L/R coding was selected. This is indicated in the received bitstream. Based on this information, either M/S or L/R is selected in the selection stage 101. In case M/S was selected in the encoder, the first 102 and second 103 signals are converted into a (pseudo) L/R signal. In case L/R was selected in the encoder, the first 102 and second 103 signals may pass the stage 101 without transformation. The pseudo L/R signal L.sub.p, R.sub.p at the output of stage 101 is converted into a DMX/RES signal by the transform stage 12 (this stage effectively performs an L/R to M/S transform). Preferably, the stages 100, 101 and 12 in FIG. 16a operate in the MDCT domain. For transforming the downmix signal DMX and the residual signal RES into the time domain, conversion blocks 104 may be used. Thereafter, the resulting signal is fed to a PS decoder (not shown) and optionally to an SBR decoder as shown in FIGS. 14 and 15. The blocks 104 may alternatively be placed before block 12.
(100) FIG. 16b illustrates an implementation of the embodiment in FIG. 16a. In FIG. 16b, an exemplary implementation of the stage 101 for selecting between M/S or L/R decoding is shown. The stage 101 comprises a sum and difference transform stage 105 (M/S to L/R transform) which receives the first 102 and second 103 signals.
(101) Based on the encoding information given in the bitstream, the stage 101 selects either L/R or M/S decoding. When L/R decoding is selected, the output signal of the decoding block 100 is fed to the transform stage 12.
(102) FIG. 16c shows an alternative to the embodiment in FIG. 16a. Here, no explicit transform stage 12 is used. Rather, the transform stage 12 and the stage 101 are merged in a single stage 101′. The first 102 and second 103 signals are fed to a sum and difference transform stage 105′ (more precisely a pseudo L/R to DMX/RES transform stage) as part of stage 101′. The transform stage 105′ generates a DMX/RES signal. The transform stage 105′ in FIG. 16c is similar or identical to the transform stage 105 in FIG. 16b (except for a possibly different gain factor). In FIG. 16c the selection between M/S and L/R decoding needs to be inverted in comparison to FIG. 16b. In FIG. 16c the switch is in the lower position, whereas in FIG. 16b the switch is in the upper position. This visualizes the inversion of the L/R or M/S selection (the selection signal may be simply inverted by an inverter).
(103) It should be noted that the switch in FIGS. 16b and 16c preferably exists individually for each frequency band in the MDCT domain such that the selection between L/R and M/S can be both time- and frequency-variant. The transform stages 105 and 105′ may transform the whole used frequency range or may only transform a single frequency band.
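The inverted per-band selection of stage 101′ can be sketched as follows; the 1/sqrt(2) gain is again an assumed normalization, and a band is represented as a plain list of MDCT coefficients:

```python
import math

C = 1.0 / math.sqrt(2.0)  # assumed gain factor of stage 105'

def decode_band(first, second, coded_as_ms, c=C):
    """Per-band selection in the combined stage 101' (cf. FIG. 16c).

    If the band was coded as M/S, the decoded pair already equals
    DMX/RES up to a gain factor (cf. M_p = DMX, S_p = RES) and passes
    through (switch in the lower position); if it was coded as L/R,
    one sum and difference transform converts the pseudo L/R pair
    to DMX/RES."""
    if coded_as_ms:
        return list(first), list(second)
    dmx = [c * (l + r) for l, r in zip(first, second)]
    res = [c * (l - r) for l, r in zip(first, second)]
    return dmx, res
```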
(104) FIG. 17 shows a further embodiment of an encoding system for coding a stereo signal L, R into a bitstream signal. The encoding system comprises a downmix stage 8 for generating a downmix signal DMX and a residual signal RES based on the stereo signal. Further, the encoding system comprises a parameter determining stage 9 for determining one or more parametric stereo parameters 5. Further, the encoding system comprises means 110 for perceptual encoding downstream of the downmix stage 8. The encoding is selectable between: encoding based on a sum signal and a difference signal of the downmix signal DMX and the residual signal RES, or encoding based directly on the downmix signal DMX and the residual signal RES.
(105) Preferably, the selection is time- and frequency-variant.
(106) The encoding means 110 comprises a sum and difference transform stage 111 which generates the sum and difference signals. Further, the encoding means 110 comprise a selection block 112 for selecting encoding based on the sum and difference signals or based on the downmix signal DMX and the residual signal RES. Furthermore, an encoding block 113 is provided. Alternatively, two encoding blocks 113 may be used, with the first encoding block 113 encoding the DMX and RES signals and the second encoding block 113 encoding the sum and difference signals. In this case the selection 112 is downstream of the two encoding blocks 113.
(107) The sum and difference transform in block 111 is of the form
(108)

        ( 1    1 )
    c · ( 1   −1 )
(109) The transform block 111 may correspond to transform block 99 in FIG. 11c.
(110) The output of the perceptual encoder 110 is combined with the parametric stereo parameters 5 in the multiplexer 7 to form the resulting bitstream 6.
(111) In contrast to the structure in FIG. 17, encoding based on the downmix signal DMX and the residual signal RES may be realized by encoding a resulting signal which is generated by transforming the downmix signal DMX and the residual signal RES by two serial sum and difference transforms as shown in FIG. 11b (see the two transform blocks 2 and 98). The resulting signal after two sum and difference transforms corresponds to the downmix signal DMX and the residual signal RES (except for a possibly different gain factor).
(112) FIG. 18 shows an embodiment of a decoder system which is inverse to the encoder system in FIG. 17. The decoder system comprises means 120 for perceptual decoding based on the bitstream signal. Before decoding, the PS parameters are separated from the bitstream signal 6 in demultiplexer 10. The decoding means 120 comprise a core decoder 121 which generates a first signal 122 and a second signal 123 (by decoding). The decoding means output a downmix signal DMX and a residual signal RES.
(113) The downmix signal DMX and the residual signal RES are selectively based either on the sum and the difference of the first signal 122 and the second signal 123, or directly on the first signal 122 and the second signal 123.
(114) Preferably, the selection is time- and frequency-variant. The selection is performed in the selection stage 125.
(115) The decoding means 120 comprise a sum and difference transform stage 124 which generates sum and difference signals.
(116) The sum and difference transform in block 124 is of the form
(117)

        ( 1    1 )
    c · ( 1   −1 )
(118) The transform block 124 may correspond to transform block 105′ in FIG. 16c.
(119) After selection, the DMX and RES signals are fed to an upmix stage 126 for generating the stereo signal L, R based on the downmix signal DMX and the residual signal RES. The upmix operation is dependent on the PS parameters 5.
(120) Preferably, in FIGS. 17 and 18 the selection is frequency-variant. In FIG. 17, e.g. a time to frequency transform (e.g. by an MDCT or analysis filter bank) may be performed as a first step in the perceptual encoding means 110. In FIG. 18, e.g. a frequency to time transform (e.g. by an inverse MDCT or synthesis filter bank) may be performed as the last step in the perceptual decoding means 120.
(121) It should be noted that in the above-described embodiments, the signals, parameters and matrices may be frequency-variant or frequency-invariant and/or time-variant or time-invariant. The described computing steps may be carried out frequency-wise or for the complete audio band.
(122) Moreover, it should be noted that the various sum and difference transforms, i.e. the DMX/RES to pseudo L/R transform, the pseudo L/R to DMX/RES transform, the L/R to M/S transform and the M/S to L/R transform, are all of the form
(123)

        ( 1    1 )
    c · ( 1   −1 )
(124) Only the gain factor c may differ. Therefore, in principle, each of these transforms may be exchanged for another of these transforms. If the gain is not correct during the encoding process, this may be compensated in the decoding process. Moreover, when placing two same or two different ones of the sum and difference transforms in series, the resulting transform corresponds to the identity matrix (possibly multiplied by a gain factor).
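The identity-up-to-gain property of two cascaded sum and difference transforms can be verified numerically; the gain factors c1, c2 are free parameters, and the overall gain of the cascade is 2·c1·c2:

```python
def sum_diff(a, b, c):
    """Sum and difference transform of the common form
    c * [[1, 1], [1, -1]]; the gain c is stage dependent."""
    return ([c * (x + y) for x, y in zip(a, b)],
            [c * (x - y) for x, y in zip(a, b)])

def cascade(a, b, c1, c2):
    """Two sum and difference transforms in series: the result is
    the input pair scaled by 2 * c1 * c2 (identity up to gain)."""
    u, v = sum_diff(a, b, c1)
    return sum_diff(u, v, c2)
```

With c1 = 0.5 and c2 = 1, the overall gain 2·c1·c2 equals 1 and the input pair is reproduced exactly.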
(125) In an encoder system comprising both a PS encoder and an SBR encoder, different PS/SBR configurations are possible. In a first configuration, shown in FIG. 6, the SBR encoder 32 is connected downstream of the PS encoder 41. In a second configuration, shown in FIG. 7, the SBR encoder 42 is connected upstream of the PS encoder 41. Depending upon e.g. the desired target bitrate, the properties of the core encoder, and/or various other factors, one of the configurations can be preferred over the other in order to provide the best performance. Typically, for lower bitrates, the first configuration can be preferred, while for higher bitrates, the second configuration can be preferred. Hence, it is desirable if an encoder system supports both configurations so as to be able to choose a preferred configuration depending upon e.g. the desired target bitrate and/or one or more other criteria.
(126) Also in a decoder system comprising both a PS decoder and an SBR decoder, different PS/SBR configurations are possible. In a first configuration, shown in FIG. 14, the SBR decoder 93 is connected upstream of the PS decoder 94. In a second configuration, shown in FIG. 15, the SBR decoder 96 is connected downstream of the PS decoder 94. In order to achieve correct operation, the configuration of the decoder system has to match that of the encoder system. If the encoder is configured according to FIG. 6, then the decoder is correspondingly configured according to FIG. 14. If the encoder is configured according to FIG. 7, then the decoder is correspondingly configured according to FIG. 15. In order to ensure correct operation, the encoder preferably signals to the decoder which PS/SBR configuration was chosen for encoding (and thus which PS/SBR configuration is to be chosen for decoding). Based on this information, the decoder selects the appropriate decoder configuration.
(127) As discussed above, in order to ensure correct decoder operation, there is preferably a mechanism to signal from the encoder to the decoder which configuration is to be used in the decoder. This can be done explicitly (e.g. by means of a dedicated bit or field in the configuration header of the bitstream, as discussed below) or implicitly (e.g. by checking whether the SBR data is mono or stereo in case PS data is present).
(128) As discussed above, to signal the chosen PS/SBR configuration, a dedicated element in the bitstream header of the bitstream conveyed from the encoder to the decoder may be used. Such a bitstream header carries the configuration information needed to enable the decoder to correctly decode the data in the bitstream. The dedicated element in the bitstream header may be, e.g., a one-bit flag, a field, or an index pointing to a specific entry in a table that specifies different decoder configurations.
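The explicit signaling described above can be sketched as follows. This is a minimal illustration, not the actual bitstream syntax: the header layout, function names, and the use of a whole byte for the one-bit flag are all assumptions made for clarity.

```python
def write_config_header(use_second_configuration):
    """Encoder side: emit a hypothetical one-byte header whose
    least significant bit signals the chosen PS/SBR configuration."""
    return bytes([1 if use_second_configuration else 0])

def read_config_header(header):
    """Decoder side: map the flag back to the matching decoder
    configuration (FIG. 6 pairs with FIG. 14, FIG. 7 with FIG. 15)."""
    if header[0] & 0x01:
        return "FIG. 15"
    return "FIG. 14"
```

In a real bitstream the flag would be packed among the other header fields rather than occupying its own byte.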
(129) Instead of including in the bitstream header an additional dedicated element for signaling the PS/SBR configuration, information already present in the bitstream may be evaluated at the decoder system for selecting the correct PS/SBR configuration. E.g., the chosen PS/SBR configuration may be derived from the bitstream header configuration information for the PS decoder and the SBR decoder. This configuration information typically indicates whether the SBR decoder is to be configured for mono operation or stereo operation. If, for example, a PS decoder is enabled and the SBR decoder is configured for mono operation (as indicated in the configuration information), the PS/SBR configuration according to FIG. 14 can be selected. If a PS decoder is enabled and the SBR decoder is configured for stereo operation, the PS/SBR configuration according to FIG. 15 can be selected.
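The implicit selection rule described above reduces to a small decision function. The sketch below is an illustrative rendering of that rule; the function and parameter names are hypothetical.

```python
def select_ps_sbr_configuration(ps_enabled, sbr_stereo):
    """Derive the decoder PS/SBR configuration implicitly from
    configuration information already present in the bitstream header,
    without any dedicated signaling element."""
    if not ps_enabled:
        # no PS decoder enabled: there is no PS/SBR ordering to choose
        return None
    # PS enabled + SBR in mono operation  -> configuration of FIG. 14
    # PS enabled + SBR in stereo operation -> configuration of FIG. 15
    return "FIG. 15" if sbr_stereo else "FIG. 14"
```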
(130) The above-described embodiments are merely illustrative of the principles of the present application. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, that the scope of the application not be limited by the specific details presented by way of description and explanation of the embodiments herein.
(131) The systems and methods disclosed in the application may be implemented as software, firmware, hardware or a combination thereof. Certain components or all components may be implemented as software running on a digital signal processor or microprocessor, or implemented as hardware and/or as application-specific integrated circuits.
(132) Typical devices making use of the disclosed systems and methods are portable audio players, mobile communication devices, set-top boxes, TV sets, AVRs (audio-video receivers), personal computers, etc.