AUDIO ENCODER AND DECODER USING A FREQUENCY DOMAIN PROCESSOR, A TIME DOMAIN PROCESSOR, AND A CROSS PROCESSING FOR CONTINUOUS INITIALIZATION
20220051681 · 2022-02-17
Inventors
- Sascha Disch (Fürth, DE)
- Martin Dietz (Nürnberg, DE)
- Markus Multrus (Nürnberg, DE)
- Guillaume Fuchs (Bubenreuth, DE)
- Emmanuel Ravelli (Erlangen, DE)
- Matthias Neusinger (Rohr, DE)
- Markus Schnell (Nürnberg, DE)
- Benjamin Schubert (Nürnberg, DE)
- Bernhard Grill (Rückersdorf, DE)
CPC classification
G10L19/02
PHYSICS
G10L19/24
PHYSICS
G10L19/028
PHYSICS
International classification
G10L19/02
PHYSICS
G10L19/022
PHYSICS
Abstract
An audio encoder for encoding an audio signal includes: a first encoding processor for encoding a first audio signal portion in a frequency domain, wherein the first encoding processor includes: a time frequency converter for converting the first audio signal portion into a frequency domain representation having spectral lines up to a maximum frequency of the first audio signal portion; a spectral encoder for encoding the frequency domain representation; a second encoding processor for encoding a second different audio signal portion in the time domain; a cross-processor for calculating, from the encoded spectral representation of the first audio signal portion, initialization data of the second encoding processor, so that the second encoding processing is initialized to encode the second audio signal portion immediately following the first audio signal portion in time in the audio signal; a controller configured for analyzing the audio signal and for determining, which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and an encoded signal former for forming an encoded audio signal including a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion.
Claims
1. An audio encoder for encoding an audio signal, comprising: a first encoding processor configured for encoding a first audio signal portion in a frequency domain, wherein the first encoding processor comprises: a time-frequency converter configured for converting the first audio signal portion into a frequency domain representation comprising spectral lines up to a maximum frequency of the first audio signal portion; and a spectral encoder configured for encoding the frequency domain representation; a second encoding processor configured for encoding a second different audio signal portion in a time domain; a cross-processor configured for calculating, from an encoded spectral representation of the first audio signal portion, initialization data of the second encoding processor, so that the second encoding processor is initialized to encode the second different audio signal portion immediately following the first audio signal portion in time in the audio signal; a controller configured for analyzing the audio signal and configured for determining, which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and an encoded signal former configured for forming an encoded audio signal comprising a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion.
2. The audio encoder of claim 1, wherein the audio signal comprises a high band and a low band, and wherein the second encoding processor comprises: a sampling rate converter configured for converting the second audio signal portion to a lower sampling rate representation having a second sampling rate, the second sampling rate of the lower sampling rate representation being lower than a first sampling rate of the audio signal, wherein the lower sampling rate representation does not comprise the high band of the audio signal; a time domain low band encoder configured for time domain encoding the lower sampling rate representation; and a time domain bandwidth extension encoder configured for parametrically encoding the high band.
3. The audio encoder of claim 1, further comprising: a preprocessor configured for preprocessing the first audio signal portion and the second different audio signal portion, wherein the preprocessor comprises a prediction analyzer configured for determining prediction coefficients; and wherein the encoded signal former is configured for introducing an encoded version of the prediction coefficients into the encoded audio signal.
4. The audio encoder of claim 1, comprising: a preprocessor configured for preprocessing the first audio signal portion and the second different audio signal portion, wherein the preprocessor comprises a resampler configured for resampling the audio signal to a sampling rate of the second encoding processor to obtain a resampled audio signal; and wherein the preprocessor comprises a prediction analyzer configured to determine prediction coefficients using the resampled audio signal, or wherein the preprocessor further comprises a long term prediction analysis stage configured for determining one or more long term prediction parameters for the first audio signal portion.
5. The audio encoder of claim 1, wherein the cross-processor comprises: a spectral decoder configured for calculating a decoded version of the first encoded signal portion; a delay stage configured for delaying the decoded version of the first encoded signal portion to obtain a delayed version and for feeding the delayed version into a de-emphasis stage of the second encoding processor for initialization; a weighted prediction coefficient analysis filtering block configured for filtering the decoded version of the first encoded signal portion to obtain a filter output and for feeding the filter output into an innovative codebook determiner of the second encoding processor for initialization; an analysis filtering stage configured for filtering the decoded version of the first encoded signal portion or a pre-emphasized version derived by a pre-emphasis stage from the decoded version of the first encoded signal portion to obtain a filter residual signal and configured for feeding the filter residual signal into an adaptive codebook determiner of the second encoding processor for initialization; or a pre-emphasis filter configured for filtering the decoded version of the first encoded signal portion to obtain a pre-emphasized version and configured for feeding the pre-emphasized version or a delayed pre-emphasized version to a synthesis filtering stage of the second encoding processor for initialization.
6. The audio encoder of claim 1, wherein the first audio signal portion has a sampling frequency associated therewith, and wherein the maximum frequency is lower than or equal to half of the sampling frequency and at least one quarter of the sampling frequency or higher.
7. The audio encoder of claim 1, wherein the second encoding processor comprises at least one element of the following group of elements: a prediction analysis filter; an adaptive codebook stage; an innovative codebook stage; an estimator configured for estimating an innovative codebook entry; an ACELP/gain coding stage; a prediction synthesis filtering stage; a de-emphasis stage; and a bass post-filter analysis stage.
8. An audio decoder for decoding an encoded audio signal, comprising: a first decoding processor configured for decoding a first encoded audio signal portion in a frequency domain to obtain a decoded spectral representation, the first decoding processor comprising a frequency-time converter configured for converting the decoded spectral representation into a time domain to acquire a decoded first audio signal portion; a second decoding processor configured for decoding a second encoded audio signal portion in the time domain to acquire a decoded second audio signal portion; a cross-processor configured for calculating, from the decoded spectral representation of the first encoded audio signal portion, initialization data of the second decoding processor, so that the second decoding processor is initialized to decode the second encoded audio signal portion following in time the first encoded audio signal portion in the encoded audio signal; and a combiner configured for combining the decoded first audio signal portion and the decoded second audio signal portion to acquire a decoded audio signal.
9. The audio decoder of claim 8, wherein the decoded spectral representation extends until a maximum frequency of a time representation of the decoded audio signal, a spectral value for the maximum frequency being zero or different from zero.
10. The audio decoder of claim 8, wherein the first decoding processor is configured to reconstruct a first set of first spectral portions in a waveform-preserving manner to generate a spectrum having gaps, wherein the gaps in the spectrum are filled with an Intelligent Gap Filling (IGF) technology comprising using a frequency regeneration applying parametric data and using reconstructed first spectral portions of the first set of first spectral portions.
11. The audio decoder of claim 8, wherein the second decoding processor comprises at least one element of the group of elements comprising: a stage configured for decoding ACELP gains and an innovative codebook; an adaptive codebook synthesis stage; an ACELP post-processor; a prediction synthesis filter; and a de-emphasis stage.
12. A method of encoding an audio signal, comprising: encoding a first audio signal portion in a frequency domain, comprising: converting the first audio signal portion into a frequency domain representation comprising spectral lines up to a maximum frequency of the first audio signal portion; and encoding the frequency domain representation; encoding a second different audio signal portion in a time domain; calculating, from an encoded spectral representation of the first audio signal portion, initialization data for the step of encoding the second different audio signal portion, so that the step of encoding the second different audio signal portion is initialized to encode the second audio signal portion immediately following the first audio signal portion in time in the audio signal; analyzing the audio signal and determining, which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and forming an encoded audio signal comprising a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion.
13. A method of decoding an encoded audio signal, comprising: decoding a first encoded audio signal portion in a frequency domain to obtain a decoded spectral representation, the decoding comprising converting the decoded spectral representation into a time domain to acquire a decoded first audio signal portion; decoding a second encoded audio signal portion in the time domain to acquire a decoded second audio signal portion; calculating, from the decoded spectral representation of the first encoded audio signal portion, initialization data of the step of decoding the second encoded audio signal portion, so that the step of decoding the second encoded audio signal portion is initialized to decode the second encoded audio signal portion following in time the first encoded audio signal portion in the encoded audio signal; and combining the decoded first audio signal portion and the decoded second audio signal portion to acquire a decoded audio signal.
14. A non-transitory digital storage medium having a computer program stored thereon to perform the method of encoding an audio signal, comprising: encoding a first audio signal portion in a frequency domain, comprising: converting the first audio signal portion into a frequency domain representation comprising spectral lines up to a maximum frequency of the first audio signal portion; and encoding the frequency domain representation; encoding a second different audio signal portion in a time domain; calculating, from an encoded spectral representation of the first audio signal portion, initialization data for the step of encoding the second different audio signal portion, so that the step of encoding the second different audio signal portion is initialized to encode the second audio signal portion immediately following the first audio signal portion in time in the audio signal; analyzing the audio signal and determining, which portion of the audio signal is the first audio signal portion encoded in the frequency domain and which portion of the audio signal is the second audio signal portion encoded in the time domain; and forming an encoded audio signal comprising a first encoded signal portion for the first audio signal portion and a second encoded signal portion for the second audio signal portion, when said computer program is run by a computer.
15. A non-transitory digital storage medium having a computer program stored thereon to perform the method of decoding an encoded audio signal, comprising: decoding a first encoded audio signal portion in a frequency domain to obtain a decoded spectral representation, the decoding comprising converting the decoded spectral representation into a time domain to acquire a decoded first audio signal portion; decoding a second encoded audio signal portion in the time domain to acquire a decoded second audio signal portion; calculating, from the decoded spectral representation of the first encoded audio signal portion, initialization data of the step of decoding the second encoded audio signal portion, so that the step of decoding the second encoded audio signal portion is initialized to decode the second encoded audio signal portion following in time the first encoded audio signal portion in the encoded audio signal; and combining the decoded first audio signal portion and the decoded second audio signal portion to acquire a decoded audio signal, when said computer program is run by a computer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0061] Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:
DETAILED DESCRIPTION OF THE INVENTION
[0087] The audio encoder of
[0088] Hence, the controller 620 ensures that, for a single audio signal portion, only a time domain representation or a frequency domain representation is present in the encoded signal. The controller 620 can accomplish this in several ways. One way is that, for one and the same audio signal portion, both representations arrive at block 630 and the controller 620 controls the encoded signal former 630 to introduce only one of the two representations into the encoded signal. Alternatively, however, the controller 620 can control an input into the first encoding processor and an input into the second encoding processor so that, based on the analysis of the corresponding signal portion, only one of the two blocks 600 or 610 is activated to actually perform the full encoding operation while the other block is deactivated.
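The per-portion routing described in this paragraph can be sketched as follows. This is a minimal Python illustration only; the function names and the string labels are hypothetical and not taken from the patent:

```python
def form_encoded_signal(portions, analyze, encode_fd, encode_td):
    """Route each audio signal portion to exactly one encoding processor,
    so the encoded signal carries only one representation per portion."""
    encoded = []
    for portion in portions:
        if analyze(portion) == "frequency":
            # first encoding processor (frequency domain), cf. block 600
            encoded.append(("FD", encode_fd(portion)))
        else:
            # second encoding processor (time domain), cf. block 610
            encoded.append(("TD", encode_td(portion)))
    return encoded
```

The essential property is that the analysis decides per portion, so no portion ever appears twice in the output.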
[0089] This deactivation can be a complete deactivation or, as illustrated with respect to, for example,
[0090] In a further specific implementation of the second encoding processor operating in the time domain, the second encoding processor comprises a downsampler 900 or sampling rate converter for converting the audio signal portion into a representation at a lower sampling rate, i.e., a sampling rate lower than the sampling rate at the input into the first encoding processor. This is illustrated in
[0091] In a further embodiment of the present invention the audio encoder additionally comprises, although not illustrated in
[0092] Particularly, the preprocessor comprises a transient detector 1020, and the first branch is “opened” by a resampler 1021 to, e.g., 12.8 kHz, followed by a pre-emphasis stage 1005a, an LPC analyzer 1002a, a weighted analysis filtering stage 1022a, and an FFT/noise estimator/voice activity detection (VAD) or pitch search stage 1007.
[0093] The second branch is “opened” by a resampler 1004 to, e.g., 12.8 kHz or 16 kHz, i.e., to the ACELP sampling rate, followed by a pre-emphasis stage 1005b, an LPC analyzer 1002b, a weighted analysis filtering stage 1022b, and a TCX LTP parameter extraction stage 1024. Block 1024 provides its output to the bitstream multiplexer. Block 1002 is connected to an LPC quantizer 1010 controlled by the ACELP/TCX decision, and the block 1010 is also connected to the bitstream multiplexer.
[0094] Other embodiments can alternatively comprise only a single branch or more branches. In an embodiment, this preprocessor comprises a prediction analyzer for determining prediction coefficients. This prediction analyzer can be implemented as an LPC (linear prediction coding) analyzer for determining LPC coefficients. However, other analyzers can be implemented as well. Furthermore, the preprocessor in the alternative embodiment may comprise a prediction coefficient quantizer, wherein this device receives prediction coefficient data from the prediction analyzer.
[0095] Advantageously, however, the LPC quantizer is not necessarily part of the preprocessor; it can instead be implemented as part of the main encoding routine.
[0096] Furthermore, the preprocessor may additionally comprise an entropy coder for generating an encoded version of the quantized prediction coefficients. It is important to note that the encoded signal former 630, or, in the specific implementation, the bitstream multiplexer 630, ensures that the encoded version of the quantized prediction coefficients is included in the encoded audio signal 632. Advantageously, the LPC coefficients are not quantized directly but are converted into an ISF representation, for example, or any other representation better suited for quantization. This conversion is advantageously performed either by the block for determining the LPC coefficients or within the block for quantizing the LPC coefficients.
[0097] Furthermore, the preprocessor may comprise a resampler for resampling an audio input signal at an input sampling rate into a lower sampling rate for the time domain encoder. When the time domain encoder is an ACELP encoder having a certain ACELP sampling rate, the downsampling is advantageously performed to either 12.8 kHz or 16 kHz. The input sampling rate can be any of a particular number of sampling rates, such as 32 kHz or an even higher sampling rate. On the other hand, the sampling rate of the time domain encoder is predetermined by certain restrictions, and the resampler 1004 performs this resampling and outputs the lower sampling rate representation of the input signal. Hence, the resampler can perform a similar functionality and can even be one and the same element as the downsampler 900 illustrated in the context of
[0098] Furthermore, it is advantageous to apply a pre-emphasis in the pre-emphasis block. Pre-emphasis processing is well known in the art of time domain encoding and is described in the literature referring to the AMR-WB+ processing. The pre-emphasis is particularly configured for compensating for a spectral tilt and therefore allows a better calculation of the LPC parameters at a given LPC order.
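The pre-emphasis referred to here is conventionally a first-order filter of the form y[n] = x[n] − α·x[n−1]. A minimal Python sketch follows; α = 0.68 is a typical AMR-WB-style value, and the exact coefficient used in any given implementation is an assumption here:

```python
def pre_emphasis(x, alpha=0.68):
    """First-order pre-emphasis: y[n] = x[n] - alpha * x[n-1].
    Attenuates the spectral tilt ahead of the LPC analysis."""
    y = []
    prev = 0.0
    for sample in x:
        y.append(sample - alpha * prev)
        prev = sample
    return y
```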
[0099] Furthermore, the preprocessor may additionally comprise a TCX-LTP parameter extraction for controlling an LTP post filter illustrated at 1420 in
[0100] As illustrated, the result of block 1024 is input into the encoded signal, i.e., is in the embodiment of
[0101] Hence, to summarize, common to both paths is a preprocessing operation 1000 in which commonly used signal processing operations are performed. These comprise a resampling to an ACELP sampling rate (12.8 or 16 kHz) for one parallel path. Furthermore, a TCX LTP parameter extraction illustrated at block 1006 is performed and, additionally, a pre-emphasis and a determination of LPC coefficients are performed. As outlined, the pre-emphasis compensates for the spectral tilt and therefore makes the calculation of LPC parameters at a given LPC order more efficient.
[0102] Subsequently, reference is made to
[0103] Based on this audio signal portion, the controller 620 addresses a frequency domain encoder simulator 621 and a time domain encoder simulator 622 in order to calculate, for each encoder possibility, an estimated signal-to-noise ratio. Subsequently, the selector 623 selects the encoder which has provided the better signal-to-noise ratio, naturally under consideration of a predefined bit rate. The selector then identifies the corresponding encoder via the control output. When it is determined that the audio signal portion under consideration is to be encoded using the frequency domain encoder, the time domain encoder is set into an initialization state or, in other embodiments not requiring instant switching, into a completely deactivated state. However, when it is determined that the audio signal portion under consideration is to be encoded by the time domain encoder, the frequency domain encoder is deactivated instead.
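The selection step can be illustrated as follows. The SNR estimate and the simulator interfaces below are simplified placeholders for the actual encoder simulators 621, 622, not the patent's implementation:

```python
import math

def snr_db(original, coded):
    """Signal-to-noise ratio in dB of a coded portion against the original."""
    noise = sum((o - c) ** 2 for o, c in zip(original, coded))
    power = sum(o ** 2 for o in original)
    return float("inf") if noise == 0.0 else 10.0 * math.log10(power / noise)

def select_encoder(portion, fd_simulate, td_simulate):
    """Pick the branch whose simulated encoding gives the better SNR."""
    snr_fd = snr_db(portion, fd_simulate(portion))
    snr_td = snr_db(portion, td_simulate(portion))
    return "frequency_domain" if snr_fd >= snr_td else "time_domain"
```

In a real coder the comparison would additionally account for the bits spent by each branch at the predefined bit rate.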
[0104] Subsequently, an advantageous implementation of the controller illustrated in
[0105] In case the TCX branch is chosen, a TCX decoder is run in each frame and outputs a signal at the ACELP sampling rate. This signal is used to update the memories used for the ACELP encoding path (LPC residual, Mem w0, memory de-emphasis), to enable instant switching from TCX to ACELP. The memory update is performed in each TCX frame.
[0106] Alternatively, a full analysis-by-synthesis process can be performed, i.e., both encoder simulators 621, 622 implement the actual encoding operations and the results are compared by the selector 623. As a further alternative, a complete feed-forward decision can be made by performing a signal analysis. For example, when a signal classifier determines that the signal is a speech signal, the time domain encoder is selected, and when it determines that the signal is a music signal, the frequency domain encoder is selected. Other procedures for deciding between the two encoders based on a signal analysis of the audio signal portion under consideration can also be applied.
[0107] Advantageously, the audio encoder additionally comprises a cross-processor 700 illustrated in
[0108] Hence, the time domain encoder 610 is configured to be initialized by the initialization data in order to encode an audio signal portion following an earlier audio signal portion encoded by the frequency domain encoder 600 in an efficient manner.
[0109] In particular, the cross-processor comprises a frequency-time converter for converting a frequency domain representation into a time domain representation which can be forwarded to the time domain encoder directly or after some further processing. This converter is illustrated in
[0110] In other embodiments, such as narrow-band operating modes with an 8 kHz input sampling rate, the TCX branch operates at 8 kHz, whereas ACELP still runs at 12.8 kHz; that is, the ACELP sampling rate is not always lower than the TCX sampling rate. For a 16 kHz input sampling rate (wideband), there are also scenarios where ACELP runs at the same sampling rate as TCX, i.e., both at 16 kHz. In a super wideband (SWB) mode, the input sampling rate is 32 or 48 kHz.
[0111] The ratio of the time domain coder sampling rate or ACELP sampling rate and the frequency domain coder sampling rate or input sampling rate can be calculated and is a downsampling factor DS illustrated in
[0112] For a downsampling factor greater than one, i.e., for an actual downsampling, the block 602 has a large transform size and the IMDCT block 702 has a small transform size. As illustrated in FIG. 7B, the IMDCT block 702 therefore comprises a selector 726 for selecting the lower spectral portion of an input into the IMDCT block 702. The selected portion of the full-band spectrum is defined by the downsampling factor DS. For example, when the lower sampling rate is 16 kHz and the input sampling rate is 32 kHz, the downsampling factor is 2.0 and, therefore, the selector 726 selects the lower half of the full-band spectrum. When the spectrum has, for example, 1024 MDCT lines, the selector selects the lower 512 MDCT lines.
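The selection performed by the selector 726 can be expressed compactly. This Python sketch assumes the spectrum is a flat list of MDCT lines; the function name is illustrative:

```python
def select_low_band_lines(spectrum, input_rate, target_rate):
    """Keep the lower 1/DS of the MDCT lines, where DS is the
    downsampling factor input_rate / target_rate."""
    ds = input_rate / target_rate
    num_keep = int(len(spectrum) / ds)
    return spectrum[:num_keep]
```

For the example in the text, 1024 lines with a 32 kHz input rate and a 16 kHz lower rate give DS = 2.0, so the lower 512 lines are kept.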
[0113] This low frequency portion of the full-band spectrum is input into a small size transform and foldout block 720, as illustrated in
[0114] Thus, a very efficient downsampling operation can be applied since the downsampling is included in the IMDCT implementation. In this context, it is emphasized that the block 702 can be implemented by an IMDCT but can also be implemented by any other transform or filterbank implementation which can be suitably sized in the actual transform kernel and other transform related operations.
[0115] For a downsampling factor lower than one, i.e., for an actual upsampling, the notation in
[0116] The block 602 has a small transform size and the IMDCT block 702 has a large transform size. As illustrated in
[0117] This frequency portion of the full-band spectrum is input into a then large size transform and foldout block 720, as illustrated in
[0118] Thus, a very efficient upsampling operation can be applied since the upsampling is included in the IMDCT implementation. In this context, it is emphasized that the block 702 can be implemented by an IMDCT but can also be implemented by any other transform or filterbank implementation which can be suitably sized in the actual transform kernel and other transform related operations.
[0119] Generally, it is noted that the definition of a sample rate in the frequency domain needs some explanation. Spectral bands are often downsampled; hence, the notion of an effective sampling rate or an “associated” sample or sampling rate is used. In the case of a filterbank/transform, the effective sample rate would be defined as Fs_eff = subband_sample_rate * num_subbands.
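As a small numeric illustration of this definition (the 64-band and 500 Hz figures below are example values, not taken from the patent):

```python
def effective_sampling_rate(subband_sample_rate, num_subbands):
    """Fs_eff = subband sample rate multiplied by the number of subbands."""
    return subband_sample_rate * num_subbands
```

For instance, a 64-band filterbank whose subbands run at 500 Hz each has an effective sampling rate of 32 kHz.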
[0120] In a further embodiment illustrated in
[0121] Furthermore, the frequency domain encoder advantageously comprises a noise shaping block 606a. The noise shaping block 606a is controlled by quantized LPC coefficients as generated by block 1010. Using the quantized LPC coefficients, the noise shaping block 606a performs a spectral shaping of the high-resolution spectral values or spectral lines that are directly encoded (rather than parametrically encoded), and the result of block 606a is similar to the spectrum of a signal subsequent to an LPC filtering stage operating in the time domain, such as the LPC analysis filtering block 704 to be described later on. Furthermore, the result of the noise shaping block 606a is then quantized and entropy coded as indicated by block 606b. The result of block 606b corresponds to the encoded first audio signal portion or a frequency domain coded audio signal portion (together with other side information).
[0122] The cross-processor 700 comprises a spectral decoder for calculating a decoded version of the first encoded signal portion. In the embodiment of
[0123] Furthermore, the cross-processor 700 may comprise in addition or alternatively a weighted prediction coefficient analysis filtering stage 708 for filtering the decoded version and for feeding a filtered decoded version to a codebook determinator 613 indicated as “MMSE” in
[0124] The time domain encoder processor 610 comprises, as illustrated in
[0125] Furthermore, an ACELP gains/coding stage 615 is provided in series to the innovative codebook stage 614 and the result of this block is input into a codebook determinator 613 indicated as MMSE in
[0126] As illustrated, several blocks of the time domain encoder depend on previous signals: the adaptive codebook block 612, the codebook determinator 613, the LPC synthesis filtering block 616, and the de-emphasis block 617. These blocks are provided with data from the cross-processor, derived from the frequency domain encoding processor data, in order to initialize them so that they are ready for an instant switch from the frequency domain encoder to the time domain encoder. As can also be seen from
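The initialization can be pictured as copying state derived from the decoded frequency domain portion into the state-dependent blocks. The following Python sketch is purely illustrative: the state keys, the history length, and the pre-emphasis coefficient are assumptions, not the patent's actual memory layout:

```python
def initialize_td_encoder_state(decoded_fd_portion, alpha=0.68, history=4):
    """Seed time domain encoder memories from the decoded frequency
    domain portion, as the cross-processor does before a switch."""
    tail = decoded_fd_portion[-history:]
    # a pre-emphasized tail stands in for the LPC residual history
    pre = [tail[0]] + [tail[i] - alpha * tail[i - 1] for i in range(1, len(tail))]
    return {
        "deemphasis_mem": decoded_fd_portion[-1],  # cf. de-emphasis block 617
        "residual_history": pre,                   # cf. adaptive codebook block 612
    }
```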
[0127] The advantageous audio decoder in
[0128] For ACELP initialization when switching from TCX to ACELP, a cross path (comprising a shared TCX decoder frontend, but additionally providing output at the lower sampling rate and some post-processing) exists that performs the inventive ACELP initialization. Sharing the same sampling rate and filter order between TCX and ACELP for the LPCs allows for an easier and more efficient ACELP initialization.
[0129] For visualizing the switching, two switches are sketched in FIG. 14B. While the second switch 1160 downstream chooses between the TCX/IGF and ACELP/TD-BWE outputs, the first switch 1480 either pre-updates the buffers in the resampling QMF stage downstream of the ACELP path with the output of the cross path or simply passes on the ACELP output.
[0130] Subsequently, audio decoder implementations in accordance with aspects of the present invention are discussed in the context of
[0131] An audio decoder for decoding an encoded audio signal 1101 comprises a first decoding processor 1120 for decoding a first encoded audio signal portion in a frequency domain. The first decoding processor 1120 comprises a spectral decoder 1122 for decoding first spectral regions with a high spectral resolution and for synthesizing second spectral regions using a parametric representation of the second spectral regions and at least a decoded first spectral region to obtain a decoded spectral representation. The decoded spectral representation is a full-band decoded spectral representation as discussed in the context of
[0132] Furthermore, the audio decoder comprises a second decoding processor 1140 for decoding the second encoded audio signal portion in the time domain to obtain a decoded second signal portion. Furthermore, the audio decoder comprises a combiner 1160 for combining the decoded first signal portion and the decoded second signal portion to obtain a decoded audio signal. The decoded signal portions are combined in sequence which is also illustrated in
[0133] Advantageously, the second decoding processor 1140 contains a time domain bandwidth extension processor 1220 and comprises, as illustrated in
[0134]
[0135] Subsequently, an advantageous implementation of the upsampler 1210 of
[0136] Further processing operations can be performed within the QMF domain in addition to or instead of the bandpass filtering 1472. If no processing is performed at all, then the QMF analysis and the QMF synthesis constitute an efficient upsampler 1210.
[0137] Subsequently, the construction of the individual elements in
[0138] The full-band frequency domain decoder 1120 comprises a first decoding block 1122a for decoding the high-resolution spectral coefficients and for additionally performing noise filling in the low band portion as known, for example, from the USAC technology. Furthermore, the full-band decoder comprises an IGF processor 1122b for filling the spectral holes using synthesized spectral values which have been encoded only parametrically and, therefore, with a low resolution on the encoder side. Then, in block 1122c, an inverse noise shaping is performed and the result is input into a TNS/TTS synthesis block 705 which provides, as a final output, an input to a frequency-time converter 1124, which is advantageously implemented as an inverse modified discrete cosine transform operating at the output sampling rate, i.e., the high sampling rate.
[0139] Furthermore, a harmonic or LTP post-filter is used which is controlled by data obtained by the TCX LTP parameter extraction block 1006 in
[0140] Several elements in
[0141] The time domain decoding processor 1140 advantageously comprises the ACELP or time domain low band decoder 1200 comprising an ACELP decoder stage 1149 for obtaining decoded gains and the innovative codebook information. Additionally, an ACELP adaptive codebook stage 1141, a subsequent ACELP post-processing stage 1142 and a final synthesis filter, such as an LPC synthesis filter 1143, are provided, where the synthesis filter is again controlled by the quantized LPC coefficients 1145 obtained from the bitstream demultiplexer 1100 corresponding to the encoded signal parser 1100 in
[0142] In accordance with embodiments of the present invention, the audio decoder additionally comprises the cross-processor 1170 illustrated in
[0143] Advantageously, the cross-processor 1170 comprises an additional frequency-time converter 1171 operating at a lower sampling rate than the frequency-time converter of the first decoding processor in order to obtain a further decoded first signal portion in the time domain to be used as the initialization signal or for which any initialization data can be derived. Advantageously, this IMDCT or low sampling rate frequency-time converter is implemented as illustrated in
[0144] As illustrated in Fig., the cross-processor 1170 further comprises, alone or in addition to other elements, a delay stage 1172 for delaying the further decoded first signal portion and for feeding the delayed decoded first signal portion into a de-emphasis stage 1144 of the second decoding processor for initialization. Furthermore, the cross-processor comprises, in addition or alternatively, a pre-emphasis filter 1173 and a delay stage 1175 for filtering and delaying a further decoded first signal portion and for providing the delayed output of block 1175 into an LPC synthesis filtering stage 1143 of the ACELP decoder for the purpose of initialization.
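The pre-emphasis/de-emphasis filter pair and the role of the filter memory in initialization can be sketched as follows. The first-order filter form and the coefficient value 0.68 are assumptions typical of ACELP-family codecs, not values taken from this document; the point is that supplying the last sample(s) of the previously decoded portion as filter memory is exactly the kind of initialization the cross-processor performs.

```python
import numpy as np

ALPHA = 0.68  # assumed pre-emphasis coefficient; the actual codec value may differ

def pre_emphasis(x, mem=0.0):
    """y[n] = x[n] - ALPHA*x[n-1]; `mem` is the last sample of the
    preceding segment, i.e., the state the cross-processor supplies when
    switching from the frequency domain path to ACELP."""
    y = np.empty_like(x)
    prev = mem
    for i, s in enumerate(x):
        y[i] = s - ALPHA * prev
        prev = s
    return y

def de_emphasis(y, mem=0.0):
    """Inverse filter x[n] = y[n] + ALPHA*x[n-1]; `mem` initializes the
    filter state from the previously decoded portion."""
    x = np.empty_like(y)
    prev = mem
    for i, s in enumerate(y):
        prev = s + ALPHA * prev
        x[i] = prev
    return x
```

With correctly initialized memory, filtering a signal in two segments yields bit-identical output to filtering it in one piece, which is what makes the switch seamless.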
[0145] Furthermore, the cross-processor may comprise alternatively or in addition to the other mentioned elements an LPC analysis filter 1174 for generating a prediction residual signal from the further decoded first signal portion or a pre-emphasized further decoded first signal portion and for feeding the data into a codebook synthesizer of the second decoding processor and advantageously, into the adaptive codebook stage 1141. Furthermore, the output of the frequency-time converter 1171 with the low sampling rate is also input into the QMF analysis stage 1471 of the upsampler 1210 for the purpose of initialization, i.e., when the currently decoded audio signal portion is delivered by the frequency domain full-band decoder 1120.
[0146] The advantageous audio decoder is described in the following: The waveform decoder part consists of a full-band TCX decoder path with IGF, both operating at the input sampling rate of the codec. In parallel, an alternative ACELP decoder path at a lower sampling rate exists that is reinforced further downstream by a TD-BWE.
[0147] For ACELP initialization when switching from TCX to ACELP, a cross path (consisting of a shared TCX decoder frontend but additionally providing output at the lower sampling rate and some post-processing) exists that performs the inventive ACELP initialization. Sharing the same sampling rate and filter order between TCX and ACELP in the LPCs allows for an easier and more efficient ACELP initialization.
[0148] For visualizing the switching, two switches are sketched in
[0149] To summarize, advantageous aspects of the invention which can be used alone or in combination relate to a combination of an ACELP and TD-BWE coder with a full-band capable TCX/IGF technology advantageously associated with using a cross signal.
[0150] A further specific feature is a cross signal path for the ACELP initialization to enable seamless switching.
[0151] A further aspect is that a short IMDCT is fed with a lower part of high-rate long MDCT coefficients to efficiently implement a sample rate conversion in the cross-path.
[0152] A further feature is an efficient realization of the cross-path partly shared with a full-band TCX/IGF in the decoder.
[0153] A further feature is the cross signal path for the QMF initialization to enable seamless switching from TCX to ACELP.
[0154] An additional feature is a cross-signal path to the QMF allowing compensating the delay gap between ACELP resampled output and a filterbank-TCX/IGF output when switching from ACELP to TCX.
[0155] A further aspect is that an LPC is provided for both the TCX and the ACELP coder at the same sampling rate and filter order, although the TCX/IGF encoder/decoder is full-band capable.
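One of the aspects above, feeding a short IMDCT with the lower part of the long, high-rate MDCT coefficients to implement sample rate conversion in the cross-path, can be sketched with a direct (matrix) MDCT in numpy. The transform sizes, the sine window and the 2:1 factor are illustrative assumptions; the truncation-based downsampling is approximate up to a fixed gain and a small window mismatch, whereas the full-size analysis/synthesis chain is exactly reconstructing in the interior.

```python
import numpy as np

def mdct_basis(N):
    """Cosine basis of an MDCT producing N coefficients from 2N samples."""
    n = np.arange(2 * N)[:, None]
    k = np.arange(N)[None, :]
    return np.cos(np.pi / N * (n + 0.5 + N / 2) * (k + 0.5))

def sine_window(N):
    """Sine window of length 2N (satisfies the Princen-Bradley condition)."""
    return np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))

def mdct_analysis(x, N):
    """Windowed MDCT of 50%-overlapping frames (frame length 2N, hop N)."""
    w, B = sine_window(N), mdct_basis(N)
    return [(w * x[s:s + 2 * N]) @ B
            for s in range(0, len(x) - 2 * N + 1, N)]

def mdct_synthesis(spectra, N):
    """IMDCT + synthesis windowing + overlap-add (TDAC)."""
    w, B = sine_window(N), mdct_basis(N)
    out = np.zeros(N * (len(spectra) + 1))
    for i, X in enumerate(spectra):
        out[i * N:i * N + 2 * N] += w * (2.0 / N) * (B @ X)
    return out

# Cross-path rate conversion sketch: keep only the lower half of each
# full-rate MDCT spectrum and feed it into an IMDCT of half the size; the
# overlap-added result is a time signal at half the sampling rate.
```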
[0156] Subsequently,
[0157] Generally, the time domain decoder comprises an ACELP decoder, a subsequently connected resampler or upsampler and a time domain bandwidth extension functionality. Particularly, the ACELP decoder comprises an ACELP decoding stage for restoring gains and the innovative codebook 1149, an ACELP-adaptive codebook stage 1141, an ACELP post-processor 1142, an LPC synthesis filter 1143 controlled by quantized LPC coefficients from a bitstream demultiplexer or encoded signal parser and the subsequently connected de-emphasis stage 1144. Advantageously, the decoded time domain signal being at an ACELP sampling rate is input, alongside with control data from the bitstream, into a time domain bandwidth extension decoder 1220, which provides a high band at the outputs.
[0158] In order to upsample the de-emphasis 1144 output, an upsampler comprising the QMF analysis block 1471 and the QMF synthesis block 1473 is provided. Within the filterbank domain defined by blocks 1471 and 1473, a bandpass filter is advantageously applied. Particularly, as has been discussed before, the same functionalities can also be used which have been discussed with respect to the same reference numbers. Furthermore, the time domain bandwidth extension decoder 1220 can be implemented as illustrated in
[0159] Subsequently, further details with respect to the frequency domain encoder and decoder being full-band capable are discussed with respect to
[0160]
[0161] Typically, a first spectral portion such as 306 of
[0162]
[0163] The decoder further comprises a frequency regenerator 116 for regenerating a reconstructed second spectral portion having the first spectral resolution using a first spectral portion. The frequency regenerator 116 performs a tile filling operation, i.e., uses a tile or portion of the first set of first spectral portions and copies this first set of first spectral portions into the reconstruction range or reconstruction band having the second spectral portion and typically performs spectral envelope shaping or another operation as indicated by the decoded second representation output by the parametric decoder 114, i.e., by using the information on the second set of second spectral portions. The decoded first set of first spectral portions and the reconstructed second set of spectral portions as indicated at the output of the frequency regenerator 116 on line 117 are input into a spectrum-time converter 118 configured for converting the first decoded representation and the reconstructed second spectral portion into a time representation 119, the time representation having a certain high sampling rate.
[0164]
[0165] The spectral analyzer/tonal mask 226 separates the output of TNS block 222 into the core band and the tonal components corresponding to the first set of first spectral portions 103 and the residual components corresponding to the second set of second spectral portions 105 of
[0166] Advantageously, the analysis filterbank 222 is implemented as an MDCT (modified discrete cosine transform filterbank) and the MDCT is used to transform the signal 99 into a time-frequency domain with the modified discrete cosine transform acting as the frequency analysis tool.
[0167] The spectral analyzer 226 advantageously applies a tonality mask. This tonality mask estimation stage is used to separate tonal components from the noise-like components in the signal. This allows the core coder 228 to code all tonal components with a psycho-acoustic module.
[0168] This method has certain advantages over the classical SBR [1] in that the harmonic grid of a multi-tone signal is preserved by the core coder while only the gaps between the sinusoids are filled with the best matching “shaped noise” from the source region.
[0169] In case of stereo channel pairs, an additional joint stereo processing is applied. This is needed because, for a certain destination range, the signal can be a highly correlated panned sound source. In case the source regions chosen for this particular region are not well correlated, although the energies are matched for the destination regions, the spatial image can suffer due to the uncorrelated source regions. The encoder analyzes each destination region energy band, typically performing a cross-correlation of the spectral values, and if a certain threshold is exceeded, sets a joint flag for this energy band. In the decoder, the left and right channel energy bands are treated individually if this joint stereo flag is not set. In case the joint stereo flag is set, both the energies and the patching are performed in the joint stereo domain. The joint stereo information for the IGF regions is signaled similarly to the joint stereo information for the core coding, including a flag indicating, in case of prediction, whether the direction of the prediction is from downmix to residual or vice versa.
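The encoder-side joint-flag decision described here can be sketched as follows; the normalized cross-correlation measure and the 0.8 threshold are illustrative assumptions, not codec constants.

```python
import numpy as np

def joint_stereo_flags(left, right, band_edges, threshold=0.8):
    """Per-band normalized cross-correlation of the left/right spectral
    values; a band whose correlation magnitude exceeds the threshold gets
    its joint-stereo flag set."""
    flags = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        l, r = left[lo:hi], right[lo:hi]
        denom = np.linalg.norm(l) * np.linalg.norm(r)
        corr = np.dot(l, r) / denom if denom > 0 else 0.0
        flags.append(abs(corr) > threshold)
    return flags
```

A strongly panned (scaled-copy) band yields correlation 1 and is flagged for joint processing; a band whose channels carry disjoint spectral lines yields correlation 0 and is processed as separate L/R bands.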
[0170] The energies can be calculated from the transmitted energies in the L/R-domain.
midNrg[k] = leftNrg[k] + rightNrg[k]
sideNrg[k] = leftNrg[k] − rightNrg[k]
[0171] with k being the frequency index in the transform domain.
[0172] Another solution is to calculate and transmit the energies directly in the joint stereo domain for bands where joint stereo is active, so no additional energy transformation is needed at the decoder side.
[0173] The source tiles are created according to the Mid/Side-Matrix:
midTile[k] = 0.5*(leftTile[k] + rightTile[k])
sideTile[k] = 0.5*(leftTile[k] − rightTile[k])
[0174] Energy adjustment:
midTile[k] = midTile[k]*midNrg[k];
sideTile[k] = sideTile[k]*sideNrg[k];
[0175] Joint stereo → L/R transformation:
[0176] If no additional prediction parameter is coded:
leftTile[k] = midTile[k] + sideTile[k]
rightTile[k] = midTile[k] − sideTile[k]
[0177] If an additional prediction parameter is coded and if the signalled direction is from mid to side:
sideTile[k] = sideTile[k] − predictionCoeff*midTile[k]
leftTile[k] = midTile[k] + sideTile[k]
rightTile[k] = midTile[k] − sideTile[k]
[0178] If the signalled direction is from side to mid:
midTile[k] = midTile[k] − predictionCoeff*sideTile[k]
leftTile[k] = midTile[k] − sideTile[k]
rightTile[k] = midTile[k] + sideTile[k]
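Collecting the formulas above, a minimal numpy sketch of the M/S tile construction and the inverse joint-stereo transform with the optional prediction step could look as follows; the function names are illustrative, and the energy-adjustment multiplication between the two steps is omitted for brevity.

```python
import numpy as np

def ms_tiles(left_tile, right_tile):
    """Source tiles in the Mid/Side domain (per-line, factor 0.5)."""
    mid = 0.5 * (left_tile + right_tile)
    side = 0.5 * (left_tile - right_tile)
    return mid, side

def joint_to_lr(mid, side, pred_coeff=None, direction=None):
    """Inverse joint-stereo transform. `direction` mirrors the signalled
    bit: None (no prediction), 'mid_to_side' or 'side_to_mid'."""
    mid, side = mid.copy(), side.copy()
    if direction == 'mid_to_side':
        side = side - pred_coeff * mid
        return mid + side, mid - side
    if direction == 'side_to_mid':
        mid = mid - pred_coeff * side
        # note the swapped signs in this branch, as in the formulas above
        return mid - side, mid + side
    return mid + side, mid - side
```

Without energy adjustment the M/S forward and inverse transforms are an exact round trip, which is what preserves the correlated, panned character of the regenerated left and right channels.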
[0179] This processing ensures that, for tiles used to regenerate highly correlated and panned destination regions, the resulting left and right channels still represent a correlated and panned sound source even if the source regions are not correlated, preserving the stereo image for such regions.
[0180] In other words, in the bitstream, joint stereo flags are transmitted that indicate whether L/R or M/S as an example for the general joint stereo coding shall be used. In the decoder, first, the core signal is decoded as indicated by the joint stereo flags for the core bands. Second, the core signal is stored in both L/R and M/S representation. For the IGF tile filling, the source tile representation is chosen to fit the target tile representation as indicated by the joint stereo information for the IGF bands.
[0181] Temporal Noise Shaping (TNS) is a standard technique and part of AAC. TNS can be considered as an extension of the basic scheme of a perceptual coder, inserting an optional processing step between the filterbank and the quantization stage. The main task of the TNS module is to hide the produced quantization noise in the temporal masking region of transient-like signals, and thus it leads to a more efficient coding scheme. First, TNS calculates a set of prediction coefficients using “forward prediction” in the transform domain, e.g. MDCT. These coefficients are then used for flattening the temporal envelope of the signal. As the quantization affects the TNS-filtered spectrum, the quantization noise is also temporally flat. By applying the inverse TNS filtering on the decoder side, the quantization noise is shaped according to the temporal envelope of the TNS filter and therefore the quantization noise gets masked by the transient.
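The forward-prediction filtering along frequency and its exact decoder-side inverse can be sketched as follows. The filter order and coefficient values are illustrative assumptions; a real TNS module estimates the coefficients from the spectrum itself (e.g. via autocorrelation and Levinson-Durbin) and quantizes them.

```python
import numpy as np

def tns_forward(spectrum, a):
    """Flatten the spectrum with the FIR prediction-error filter
    1 - sum_i a[i] z^-i, applied along the frequency axis
    ("forward prediction" in the MDCT domain)."""
    res = spectrum.copy()
    for i, ai in enumerate(a, start=1):
        res[i:] -= ai * spectrum[:-i]
    return res

def tns_inverse(residual, a):
    """Decoder-side IIR inverse filtering: out[k] = res[k] + sum a[i]*out[k-i],
    which re-imposes the temporal envelope on the (noisy) residual."""
    out = np.zeros_like(residual)
    for k in range(len(residual)):
        out[k] = residual[k] + sum(ai * out[k - i]
                                   for i, ai in enumerate(a, start=1)
                                   if k - i >= 0)
    return out
```

Absent quantization, the forward/inverse pair is an exact round trip; with quantization in between, the noise added to the flat residual is shaped by the inverse filter to follow the signal's temporal envelope.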
[0182] IGF is based on an MDCT representation. For efficient coding, advantageously long blocks of approx. 20 ms have to be used. If the signal within such a long block contains transients, audible pre- and post-echoes occur in the IGF spectral bands due to the tile filling.
[0183] This pre-echo effect is reduced by using TNS in the IGF context. Here, TNS is used as a temporal tile shaping (TTS) tool as the spectral regeneration in the decoder is performed on the TNS residual signal. The involved TTS prediction coefficients are calculated and applied using the full spectrum on encoder side as usual. The TNS/TTS start and stop frequencies are not affected by the IGF start frequency of the IGF tool. In comparison to the legacy TNS, the TTS stop frequency is increased to the stop frequency of the IGF tool, which is higher than f.sub.IGFstart. On decoder side the TNS/TTS coefficients are applied on the full spectrum again, i.e. the core spectrum plus the regenerated spectrum plus the tonal components from the tonality mask (see
[0184] In legacy decoders, spectral patching on an audio signal corrupts spectral correlation at the patch borders and thereby impairs the temporal envelope of the audio signal by introducing dispersion. Hence, another benefit of performing the IGF tile filling on the residual signal is that, after application of the shaping filter, tile borders are seamlessly correlated, resulting in a more faithful temporal reproduction of the signal.
[0185] In an IGF encoder, the spectrum having undergone TNS/TTS filtering, tonality mask processing and IGF parameter estimation is devoid of any signal above the IGF start frequency except for tonal components. This sparse spectrum is now coded by the core coder using principles of arithmetic coding and predictive coding. These coded components along with the signaling bits form the bitstream of the audio.
[0186]
[0187]
[0188] Advantageously, the high resolution is defined by a line-wise coding of spectral lines such as MDCT lines, while the second resolution or low resolution is defined by, for example, calculating only a single spectral value per scale factor band, where a scale factor band covers several frequency lines. Thus, the second low resolution is, with respect to its spectral resolution, much lower than the first or high resolution defined by the line-wise coding typically applied by the core encoder such as an AAC or USAC core encoder.
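The contrast between line-wise coding and one value per scale factor band can be illustrated with a small sketch; the band borders and the use of a plain energy sum (rather than, say, an average per line) are assumptions made for illustration.

```python
import numpy as np

def band_energies(spectrum, band_edges):
    """One energy value per scale factor band: the low-resolution
    ('second') representation, as opposed to line-wise coding of the
    individual spectral values (the 'first', high resolution)."""
    return np.array([np.sum(spectrum[lo:hi] ** 2)
                     for lo, hi in zip(band_edges[:-1], band_edges[1:])])
```

A band of several MDCT lines is thus collapsed into a single value on the encoder side, which the decoder uses only for envelope adjustment of regenerated content.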
[0189] Regarding scale factor or energy calculation, the situation is illustrated in
[0190] Particularly, when the core encoder is under a low bitrate condition, an additional noise-filling operation in the core band, i.e., lower in frequency than the IGF start frequency, i.e., in scale factor bands SCB1 to SCB3 can be applied in addition. In noise-filling, there exist several adjacent spectral lines which have been quantized to zero. On the decoder-side, these quantized to zero spectral values are re-synthesized and the re-synthesized spectral values are adjusted in their magnitude using a noise-filling energy such as NF.sub.2 illustrated at 308 in
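A minimal noise-filling sketch along these lines follows; the function signature, the exact scaling convention (matching the total energy of the filled lines to the transmitted value) and the use of a seeded generator are illustrative assumptions.

```python
import numpy as np

def noise_fill(spectrum, target_energy, lo, hi, seed=0):
    """Replace zero-quantized lines in [lo, hi) with pseudo-random noise
    whose total energy matches the transmitted noise-filling energy
    (e.g. NF2 in the description); nonzero lines are left untouched."""
    out = spectrum.copy()
    zero = np.flatnonzero(out[lo:hi] == 0) + lo
    if zero.size:
        rng = np.random.default_rng(seed)
        noise = rng.standard_normal(zero.size)
        noise *= np.sqrt(target_energy / np.sum(noise ** 2))
        out[zero] = noise
    return out
```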
[0191] Advantageously, the bands for which energy information is calculated coincide with the scale factor bands. In other embodiments, an energy information value grouping is applied so that, for example, for scale factor bands 4 and 5, only a single energy information value is transmitted, but even in this embodiment, the borders of the grouped reconstruction bands coincide with the borders of the scale factor bands. If different band separations are applied, then certain re-calculations or synchronization calculations may be applied, which can make sense depending on the particular implementation.
[0192] Advantageously, the spectral domain encoder 106 of
[0193] In the audio encoder of
[0194]
[0195] Then, at the output of block 422, a quantized spectrum is obtained corresponding to what is illustrated in
[0196] The set-to-zero blocks 410, 418, 422, which are provided alternatively to each other or in parallel, are controlled by the spectral analyzer 424. The spectral analyzer advantageously comprises any implementation of a well-known tonality detector or comprises any different kind of detector operative for separating a spectrum into components to be encoded with a high resolution and components to be encoded with a low resolution. Other such algorithms implemented in the spectral analyzer can be a voice activity detector, a noise detector, a speech detector or any other detector deciding, depending on spectral information or associated metadata, on the resolution requirements for different spectral portions.
[0197]
[0198] Subsequently, reference is made to
[0199] As illustrated at 301 in
[0200] Advantageously, an IGF operation, i.e., a frequency tile filling operation using spectral values from other portions can be applied in the complete spectrum. Thus, a spectral tile filling operation can not only be applied in the high band above an IGF start frequency but can also be applied in the low band. Furthermore, the noise-filling without frequency tile filling can also be applied not only below the IGF start frequency but also above the IGF start frequency. It has, however, been found that high quality and high efficient audio encoding can be obtained when the noise-filling operation is limited to the frequency range below the IGF start frequency and when the frequency tile filling operation is restricted to the frequency range above the IGF start frequency as illustrated in
[0201] Advantageously, the target tiles (TT) (having frequencies greater than the IGF start frequency) are bound to scale factor band borders of the full rate coder. Source tiles (ST), from which information is taken, i.e., for frequencies lower than the IGF start frequency are not bound by scale factor band borders. The size of the ST should correspond to the size of the associated TT.
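A tile-filling step respecting these constraints (source tile size equal to target tile size, regenerated content shaped to a transmitted band energy, tonal lines preserved) might be sketched as follows; the indices, the energy convention and the hole-only filling rule as coded here are illustrative assumptions rather than the codec's exact procedure.

```python
import numpy as np

def igf_fill_tile(spectrum, src_lo, tgt_lo, tgt_hi, target_energy):
    """Copy a source tile (ST) from below the IGF start frequency into a
    target tile (TT) above it, then scale it to the transmitted band
    energy. Lines already carrying tonal (high-resolution) values in the
    target range are left untouched; only the spectral holes are filled."""
    out = spectrum.copy()
    size = tgt_hi - tgt_lo                    # ST size matches TT size
    tile = out[src_lo:src_lo + size].copy()
    holes = out[tgt_lo:tgt_hi] == 0           # zero-quantized lines only
    tile_energy = np.sum(tile[holes] ** 2)
    if tile_energy > 0:
        tile[holes] *= np.sqrt(target_energy / tile_energy)
    out[tgt_lo:tgt_hi][holes] = tile[holes]
    return out
```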
[0202] Subsequently, reference is made to
[0203] Then, the first spectral portion of the reconstruction band such as 307 of
[0204] Furthermore, as illustrated in
[0205] As illustrated, the encoder operates without downsampling and the decoder operates without upsampling. In other words, the spectral domain audio coder is configured to generate a spectral representation having a Nyquist frequency defined by the sampling rate of the originally input audio signal.
[0206] Furthermore, as illustrated in
[0207] As outlined, the spectral domain audio decoder 112 is configured so that a maximum frequency represented by a spectral value in the first decoded representation is equal to a maximum frequency included in the time representation having the sampling rate, wherein the spectral value for the maximum frequency in the first set of first spectral portions is zero or different from zero. In any case, for this maximum frequency in the first set of spectral components, a scale factor for the scale factor band exists, which is generated and transmitted irrespective of whether all spectral values in this scale factor band are set to zero or not, as discussed in the context of
[0208] The IGF is, therefore, advantageous in that, with respect to other parametric techniques for increasing compression efficiency, e.g. noise substitution and noise filling (these techniques are exclusively for the efficient representation of noise-like local signal content), the IGF allows an accurate frequency reproduction of tonal components. To date, no state-of-the-art technique addresses the efficient parametric representation of arbitrary signal content by spectral gap filling without the restriction of a fixed a priori division into a low band (LF) and a high band (HF).
[0209] Subsequently, further optional features of the full band frequency domain first encoding processor and the full band frequency domain decoding processor incorporating the gap-filling operation, which can be implemented separately or together are discussed and defined.
[0210] Particularly, the spectral domain decoder 112 corresponding to block 1122a is configured to output a sequence of decoded frames of spectral values, a decoded frame being the first decoded representation, wherein the frame comprises spectral values for the first set of spectral portions and zero indications for the second spectral portions. The apparatus for decoding furthermore comprises a combiner 208. The spectral values are generated by a frequency regenerator for the second set of second spectral portions, where both the combiner and the frequency regenerator are included within block 1122b. Thus, by combining the second spectral portions and the first spectral portions, a reconstructed spectral frame comprising spectral values for the first set of the first spectral portions and the second set of spectral portions is obtained, and the spectrum-time converter 118 corresponding to the IMDCT block 1124 in
[0211] As outlined, the spectrum-time converter 118 or 1124 is configured to perform an inverse modified discrete cosine transform 512, 514 and further comprises an overlap-add stage 516 for overlapping and adding subsequent time domain frames
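The role of the overlap-add stage can be checked numerically: a sine window satisfies the Princen-Bradley condition, so the squared windows of two 50%-overlapping frames sum to one, which is what makes the frame-wise IMDCT plus overlap-add lossless. The window choice is an assumption for illustration; the document does not specify it here.

```python
import numpy as np

# Sine window over one 2N-sample IMDCT frame (hop size N).
N = 256
w = np.sin(np.pi / (2 * N) * (np.arange(2 * N) + 0.5))

# With 50% overlap-add, the tail of frame i overlaps the head of frame
# i+1; the Princen-Bradley condition requires the squared windows of the
# overlapping halves to sum to exactly one for every sample.
overlap = w[N:] ** 2 + w[:N] ** 2
```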
[0212] Particularly, the spectral domain audio decoder 1122a is configured to generate the first decoded representation so that the first decoded representation has a Nyquist frequency defining a sampling rate being equal to a sampling rate of the time representation generated by the spectrum-time converter 1124.
[0213] Furthermore, the decoder 1112 or 1122a is configured to generate the first decoded representation so that a first spectral portion 306 is placed with respect to frequency between two second spectral portions 307a, 307b.
[0214] In a further embodiment, a maximum frequency represented by a spectral value for the maximum frequency in the first decoded representation is equal to a maximum frequency included in the time representation generated by the spectrum-time converter, wherein the spectral value for the maximum frequency in the first representation is zero or different from zero.
[0215] Furthermore, as illustrated in
[0216] Furthermore, the spectral domain audio decoder 112 is configured to generate the first decoded representation having first spectral portions with frequency values greater than a frequency in the middle of the frequency range covered by the time representation output by the spectrum-time converter 118 or 1124.
[0217] Furthermore, the spectral analyzer or full-band analyzer 604 is configured to analyze the representation generated by the time-frequency converter 602 for determining a first set of first spectral portions to be encoded with the first high spectral resolution and the different second set of second spectral portions to be encoded with a second spectral resolution which is lower than the first spectral resolution and, by means of the spectral analyzer, a first spectral portion 306 is determined to lie, with respect to frequency, between two second spectral portions in
[0218] Particularly, the spectral analyzer is configured for analyzing the spectral representation up to a maximum analysis frequency being at least one quarter of a sampling frequency of the audio signal.
[0219] Particularly, the spectral domain audio encoder is configured to process a sequence of frames of spectral values for a quantization and entropy coding, wherein, in a frame, spectral values of the second set of second portions are set to zero, or wherein, in the frame, spectral values of the first set of first spectral portions and the second set of the second spectral portions are present and wherein, during subsequent processing, spectral values in the second set of spectral portions are set to zero as exemplarily illustrated at 410, 418, 422.
[0220] The spectral domain audio encoder is configured to generate a spectral representation having a Nyquist frequency defined by the sampling rate of the audio input signal or the first portion of the audio signal processed by the first encoding processor operating in the frequency domain.
[0221] The spectral domain audio encoder 606 is furthermore configured to provide the first encoded representation so that, for a frame of a sampled audio signal, the encoded representation comprises the first set of first spectral portions and the second set of second spectral portions, wherein the spectral values in the second set of spectral portions are encoded as zero or noise values.
[0222] The full band analyzer 604 or 102 is configured to analyze the spectral representation starting with the gap-filling start frequency 209 and ending with a maximum frequency f.sub.max included in the spectral representation, and a spectral portion extending from a minimum frequency up to the gap-filling start frequency 309 belongs to the first set of first spectral portions.
[0223] Particularly, the analyzer is configured to apply a tonal mask processing at least of a portion of the spectral representation so that tonal components and non-tonal components are separated from each other, wherein the first set of the first spectral portions comprises the tonal components and wherein the second set of the second spectral portions comprises the non-tonal components.
[0224] Although the present invention has been described in the context of block diagrams where the blocks represent actual or logical hardware components, the present invention can also be implemented by a computer-implemented method. In the latter case, the blocks represent corresponding method steps where these steps stand for the functionalities performed by corresponding logical or physical hardware blocks.
[0225] Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
[0226] The inventive transmitted or encoded signal can be stored on a digital storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.
[0227] Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
[0228] Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
[0229] Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine readable carrier.
[0230] Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
[0231] In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
[0232] A further embodiment of the inventive method is, therefore, a data carrier (or a non-transitory storage medium such as a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
[0233] A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.
[0234] A further embodiment comprises a processing means, for example, a computer or a programmable logic device, configured to, or adapted to, perform one of the methods described herein.
[0235] A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
[0236] A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
[0237] In some embodiments, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.
[0238] While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.