Frequency band extension in an audio signal decoder
11312164 · 2022-04-26
Assignee
Inventors
Cpc classification
B41K1/10
PERFORMING OPERATIONS; TRANSPORTING
B41K1/12
PERFORMING OPERATIONS; TRANSPORTING
G10L19/00
PHYSICS
B41K1/38
PERFORMING OPERATIONS; TRANSPORTING
G10L21/02
PHYSICS
B41K1/40
PERFORMING OPERATIONS; TRANSPORTING
B41K1/42
PERFORMING OPERATIONS; TRANSPORTING
International classification
B41K1/10
PERFORMING OPERATIONS; TRANSPORTING
B41K1/38
PERFORMING OPERATIONS; TRANSPORTING
B41K1/04
PERFORMING OPERATIONS; TRANSPORTING
B41K1/42
PERFORMING OPERATIONS; TRANSPORTING
B41K1/40
PERFORMING OPERATIONS; TRANSPORTING
B41K1/12
PERFORMING OPERATIONS; TRANSPORTING
G10L19/02
PHYSICS
Abstract
A method is provided for extending the frequency band of an audio signal during a decoding or improvement process. The method includes obtaining the decoded signal in a first frequency band, referred to as a low band. Tonal components and a surround signal are extracted from the low-band signal, and the tonal components and the surround signal are combined by adaptive mixing using energy-level control factors to obtain an audio signal referred to as a combined signal. The low-band decoded signal before the extraction step, or the combined signal after the combination step, is extended over at least one second frequency band which is higher than the first frequency band. Also provided are a frequency-band extension device which implements the described method and a decoder including a device of this type.
Claims
1. A method, comprising: obtaining a decoded audio signal, wherein the decoded audio signal is decoded in a first frequency band; extending frequencies of the decoded audio signal into a second frequency band, wherein the extension of frequencies is arranged to produce a frequency-extended decoded audio signal, wherein the second frequency band is higher than the first frequency band; obtaining an ambience signal by computing a mean value of a frequency spectrum of the frequency-extended decoded audio signal; obtaining dominant tonal components from the frequency-extended decoded audio signal, wherein the dominant tonal components are tonal components, wherein the tonal components comprise magnitudes, wherein the magnitudes exceed a threshold, wherein obtaining the dominant tonal components comprises subtracting the obtained ambience signal from the frequency-extended decoded audio signal; and combining the dominant tonal components and the ambience signal using adaptive mixing and energy level control factors to obtain a combined signal.
2. The method of claim 1, wherein the decoded audio signal is a decoded audio excitation signal.
3. The method of claim 1, wherein an energy level control factor is computed as a function of the total energy of the frequency-extended decoded audio signal and of the dominant tonal components, wherein the adaptive mixing uses the energy level control factor.
4. The method of claim 1, further comprising transforming or filter bank-based sub-band decomposing the decoded audio signal, wherein obtaining the dominant tonal components uses the frequency domain or a sub-band domain, wherein the ambience signal is created in the frequency domain or a sub-band domain, wherein the combining is performed in the frequency domain or a sub-band domain.
5. The method of claim 1, wherein extending the frequencies of the decoded audio signal into the second frequency band employs the following equation:
6. The method of claim 1, wherein obtaining the dominant tonal components comprises detecting the dominant tonal components of the frequency-extended decoded audio signal in the frequency domain, wherein the ambience signal is created in the frequency domain.
7. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 1.
8. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 2.
9. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 3.
10. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 4.
11. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 5.
12. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 6.
13. A method, comprising: obtaining a decoded audio signal, wherein the decoded audio signal has been decoded in a first frequency band; obtaining an ambience signal by computing a mean value of a frequency spectrum of the decoded audio signal; obtaining dominant tonal components from the decoded audio signal, wherein the dominant tonal components are tonal components, wherein the tonal components comprise magnitudes, wherein the magnitudes exceed a threshold, wherein obtaining the dominant tonal components comprises subtracting the ambience signal from the decoded audio signal; combining the dominant tonal components and the ambience signal by adaptive mixing using energy level control factors to obtain a combined signal; and extending frequencies of the combined signal into a second frequency band to produce a frequency-extended combined signal, wherein the second frequency band is higher than the first frequency band.
14. The method of claim 13, wherein obtaining the dominant tonal components comprises detecting the dominant tonal components of the decoded audio signal in the frequency domain, wherein the ambience signal is created in the frequency domain.
15. The method of claim 13, wherein extending the frequencies of the combined signal into the second frequency band employs the following equation:
16. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 13.
17. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 14.
18. A computer program stored on a non-transitory medium, wherein the computer program when executed on a processor performs the method as claimed in claim 15.
19. A method, comprising: obtaining a decoded audio signal, wherein the decoded audio signal is decoded in a first frequency band; extending frequencies of the decoded audio signal into a second frequency band, wherein the extension of frequencies is arranged to produce a frequency-extended decoded audio signal, wherein the second frequency band is higher than the first frequency band; obtaining dominant tonal components from the frequency-extended decoded audio signal, wherein the dominant tonal components are tonal components, wherein the tonal components comprise magnitudes, wherein the magnitudes exceed a threshold; removing the dominant tonal components from the frequency-extended decoded audio signal to obtain an ambience signal; and combining the dominant tonal components and the ambience signal by adaptive mixing using energy level control factors to obtain a frequency-extended combined signal.
20. The method of claim 19, wherein an energy level control factor is computed as a function of the total energy of the frequency-extended decoded audio signal and of the dominant tonal components, wherein the adaptive mixing uses the energy level control factor.
21. The method of claim 19, further comprising transforming or filter bank-based sub-band decomposing the decoded audio signal, wherein obtaining the dominant tonal components uses the frequency domain or a sub-band domain, wherein the ambience signal is created in the frequency domain or a sub-band domain, wherein the combining is performed in the frequency domain or a sub-band domain.
22. The method of claim 19, wherein extending the frequencies of the decoded audio signal into the second frequency band employs the following equation:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other features and advantages of the invention will become more clearly apparent on reading the following description, given purely as a non-limiting example and with reference to the attached drawings, in which:
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
(9) Unlike the AMR-WB decoding, which operates with an output sampling frequency of 16 kHz, and the G.718 decoder, which operates at 8 or 16 kHz, a decoder is considered here which can operate with an output (synthesis) signal at the frequency fs=8, 16, 32 or 48 kHz. Note that it is assumed here that the coding has been performed according to the AMR-WB algorithm, with an internal frequency of 12.8 kHz for the low band CELP coding and, at 23.85 kbit/s, a sub-frame gain coding at the frequency of 16 kHz; interoperable variants of the AMR-WB coder are also possible. Although the invention is described here at the decoding level, it is assumed that the coding can also operate with an input signal at the frequency fs=8, 16, 32 or 48 kHz, and appropriate resampling operations, outside the scope of the invention, are implemented on coding as a function of the value of fs. It may be noted that when fs=8 kHz at the decoder, in the case of a decoding that is compatible with AMR-WB, it is not necessary to extend the 0-6.4 kHz low band, since the reconstructed audio band at the frequency fs is limited to 0-4000 Hz.
(11) The decoding according to
u′(n)=ĝ.sub.pv(n)+ĝ.sub.cc(n), n=0, . . . ,63, following the notations of clause 7.1.2.1 of G.718 concerning the CELP decoding, where v(n) and c(n) are respectively the code words of the adaptive and fixed dictionaries, and ĝ.sub.p and ĝ.sub.c are the associated decoded gains. This excitation u′(n) is used in the adaptive dictionary of the next sub-frame; it is then post-processed and, as in G.718, the excitation u′(n) (also denoted exc) is distinguished from its modified post-processed version u(n) (also denoted exc2) which serves as input for the synthesis filter 1/Â(z) in the block 303. The decoding then comprises: synthesis filtering by 1/Â(z) (block 303), where the decoded LPC filter Â(z) is of order 16; narrow-band post-processing (block 304) according to clause 7.3 of G.718 if fs=8 kHz; de-emphasis (block 305) by the filter 1/(1−0.68z.sup.−1); post-processing of the low frequencies (block 306) as described in clause 7.14.1.1 of G.718, a processing which introduces a delay that is taken into account in the decoding of the high band (>6.4 kHz); re-sampling from the internal frequency of 12.8 kHz to the output frequency fs (block 307), for which a number of embodiments are possible (without loss of generality, it is considered here, by way of example, that if fs=8 or 16 kHz, the re-sampling described in clause 7.6 of G.718 is used, and if fs=32 or 48 kHz, additional finite impulse response (FIR) filters are used); and computation of the parameters of the "noise gate" (block 308), which is performed preferentially as described in clause 7.14.3 of G.718.
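The excitation reconstruction above is a single gain-weighted sum per sub-frame. A minimal sketch of that formula follows; the function and argument names are illustrative, not taken from the standard:

```python
import numpy as np

def celp_excitation(v, c, g_p, g_c):
    """Reconstruct the CELP excitation u'(n) = g_p*v(n) + g_c*c(n)
    for one 64-sample sub-frame, where v and c are the adaptive and
    fixed dictionary code words and g_p, g_c the decoded gains."""
    v = np.asarray(v, dtype=float)
    c = np.asarray(c, dtype=float)
    assert v.shape == c.shape == (64,)
    return g_p * v + g_c * c
```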
(12) In variants which can be implemented for the invention, the post-processing operations applied to the excitation can be modified (for example, the phase dispersion can be enhanced) or these post-processing operations can be extended (for example, a reduction of the cross-harmonics noise can be implemented), without affecting the nature of the band extension. We do not describe here the case of the decoding of the low band when the current frame is lost (bfi=1) which is informative in the 3GPP AMR-WB standard; in general, whether dealing with the AMR-WB decoder or a general decoder relying on the source-filter model, one is typically involved with best estimating the LPC excitation and the coefficients of the LPC synthesis filter so as to reconstruct the lost signal while retaining the source-filter model. When bfi=1 it is considered here that the band extension (block 309) can operate as in the case bfi=0 and a bitrate <23.85 kbit/s; thus, the description of the invention will subsequently assume, without loss of generality, that bfi=0.
(13) It can be noted that the use of blocks 306, 308, 314 is optional.
(14) It will also be noted that the decoding of the low band described above assumes a so-called “active” current frame with a bit rate between 6.6 and 23.85 kbit/s. In fact, when the DTX mode is activated, certain frames can be coded as “inactive” and in this case it is possible to either transmit a silence descriptor (on 35 bits) or transmit nothing. In particular, it is recalled that the SID frame of the AMR-WB coder describes several parameters: ISF parameters averaged over 8 frames, mean energy over 8 frames, “dithering flag” for the reconstruction of non-stationary noise. In all cases, in the decoder, there is the same decoding model as for an active frame, with a reconstruction of the excitation and of an LPC filter for the current frame, which makes it possible to apply the invention even to inactive frames. The same observation applies for the decoding of “lost frames” (or FEC, PLC) in which the LPC model is applied.
(15) This exemplary decoder operates in the domain of the excitation and therefore comprises a step of decoding the low band excitation signal. The band extension device and the band extension method within the meaning of the invention can, however, also operate in a domain other than that of the excitation, and in particular on a low band decoded direct signal or on a signal weighted by a perceptual filter.
(16) Unlike the AMR-WB or G.718 decoding, the decoder described makes it possible to extend the decoded low band (50-6400 Hz taking into account the 50 Hz high-pass filtering on the decoder, 0-6400 Hz in the general case) to an extended band, the width of which varies, ranging approximately from 50-6900 Hz to 50-7700 Hz depending on the mode implemented in the current frame. It is thus possible to refer to a first frequency band of 0 to 6400 Hz and to a second frequency band of 6400 to 8000 Hz. In reality, in the favored embodiment, the excitation for the high frequencies is generated in the frequency domain in a band from 5000 to 8000 Hz, to allow a bandpass filtering of width 6000 to 6900 or 7700 Hz whose slope is not too steep in the rejected upper band.
(17) The high-band synthesis part is produced in the block 309 representing the band extension device according to the invention and which is detailed in
(18) In order to align the decoded low and high bands, a delay (block 310) is introduced to synchronize the outputs of the blocks 306 and 309 and the high band synthesized at 16 kHz is resampled from 16 kHz to the frequency fs (output of block 311). The value of the delay T will have to be adapted for the other cases (fs=32, 48 kHz) as a function of the processing operations implemented. It will be recalled that when fs=8 kHz, it is not necessary to apply the blocks 309 to 311 because the band of the signal at the output of the decoder is limited to 0-4000 Hz.
(19) It will be noted that the extension method of the invention implemented in the block 309 according to the first embodiment preferentially does not introduce any additional delay relative to the low band reconstructed at 12.8 kHz; however, in variants of the invention (for example by using a time/frequency transformation with overlap), a delay will be able to be introduced. Thus, generally, the value of T in the block 310 will have to be adjusted according to the specific implementation. For example in the case where the post-processing of the low frequencies (block 306) is not used, the delay to be introduced for fs=16 kHz may be fixed at T=15.
(20) The low and high bands are then combined (added) in the block 312 and the synthesis obtained is post-processed by 50 Hz high-pass filtering (of IIR type) of order 2, the coefficients of which depend on the frequency fs (block 313) and output post-processing with optional application of the “noise gate” in a manner similar to G.718 (block 314).
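The 50 Hz high-pass of block 313 can be sketched as below. The text only specifies an order-2 IIR filter whose coefficients depend on fs, so the Butterworth design used here is an assumption:

```python
import numpy as np
from scipy.signal import butter, lfilter

def highpass_50hz(x, fs):
    """Order-2 IIR 50 Hz high-pass applied to the combined synthesis.
    A Butterworth design is assumed; the coefficients depend on the
    output frequency fs, as stated for block 313."""
    b, a = butter(2, 50.0 / (fs / 2.0), btype="highpass")
    return lfilter(b, a, x)
```

On a constant (DC) input the output decays toward zero, which is the intended removal of sub-50 Hz content.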
(21) The band extension device according to the invention, illustrated by the block 309 according to the embodiment of the decoder of
(22) This extension device can also be independent of the decoder and can implement the method described in
(23) This device receives as input a signal decoded in a first frequency band, termed the low band, u(n), which can be in the domain of the excitation or in that of the signal. In the embodiment described here, a step of sub-band decomposition (E401b) by time-frequency transform or filter bank is applied to the low band decoded signal to obtain the spectrum of the low band decoded signal U(k) for an implementation in the frequency domain.
(24) A step E401a of extending the low band decoded signal in a second frequency band higher than the first frequency band, so as to obtain an extended low band decoded signal U.sub.HB1(k), can be performed on this low band decoded signal before or after the analysis step (decomposition into sub-bands). This extension step can comprise at one and the same time a resampling step and an extension step or simply a step of frequency translation or transposition as a function of the signal obtained at input. It will be noted that in variants, step E401a will be able to be performed at the end of the processing described in
(25) This step is detailed subsequently in the embodiment described with reference to
(26) A step E402 of extracting an ambience signal (U.sub.HBA(k)) and tonal components (y(k)) is performed on the basis of the decoded low band signal (U(k)) or decoded and extended low band signal (U.sub.HB1(k)). The ambience is defined here as the residual signal which is obtained by deleting the main (or dominant) harmonics (or tonal components) from the existing signal.
(27) In most broadband signals (sampled at 16 kHz), the high band (>6 kHz) contains ambience information which is in general similar to that present in the low band.
(28) The step of extracting the tonal components and the ambience signal comprises for example the following steps:
(29) detection of the dominant tonal components of the decoded (or decoded and extended) low band signal, in the frequency domain; and
(30) computation of a residual signal by extraction of the dominant tonal components to obtain the ambience signal.
(31) This step can also be obtained by:
(32) obtaining of the ambience signal by computing a mean of the decoded (or decoded and extended) low band signal; and
(33) obtaining of the tonal components by subtracting the computed ambience signal, from the decoded or decoded and extended low band signal.
(34) The tonal components and the ambience signal are thereafter combined in an adaptive manner with the aid of energy level control factors in step E403 to obtain a so-called combined signal (U.sub.HB2(k)). The extension step E401a can then be implemented if it has not already been performed on the decoded low band signal.
(35) Thus, the combining of these two types of signals makes it possible to obtain a combined signal whose characteristics are more suitable for certain types of signals, such as musical signals, and which is richer in frequency content over the extended frequency band, that is to say the whole frequency band comprising the first and the second frequency band.
(36) The band extension according to the method improves the quality for signals of this type with respect to the extension described in the AMR-WB standard.
(37) Using a combination of ambience signal and of tonal components makes it possible to enrich this extension signal so as to render it closer to the characteristics of the true signal and not of an artificial signal.
(38) This combining step will be detailed subsequently with reference to
(39) A synthesis step, which corresponds to the analysis at 401b, is performed at E404b to restore the signal to the time domain.
(40) In an optional manner, a step of energy level adjustment of the high band signal can be performed at E404a, before and/or after the synthesis step, by applying a gain and/or by appropriate filtering. This step will be explained in greater detail in the embodiment described in
(41) In an exemplary embodiment, the band extension device 500 is now described with reference to
(42) Thus, the processing block 510 receives a decoded low band signal (u(n)). In a particular embodiment, the band extension uses the decoded excitation at 12.8 kHz (exc2 or u(n)) as output by the block 302 of
(43) This signal is decomposed into frequency sub-bands by the sub-band decomposition module 510 (which implements step E401b of
(44) In a particular embodiment, a transform of DCT-IV (for "Discrete Cosine Transform"—type IV) (block 510) type is applied to the current frame of 20 ms (256 samples), without windowing, which amounts to directly transforming u(n) with n=0, . . . ,255 according to the following formula:
(45) U(k)=Σ.sub.n=0.sup.N−1u(n)cos [π/N(n+½)(k+½)]
in which N=256 and k=0, . . . ,255.
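The DCT-IV mapping can be illustrated with a naive O(N²) implementation. The orthonormal scaling √(2/N) is used here so that the transform is its own inverse, which makes the sketch easy to check; the actual decoder uses a fast EDCT realization instead:

```python
import numpy as np

def dct_iv(x):
    """Naive orthonormal DCT-IV:
    X(k) = sqrt(2/N) * sum_n x(n) cos[pi/N (n+1/2)(k+1/2)].
    With this normalization the DCT-IV matrix is symmetric and
    orthogonal, so applying the transform twice returns the input."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    n = np.arange(N)
    k = n[:, None]
    M = np.cos(np.pi / N * (n + 0.5) * (k + 0.5))
    return np.sqrt(2.0 / N) * (M @ x)
```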
(46) A transformation without windowing (or equivalently with an implicit rectangular window of the length of the frame) is possible when the processing is performed in the excitation domain, and not the signal domain. In this case no artifact (block effects) is audible, thereby constituting a significant advantage of this embodiment of the invention.
(47) In this embodiment, the DCT-IV transformation is implemented by FFT according to the so-called “Evolved DCT (EDCT)” algorithm described in the article by D. M. Zhang, H. T. Li, A Low Complexity Transform—Evolved DCT, IEEE 14th International Conference on Computational Science and Engineering (CSE), August 2011, pp. 144-149, and implemented in the standards ITU-T G.718 Annex B and G.729.1 Annex E.
(48) In variants of the invention, and without loss of generality, the DCT-IV transformation will be able to be replaced by other short-term time-frequency transformations of the same length and in the excitation domain or in the signal domain, such as an FFT (for “Fast Fourier Transform”) or a DCT-II (Discrete Cosine Transform—type II). Alternatively, it will be possible to replace the DCT-IV on the frame by a transformation with overlap-addition and windowing of length greater than the length of the current frame, for example by using an MDCT (for “Modified Discrete Cosine Transform”). In this case, the delay T in the block 310 of
(49) In another embodiment, the sub-band decomposition is performed by applying a real or complex filter bank, for example of PQMF (Pseudo-QMF) type. For certain filter banks, for each sub-band in a given frame, one obtains not a spectral value but a series of temporal values associated with the sub-band; in this case, the embodiment favored in the invention can be applied by carrying out for example a transform of each sub-band and by computing the ambience signal in the domain of the absolute values, the tonal components still being obtained by differencing between the signal (in absolute value) and the ambience signal. In the case of a complex filter bank, the complex modulus of the samples will replace the absolute value.
(50) In other embodiments, the invention will be applied in a system using two sub-bands, the low band being analyzed by transform or by filter bank.
(51) In the case of a DCT, the DCT spectrum, U(k), of 256 samples covering the band 0-6400 Hz (at 12.8 kHz), is thereafter extended (block 511) into a spectrum of 320 samples covering the band 0-8000 Hz (at 16 kHz) in the following form:
(52) U.sub.HB1(k)=0 for k=0, . . . ,199; U.sub.HB1(k)=U(k) for k=200, . . . ,239; U.sub.HB1(k)=U(start_band+k−240) for k=240, . . . ,319
in which it is preferentially taken that start_band=160.
(53) The block 511 implements step E401a of
(54) In the frequency band corresponding to the samples ranging from indices 200 to 239, the original spectrum is retained, to be able to apply thereto a progressive attenuation response of the high-pass filter in this frequency band and also to not introduce audible defects in the step of addition of the low-frequency synthesis to the high-frequency synthesis.
(55) It will be noted that, in this embodiment, the generation of the oversampled and extended spectrum is performed in a frequency band ranging from 5 to 8 kHz therefore including a second frequency band (6.4-8 kHz) above the first frequency band (0-6.4 kHz).
(56) Thus, the extension of the decoded low band signal is performed at least on the second frequency band but also on a part of the first frequency band.
(57) Obviously, the values defining these frequency bands can be different depending on the decoder or the processing device in which the invention is applied.
(58) Furthermore, the block 511 performs an implicit high-pass filtering in the 0-5000 Hz band since the first 200 samples of U.sub.HB1(k) are set to zero; as explained later, this high-pass filtering may also be complemented by a progressive attenuation of the spectral values of indices k=200, . . . ,255 in the 5000-6400 Hz band; this progressive attenuation is implemented in the block 501 but could be performed separately outside of the block 501. Equivalently, and in variants of the invention, the high-pass filtering (coefficients of index k=0, . . . ,199 set to zero, coefficients of index k=200, . . . ,255 attenuated in the transformed domain) will be able to be performed in a single step.
(59) In this exemplary embodiment and according to the definition of U.sub.HB1(k), it will be noted that the 5000-6000 Hz band of U.sub.HB1(k) (which corresponds to the indices k=200, . . . ,239) is copied from the 5000-6000 Hz band of U(k). This approach makes it possible to retain the original spectrum in this band and avoids introducing distortions in the 5000-6000 Hz band upon the addition of the HF synthesis with the LF synthesis—in particular the phase of the signal (implicitly represented in the DCT-IV domain) in this band is preserved.
(60) The 6000-8000 Hz band of U.sub.HB1(k) is here defined by copying the 4000-6000 Hz band of U(k) since the value of start_band is preferentially set at 160.
(61) In a variant of the embodiment, the value of start_band will be able to be made adaptive around the value of 160, without modifying the nature of the invention. The details of the adaptation of the start_band value are not described here because they go beyond the framework of the invention without changing its scope.
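The zero/copy construction of U.sub.HB1(k) described above can be sketched as follows; the exact index arithmetic is reconstructed from the band descriptions in this text (zeros below 5 kHz, the 5-6 kHz band retained, the 6-8 kHz band copied from the band starting at index start_band):

```python
import numpy as np

def extend_spectrum(U, start_band=160):
    """Map the 256-point low-band DCT spectrum U (0-6400 Hz at
    12.8 kHz) to a 320-point spectrum U_HB1 (0-8000 Hz at 16 kHz)."""
    assert len(U) == 256
    U_HB1 = np.zeros(320)
    U_HB1[200:240] = U[200:240]                     # 5000-6000 Hz retained
    U_HB1[240:320] = U[start_band:start_band + 80]  # 4000-6000 Hz copied up
    return U_HB1                                    # 0-5000 Hz left at zero
```

With start_band=160 the copied source band starts at 160·25 Hz = 4000 Hz, matching paragraph (60).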
(62) In most broadband signals (sampled at 16 kHz), the high band (>6 kHz) contains ambience information which is naturally similar to that present in the low band. The ambience is defined here as the residual signal which is obtained by deleting the main (or dominant) harmonics from the existing signal. The harmonicity level in the 6000-8000 Hz band is generally correlated with that of the lower frequency bands.
(63) This decoded and extended low band signal is provided as input to the extension device 500 and in particular as input to the module 512. Thus the block 512 for extracting tonal components and an ambience signal implements step E402 of
(64) In a particular embodiment, the extraction of the tonal components and of the ambience signal (in the 6000-8000 Hz band) is performed according to the following operations: Computation of the total energy of the extended decoded low band signal ener.sub.HB:
(65) ener.sub.HB=ε+Σ.sub.k=240.sup.319U.sub.HB1(k).sup.2
(66) where ε=0.1 (this value may be different, it is fixed here by way of example). Computation of the ambience (in absolute value), which corresponds here to the mean level of the spectrum lev(i) (spectral line by spectral line), and computation of the energy ener.sub.tonal of the dominant tonal parts (in the high-frequency spectrum).
(67) For i=0 . . . L−1, this mean level is obtained through the following equation:
(68) lev(i)=[1/(fn(i)−fb(i)+1)]Σ.sub.j=fb(i).sup.fn(i)|U.sub.HB1(j+240)|
(69) This corresponds to the mean level (in absolute value) and therefore represents a sort of envelope of the spectrum. In this embodiment, L=80 represents the length of the spectrum, and the index i from 0 to L−1 corresponds to the indices i+240 from 240 to 319, i.e. the spectrum from 6 to 8 kHz.
(70) In general fb(i)=i−7 and fn(i)=i+7; however, the first and last 7 indices (i=0, . . . ,6 and i=L−7, . . . ,L−1) require special processing and, without loss of generality, we then define:
fb(i)=0 and fn(i)=i+7 for i=0, . . . ,6
fb(i)=i−7 and fn(i)=L−1 for i=L−7, . . . ,L−1
(71) In variants of the invention, the mean of |U.sub.HB1(j+240)|, j=fb(i), . . . ,fn(i), may be replaced with a median value over the same set of values, i.e. lev(i)=median.sub.j=fb(i), . . . ,fn(i)(|U.sub.HB1(j+240)|). This variant has the defect of being more complex (in terms of number of computations) than a sliding mean. In other variants a non-uniform weighting may be applied to the averaged terms, or the median filtering may be replaced for example with other nonlinear filters of "stack filters" type.
(72) The residual signal is also computed:
y(i)=|U.sub.HB1(i+240)|−lev(i), i=0, . . . ,L−1
(73) which corresponds (approximately) to the tonal components if the value y(i) at a given spectral line i is positive (y(i)>0).
(74) This computation therefore involves an implicit detection of the tonal components: the tonal parts are detected with the aid of the intermediate term y(i), the mean level lev(i) acting as an adaptive threshold, the detection condition being y(i)>0. In variants of the invention this condition may be changed, for example by defining an adaptive threshold dependent on the local envelope of the signal, or in the form y(i)>lev(i)+x dB where x has a predefined value (for example x=10 dB).
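The sliding-mean ambience lev(i), the residual y(i) and the implicit tonal detection described in paragraphs (66) to (74) can be sketched as one pass over the 6-8 kHz lines; the edge handling follows the fb(i)/fn(i) definitions:

```python
import numpy as np

def ambience_and_residual(U_HB1):
    """Ambience lev(i) as a sliding mean of |U_HB1(i+240)| over a
    +/-7-line window (shortened at the spectrum edges), residual
    y(i) = |U_HB1(i+240)| - lev(i), and tonal lines where y(i) > 0."""
    mag = np.abs(U_HB1[240:320])   # |U_HB1(i+240)|, i = 0..L-1, L = 80
    L = len(mag)
    lev = np.empty(L)
    for i in range(L):
        fb = max(i - 7, 0)         # fb(i) = 0 for the first 7 indices
        fn = min(i + 7, L - 1)     # fn(i) = L-1 for the last 7 indices
        lev[i] = mag[fb:fn + 1].mean()
    y = mag - lev
    tonal = y > 0                  # implicit detection condition
    return lev, y, tonal
```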
(75) The energy of the dominant tonal parts is defined by the following equation:
(76) ener.sub.tonal=Σ.sub.i:y(i)>0y(i).sup.2
(77) Other schemes for extracting the ambience signal can of course be envisaged. For example, this ambience signal can be extracted from a low-frequency signal or optionally another frequency band (or several frequency bands).
(78) The detection of the tonal spikes or components may be done differently.
(79) The extraction of this ambience signal could also be done on the decoded but not extended excitation, that is to say before the spectral extension or translation step, that is to say for example on a portion of the low-frequency signal rather than directly on the high-frequency signal.
(80) In a variant embodiment, the extraction of the tonal components and of the ambience signal is performed in a different order and according to the following steps: detection of the dominant tonal components of the decoded (or decoded and extended) low band signal, in the frequency domain; computation of a residual signal by extraction of the dominant tonal components to obtain the ambience signal.
(81) This variant can for example be carried out in the following manner: A spike (or tonal component) is detected at a spectral line of index i in the spectrum of amplitude |U.sub.HB1(i+240)| if the following criterion is satisfied:
|U.sub.HB1(i+240)|>|U.sub.HB1(i+240−1)| and |U.sub.HB1(i+240)|>|U.sub.HB1(i+240+1)|,
(82) for i=0, . . . ,L−1. As soon as a spike is detected at the spectral line of index i, a sinusoidal model is applied so as to estimate the amplitude, frequency and optionally phase parameters of a tonal component associated with this spike. The details of this estimation are not presented here, but the estimation of the frequency can typically call upon a parabolic interpolation over 3 points so as to locate the maximum of the parabola approximating the 3 amplitude points |U.sub.HB1(i+240)| (expressed in dB), the amplitude estimate being obtained by way of this same interpolation. As the transform domain used here (DCT-IV) does not make it possible to obtain the phase directly, it will be possible, in one embodiment, to neglect this term, but in variants it will be possible to apply a quadrature transform of DST type to estimate a phase term. The initial value of y(i) is set to zero for i=0, . . . ,L−1. The sinusoidal parameters (frequency, amplitude, and optionally phase) of each tonal component having been estimated, the term y(i) is then computed as the sum of predefined prototypes (spectra) of pure sinusoids transformed into the DCT-IV domain (or another domain if some other sub-band decomposition is used) according to the estimated sinusoidal parameters. Finally, an absolute value is applied to the terms y(i) to express the amplitude spectrum as absolute values.
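The parabolic interpolation over 3 points mentioned above is a standard peak-refinement estimator; a sketch follows, with a hypothetical helper name and amplitudes assumed to be in dB as the text suggests:

```python
def parabolic_peak(a_db, i):
    """Refine a detected spike at bin i by fitting a parabola through
    the three dB amplitudes a_db[i-1], a_db[i], a_db[i+1]. Returns the
    fractional bin offset d (in (-0.5, 0.5) for a true local maximum)
    and the interpolated peak amplitude in dB."""
    alpha, beta, gamma = a_db[i - 1], a_db[i], a_db[i + 1]
    d = 0.5 * (alpha - gamma) / (alpha - 2.0 * beta + gamma)
    peak = beta - 0.25 * (alpha - gamma) * d
    return d, peak
```

For samples taken exactly on a parabola, the estimator recovers the vertex exactly.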
(83) Other schemes for determining the tonal components are possible. For example, an envelope env(i) of the signal could be computed by spline interpolation of the local maximum values (detected spikes) of |U.sub.HB1(i+240)|; this envelope would then be lowered by a certain level in dB, the tonal components detected as the spikes which exceed the lowered envelope, and y(i) defined as
y(i)=max(|U.sub.HB1(i+240)|−env(i),0)
(84) In this variant the ambience is therefore obtained through the equation:
lev(i)=|U.sub.HB1(i+240)|−y(i), i=0, . . . , L−1
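The envelope-based variant of paragraphs (83)-(84) can be sketched as follows; linear interpolation between local maxima stands in for the spline interpolation of the text, and the 10 dB drop is an assumed value:

```python
def tonal_ambience_split(mag, drop_db=10.0):
    """Build an envelope env(i) through the local maxima of the amplitude
    spectrum (linear interpolation here replaces the spline of the text),
    lower it by `drop_db` dB, then compute
        y(i)   = max(|U(i)| - env(i), 0)   (tonal components)
        lev(i) = |U(i)| - y(i)             (ambience)."""
    n = len(mag)
    # Local maxima, with the spectrum endpoints kept as interpolation anchors.
    anchors = [0] + [i for i in range(1, n - 1)
                     if mag[i] > mag[i - 1] and mag[i] > mag[i + 1]] + [n - 1]
    env = [0.0] * n
    for a, b in zip(anchors[:-1], anchors[1:]):
        for i in range(a, b + 1):
            t = (i - a) / (b - a) if b > a else 0.0
            env[i] = (1 - t) * mag[a] + t * mag[b]
    scale = 10.0 ** (-drop_db / 20.0)      # lower the envelope by drop_db dB
    env = [e * scale for e in env]
    y = [max(m - e, 0.0) for m, e in zip(mag, env)]
    lev = [m - yi for m, yi in zip(mag, y)]
    return y, lev
```

By construction y(i)+lev(i) reproduces the input amplitude spectrum, so the residual lev(i) is exactly the ambience of equation (84).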
(85) In other variants of the invention, the absolute value of the spectral values will be replaced for example by the square of the spectral values, without changing the principle of the invention; in this case a square root will be necessary in order to return to the signal domain, this being more complex to carry out.
(86) The combining module 513 performs a combining step by adaptive mixing of the ambience signal and of the tonal components. Accordingly, an ambience level control factor Γ is defined by the following equation:
(87)
(88) where β is a factor, an exemplary computation of which is given below.
(89) To obtain the extended signal, we first obtain the combined signal in absolute values for i=0, . . . , L−1:
(90)
(91) to which are applied the signs of U.sub.HB1(k):
y″(i)=sgn(U.sub.HB1(i+240)).y′(i)
(92) where the function sgn(.) gives the sign:
(93) sgn(x)=1 if x≥0, and sgn(x)=−1 otherwise.
(94) By definition the factor Γ is >1. The tonal components, detected spectral line by spectral line by the condition y(i)>0, are reduced by the factor Γ; the mean level is amplified by the factor 1/Γ.
(95) In the adaptive mixing block 513, a control factor for the energy level is computed as a function of the total energy of the decoded (or decoded and extended) low band signal and of the tonal components.
(96) In a preferred embodiment of the adaptive mixing, the energy adjustment is performed in the following manner:
U.sub.HB2(k)=fac.y″(k−240), k=240, . . . , 319
(97) where U.sub.HB2(k) is the combined band-extension signal.
(98) The adjustment factor is defined by the following equation:
(99)
(100) where γ makes it possible to avoid over-estimation of the energy. In an exemplary embodiment, β is computed so as to retain the same level of the ambience signal with respect to the energy of the tonal components in consecutive bands of the signal. The energy of the tonal components is computed in three bands: 2000-4000 Hz, 4000-6000 Hz and 6000-8000 Hz, with
(101)
(102) where N(k.sub.1,k.sub.2) is the set of the indices k for which the coefficient of index k is classified as being associated with the tonal components. This set may for example be obtained by detecting the local spikes in U′(k) satisfying |U′(k)|>lev(k), where lev(k) is computed as the mean level of the spectrum, spectral line by spectral line.
(103) It may be noted that other schemes for computing the energy of the tonal components are possible, for example by taking the median value of the spectrum over the band considered. We fix β in such a way that the ratio between the energy of the tonal components in the 4-6 kHz and 6-8 kHz bands is the same as between the 2-4 kHz and 4-6 kHz bands:
(104)
(105) and max(.,.) is the function which returns the maximum of its two arguments.
(106) In variants of the invention, the computation of β may be replaced with other schemes. For example, in a variant, various parameters (or “features”) characterizing the low band signal can be extracted (computed), including a “tilt” parameter similar to that computed in the AMR-WB codec, and the factor β estimated by a linear regression on these parameters, its value being limited to between 0 and 1. The linear regression can, for example, be estimated in a supervised manner, the factor β being estimated given the original high band in a learning base. It will be noted that the way in which β is computed does not limit the nature of the invention.
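A minimal sketch of this regression variant; the weights and bias below are placeholders, not trained coefficients from the patent:

```python
def estimate_beta(features, weights, bias):
    """Predict beta by a linear regression on low-band features (e.g. a
    "tilt" parameter) and clamp it to [0, 1], as in the variant of
    paragraph (106). weights/bias are hypothetical, untrained values."""
    raw = bias + sum(w * f for w, f in zip(weights, features))
    return min(max(raw, 0.0), 1.0)
```

In a supervised setting the weights would be fitted so that the predicted β matches the value derived from the original high band of a learning base.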
(107) Thereafter, the parameter β can be used to compute γ by taking account of the fact that a signal with an ambience signal added in a given band is in general perceived as stronger than a harmonic signal with the same energy in the same band. If we define α to be the quantity of ambience signal added to the harmonic signal:
α=√(1−β)
(108) it will be possible to compute γ as a decreasing function of α, for example γ=b−a√α, with b=1.1, a=1.2 and γ limited to between 0.3 and 1. Here again, other definitions of α and γ are possible within the framework of the invention.
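Paragraphs (107)-(108) fully specify this computation; a direct transcription (parameter names follow the text):

```python
import math

def ambience_gamma(beta, a=1.2, b=1.1, lo=0.3, hi=1.0):
    """alpha = sqrt(1 - beta) is the quantity of added ambience;
    gamma = b - a*sqrt(alpha), clamped to [lo, hi] (0.3 to 1 in the text)."""
    alpha = math.sqrt(1.0 - beta)
    gamma = b - a * math.sqrt(alpha)
    return min(max(gamma, lo), hi)
```

A purely harmonic extension (β=1, no added ambience) yields γ=1, i.e. no energy reduction, while a fully ambient one (β=0) hits the lower clamp of 0.3, reflecting that added ambience is perceived as louder at equal energy.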
(109) At the output of the band extension device 500, the block 501, in a particular embodiment, optionally carries out a dual operation: application of a bandpass filter frequency response and de-emphasis (or deaccentuation) filtering in the frequency domain.
(110) In a variant of the invention, the de-emphasis filtering will be able to be performed in the time domain, after the block 502 or even before the block 510; however, in this case, the bandpass filtering performed in the block 501 may leave certain very low-level low-frequency components which are amplified by the de-emphasis, which can modify the decoded low band in a slightly perceptible manner. For this reason, it is preferred here to perform the de-emphasis in the frequency domain. In the preferred embodiment, the coefficients of index k=0, . . . , 199 are set to zero, so the de-emphasis is limited to the higher coefficients.
(111) The excitation is first de-emphasized according to the following equation:
(112)
(113) in which G.sub.deemph(k) is the frequency response of the filter 1/(1−0.68z.sup.−1) over a restricted discrete frequency band. By taking into account the discrete (odd) frequencies of the DCT-IV, G.sub.deemph(k) is defined here as:
(114)
(115) In the case where a transformation other than DCT-IV is used, the definition of θ.sub.k will be able to be adjusted (for example for even frequencies).
(116) It should be noted that the de-emphasis is applied in two phases: for k=200, . . . , 255, corresponding to the 5000-6400 Hz frequency band, the response of 1/(1−0.68z.sup.−1) is applied as at 12.8 kHz; for k=256, . . . , 319, corresponding to the 6400-8000 Hz frequency band, the response is extended, at 16 kHz, to a constant value over the 6.4-8 kHz band.
(117) It can be noted that, in the AMR-WB codec, the HF synthesis is not de-emphasized. In the embodiment presented here, the high-frequency signal is on the contrary de-emphasized so as to restore it to a domain consistent with the low-frequency signal (0-6.4 kHz) which exits the block 305 of
(118) In a variant of the embodiment, in order to reduce complexity, it will be possible to set G.sub.deemph(k) to a constant value independent of k, for example G.sub.deemph(k)=0.6, which corresponds approximately to the average value of G.sub.deemph(k) for k=200, . . . , 319 under the conditions of the embodiment described above.
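The magnitude response of the de-emphasis filter 1/(1−0.68z.sup.−1) can be sketched as follows. The discrete frequency θ.sub.k of equation (114) is not reproduced above, so the odd-frequency grid θ.sub.k=π(2k+1)/(2·320) used here is an assumption of this example, not the patent's definition:

```python
import cmath
import math

def deemph_response(k, n=320, a=0.68):
    """Magnitude response of 1/(1 - a z^-1) evaluated at an ASSUMED odd
    DCT-IV frequency theta_k = pi*(2k+1)/(2n); illustrative only."""
    theta = math.pi * (2 * k + 1) / (2.0 * n)
    return 1.0 / abs(1.0 - a * cmath.exp(-1j * theta))

# Rough check of the constant-gain simplification of paragraph (118):
# the mean response over k = 200..319 lands near the 0.6 quoted in the text.
avg = sum(deemph_response(k) for k in range(200, 320)) / 120.0
```

Under this assumed grid the response decreases toward the top of the band and its average over the de-emphasized coefficients is indeed close to 0.6.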
(119) In another variant of the embodiment of the decoder, the de-emphasis will be able to be carried out in an equivalent manner in the time domain after inverse DCT.
(120) In addition to the de-emphasis, a bandpass filtering is applied with two separate parts: one, high-pass, fixed, the other, low-pass, adaptive (function of the bit rate).
(121) This filtering is performed in the frequency domain.
(122) In the preferred embodiment, the low-pass filter partial response is computed in the frequency domain as follows:
(123)
(124) in which N.sub.lp=60 at 6.6 kbit/s, 40 at 8.85 kbit/s, and 20 at bit rates above 8.85 kbit/s.
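The bit-rate dependence of N.sub.lp can be captured as a small helper; the threshold comparisons are an assumed reading of "at 6.6", "at 8.85" and "above 8.85 kbit/s":

```python
def lowpass_length(bitrate_kbps):
    """Width N_lp of the adaptive low-pass part as a function of the bit
    rate, per paragraph (124): 6.6 kbit/s -> 60, 8.85 -> 40, above -> 20."""
    if bitrate_kbps <= 6.6:
        return 60
    if bitrate_kbps <= 8.85:
        return 40
    return 20
```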
(125) Then, a bandpass filter is applied in the form:
(126)
The definition of G.sub.hp(k), k=0, . . . , 55, is given, for example, in Table 1 below.
(127) TABLE 1

 k  G.sub.hp(k)      k  G.sub.hp(k)      k  G.sub.hp(k)      k  G.sub.hp(k)
 0  0.001622428     14  0.114057967     28  0.403990611     42  0.776551214
 1  0.004717458     15  0.128865425     29  0.430149896     43  0.800503267
 2  0.008410494     16  0.144662643     30  0.456722014     44  0.823611104
 3  0.012747280     17  0.161445005     31  0.483628433     45  0.845788355
 4  0.017772424     18  0.179202219     32  0.510787115     46  0.866951597
 5  0.023528982     19  0.197918220     33  0.538112915     47  0.887020781
 6  0.030058032     20  0.217571104     34  0.565518011     48  0.905919644
 7  0.037398264     21  0.238133114     35  0.592912340     49  0.923576092
 8  0.045585564     22  0.259570657     36  0.620204057     50  0.939922577
 9  0.054652620     23  0.281844373     37  0.647300005     51  0.954896429
10  0.064628539     24  0.304909235     38  0.674106188     52  0.968440179
11  0.075538482     25  0.328714699     39  0.700528260     53  0.980501849
12  0.087403328     26  0.353204886     40  0.726472003     54  0.991035206
13  0.100239356     27  0.378318805     41  0.751843820     55  1.000000000
It will be noted that, in variants of the invention, the values of G.sub.hp(k) will be able to be modified while keeping a progressive attenuation. Similarly, the low-pass filtering with variable bandwidth, G.sub.lp(k), will be able to be adjusted with values or a frequency support that are different, without changing the principle of this filtering step.
(128) It will also be noted that the bandpass filtering will be able to be adapted by defining a single filtering step combining the high-pass and low-pass filtering.
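The single combined filtering step mentioned in paragraph (128) can be sketched as below. The ramp shapes are placeholders: the true high-pass gains are the Table 1 values and the low-pass shape is that of equation (123), neither of which is reproduced in this sketch:

```python
def bandpass_gains(n=320, n_hp=56, n_lp=40, g_hp=None):
    """Combined band-pass gain per coefficient: a fixed high-pass part over
    the first n_hp coefficients (Table 1 in the text; a linear ramp stands
    in here) times an adaptive low-pass roll-off over the last n_lp
    coefficients (n_lp = 40 corresponds to 8.85 kbit/s)."""
    if g_hp is None:
        g_hp = [(k + 1) / float(n_hp) for k in range(n_hp)]  # placeholder ramp
    gains = [1.0] * n
    for k in range(n_hp):
        gains[k] = g_hp[k]
    for j in range(n_lp):
        k = n - n_lp + j
        gains[k] *= 1.0 - (j + 1) / float(n_lp)              # roll-off to zero
    return gains
```

Applying `gains[k]` to each spectral coefficient performs the high-pass and low-pass filtering in one pass, as the paragraph suggests.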
(129) In another embodiment, the bandpass filtering will be able to be performed in an equivalent manner in the time domain (as in the block 112 of
(130) The inverse transform block 502 performs an inverse DCT on 320 samples to obtain the high-frequency signal sampled at 16 kHz. Its implementation is identical to that of the block 510, because the DCT-IV is orthonormal, except that the length of the transform is 320 instead of 256, and the following is obtained:
(131)
where N.sub.16k=320 and k=0, . . . , 319.
(132) In the case where the block 510 is not a DCT, but some other transformation or decomposition into sub-bands, the block 502 carries out the synthesis corresponding to the analysis carried out in the block 510.
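Since the orthonormal DCT-IV is its own inverse, blocks 510 and 502 can share a single routine; a direct (non-fast) implementation, usable at any length (the patent uses N=256 and N=320):

```python
import math

def dct4(x):
    """Orthonormal DCT-IV: X(k) = sqrt(2/N) * sum_n x(n) cos(pi/N (n+1/2)(k+1/2)).
    The transform matrix is symmetric and orthogonal, so dct4 is involutive:
    applying it twice recovers the input, which is why block 502 mirrors 510."""
    n = len(x)
    s = math.sqrt(2.0 / n)
    return [s * sum(x[m] * math.cos(math.pi / n * (m + 0.5) * (k + 0.5))
                    for m in range(n))
            for k in range(n)]
```

A production decoder would use a fast O(N log N) factorization; this O(N²) form only illustrates the definition.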
(133) The signal sampled at 16 kHz is thereafter optionally scaled by gains defined per sub-frame of 80 samples (block 504).
(134) In a preferred embodiment, a gain g.sub.HB1(m) is first computed (block 503) per sub-frame from ratios of sub-frame energies such that, in each sub-frame of index m=0, 1, 2 or 3 of the current frame:
(135)
with ε=0.01. The gain per sub-frame g.sub.HB1(m) can be written in the form:
(136)
which shows that the signal u.sub.HB retains the same ratio between the energy per sub-frame and the energy per frame as the signal u(n).
(137) The block 504 performs the scaling of the combined signal (included in step E404a of
u.sub.HB′(n)=g.sub.HB1(m)u.sub.HB(n), n=80m, . . . , 80(m+1)−1
(138) It will be noted that the implementation of the block 503 differs from that of the block 101 of
(139) Thus, this scaling step makes it possible to retain, in the high band, the same ratio of energy between sub-frame and frame as in the low band.
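A sketch of blocks 503-504. Equation (135) is not reproduced above, so the square-root-of-energy-ratio form below is an assumed gain that merely satisfies the property stated in paragraph (139); the regularization ε=0.01 is taken from the text:

```python
def subframe_scale(u, u_hb, sub_len=80, eps=0.01):
    """Per sub-frame m, compute a gain g_HB1(m) that equalizes the
    sub-frame/frame energy ratio of u_hb to that of the low-band signal u,
    then scale u_hb sub-frame by sub-frame (ASSUMED form of equation (135))."""
    def energy(s):
        return eps + sum(v * v for v in s)
    e_u, e_hb = energy(u), energy(u_hb)
    out = []
    for m in range(len(u) // sub_len):
        lo, hi = m * sub_len, (m + 1) * sub_len
        g = ((energy(u[lo:hi]) / e_u) * (e_hb / energy(u_hb[lo:hi]))) ** 0.5
        out.extend(g * v for v in u_hb[lo:hi])
    return out
```

With this gain, the scaled high-band signal reproduces (up to the ε regularization) the low band's per-sub-frame energy distribution, which is the invariant paragraph (139) describes.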
(140) Optionally, the block 506 thereafter performs the scaling of the signal (included in step E404a of
u.sub.HB″(n)=g.sub.HB2(m)u.sub.HB′(n), n=80m, . . . , 80(m+1)−1
where the gain g.sub.HB2(m) is obtained from the block 505 by executing the blocks 103, 104 and 105 of the AMR-WB codec (the input of the block 103 being the excitation decoded in low band, u(n)). The blocks 505 and 506 are useful for adjusting the level of the LPC synthesis filter (block 507), here as a function of the tilt of the signal. Other schemes for computing the gain g.sub.HB2(m) are possible without changing the nature of the invention.
(141) Finally, the signal u.sub.HB′(n) or u.sub.HB″(n) is filtered by the filtering module 507, which can be embodied here by taking 1/Â(z/γ) as transfer function, where γ=0.9 at 6.6 kbit/s and γ=0.6 at the other bit rates, the order of the filter being limited to 16. In a variant, this filtering will be able to be performed in the same way as is described for the block 111 of
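Filtering by 1/Â(z/γ) amounts to the all-pole LPC synthesis filter with bandwidth-expanded coefficients a.sub.i·γ.sup.i; a standard realization (the coefficient values used for illustration are arbitrary, not from the codec):

```python
def lpc_shape(signal, a, gamma):
    """Filter `signal` by 1/A(z/gamma), i.e. the all-pole filter whose
    denominator is A(z) = 1 + sum_i a_i z^-i with each a_i replaced by
    a_i * gamma**i (bandwidth expansion). `a` holds [a_1, ..., a_p]."""
    aw = [ai * gamma ** (i + 1) for i, ai in enumerate(a)]
    out, p = [], len(aw)
    for n, x in enumerate(signal):
        acc = x
        for i in range(1, p + 1):       # recursive (IIR) part
            if n - i >= 0:
                acc -= aw[i - 1] * out[n - i]
        out.append(acc)
    return out
```

With γ<1 the poles of Â(z) are pulled toward the origin, so the spectral shaping is flattened, which is the intended effect at the higher bit rates (γ=0.6).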
(142) In variant embodiments of the invention, the coding of the low band (0-6.4 kHz) will be able to be replaced by a CELP coder other than that used in AMR-WB, such as, for example, the CELP coder in G.718 at 8 kbit/s. With no loss of generality, other wide-band coders, or coders operating at frequencies above 16 kHz, in which the coding of the low band operates with an internal frequency at 12.8 kHz, could be used. Moreover, the invention can obviously be adapted to sampling frequencies other than 12.8 kHz, when a low-frequency coder operates with a sampling frequency lower than that of the original or reconstructed signal. When the low-band decoding does not use linear prediction, there is no excitation signal to be extended; in this case, an LPC analysis of the signal reconstructed in the current frame can be performed and an LPC excitation computed so that the invention can be applied.
(143) Finally, in another variant of the invention, the excitation or the low band signal u(n) is resampled, for example by linear interpolation or cubic “spline” interpolation, from 12.8 to 16 kHz before a transformation (for example DCT-IV) of length 320. This variant has the drawback of being more complex, since the transform (DCT-IV) of the excitation or of the signal is then computed over a greater length and the resampling is not performed in the transform domain.
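The 12.8 kHz to 16 kHz resampling mentioned here is a 4:5 ratio; a linear-interpolation sketch (the cubic-spline option of the text is omitted, and the edge-hold behavior is a choice of this example):

```python
def resample_12k8_to_16k(x):
    """Linear-interpolation resampling from 12.8 kHz to 16 kHz (ratio 4:5).
    Output sample j sits at input position j*4/5; the last sample is held
    when interpolation would read past the end of the input."""
    n_out = len(x) * 5 // 4
    out = []
    for j in range(n_out):
        t = j * 4 / 5.0
        i = int(t)
        frac = t - i
        x1 = x[i + 1] if i + 1 < len(x) else x[i]   # hold last sample at edge
        out.append((1 - frac) * x[i] + frac * x1)
    return out
```

A frame of 256 samples at 12.8 kHz thus yields the 320 samples on which the length-320 DCT-IV of this variant operates.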
(144) Furthermore, in variants of the invention, all the computations necessary for the estimation of the gains (G.sub.HBN,g.sub.HB1(m),g.sub.HB2(m),g.sub.HBN, . . . ) will be able to be performed in a logarithmic domain.
(145)
(146) This type of device comprises a processor PROC cooperating with a memory block BM comprising a storage and/or working memory MEM.
(147) Such a device comprises an input module E able to receive a decoded or extracted audio signal in a first frequency band, termed the low band, restored to the frequency domain (U(k)). It comprises an output module S able to transmit the extension signal in a second frequency band (U.sub.HB2(k)), for example to a filtering module 501 of
(148) The memory block can advantageously comprise a computer program comprising code instructions for implementing the steps of the band extension method within the meaning of the invention, when these instructions are executed by the processor PROC, and in particular the steps of: extracting (E402) tonal components and an ambience signal from a signal arising from the decoded low band signal (U(k)); combining (E403) the tonal components (y(k)) and the ambience signal (U.sub.HBA(k)) by adaptive mixing using energy level control factors to obtain an audio signal, termed the combined signal (U.sub.HB2(k)); and extending (E401a), over at least one second frequency band higher than the first frequency band, the low band decoded signal before the extraction step or the combined signal after the combining step.
(149) Typically, the description of
(150) The memory MEM generally stores all the data necessary for the implementation of the method.
(151) In one possible embodiment, the device thus described can also comprise low-band decoding functions and other processing functions described for example in
(152) Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.