Optimized coding and decoding of spatialization information for the parametric coding and decoding of a multichannel audio signal
11664034 · 2023-05-30
CPC classification: G10L25/18
International classification: G10L19/00, G10L19/008, G10L21/00
Abstract
A method of parametric coding of a multichannel digital audio signal including coding a signal arising from a channels reduction processing applied to the multichannel signal and coding spatialization information of the multichannel signal. The method includes the following acts: extraction of a plurality of items of spatialization information of the multichannel signal; obtaining at least one representation model of the extracted spatialization information; determination of at least one angle parameter of a model obtained; coding the at least one determined angle parameter so as to code the spatialization information extracted during the coding of spatialization information. Also provided are a method for decoding such a coded signal and corresponding coding and decoding devices.
Claims
1. A method comprising: parametric decoding a multichannel digital audio signal comprising the following acts performed by a decoding device: decoding a signal arising from a channels reduction processing applied to the multichannel digital audio signal; and decoding spatialization cues in respect of the multichannel digital audio signal, comprising: receiving at least one coded angle parameter from a communication network or reading the at least one coded angle parameter from a non-transitory computer-readable medium; decoding the at least one coded angle parameter to obtain at least one decoded angle parameter; obtaining at least one representation model of spatialization cues based on the at least one decoded angle parameter; and determining a plurality of spatialization cues in respect of the multichannel digital audio signal on the basis of the at least one model obtained and of the at least one decoded angle parameter.
2. The method as claimed in claim 1, wherein the spatialization cues are defined by frequency sub-bands of the multichannel digital audio signal and at least one coded angle parameter per sub-band is received or read from the storage medium.
3. The method as claimed in claim 1, wherein the method furthermore comprises receiving a reference spatialization cue and decoding this reference spatialization cue.
4. The method as claimed in claim 1, wherein one of the spatialization cues is an interchannel time shift (ITD) cue.
5. The method as claimed in claim 1, wherein one of the spatialization cues is an interchannel intensity difference (ILD) cue.
6. The method as claimed in claim 5, wherein the method furthermore comprises the following acts for decoding an interchannel intensity difference cue: estimating an interchannel intensity difference cue on the basis of the model obtained and of the at least one decoded angle parameter; decoding the difference between the interchannel intensity difference cue and the estimated interchannel intensity difference cue.
7. The method as claimed in claim 1, comprising obtaining a spatialization-cue-based representation model.
8. The method as claimed in claim 1, comprising obtaining a representation model common to several spatialization cues obtained.
9. The method as claimed in claim 1, further comprising receiving and decoding an index of a table of models and obtaining the at least one representation model of the spatialization cues to be decoded on the basis of the decoded index.
10. A parametric decoder of a multichannel digital audio signal, comprising: a processor; and a non-transitory computer-readable medium comprising instructions stored thereon, which when executed by the processor configure the parametric decoder to perform acts to parametric decode the multichannel digital audio signal: decoding a signal arising from a channels reduction processing applied to the multichannel digital audio signal; and decoding spatialization cues in respect of the multichannel digital audio signal, comprising: receiving at least one coded angle parameter from a communication network or reading the at least one coded angle parameter from a storage medium; decoding the at least one coded angle parameter to obtain at least one decoded angle parameter; obtaining at least one representation model of spatialization cues based on the at least one decoded angle parameter; and determining a plurality of spatialization cues in respect of the multichannel digital audio signal on the basis of the at least one model obtained and of the at least one decoded angle parameter.
11. A non-transitory computer-readable medium on which is recorded a computer program comprising code instructions for execution of a method of parametric decoding a multichannel digital audio signal when the instructions are executed by a processor of a decoding device, wherein the method comprises: decoding a signal arising from a channels reduction processing applied to the multichannel digital audio signal; and decoding spatialization cues in respect of the multichannel digital audio signal, comprising: receiving at least one coded angle parameter from a communication network or reading the at least one coded angle parameter from a storage medium; decoding the at least one coded angle parameter to obtain at least one decoded angle parameter; obtaining at least one representation model of spatialization cues based on the at least one decoded angle parameter; and determining a plurality of spatialization cues in respect of the multichannel digital audio signal on the basis of the at least one model obtained and of the at least one decoded angle parameter.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Other characteristics and advantages of the invention will become more clearly apparent on reading the following description, given solely by way of nonlimiting example and with reference to the appended drawings.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
(14) With reference to
(15) The case of a signal with two channels is described here. The invention also applies to the case of a multichannel signal with a number of channels greater than 2.
(16) To avoid overburdening the text, the coder described in
(17) This parametric stereo coder, as illustrated, uses an EVS mono coding according to the specifications 3GPP TS 26.442 (fixed-point source code) or TS 26.443 (floating-point source code). It operates on stereo or multichannel signals sampled at a sampling frequency F_s of 8, 16, 32 or 48 kHz, with 20-ms frames. Hereinafter, with no loss of generality, the description is given mainly for the case F_s = 16 kHz and for the case N = 2 channels.
(18) It should be noted that the choice of a 20-ms frame length is in no way restrictive for the invention, which applies likewise in variant embodiments where the frame length is different, for example 5 or 10 ms, or with a codec other than EVS.
(19) Moreover, the invention applies likewise to other types of mono coding (e.g. IETF OPUS, ITU-T G.722) operating at identical or different sampling frequencies.
(20) Each temporal channel (L(n) and R(n)) sampled at 16 kHz is first pre-filtered by a high-pass filter (HPF) typically eliminating the components below 50 Hz (blocks 301 and 302). This pre-filtering is optional, but it can be used to avoid the bias due to the DC component in the estimation of parameters such as the ICTD or the ICC.
(21) The channels L′(n) and R′(n) arising from the pre-filtering blocks are analyzed in frequency by discrete Fourier transform, with sinusoidal windowing with 50% overlap and a length of 40 ms, i.e. 640 samples (blocks 303 to 306). For each frame, the signal (L′(n), R′(n)) is therefore weighted by a symmetric analysis window covering 2 frames of 20 ms, i.e. 40 ms (or 640 samples for F_s = 16 kHz). The 40-ms analysis window covers the current frame and the future frame. The future frame corresponds to a "future" signal segment of 20 ms, commonly called "lookahead". In variants of the invention, other windows could be used, for example the low-delay asymmetric window called "ALDO" in the EVS codec. Moreover, in variants, the analysis windowing could be rendered adaptive as a function of the current frame, so as to use an analysis with a long window on stationary segments and an analysis with short windows on transient/non-stationary segments, optionally with transition windows between long and short windows.
(22) For the current frame of 320 samples (20 ms at F_s = 16 kHz), the spectra obtained, L[k] and R[k] (k = 0 . . . 320), comprise 321 complex coefficients, with a resolution of 25 Hz per frequency coefficient. The coefficient of index k = 0 corresponds to the DC component (0 Hz) and is real. The coefficient of index k = 320 corresponds to the Nyquist frequency (8000 Hz for F_s = 16 kHz) and is also real. The coefficients of index 0 < k < 320 are complex and correspond to a sub-band of width 25 Hz centered on the frequency 25·k Hz.
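As an illustration, the analysis stage described above can be sketched as follows (a simplified sketch in which an explicit sine window stands in for the windowing of blocks 303 to 306; the exact window shape of a real coder may differ):

```python
import numpy as np

FS = 16000        # sampling frequency (Hz)
FRAME = 320       # 20-ms frame at 16 kHz
WIN = 2 * FRAME   # 40-ms analysis window with 50% overlap

def analyze_frame(channel, frame_idx):
    """Return the 321 complex coefficients X[k], k = 0..320, for one frame.

    The window covers the current 20-ms frame plus 20 ms of lookahead and is
    weighted by a symmetric sine window, as in the description above.
    """
    start = frame_idx * FRAME
    seg = channel[start:start + WIN]
    win = np.sin(np.pi * (np.arange(WIN) + 0.5) / WIN)  # sinusoidal window
    return np.fft.rfft(seg * win)   # WIN // 2 + 1 = 321 bins, 25 Hz each

# A 1 kHz tone falls exactly in bin k = 1000 / 25 = 40
t = np.arange(2 * WIN) / FS
x = np.cos(2 * np.pi * 1000.0 * t)
spec = analyze_frame(x, 0)
assert len(spec) == 321
assert int(np.argmax(np.abs(spec))) == 40
```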
(23) The spectra L[k] and R[k] are combined in the block 307 to obtain a mono signal (downmix) M[k] in the frequency domain. This signal is converted into time by inverse FFT and windowing-overlap with the “lookahead” part of the previous frame (blocks 308 to 310).
(24) An example of frequency “downmix” technique is described in the document entitled “A stereo to mono downmixing scheme for MPEG-4 parametric stereo encoder” by Samsudin, E. Kurniawati, N. Boon Poh, F. Sattar, S. George, in Proc. ICASSP, 2006.
(25) In this document, the L and R channels are aligned in phase before performing the channels reduction processing.
(26) More precisely, the phase of the L channel for each frequency sub-band is chosen as the reference phase; the R channel is aligned to the phase of the L channel for each sub-band through the following formula:
R′[k] = e^(j·ICPD[b])·R[k] (7)
where R′[k] is the aligned R channel, k is the index of a coefficient in the b-th frequency sub-band, and ICPD[b] is the inter-channel phase difference in the b-th frequency sub-band given by equation (2).
(27) Note that when the sub-band of index b is reduced to a single frequency coefficient, we find:
R′[k] = |R[k]|·e^(j∠L[k]) (8)
(28) Finally, the mono signal obtained by the "downmix" of the document by Samsudin et al. cited previously is calculated by averaging the L channel and the aligned R′ channel, according to the following equation:
(29) M[k] = (L[k] + R′[k])/2 (9)
(30) The phase alignment therefore makes it possible to preserve the energy and to avoid attenuation problems by eliminating the influence of the phase. This "downmix" corresponds to the "downmix" described in the document by Breebaart et al. where:
M[k] = w_1·L[k] + w_2·R[k] (10)
with w_1 = 0.5 and
(31) w_2 = 0.5·e^(j(∠L[k] − ∠R[k]))
in the case where the sub-band of index b comprises only the frequency value of index k.
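The phase-aligned downmix of equations (7) to (10) can be sketched per sub-band as follows (a minimal illustration; the ICPD estimator used here, the angle of the cross-spectrum summed over the band, is one common choice and is an assumption, not a formula prescribed by the text above):

```python
import numpy as np

def downmix(L, R, band_edges):
    """Phase-aligned mono downmix M[k] = (L[k] + e^(j*ICPD[b]) * R[k]) / 2.

    ICPD[b] is estimated per sub-band b as the angle of
    sum_k L[k] * conj(R[k]) over the band (an assumed estimator).
    """
    M = np.zeros_like(L)
    for b in range(len(band_edges) - 1):
        lo, hi = band_edges[b], band_edges[b + 1]
        icpd = np.angle(np.sum(L[lo:hi] * np.conj(R[lo:hi])))
        R_aligned = np.exp(1j * icpd) * R[lo:hi]   # align R on the phase of L
        M[lo:hi] = 0.5 * (L[lo:hi] + R_aligned)
    return M

# When R is L with a constant phase offset, alignment preserves the energy of L
k = np.arange(1, 9)
L = np.exp(1j * 0.3 * k)
R = L * np.exp(-1j * 1.2)          # constant inter-channel phase difference
M = downmix(L, R, [0, 8])
assert np.allclose(np.abs(M), np.abs(L))
```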
(32) Other “downmix” schemes can of course be chosen without modifying the scope of the invention.
(33) The algorithmic delay of the EVS codec is 30.9375 ms at F_s = 8 kHz and 32 ms for the other frequencies F_s = 16, 32 or 48 kHz. This delay includes the current frame of 20 ms; the additional delay with respect to the frame length is therefore 10.9375 ms at F_s = 8 kHz and 12 ms for the other frequencies (i.e. 192 samples at F_s = 16 kHz). The mono signal is delayed (block 311) by T = 320 − 192 = 128 samples so that the delay accumulated between the mono signal decoded by EVS and the original stereo channels becomes a multiple of the frame length (320 samples). Accordingly, to synchronize the extraction of stereo parameters (block 314) with the spatial synthesis performed at the decoder on the basis of the mono signal, the lookahead for the calculation of the mono signal (20 ms) plus the mono coding/decoding delay, to which is added the delay T to align the mono synthesis (20 ms), correspond to an additional delay of 2 frames (40 ms) with respect to the current frame. This delay of 2 frames is specific to the implementation detailed here; in particular it is related to the 20-ms sinusoidal symmetric windows, and it could be different. In a variant embodiment, it would be possible to obtain a delay of one frame with an optimized window having a smaller overlap between adjacent windows and a block 311 introducing no delay (T = 0).
(34) The shifted mono signal is thereafter coded (block 312) by the mono EVS coder for example at a bitrate of 13.2, 16.4 or 24.4 kbit/s. In variants, the coding could be performed directly on the unshifted signal; in this case the shift could be performed after decoding.
(35) In a particular embodiment of the invention, illustrated here in
(36) It would be possible in a more advantageous manner in terms of quantity of data to be stored, to shift the outputs of the parameters extraction block 314 or else the outputs of the quantization blocks 318, 316 and 319. It would also be possible to introduce this shift at the decoder on receiving the binary train of the stereo coder.
(37) In parallel with the mono coding, the coding of the spatial cue is implemented in the blocks 315 to 319 according to a coding method of the invention. Moreover, the coding comprises an optional step of classifying the input signal in the block 321.
(38) This classification block can make it possible, according to the multichannel signal to be coded, to pass from one coding mode to another, one of the coding modes being that implementing the invention for the coding of the spatialization cues. The other coding modes are not detailed here, but conventional techniques for stereo or multichannel coding can be used, including techniques for parametric coding with ILD, ITD, IPD and ICC parameters. The classification is indicated here with the L and R temporal signals as input; optionally, the signals in the frequency domain and the stereo or multichannel parameters may also serve for the classification. The classification may also be used to apply the invention to a given spatial parameter (for example to code the ITD or the ILD), in other words to switch the type of coding of spatial parameters, with a possible choice between a coding scheme according to a model as in the invention and an alternative coding scheme of the prior art.
(39) The spatial parameters are extracted (block 314) on the basis of the spectra L[k], R[k] and M[k] shifted by two frames: L_buf[k], R_buf[k] and M_buf[k], and coded (blocks 315 to 319) according to a coding method described with reference to
(40) For the extraction of the ILD parameters (block 314), the spectra L_buf[k] and R_buf[k] are for example sliced into frequency sub-bands.
(41) In one embodiment, a 1/3-octave sub-band slicing defined in Table 1 hereinbelow will be taken:

(42)
Third                  1     2     3     4     5     6     7     8     9    10    11    12
Base frequency (Hz)    0   111   140   177   223   281   354   445   561   707   891  1122
High frequency (Hz)  111   140   177   223   281   354   445   561   707   891  1122  1414

Third                 13    14    15    16    17    18    19    20    21    22    23    24
Base frequency (Hz) 1414  1782  2245  2828  3564  4490  5657  7127  8980 11314 14254 17959
High frequency (Hz) 1782  2245  2828  3564  4490  5657  7127  8980 11314 14254 17959 22627

(43) Table 1
(44) This table covers all the cases of sampling frequency; for example, for a coder with a sampling frequency of 16 kHz, only the first B = 20 sub-bands will be retained. Thus, it will be possible to define the array:
(45) k_b, b = 0 . . . 20 = [0 4 6 7 9 11 14 18 22 28 36 45 57 71 90 113 143 180 226 285 320]
(46) The above array delimits (as indices of Fourier spectral lines) the frequency sub-bands of index b = 0 to B−1 for the case F_s = 16 kHz. Each sub-band of index b comprises the coefficients k_b to k_(b+1)−1. The frequency spectral line of index k = 320, which corresponds to the Nyquist frequency, is not taken into account here.
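As a small worked example, the boundary array k_b above can be used to slice a 321-bin spectrum into B = 20 band energies (an illustrative helper; the function name is hypothetical):

```python
import numpy as np

# Sub-band boundaries (FFT bin indices) for Fs = 16 kHz, B = 20 bands,
# taken from the 1/3-octave slicing above
KB = [0, 4, 6, 7, 9, 11, 14, 18, 22, 28, 36, 45, 57,
      71, 90, 113, 143, 180, 226, 285, 320]

def band_energies(spec):
    """Energy of each sub-band b: sum of |X[k]|^2 for k_b <= k < k_(b+1)."""
    return np.array([np.sum(np.abs(spec[KB[b]:KB[b + 1]]) ** 2)
                     for b in range(len(KB) - 1)])

spec = np.ones(321, dtype=complex)        # flat dummy spectrum
e = band_energies(spec)
assert len(e) == 20
assert float(e[0]) == 4.0                 # first band holds bins 0..3
assert float(np.sum(e)) == 320.0          # Nyquist bin k = 320 excluded
```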
(47) In variants, it will be possible to use another sub-band slicing, for example according to the ERB scale; in this case, it will be possible to use B = 35 sub-bands, defined by the following boundaries in the case where the input signal is sampled at 16 kHz:
(48) k_b, b = 0 . . . 35 = [0 1 2 3 5 6 8 10 12 14 17 20 23 27 31 35 40 46 52 58 66 74 83 93 104 117 130 145 162 181 201 224 249 277 307 320]
(49) The above array delimits (as indices of Fourier spectral lines) the frequency sub-bands of index b = 0 to B−1. For example, the first sub-band (b = 0) goes from the coefficient k_b = 0 to k_(b+1)−1 = 0; it is therefore reduced to a single coefficient, which represents 25 Hz. Likewise, the last sub-band (b = 34) goes from the coefficient k_b = 307 to k_(b+1)−1 = 319; it comprises 13 coefficients (325 Hz). The frequency spectral line of index k = 320, which corresponds to the Nyquist frequency, is not taken into account here.
(50) For each frame, the ILD of the sub-band b = 0, . . . , B−1 is calculated according to equations (5) and (6), repeated here:
(51) ILD[b] = 10·log_10(σ_L²[b]/σ_R²[b]) (11)
where σ_L²[b] and σ_R²[b] represent respectively the energy of the left channel (L_buf[k]) and of the right channel (R_buf[k]):
(52) σ_L²[b] = Σ_(k=k_b)^(k_(b+1)−1) |L_buf[k]|², σ_R²[b] = Σ_(k=k_b)^(k_(b+1)−1) |R_buf[k]|² (12)
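Assuming the usual log-ratio definition of equations (5) and (6), the per-band ILD computation can be sketched as:

```python
import numpy as np

def ild_per_band(L, R, band_edges):
    """ILD[b] = 10*log10(sigma_L^2[b] / sigma_R^2[b]) in dB per sub-band,
    with sigma^2[b] the channel energy over bins k_b .. k_(b+1)-1."""
    ild = []
    for b in range(len(band_edges) - 1):
        lo, hi = band_edges[b], band_edges[b + 1]
        eL = np.sum(np.abs(L[lo:hi]) ** 2)
        eR = np.sum(np.abs(R[lo:hi]) ** 2)
        ild.append(10.0 * np.log10(eL / eR))
    return np.array(ild)

# A right channel attenuated by a factor 2 gives ILD = 10*log10(4) ~ 6.02 dB
L = np.ones(321, dtype=complex)
R = 0.5 * L
ild = ild_per_band(L, R, [0, 4, 320])
assert np.allclose(ild, 10.0 * np.log10(4.0))
```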
(53) According to a particular embodiment, the parameters ITD and ICC are extracted in the time domain (block 320). In variants of the invention these parameters could be extracted in the frequency domain (block 314), this not being represented in
(54) In one embodiment the parameters ITD and ICC are estimated in the following manner. The ITD is sought by cross-correlation according to equation (3), repeated here:
ITD = argmax_(−d≤τ≤d) Σ_(n=0)^(N−τ−1) L(n + τ)·R(n) (13)
(55) with, for example, d = 630 μs × F_s, i.e. 10 samples at 16 kHz. This value of 630 μs is obtained for the binaural case, on the basis of Woodworth's law defined hereinafter, with a spherical approximation of the head (with a mean radius α = 8.5 cm) and an azimuth θ = π/2.
(56) The ITD obtained according to equation (3) is thereafter smoothed to attenuate its temporal variations. The benefit of the smoothing is to attenuate the fluctuations of the instantaneous ITD which may degrade the quality of the spatial synthesis at the decoder. The smoothing scheme adopted lies outside the scope of the invention and it is not detailed here.
(57) During the calculation of the ITD, the ICC is also calculated according to equation (4) defined hereinabove.
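A minimal time-domain sketch of the cross-correlation search of equation (3); the smoothing step mentioned above is deliberately omitted, and the sign convention shown is one possible choice:

```python
import numpy as np

def estimate_itd(L, R, d=10):
    """Lag tau in [-d, d] maximizing sum_n L(n + tau) * R(n), cf. equation (3).

    d = 10 samples corresponds to 630 us at Fs = 16 kHz.
    """
    N = len(L)
    best_tau, best_c = 0, -np.inf
    for tau in range(-d, d + 1):
        if tau >= 0:
            c = np.sum(L[tau:] * R[:N - tau])
        else:
            c = np.sum(L[:N + tau] * R[-tau:])
        if c > best_c:
            best_tau, best_c = tau, c
    return best_tau

# With this convention, delaying L by 3 samples yields tau = +3
rng = np.random.default_rng(0)
x = rng.standard_normal(1000)
L = np.concatenate([np.zeros(3), x[:-3]])
R = x
assert estimate_itd(L, R) == 3
```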
(58) The spatial parameters or cues ILD and ITD are coded according to a scheme forming the subject of the invention and described with reference to
(59) These blocks 315 and 317 implement schemes based on models of respective representations of the cues ITD and ILD.
(60) Certain parameters of the respective models obtained on output from the blocks 315 and 317 are thereafter coded at 316 and 318 for example according to a scalar quantization scheme.
(61) All the spatialization cues thus coded are multiplexed by the multiplexer 322 before being transmitted.
(62) Certain significant notions about sound perception are recalled in
(63) In one embodiment it is considered that the signal comprises a sound source situated in the horizontal plane.
(64) In the case of a binaural signal, it may be useful to define the position of a virtual source associated with the multichannel signal to be coded. As illustrated in
(65) The angle θ is defined between the frontal axis 530 of the listener and the axis of the source 520. The two ears of the listener are represented as 550R for the right ear and as 550L for the left ear. The cue in respect of time shift between the two channels of a binaural signal is associated with the interaural time difference, that is to say the difference in time that a sound takes to arrive at the two ears. If the source is directly in front of the listener, the wave arrives at the same moment at both ears and the ITD cue is zero.
(66) The interaural time difference (ITD) can be approximated by using a geometric model in the form of the following sine law:
ITD(θ) = α·sin(θ)/c (14)
where θ is the azimuth in the horizontal plane, α is the radius of a spherical approximation of the head, and c is the speed of sound (in m·s⁻¹), which can be taken as c = 343 m·s⁻¹. This law is independent of frequency and is known to give good results in terms of spatial location.
(67) A virtual sound source can therefore be located with an angle θ and the ITD cue can be deduced through the following formula:
ITD(θ) = ITD_max·sin(θ) (15)
where
ITD_max = α/c (16)
The value given to ITD_max may for example correspond to 630 μs, which is the limit of perceptual separation between two pulses. For larger values of ITD the subject will hear two different sounds and will not be able to interpret them as a single sound source.
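The sine-law model of equations (15) and (16) reduces to a couple of lines; the small check below also illustrates that the law is invertible on [−π/2, π/2], which is what the decoder-side angle recovery relies on:

```python
import math

ITD_MAX = 630e-6   # seconds; the perceptual fusion limit used as ITD_max

def itd_sine_law(theta):
    """ITD(theta) = ITD_max * sin(theta), equation (15)."""
    return ITD_MAX * math.sin(theta)

theta = math.pi / 6                 # virtual source at 30 degrees
itd = itd_sine_law(theta)
assert abs(itd - 315e-6) < 1e-12    # 630 us * sin(30 deg) = 315 us
# the law is invertible on [-pi/2, pi/2]: arcsin recovers the azimuth
assert abs(math.asin(itd / ITD_MAX) - theta) < 1e-12
```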
(68) In variants of the invention the sine law could be replaced with Woodworth's ITD model, defined in the work by R. S. Woodworth, Experimental Psychology (Holt, N.Y.), 1938, pp. 520-523, by the following equation:
ITD(θ) = α·(sin(θ) + θ)/c (17)
which is valid for a far field (typically a source at a distance of at least 10α). Employing the principle of normalization by a maximum value ITD_max as in equation (15), the ITD model according to Woodworth's law can be written in the form:
(69) ITD(θ) = ITD_max·(sin(θ) + θ)/(1 + π/2) (18)
with
ITD_max = α·(1 + π/2)/c (19)
(70) In variants, it would be possible to define a multiplicative factor which does not represent the maximum value of the ITD but a proportional value, for example the factor α/c. The invention also applies in this case. For example, to simplify the expression for Woodworth's law it is possible to write:
ITD(θ) = ITD_max·(sin(θ) + θ) (20)
where
ITD_max = α/c (21)
In this case the value of ITD_max does not represent the maximum value of the ITD. Hereinafter, this abuse of notation will be used.
(71) Thus, with reference to
(72) This model is, for example, the model defined hereinabove in equation (15), with a value ITD_max = 630 μs predefined in the model, or the model of equation (20).
(73) In variants, the value ITD_max could be rendered flexible by coding either this value directly, or the difference between this value and a predetermined value. This approach makes it possible to extend the application of the ITD model to more general cases, but its drawback is that it requires additional bitrate. To indicate that the explicit coding of the value ITD_max is optional, the block 412 appears dashed in
(74) A module 411 for determining the angle θ such as defined hereinabove is implemented to obtain the angle defined by the sound source. More precisely, this module searches for the azimuth parameter θ which approaches as closely as possible the ITD extracted. When the law is known, as in equation (15), this angle can be obtained analytically:
θ = arcsin(ITD/ITD_max) (22)
(75) In variants, the arcsin function could be approximated.
(76) An equivalent approach for determining the azimuth can be implemented in the block 411. According to this approach, the determination of the angle θ for the sine law calls upon a search, with the aid of the ITD model, for the closest value as a function of the possible values of azimuth:
θ = argmin_(θ∈T) (ITD − ITD_max·sin(θ))² (23)
(77) This search can be performed by pre-storing the various candidate values ITD_max·sin(θ) arising from the ITD model in a table M_ITD, for a search interval which may be T = [−π/2, π/2], assuming that the ITD is symmetric whether the source is in front of or behind the subject. In this case, the values of θ are discretized, for example with a step size of 1° over the search interval.
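The table-based search of equation (23) can be sketched as follows, pre-storing ITD_max·sin(θ) on a 1° grid as described above:

```python
import numpy as np

ITD_MAX = 630e-6                        # seconds

# Pre-stored candidate values ITD_max * sin(theta) for theta in
# [-pi/2, pi/2] discretized with a 1-degree step (181 entries)
THETAS = np.deg2rad(np.arange(-90, 91))
M_ITD = ITD_MAX * np.sin(THETAS)

def azimuth_index(itd):
    """Equation (23): index of theta minimizing (ITD - ITD_max*sin(theta))^2."""
    return int(np.argmin((itd - M_ITD) ** 2))

# An ITD produced by a source at 30 degrees maps back to the 30-degree entry
i = azimuth_index(ITD_MAX * np.sin(np.deg2rad(30)))
assert np.isclose(np.rad2deg(THETAS[i]), 30.0)
```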
(78) In the case of Woodworth's law, it is also possible to follow the same approach as hereinabove for the sine law. The analytical expression for the inverse of the function sin(θ) + θ not being trivial, the following search may be preferred:
θ = argmin_(θ∈T) (ITD − ITD_max·(sin(θ) + θ))² (24)
(79) The angle parameter θ determined in the block 411 is thereafter coded according to a conventional coding scheme, for example by scalar quantization on 4 bits by the block 316. This block carries out a search for the quantization index
i = argmin_(j=0, . . . ,15) (θ − Q_θ[j])² (25)
(80) where Q_θ[j], j = 0, . . . , 15, is the table of quantization levels, given here for the case of a uniform scalar quantization on 4 bits over the search interval.
(82) In variants, the number of bits allocated to the coding of the azimuth could be different, and the quantization levels could be non-uniform to take account of the perceptual limits of location of a sound source according to the azimuth.
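For illustration, a uniform 4-bit scalar quantizer in the spirit of equation (25) might look as follows (the codebook layout over [−π/2, π/2] is an assumption; the text does not fix the exact levels here):

```python
import numpy as np

# Assumed uniform 4-bit codebook over [-pi/2, pi/2] (16 levels, step pi/15)
Q_THETA = -np.pi / 2 + np.arange(16) * np.pi / 15

def quantize_azimuth(theta):
    """Equation (25): i = argmin_j (theta - Q_theta[j])^2, coded on 4 bits."""
    return int(np.argmin((theta - Q_THETA) ** 2))

i = quantize_azimuth(0.0)
assert 0 <= i < 16
# the quantization error is bounded by half a step (pi/30)
assert abs(float(Q_THETA[i])) <= np.pi / 30 + 1e-12
```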
(83) It is the coding of this parameter which makes it possible to code the time shift cue ITD, optionally with the coding of ITD_max (block 412) as an additional cue if the value predetermined by the ITD model must be adapted. The spatialization cue will therefore be retrieved on decoding by decoding the angle parameter, optionally by decoding ITD_max, and by applying the same representation model of the ITD. The bitrate necessary for coding this angle parameter is low (for example 4 bits per frame) when no correction of the value ITD_max predefined in the model is coded. Thus, the coding of this spatialization cue (ITD) consumes little bitrate.
(84) At very low bitrate, the coding of a single angle θ can be implemented to code the spatialization cue in respect of a binaural signal.
(85) In a variant embodiment, it will be possible to estimate an ITD per frequency band, for example by taking a slicing into B sub-bands, defined previously. In this case, an angle θ per frequency band is coded and transmitted to the decoder, which for the example of B sub-bands gives B angles to be transmitted.
(86) In another variant, it will be possible to ignore the estimation of the ITD for certain high frequency bands, for which the phase differences are not perceptible. Likewise, it will be possible to omit the estimation of the ITD for very low frequencies. For example, the ITD may not be estimated for bands above 1 kHz; for a sub-band slicing as defined previously, it will then be possible to retain the bands b = 0 to 11 in the embodiment using the 1/3-octave scale and b = 1 to 16 in the variant using the ERB scale (the first band b = 0 being omitted in the latter case since it covers frequencies below 25 Hz). In variants of the invention, a sub-band slicing with a resolution different from 25 Hz could be used; it will thus be possible to group together certain sub-bands, since the 1/3-octave slicing or the ERB scale may be too fine for the coding of the ITD. This avoids coding too many angles per frame. For each frequency band, the ITD is thereafter converted into an angle, as in the case of a single angle described hereinabove, with a bit allocation which can be either fixed or variable as a function of the significance of the sub-band. In all these variants where several angles are determined and coded, a vector quantization could be implemented in the block 316.
(87)
(88) In this variant embodiment, the definition of several "competing" models for coding the ITD is considered, knowing that the invention also applies when a single ITD model is defined.
(89) Thus, the model such as defined for the interchannel time shift (ITD) cue might not be fixed but parametrizable. Each model defines a set of values of ITD as a function of an angle parameter: the sine law and Woodworth's law constitute two examples of models. In this variant, for coding, a model index m_opt and an angle index t_opt (also called angle parameter) to be coded are determined in the block 432 on the basis of an ITD models table obtained at 430, according to the following equation:
(90) (m_opt, t_opt) = argmin_(m=0, . . . ,N_M−1; t=0, . . . ,N_θ(m)−1) (ITD − M_ITD(m, t))²
where N_M is the number of models in the ITD models table, N_θ(m) is the number of azimuth angles considered for the m-th model, and M_ITD(m, t) corresponds to a precise value of the cue ITD.
(91) An exemplary model M_ITD(m, t) is given hereinbelow in the case of a model of index m = 0 according to a Woodworth law as in equation (20), with ITD_max = 0.2551 ms:
M_ITD(m = 0, t = 0 . . . 7) = [−0.5362 −0.3807 −0.1978 0 0.1978 0.3807 0.5362 0.6558]
where each value is in ms. The angle index t corresponds in fact to an angle θ covering the interval
(92) [−3π/8, π/2]
with a step size of
(93) π/8
(94) This table can also be expressed in samples; for example, in the case of a sampling at 16 kHz, one obtains in an equivalent manner:
M_ITD(m = 0, t = 0 . . . 7) = [−8.5795 −6.0919 −3.1648 0 3.1648 6.0919 8.5795 10.4930]
(95) In this case, N_θ(m) = 8 and N_M = 1. It is therefore possible to code the cue ITD on 3 bits with this single model.
(96) It will be noted that, for a given model index m, the model M_ITD(m, t) is implicitly dependent on the azimuth angle, insofar as the index t in fact represents a quantization index for the angle θ. Thus, the model M_ITD(m, t) is an efficient means of combining the relation between ITD and θ with the quantization of θ on N_θ(m) levels, and of potentially using several models (at least one), indexed by m_opt when more than one model is used.
(97) In one embodiment, the case of two different models is for example considered:
m = 0: the binaural model previously defined with Woodworth's law, with ITD(θ) = ITD_max·(sin(θ) + θ) and ITD_max = 10 (samples at 16 kHz);
m = 1: a model according to a sine law as in equation (15), but for an A-B microphone pair (2 omnidirectional microphones separated by a distance α). The sine law applies here also; only the parameter α depends on the distance between the microphones: ITD(θ) = ITD_max·sin(θ) and ITD_max = 30 (samples at 16 kHz).
It will be noted that the size N_θ(m) may be identical for all the models, but in the general case different sizes may be used. For example it will be possible to define N_θ(m) = 16 and N_M = 2. It is therefore possible to code the cue ITD on 4 + 1 = 5 bits.
An index of the selected law m_opt is then coded on ⌈log_2 N_M⌉ bits and transmitted to the decoder, in addition to the azimuth angle index t_opt coded on ⌈log_2 N_θ⌉ bits. In the example taken hereinabove, m_opt may be coded on 1 bit, and t_opt on 4 bits.
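The two-model example above (Woodworth with ITD_max = 10 samples, sine law with ITD_max = 30 samples, N_θ = 16) can be exercised with a short joint search (an illustrative sketch of the block 432 search, not the exact implementation):

```python
import numpy as np

# N_theta = 16 azimuths over [-pi/2, pi/2]; two competing ITD models
# tabulated in samples at 16 kHz, as in the example above
THETAS = np.linspace(-np.pi / 2, np.pi / 2, 16)
M_ITD = [
    10.0 * (np.sin(THETAS) + THETAS),   # m = 0: Woodworth, ITD_max = 10
    30.0 * np.sin(THETAS),              # m = 1: sine law (A-B pair), ITD_max = 30
]

def code_itd(itd):
    """Exhaustive search for (m_opt, t_opt) minimizing (ITD - M_ITD(m, t))^2."""
    candidates = ((m, t) for m in range(len(M_ITD)) for t in range(len(THETAS)))
    return min(candidates, key=lambda mt: (itd - M_ITD[mt[0]][mt[1]]) ** 2)

# An ITD of 30*sin(42 deg) samples is matched exactly by model m = 1 at t = 11
m_opt, t_opt = code_itd(30.0 * np.sin(np.deg2rad(42)))
assert (m_opt, t_opt) == (1, 11)
```

With N_M = 2 models and 16 angles, m_opt costs 1 bit and t_opt costs 4 bits, matching the 5-bit budget described above.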
In a variant, it will be possible to replace the model m = 0 by an ITD table as a function of the azimuth arising from real measurements of HRTFs, without a parametric law but with ITD values estimated on the real data; in this case, the size N_θ(m) may depend on the angular resolution used to measure the HRTFs (assuming that no angular interpolation has been applied).
As in
(98) In a variant of the invention, the representation model of the ITD could be generalized so as not to be reduced solely to the horizontal plane but also to include the elevation. In this case, two angles are determined: the azimuth angle θ and the elevation angle φ.
(99) The search for the two angles can be made according to the following equation:
(100) (m_opt, t_opt, p_opt) = argmin_(m, t, p) (ITD − M_ITD(m, t, p))²
with N_φ(m) the number of elevation angles considered for the m-th model and p_opt representing the index of the elevation angle to be coded (p = 0, . . . , N_φ(m)−1).
(101) In the invention, one also seeks to reduce the coding bitrate of spatialization cues other than the ITD, such as the interchannel intensity difference (ILD) spatialization cue. It will be noted that the block 316 of
(102) Thus, in the same way as for the ITD, it is possible to resort to a parametrization of the ILD. In the binaural case, in accordance with the thesis of Jérôme Daniel, entitled "Représentation de champs acoustiques, application à la transmission et à la reproduction de scènes sonores complexes dans un contexte multimédia" [Representation of acoustic fields, application to the transmission and reproduction of complex sound scenes in a multimedia context], University of Paris 6, Jul. 2011, the ILD can also be approximated according to the following law:
(103)
(104) where f is the frequency, r the distance from the sound source and c the speed of sound.
(105) By defining a reference ILD, ILD.sub.max, it is possible under certain conditions to reduce this approximation to the equation:
ILD.sub.glob(θ)=ILD.sub.max sin(θ) (30)
(106) The above law is only an approximation corresponding to the global level of the HRTFs at a given azimuth; it does not make it possible to completely characterize the spectral coloration given by the HRTFs, but characterizes only their global level. The reference ILD can be defined, when building the ILD model from a base of normalized signals or a base of HRTF filters, by taking the maximum of the total ILD of a binaural signal. In the invention it is considered that this sine law applies not only to the total (or global) ILD but also to the sub-band based ILD; in this case, the parameter ILD.sub.max depends on the index of the sub-band and the model becomes:
ILD[b](θ)=ILD.sub.max[b]sin(θ) (31)
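Equation (31) can be evaluated per sub-band as follows (an illustrative Python sketch; the ILD.sub.max values used below are placeholders, in practice they come from the model):

```python
import math

def ild_model(ild_max, theta_deg):
    """Sub-band ILD model of equation (31): ILD[b](theta) = ILD_max[b] * sin(theta).
    ild_max: list of per-sub-band reference ILDs in dB."""
    s = math.sin(math.radians(theta_deg))
    return [m * s for m in ild_max]
```

At θ = 30°, each sub-band ILD is half its reference value, since sin(30°) = 0.5.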
(107) Experimentally, it may be verified that if the energy (illustrated with reference to
(108) It will be noted that even if the symmetry of the frontal half-plane (azimuth lying in [0, 180] degrees) and the half-plane at the rear of the head (azimuth lying in [180, 360] degrees) is in general not totally valid, this sine law is used in the invention to code and decode the ILD.
(109) Just as for the case of the ITD where a value ITD.sub.max has been defined, it is therefore possible either to transmit the parameter ILD.sub.max, or to use a predetermined and stored value ILD.sub.max, so as to derive therefrom a value ILD.sub.glob(θ) according to equation (30) and thus apply a global ILD, valid over the whole spectrum of the signal, to obtain a rudimentary (global) localization.
(110) Another exemplary model relies on the configuration of ORTF stereo microphones which is illustrated in
(111) In this example, the sub-band based ILD model could be defined in relation to a configuration of ORTF microphones as follows:
ILD(θ)=L(θ)−R(θ)=α(cos(θ−θ.sub.0)−cos(θ+θ.sub.0)) (32)
with
L(θ)=α(1+cos(θ−θ.sub.0)) (33)
R(θ)=α(1+cos(θ+θ.sub.0)) (34)
where θ.sub.0 (in radians) corresponds to 55°.
(112) This model can also be written in the form:
ILD(θ)=L(θ)−R(θ)=2α sin(θ.sub.0)sin(θ) (35)
Here again it is possible to define a value ILD.sub.max which corresponds to:
ILD.sub.max=2α sin(θ.sub.0) (36)
Here again, it is assumed that the model defined in equation (35) applies not only to the case of a total (or global) ILD but also to the sub-band based ILD; in this case the parameter ILD.sub.max (or a proportional version) will be dependent on the sub-band in the form ILD[b].sub.max.
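The ORTF model of equations (32) to (34) can be checked numerically: by the identity cos(a−b)−cos(a+b)=2 sin(a)sin(b), the difference L(θ)−R(θ) reduces to a sine law in θ scaled by 2α sin(θ.sub.0). A short Python check (the value of α is chosen arbitrarily for the check):

```python
import math

ALPHA = 1.0                  # cardioid gain factor (arbitrary for this check)
THETA0 = math.radians(55.0)  # ORTF half-angle, 55 degrees as in the text

def left(theta):             # equation (33)
    return ALPHA * (1.0 + math.cos(theta - THETA0))

def right(theta):            # equation (34)
    return ALPHA * (1.0 + math.cos(theta + THETA0))

def ild(theta):              # equation (32): difference of the two cardioids
    return left(theta) - right(theta)
```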
(113) Thus, with reference to
(114) This model is, for example, the model defined hereinabove in equation (30), or one of the other models described in this document.
(115) The angle parameter θ already defined at 411 can be reused at the decoder to retrieve the global ILD or the sub-band based ILD such as defined by equation (30), (31) or (35); this in fact makes it possible to “pool” the coding of the ITD and of the ILD. In the case where the value ILD.sub.max is not fixed, the latter is determined at 423 and coded.
(116) In a particular embodiment, a module 421 for estimating an interchannel intensity difference cue is implemented on the basis, on the one hand, of the angle parameter obtained by the block 411 in order to code the time shift cue (ITD) and, on the other hand, of the representation model of equation (30), (31) or (35). Optionally, the module 422 calculates a residual of the cue ILD, that is to say the difference between the real interchannel intensity difference (ILD) cue extracted at 314 and the interchannel intensity difference (ILD) cue estimated at 421 on the basis of the ILD model.
(117) This residual can be coded at 318 for example by a conventional scalar quantization scheme. However, in contradistinction to the coding of a direct ILD, the quantization table may be for example limited to a dynamic range of +/−12 dB with a step size of 3 dB.
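A minimal sketch of such a scalar quantizer, assuming a symmetric mid-tread characteristic (the exact quantization table is not specified in the text):

```python
def quantize_residual(res_db, step=3.0, max_db=12.0):
    """Uniform scalar quantizer for the ILD residual: +/-12 dB range, 3 dB step.
    Returns (index to code, reconstructed value in dB)."""
    n = int(max_db / step)            # 4 -> quantizer indices in [-4, 4]
    level = round(res_db / step)      # nearest reconstruction level
    level = max(-n, min(n, level))    # clamp to the +/-12 dB dynamic range
    return level, level * step
```

A residual of 4.4 dB is thus coded as index 1 and reconstructed as 3 dB, while any residual below −12 dB saturates at the lowest level.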
(118) This ILD residual makes it possible to improve the quality of decoding of the cue ILD in the case where the ILD model is too specific and applies only to the signal to be coded in the current frame; it is recalled that a classification may optionally be used at the coder to avoid such cases; however, in the general case it may be useful to code an ILD residual.
(119) Thus, the coding of these parameters, as well as that of the angle of the ITD model, makes it possible to retrieve at the decoder the interchannel intensity difference (ILD) cue of the binaural audio signal with good quality.
(120) In the same way as for the ITD, the spatialization cue (global or sub-band based) will therefore be retrieved on decoding by applying the same representation model and by decoding if relevant the residual parameter and reference ILD parameter. The bitrate necessary for coding these parameters is lower than if the cue ILD itself were coded, in particular when the ILD residual does not have to be transmitted and when use is made of the parameter or parameters ILD.sub.max predefined in the ILD model or models. Thus, the coding of this spatialization cue (ILD) consumes little bitrate.
(121) This ILD model using only a global ILD value is however very simplistic since in general the ILD is defined on several sub-bands.
(122) In the coder described previously, B sub-bands according to a ⅓ octave slicing or according to the ERB scale were defined. To make it possible to represent more than one parameter of total (or global) ILD the representation model of the ILD is therefore extended to several sub-bands. This extension applies to the invention described in FIG. 4a, however the associated description is given hereinafter in the context of
(123) We consider the variant embodiment described in
(124) (m.sub.opt,t.sub.opt)=argmin.sub.m,t dist(ILD,M.sub.ILD(m,t)) (37)
for m=0, . . . , N.sub.M−1 and t=0, . . . , N.sub.θ(m)−1,
(125) where N.sub.M is the number of models in the ILD models table, N.sub.θ(m) is the number of azimuth angles considered for the m-th model, M.sub.ILD(m, t) corresponds to a given value of the cue ILD and dist(.,.) is a criterion of distance between ILD vectors. However, in a variant embodiment, this search could be simplified by using the angle cue already obtained in the block 432 for the ITD model. It will be noted that the values t=0, . . . , N.sub.θ(m)−1 for the ILD model do not necessarily correspond to the same set of values as for the ITD model; however, it is advantageous to harmonize these sets so as to have coherence between the representation models for the ILD and the ITD.
(126) The following may for example be taken as possible distance criteria:
dist(X,Y)=|Σ.sub.b=0.sup.B−1X[b]−Σ.sub.b=0.sup.B−1Y[b]|.sup.q (38)
where q=1 or 2.
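Equation (38) compares the summed (global) levels of two ILD vectors; a direct transcription in Python:

```python
def dist(x, y, q=2):
    """Distance criterion of equation (38): |sum_b x[b] - sum_b y[b]| ** q."""
    return abs(sum(x) - sum(y)) ** q
```

Note that this criterion compares only global levels: two vectors with the same sum are at distance zero even if their sub-band profiles differ.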
(127) An exemplary ILD model is illustrated in
(128) In a variant of the invention the representation model of the ILD could be generalized so as not to reduce solely to the horizontal plane but also to include the elevation. In this case, the search for two angles becomes:
(129) (m.sub.opt,t.sub.opt,p.sub.opt)=argmin.sub.m,t,p dist(ILD,M.sub.ILD(m,t,p)) (39)
for m=0, . . . , N.sub.M−1, t=0, . . . , N.sub.θ(m)−1 and p=0, . . . , N.sub.φ(m)−1,
with N.sub.φ(m) the number of elevation angles considered for the m-th model and p.sub.opt representing the elevation angle to be coded.
(130) In a variant, an exemplary model M.sub.ILD (m, t, p) can be obtained on the basis of a suite of HRTFs in the following manner. Given the HRTF filters for θ and φ, it is possible to: calculate the ILDs between left and right channels per sub-band; optionally normalize the ILDs; store the ILDs; and determine the value of ILD.sub.max in each sub-band so as to adjust an expansion factor for the ILDs.
The multidimensional table M.sub.ILD (m, t, p) can be seen as a directivity model referred to the domain of the ILD.
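The construction steps listed above can be sketched as follows; the per-direction HRTF levels below are synthetic placeholders (real measured HRTFs would be used in practice), and the function name is illustrative:

```python
def build_ild_table(hrtf_db, n_bands):
    """Build one ILD directivity table from per-direction sub-band levels.
    hrtf_db maps (t, p) -> (left_levels_db, right_levels_db); returns the
    table of per-band ILDs and the per-band ILD_max expansion factors."""
    table = {}
    ild_max = [0.0] * n_bands
    for direction, (l_db, r_db) in hrtf_db.items():
        ilds = [l - r for l, r in zip(l_db, r_db)]   # per-band ILD in dB
        table[direction] = ilds
        for b, v in enumerate(ilds):
            ild_max[b] = max(ild_max[b], abs(v))     # expansion factor per band
    return table, ild_max
```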
(131) An index of the selected law m.sub.opt is then coded and transmitted to the decoder at 318.
(132) In the same way as for
(133) Hitherto separate models have been considered for the ITD and the ILD, even if it was noted that the determination of the angle may be “pooled”. For example, the azimuth may be determined by using the ITD model and this same angle is used directly for the ILD model. Another variant embodiment calling upon a (joint) “integrated model” is now considered. This variant is described in
(134) In this variant, rather than having separate models for the ITD and the ILD (M.sub.ITD (m, t, p) and M.sub.ILD (m, t, p)) it will be possible to define a joint model in the block 450: M.sub.ITD,ILD (m, t, p) whose inputs comprise candidate values of ITD and of ILD; thus, for various discrete values representing θ and φ “vectors” (ITD, ILD) are defined. In this case, the distance measurement used for the search must combine the distance on the ITD and the distance on the ILD, however it is still possible to perform a separate search.
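A combined distance for the joint model can be sketched as a weighted sum of the ITD error and the ILD error; the weights below are assumptions, since the text does not fix how the two distances are combined:

```python
def joint_search(itd, ild, models, w_itd=1.0, w_ild=1.0):
    """Search the joint table models[m][t][p] of (ITD, ILD) candidate pairs,
    minimizing a weighted combination of the two errors."""
    best = None
    for m, table in enumerate(models):
        for t, row in enumerate(table):
            for p, (itd_cand, ild_cand) in enumerate(row):
                d = w_itd * abs(itd - itd_cand) + w_ild * abs(ild - ild_cand)
                if best is None or d < best[0]:
                    best = (d, m, t, p)
    return best[1], best[2], best[3]
```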
(135) Thus, an index of the selected law m.sub.opt, of the azimuth angle t.sub.opt and of the elevation angle p.sub.opt that are determined at 453 is coded at 331 and transmitted to the decoder. Just as for
(136) A variant of the coder illustrated in
(137) With reference to
(138) This decoder comprises a demultiplexer 701 from which the coded mono signal is extracted so as to be decoded at 702 by a mono EVS decoder (according to the specifications 3GPP TS 26.442 or TS 26.443) in this example. The part of the binary train corresponding to the mono EVS coder is decoded according to the bitrate used at the coder. It is assumed here, to simplify the description, that there is no loss of frames and no binary errors in the binary train; however, known techniques for correcting loss of frames can quite obviously be implemented in the decoder.
(139) The decoded mono signal corresponds to {circumflex over (M)}(n) in the absence of channel errors. An analysis by short-term discrete Fourier transform with the same windowing as at the coder is carried out on {circumflex over (M)}(n) (blocks 703 and 704) to obtain the spectrum {circumflex over (M)}[k]. It is considered here that a decorrelation in the frequency domain (block 720) is also applied. This decorrelation could also have been applied in the time domain.
(140) The details of implementation of the block 708 for the synthesis of the stereo signal are not presented here since they lie outside the scope of the invention, but the conventional synthesis techniques known from the prior art could be used.
(141) In the synthesis block 708, it is for example possible to reconstruct a signal with two channels with the following processing on the mono signal decoded and transformed into frequencies:
{circumflex over (L)}[k]=c.sub.1{circumflex over (M)}[k] (40)
{circumflex over (R)}[k]=c.sub.2{circumflex over (M)}[k]e.sup.−j2πkiTD/NFFT (41)
where c=10.sup.ILD[b]/10 (with b the index of the sub-band containing the spectral line of index k),
(142)
ITD is the ITD decoded for the spectral line k (if a single ITD is coded, this value is identical for the various spectral lines of index k) and NFFT is the length of the FFT and of the inverse FFT (blocks 704, 709, 712).
It is also possible to take into account the parameter ICC decoded at 718 to recreate a non-localized sound ambience (background noise) to improve the quality.
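Equations (40) and (41) amount to applying a gain and a linear-phase delay per spectral line. In the following per-bin sketch, the gains c.sub.1 and c.sub.2 derived from c are passed in directly (their derivation is not reproduced here), and the ITD is expressed in samples:

```python
import cmath
import math

def synthesize_bin(m_hat, k, c1, c2, itd, nfft):
    """Per-bin stereo synthesis of equations (40)-(41): L = c1*M and
    R = c2*M*exp(-j*2*pi*k*ITD/NFFT), with ITD in samples."""
    l_hat = c1 * m_hat
    r_hat = c2 * m_hat * cmath.exp(-2j * math.pi * k * itd / nfft)
    return l_hat, r_hat
```

For example, with ITD = 1 sample and NFFT = 512, the bin k = 256 is phase-shifted by π on the right channel.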
(143) The spectra {circumflex over (L)}[k] and {circumflex over (R)}[k] are thus calculated and thereafter converted into the time domain by inverse FFT, windowing, addition and overlap (blocks 709 to 714) to obtain the synthesized channels {circumflex over (L)}(n) and {circumflex over (R)}(n).
(144) The parameters which have been coded to obtain the spatialization cues are decoded at 705, 715 and 718.
(145) At 718, the cues ICC.sup.q[b] are decoded, if indeed they have been coded.
(146) At 705, it is the angle parameter θ which is decoded, optionally with a value ITD.sub.max. On the basis of this parameter, the module 706 for obtaining a representation model of an interchannel time shift cue is implemented to obtain this model. Just as for the coder, this model can be defined by equation (15) defined hereinabove. Thus, on the basis of this model and of the decoded angle parameter, it is possible for the module 707 to determine the interchannel time shift (ITD) cue in respect of the multichannel signal.
(147) If at the coder an angle per frequency or per frequency band is coded, then these various angles per frequency or per frequency band are decoded to define the cues ITD per frequency or per frequency band.
(148) In the same way, in the case where parameters making it possible to code the interchannel intensity difference (ILD) cue are coded, they are decoded by the module for decoding these parameters at 715, at the decoder.
(149) Thus, the residual parameter (Resid. ILD) and reference ILD parameter (ILD.sub.max) are decoded at 715.
(150) On the basis of these parameters, the module 716 for obtaining a representation model of an interchannel intensity difference cue is implemented to obtain this model. Just as for the coder, this model can be defined by equation (30) defined hereinabove.
(151) Thus, on the basis of this model, of the ILD residual parameters (that is to say the difference between the cue in respect of real interchannel intensity difference (ILD) and the interchannel intensity difference (ILD) cue estimated with the model), of the reference ILD parameter (ILD.sub.max) and of the angle parameter decoded at 705 for the cue ITD, it is possible for the module 717 to determine the interchannel intensity difference (ILD) cue of the multichannel signal.
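For the sine-law case of equation (31), the decoder-side reconstruction of module 717 can be sketched as follows (a hypothetical helper: the decoded residual is simply added back to the model value in each band):

```python
import math

def decode_ild(theta_deg, ild_max, residual):
    """Decoded sub-band ILD: model value ILD_max[b]*sin(theta) of equation (31)
    plus the decoded ILD residual for each band."""
    s = math.sin(math.radians(theta_deg))
    return [m * s + r for m, r in zip(ild_max, residual)]
```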
(152) If at the coder the ILD coding parameters were itemized by frequency band, then these various frequency band based parameters are decoded to define the cues ILD per frequency or frequency bands.
(153) It will be noted that the decoder of
(154) In a variant of the invention the decoder of
(155) The coder presented with reference to
(156) The coders and decoders such as described with reference to
(157)
(158) In the case of a coder, the memory block can advantageously comprise a computer program comprising code instructions for the implementation of the steps of the coding method in the sense of the invention, when these instructions are executed by the processor PROC, and in particular the steps of extracting a plurality of spatialization cues in respect of the multichannel signal, of obtaining at least one representation model of the spatialization cues extracted, of determining at least one angle parameter of a model obtained and of coding the at least one angle parameter determined so as to code the spatialization cues extracted during the coding of spatialization cues.
(159) In the case of a decoder, the memory block can advantageously comprise a computer program comprising code instructions for the implementation of the steps of the decoding method in the sense of the invention, when these instructions are executed by the processor PROC, and in particular the steps of receiving and decoding at least one coded angle parameter, of obtaining at least one representation model of spatialization cues and of determining a plurality of spatialization cues in respect of the multichannel signal on the basis of the at least one model obtained and of the at least one decoded angle parameter.
(160) The memory MEM can store the representation model or models of various spatialization cues which are used in the coding and decoding methods according to the invention.
(161) Typically, the descriptions of
(162) Such an item of equipment in the guise of coder comprises an input module able to receive a multichannel signal for example a binaural signal comprising the channels R and L for right and left, either through a communication network, or by reading a content stored on a storage medium. This multimedia equipment item can also comprise means for capturing such a binaural signal.
(163) The device in the guise of coder comprises an output module able to transmit a mono signal M arising from a channels reduction processing and, at the minimum, an angle parameter θ making it possible to apply a representation model of a spatialization cue so as to retrieve this spatialization cue. If relevant, other parameters such as the ILD residual, the reference ILD or the reference ITD (ILD.sub.max or ITD.sub.max) parameters are also transmitted via the output module.
(164) Such an item of equipment in the guise of decoder comprises an input module able to receive a mono signal M arising from a channels reduction processing and, at the minimum, an angle parameter θ making it possible to apply a representation model of the spatialization cue so as to retrieve this spatialization cue. If relevant, to retrieve the spatialization cue, other parameters such as the ILD residual, the reference ILD or the reference ITD (ILD.sub.max or ITD.sub.max) parameters are also received via the input module E.
(165) The device in the guise of decoder comprises an output module able to transmit a multichannel signal for example a binaural signal comprising the channels R and L for right and left.
(166) Although the present disclosure has been described with reference to one or more examples, workers skilled in the art will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.