Apparatus, Method, or Computer Program for Processing an Encoded Audio Scene using a Parameter Conversion

20230238006 · 2023-07-27

    Abstract

    An apparatus for processing an encoded audio scene representing a sound field related to a virtual listener position, the encoded audio scene including information on a transport signal and a first set of parameters related to the virtual listener position includes a parameter converter for converting the first set of parameters into a second set of parameters related to a channel representation including two or more channels for a reproduction at predefined spatial positions for the two or more channels, and an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.

    Claims

    1. Apparatus for processing an encoded audio scene representing a sound field related to a virtual listener position, the encoded audio scene comprising information on a transport signal and a first set of parameters related to the virtual listener position, the apparatus comprising: a parameter converter for converting the first set of parameters into a second set of parameters related to a channel representation comprising two or more channels for a reproduction at predefined spatial positions for the two or more channels; and an output interface for generating a processed audio scene using the second set of parameters and the information on the transport signal.

    2. Apparatus of claim 1, wherein the output interface is configured for upmixing the transport signal using the second set of parameters into an upmix signal comprising the two or more channels.

    3. Apparatus of claim 1, wherein the output interface is configured to generate the processed audio scene by combining the transport signal or the information on the transport signal and the second set of parameters to acquire a transcoded audio scene as the processed audio scene.

    4. Apparatus of claim 1, wherein the first set of parameters comprises, for each input time frame of a plurality of input time frames and for each input frequency band of a plurality of input frequency bands, at least one DirAC parameter, wherein the parameter converter is configured to calculate the second set of parameters as parametric stereo or multichannel parameters.

    5. Apparatus of claim 4, wherein the at least one DirAC parameter comprises at least one of a direction of arrival parameter, a diffuseness parameter, a direction information parameter related to a sphere with the virtual listening position as an origin of the sphere, and a distance parameter, and wherein the parametric stereo or multichannel parameters comprise at least one of a side gain parameter, a residual prediction gain parameter, an inter-channel level difference parameter, an inter-channel time difference parameter, an inter-channel phase difference parameter and an inter-channel coherence parameter.

    6. Apparatus of claim 1, wherein an input time frame, to which the first set of parameters is related, comprises two or more input time subframes, and wherein an output time frame, to which the second set of parameters is related, is smaller than the input time frame and longer than an input time subframe of the two or more input time subframes, and wherein the parameter converter is configured to calculate a raw parameter of the second set of parameters for each one of the two or more input time subframes being time-subsequent and to combine at least two raw parameters to derive a parameter of the second set of parameters related to the output time frame.

    7. Apparatus of claim 6, wherein the parameter converter is configured to perform, in the combining of the at least two raw parameters, a weighted combination of the at least two raw parameters, wherein weighting factors for the weighted combination are derived based on an amplitude-related measure of the transport signal in the corresponding input time subframe.

    8. Apparatus of claim 7, wherein the parameter converter is configured to use, as the amplitude-related measure, an energy or a power, and wherein a weighting factor for an input subframe is greater in case of a higher energy or power of the transport signal in the corresponding input time subframe compared to a weighting factor for an input subframe comprising a lower energy or power of the transport signal in the corresponding input time subframe.

    9. Apparatus of claim 1, wherein the parameter converter is configured to calculate at least one raw parameter for each output time frame using at least one parameter of the first set of parameters for the input time frame, wherein the parameter converter is configured to calculate a smoothing factor for each raw parameter in accordance with a smoothing rule, and wherein the parameter converter is configured to apply a corresponding smoothing factor to the corresponding raw parameter to derive the parameter of the second set of parameters for the output time frame.

    10. Apparatus of claim 9, wherein the parameter converter is configured to calculate a long-term average over an amplitude-related measure of a first time portion of the transport signal, and to calculate a short-term average over an amplitude-related measure of a second time portion of the transport signal, wherein the second time portion is shorter than the first time portion, and to calculate a smoothing factor based on a ratio between the long-term average and the short-term average.

    11. Apparatus of claim 9, wherein the parameter converter is configured to calculate a smoothing factor for a band using a compression function, the compression function being different for different frequency bands, and wherein a compression strength of the compression function is stronger for a lower frequency band than for a higher frequency band.

    12. Apparatus of claim 9, wherein the parameter converter is configured to calculate the smoothing factor using different maximum bounds for different bands, wherein a maximum bound for a lower band is higher than a maximum bound for a higher band.

    13. Apparatus of claim 9, wherein the parameter converter is configured to apply, as the smoothing rule, a recursive smoothing rule over time-subsequent output time frames, so that a smoothed parameter for a current output time frame is calculated by combining the parameter for a preceding output time frame weighted by a first weighting value and a raw parameter for the current output time frame weighted by a second weighting value, wherein the first weighting value and the second weighting value are derived from the smoothing factor for the current time frame.

    14. Apparatus of claim 1, wherein the output interface is configured to perform a conversion of a time portion of the transport signal corresponding to an output time frame into a spectral representation, wherein the portion is shorter than an input time frame, in which the parameters of the first set of parameters are organized, to perform an upmixing operation of the spectral representation using the second set of parameters to acquire the two or more channels in the spectral representation; and to convert each channel of the two or more channels in the spectral representation into a time representation.

    15. Apparatus of claim 14, wherein the output interface is configured to convert into a complex discrete Fourier transform domain, to perform the upmixing operation in the complex discrete Fourier transform domain, and to perform the conversion from the complex discrete Fourier transform domain into a real-valued time domain representation.

    16. Apparatus of claim 14, wherein the output interface is configured to perform the upmixing operation based on the following equations: {tilde over (L)}.sub.t,k=({tilde over (M)}.sub.t,k(1+{tilde over (g)}.sub.t,b)+{tilde over (r)}.sub.t,b g.sub.norm {tilde over (ρ)}.sub.t,k)/2 and {tilde over (R)}.sub.t,k=({tilde over (M)}.sub.t,k(1−{tilde over (g)}.sub.t,b)−{tilde over (r)}.sub.t,b g.sub.norm {tilde over (ρ)}.sub.t,k)/2, wherein {tilde over (M)}.sub.t,k is the transport signal for a frame t and a frequency bin k, wherein {tilde over (L)}.sub.t,k is a first channel of the two or more channels in the spectral representation for the frame t and the frequency bin k, wherein {tilde over (R)}.sub.t,k is a second channel of the two or more channels in the spectral representation for the frame t and the frequency bin k, wherein {tilde over (g)}.sub.t,b is a side gain parameter for the frame t and a subband b, wherein {tilde over (r)}.sub.t,b is a residual prediction gain parameter for the frame t and the subband b, wherein g.sub.norm is an optional energy adjusting factor, and wherein {tilde over (ρ)}.sub.t,k is a raw residual signal for the frame t and the frequency bin k.

    17. Apparatus of claim 1, wherein the first set of parameters is a direction of arrival parameter for an input frequency band, and wherein the second set of parameters comprises a side gain parameter per input frequency band, and wherein the parameter converter is configured to calculate the side gain parameter for the output frequency band using the following equation: sidegain[b]=sin(azimuth[b]*π/180)·cos(elevation[b]*π/180), wherein b is the output frequency band, wherein sidegain is the side gain parameter, wherein azimuth is an azimuth component of the direction of arrival parameter, and wherein elevation is an elevation component of the direction of arrival parameter.

    18. Apparatus of claim 17, wherein the first set of parameters additionally comprises a diffuseness parameter for the input frequency band, and wherein the parameter converter is configured to calculate the side gain parameter for the output frequency band using the following equation: sidegain[b]=(1−diff(b))·sin(azimuth[b]*π/180)·cos(elevation[b]*π/180), wherein diff(b) is the diffuseness parameter for the input frequency band b.

    19. Apparatus of claim 1, wherein the first set of parameters comprises a diffuseness parameter per input frequency band, and wherein the second set of parameters comprises a residual prediction gain parameter for an output frequency band, and wherein the parameter converter is configured to use, as the residual prediction gain parameter for the output parameter band, the diffuseness parameter from the input parameter band, when the input parameter band and the output parameter band are equal to each other, or to derive, from the diffuseness parameter for the input parameter band, a diffuseness parameter for the output parameter band, and to then use the diffuseness parameter for the output parameter band as the residual prediction gain parameter for the output parameter band.

    20. Apparatus of claim 14, wherein the information on the transport signal comprises a core encoded audio signal, and wherein the apparatus further comprises: a core decoder for core decoding the core encoded audio signal to acquire the transport signal.

    21. Apparatus of claim 20, wherein the core decoder is an ACELP decoder, or wherein the output interface is configured to convert the transport signal being a low band signal into a spectral representation, to upmix the spectral representation and to convert an upmixed spectral representation into a time domain to acquire a low band representation of the two or more channels, wherein the apparatus comprises a bandwidth extension processor for generating a high band signal from the transport signal in the time domain, wherein the apparatus comprises a multichannel filler to apply a multichannel filling operation to the transport signal in the time domain, wherein the apparatus comprises an upmixer to apply a broad band panning in the time domain to the high band signal using at least one parameter from the second set of parameters, and wherein the apparatus comprises a signal combiner for combining, in the time domain, a result of the broad band panning, a result of the multichannel filling, and the low band representation of the two or more channels to acquire a full band multichannel signal in the time domain as the channel representation.

    22. Apparatus of claim 1, wherein the output interface is configured to generate a raw representation of the two or more channels using the second set of parameters and the transport signal, wherein the apparatus further comprises a multichannel enhancer for generating an enhancement representation of the two or more channels using the transport signal, and wherein the apparatus further comprises a signal combiner for combining the raw representation of the two or more channels and the enhancement representation of the two or more channels to acquire the processed audio scene.

    23. Apparatus of claim 22, wherein the multichannel enhancer is configured for generating an enhancement representation of the two or more channels using the enhancement transport signal and the second set of parameters, or wherein the multichannel enhancer comprises a transport signal enhancer for generating an enhancement transport signal, and an upmixer for upmixing the enhancement transport signal.

    24. Apparatus of claim 23, wherein the transport signal is an encoded transport signal, and wherein the apparatus further comprises: a transport signal core decoder for generating a decoded raw transport signal, wherein the transport signal enhancer is configured for generating the enhancement transport signal using the decoded raw transport signal, and wherein the output interface is configured to generate the raw representation of the two or more channels using the second set of parameters and the decoded raw transport signal.

    25. Apparatus of claim 22, wherein the multichannel enhancer comprises either the upmixer or a multichannel filler or both the upmixer and the multichannel filler for generating the enhancement representation of the two or more channels using the transport signal or the enhancement transport signal and at least one parameter of the second set of parameters.

    26. Apparatus of claim 22, wherein the output interface is configured to generate a raw representation of the two or more channels using an upmix in a second domain, wherein the transport signal enhancer is configured for generating the enhancement transport signal in a first domain being different from the second domain, or wherein the multichannel enhancer is configured for generating the enhancement representation of the two or more channels using the enhancement transport signal in the first domain, and wherein the signal combiner is configured to combine the raw representation of the two or more channels and the enhancement representation of the two or more channels in the first domain.

    27. Apparatus of claim 26, wherein the first domain is a time domain and the second domain is a spectral domain.

    28. Apparatus of claim 22, wherein the transport signal enhancer or the multichannel enhancer is configured to perform at least one operation of a group of operations comprising a bandwidth extension operation, a gap filling operation, a quality enhancement operation, or an interpolation operation.

    29. Apparatus of claim 22, wherein the transport signal enhancer or the multichannel enhancer is configured to operate in parallel with the output interface when generating the raw representation, or wherein the parameter converter is configured to operate in parallel with the transport signal enhancer.

    30. Apparatus of claim 24, wherein the core decoder is configured to feed the decoded raw transport signal in two parallel branches, a first branch of the two parallel branches comprising the output interface, and a second branch of the two parallel branches comprising the transport signal enhancer or the multichannel enhancer or both, and wherein the signal combiner is configured to receive a first input to be combined from the first branch and a second input to be combined from the second branch.

    31. Apparatus of claim 1, wherein the output interface is configured to perform a conversion of a time portion of the transport signal corresponding to an output time frame into a spectral representation, to perform an upmixing operation of the spectral representation using the second set of parameters to acquire the two or more channels in the spectral representation; and to convert each channel of the two or more channels in the spectral representation into a time representation to acquire a raw time representation of the two or more channels, and wherein the signal combiner is configured for combining the raw time representation of the two or more channels and the enhancement time representation of the two or more channels.

    32. Method of processing an encoded audio scene representing a sound field related to a virtual listener position, the encoded audio scene comprising information on a transport signal and a first set of parameters related to the virtual listener position, the method comprising: converting the first set of parameters into a second set of parameters related to a channel representation comprising two or more channels for a reproduction at predefined spatial positions for the two or more channels; and generating a processed audio scene using the second set of parameters and the information on the transport signal.

    33. A non-transitory digital storage medium having a computer program stored thereon to perform the method of processing an encoded audio scene representing a sound field related to a virtual listener position, the encoded audio scene comprising information on a transport signal and a first set of parameters related to the virtual listener position, the method comprising: converting the first set of parameters into a second set of parameters related to a channel representation comprising two or more channels for a reproduction at predefined spatial positions for the two or more channels; and generating a processed audio scene using the second set of parameters and the information on the transport signal, when said computer program is run by a computer.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0032] Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:

    [0033] FIG. 1 is a block diagram of an apparatus for processing an encoded audio scene using a parameter converter according to an embodiment;

    [0034] FIG. 2a illustrates a schematic diagram for the first set of parameters and for the second set of parameters according to an embodiment;

    [0035] FIG. 2b is an embodiment of a parameter converter or parameter processor for calculating a raw parameter;

    [0036] FIG. 2c is an embodiment of a parameter converter or parameter processor for combining raw parameters;

    [0037] FIG. 3 is an embodiment of a parameter converter or parameter processor for performing a weighted combination of raw parameters;

    [0038] FIG. 4 is an embodiment of a parameter converter for generating side gain parameters and residual prediction parameters;

    [0039] FIG. 5a is an embodiment of a parameter converter or parameter processor for calculating a smoothing factor for a raw parameter;

    [0040] FIG. 5b is an embodiment of a parameter converter or parameter processor for calculating a smoothing factor for a frequency band;

    [0041] FIG. 6 illustrates a schematic diagram of averaging a transport signal for the smoothing factor according to an embodiment;

    [0042] FIG. 7 is an embodiment of a parameter converter or parameter processor for calculating a recursive smoothing;

    [0043] FIG. 8 is an embodiment of an apparatus for decoding a transport signal;

    [0044] FIG. 9 is an embodiment of an apparatus for processing an encoded audio scene using a bandwidth extension;

    [0045] FIG. 10 is an embodiment of an apparatus for obtaining a processed audio scene;

    [0046] FIG. 11 is a block diagram of an embodiment of a multichannel enhancer;

    [0047] FIG. 12 is a block diagram of a conventional DirAC stereo upmix process;

    [0048] FIG. 13 is an embodiment of an apparatus for obtaining a processed audio scene using a parameter mapping; and

    [0049] FIG. 14 is an embodiment of an apparatus for obtaining a processed audio scene using a bandwidth extension.

    DETAILED DESCRIPTION OF THE INVENTION

    [0050] FIG. 1 illustrates an apparatus for processing an encoded audio scene 130, for example, representing a sound field related to a virtual listener position. The encoded audio scene 130 comprises information on a transport signal 122, for example, a bitstream, and a first set of parameters 112, for example, a plurality of DirAC parameters also included in the bitstream, which are related to the virtual listener position. The first set of parameters 112 is input into a parameter converter 110 or parameter processor, which converts the first set of parameters 112 into a second set of parameters 114 related to a channel representation comprising two or more channels. The apparatus is capable of supporting different audio formats. Audio signals can be acoustic in nature, picked up by microphones, or electrical in nature, intended to be transmitted to loudspeakers. Supported audio formats include a mono signal, a lowband signal, a highband signal, a multi-channel signal, first-order and higher-order Ambisonics components, and audio objects. The audio scene can also be described by combining different input formats.

    [0051] The parameter converter 110 is configured to calculate the second set of parameters 114 as parametric stereo or multichannel parameters, for example, for two or more channels, which are input into an output interface 120. The output interface 120 is configured to generate the processed audio scene 124 by combining the transport signal 122 or the information on the transport signal and the second set of parameters 114 to obtain a transcoded audio scene as the processed audio scene 124. Another embodiment comprises upmixing the transport signal 122 using the second set of parameters 114 into an upmix signal comprising the two or more channels. In other words, the parameter converter 110 maps the first set of parameters 112, for example, used for the DirAC rendering, to the second set of parameters 114. The second set of parameters may comprise a side gain parameter, used for panning, and a residual prediction parameter that, when applied in the upmixing, results in an improved spatial image of the audio scene. For example, the parameters of the first set of parameters 112 may comprise at least one of a direction of arrival parameter, a diffuseness parameter, a direction information parameter related to a sphere with the virtual listening position as an origin of the sphere, and a distance parameter. For example, the parameters of the second set of parameters 114 may comprise at least one of a side gain parameter, a residual prediction gain parameter, an inter-channel level difference parameter, an inter-channel time difference parameter, an inter-channel phase difference parameter and an inter-channel coherence parameter.

    [0052] FIG. 2a illustrates a schematic diagram for the first set of parameters 112 and for the second set of parameters 114 according to an embodiment. In particular, the parameter resolution for both sets of parameters is pictured. Each abscissa in FIG. 2a represents time and each ordinate represents frequency. As shown in FIG. 2a, the input time frame 210, to which the first set of parameters 112 is related, comprises two or more input time subframes 212 and 213. Directly below, an output time frame 220, to which the second set of parameters 114 is related, is shown in a corresponding diagram. The output time frame 220 is smaller than the input time frame 210 and longer than an input time subframe 212 or 213. It is noted that an input time subframe 212 or 213 and an output time frame 220 may comprise a plurality of frequencies as a frequency band. The input frequency band 230 may comprise the same frequencies as the output frequency band 240. According to embodiments, however, the input frequency bands 230 and the output frequency bands 240 may not be connected or correlated to each other.

    [0053] It is to be noted that a side gain and a residual gain, which are described in FIG. 4, typically are calculated for frames so that, for each input frame 210, a single side gain and a single residual gain is calculated. In other embodiments, however, not only a single side gain and a single residual gain is calculated for each frame, but a group of side gains and a group of residual gains are calculated for an input time frame 210 where each side gain and each residual gain are related to a certain input time subframe 212 or 213, for example, of a frequency band. Thus, in embodiments, the parameter converter 110 calculates, for each frame of the first set of parameters 112 and the second set of parameters 114, a group of side gains and a group of residual gains, where the number of the side and the residual gains for an input time frame 210 is typically equal to the number of input frequency bands 230.

    [0054] FIG. 2b shows an embodiment of a parameter converter 110 for calculating 250 a raw parameter 252 of the second set of parameters 114. The parameter converter 110 calculates the raw parameter 252 for each one of the two or more time-subsequent input time subframes 212 and 213. For example, the calculation 250 derives, for each input frequency band 230 and time instant (input time subframe 212, 213), a predominant direction-of-arrival (DOA) azimuth θ, a predominant direction-of-arrival elevation φ, and a diffuseness parameter ψ.

    [0055] For directional components like X, Y and Z, it is given that the first order spherical harmonics at the center position can be derived from the omni-directional component w(b,n) and the DirAC parameters using the following equations:

    W(b,n)=√{square root over ((1−ψ(b,n)))}w(b,n)

    X(b,n)=√{square root over ((1−ψ(b,n)))}w(b,n)(cos(θ(b,n))cos(φ(b,n)))

    Y(b,n)=√{square root over ((1−ψ(b,n)))}w(b,n)(sin(θ(b,n))cos(φ(b,n)))

    Z(b,n)=√{square root over ((1−ψ(b,n)))}w(b,n)(sin(φ(b,n)))
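    The four equations above can be sketched in code as follows. This is an illustrative sketch only; the function name, the degree-based interface, and the per-tile scalar arguments are assumptions of this sketch, not anything defined in this disclosure.

```python
import math

def foa_from_dirac(w, azimuth_deg, elevation_deg, diffuseness):
    """Derive the first-order components W, X, Y, Z for one
    time/frequency tile (b, n) from the omni-directional signal
    w(b, n) and the DirAC parameters, per the equations above."""
    theta = math.radians(azimuth_deg)      # theta: DOA azimuth
    phi = math.radians(elevation_deg)      # phi: DOA elevation
    direct = math.sqrt(1.0 - diffuseness)  # sqrt(1 - psi) scaling
    W = direct * w
    X = direct * w * math.cos(theta) * math.cos(phi)
    Y = direct * w * math.sin(theta) * math.cos(phi)
    Z = direct * w * math.sin(phi)
    return W, X, Y, Z
```

    For a fully direct (ψ=0) source at azimuth +90° and elevation 0°, the sketch yields W = w, Y = w, and X = Z ≈ 0, matching the two-cardioid decoding discussed below the equations.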

    [0056] The W channel represents a non-directional mono component of the signal, corresponding to the output of an omnidirectional microphone. The X, Y and Z channels are the directional components in three dimensions. From these four FOA channels, a stereo signal (stereo version, stereo output) can be obtained by a decoding involving the W channel and the Y channel, using the parameter converter 110, which leads to two cardioids pointing to the azimuth angles +90 degrees and −90 degrees. Accordingly, the following equation shows the relation of the stereo signal, left and right, in which adding the Y channel to the W channel gives the left channel L and subtracting the Y channel from the W channel gives the right channel R.

    [00001] L=W+Y, R=W−Y

    [0057] In other words, this decoding corresponds to a first order beamforming pointed at the two directions, which can be expressed using the following equation:


    L/R=W+cos(θ)cos(φ)X+sin(θ)cos(φ)Y+sin(φ)Z

    [0058] Consequently, there is a direct link between stereo output (the left channel and the right channel) and the first set of parameters 112, namely the DirAC parameters.

    [0059] The second set of parameters 114, namely the DFT stereo parameters, on the other hand, relies on a model of a left channel L and a right channel R based on a mid signal M and a side signal S, which can be expressed using the following equation:

    [00002] L=M+S, R=M−S

    [0060] Here, M is transmitted as a mono signal (channel), which corresponds to the omni-directional channel W in case of the Scene Based Audio (SBA) mode. Furthermore, in DFT stereo, S is predicted from M using a side gain parameter, which is explained in the following.
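    The direct link between the two models can be illustrated with a minimal sketch: when M corresponds to W and S to Y, the DirAC-side decoding and the DFT stereo mid/side upmix produce the same left/right pair. The function names are hypothetical and each scalar stands for one time/frequency tile.

```python
def dirac_stereo_decode(W, Y):
    """L = W + Y, R = W - Y: the two cardioids at +/-90 degrees."""
    return W + Y, W - Y

def dft_stereo_upmix(M, S):
    """L = M + S, R = M - S: the mid/side model of DFT stereo."""
    return M + S, M - S
```

    With M = W and S = Y the two functions coincide, which is exactly the correspondence the text exploits when mapping DirAC parameters to DFT stereo parameters.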

    [0061] FIG. 4 shows an embodiment of the parameter converter 110 for generating the side gain parameters 455 and the residual prediction parameters 456, for example, using a calculation process 450. The parameter converter 110 advantageously performs the calculations 250 and 450 for calculating the raw parameter 252, for example, the side gain parameter 455 for the output frequency band 241 using the following equation:

    [00003] sidegain[b]=sin(azimuth[b]*π/180)·cos(elevation[b]*π/180)

    [0062] According to the equation, b is the output frequency band, sidegain is the side gain parameter 455, azimuth is an azimuth component of the direction of arrival parameter, and elevation is an elevation component of the direction of arrival parameter. As shown in FIG. 4, the first set of parameters 112 comprises the direction of arrival (DOA) parameters 456, for an input frequency band 231 as described before, and the second set of parameters 114 comprises a side gain parameter 455 per input frequency band 230. However, if the first set of parameters 112 additionally comprises the diffuseness parameter ψ453 for the input frequency band 231, the parameter converter 110 is configured to calculate 250 the side gain parameter 455 for the output frequency band 241 using the following equation:

    [00004] sidegain[b]=(1−diff(b))·sin(azimuth[b]*π/180)·cos(elevation[b]*π/180)

    [0063] According to the equation, diff(b) is the diffuseness parameter ψ 453 for the input frequency band b 230. It is to be noted that the directional parameters of the first set of parameters 112 may have different value ranges: for example, the azimuth parameter 451 lies in [0;360], the elevation parameter 452 in [0;180], and the resulting side gain parameter 455 in [−1;1]. As shown in FIG. 2c, the parameter converter 110 combines, using the combiner 260, at least two raw parameters 252, so that a parameter of the second set of parameters 114, related to the output time frame 220, is derived.
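    The side gain mapping of the two equations above can be sketched as follows. The helper name is hypothetical; the sketch assumes that an absent diffuseness parameter behaves like diff(b)=0, which reduces the expression to the first equation.

```python
import math

def side_gain(azimuth_deg, elevation_deg, diffuseness=0.0):
    """sidegain[b] = (1 - diff(b)) * sin(azimuth) * cos(elevation),
    with the DirAC angles given in degrees as in the equations."""
    return ((1.0 - diffuseness)
            * math.sin(math.radians(azimuth_deg))
            * math.cos(math.radians(elevation_deg)))
```

    A fully direct source at azimuth +90° maps to a side gain of 1 (hard left), azimuth −90° to −1 (hard right), and full diffuseness (diff(b)=1) collapses the side gain to 0, consistent with the stated [−1;1] range.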

    [0064] According to an embodiment, the second set of parameters 114 further comprises the residual prediction parameter 456 for an output frequency band 241 of the output frequency bands 240, which is shown in FIG. 4. The parameter converter 110 may use, as the residual prediction parameter 456 for the output frequency band 241, the diffuseness parameter ψ 453 from the input frequency band 231, as illustrated by the residual selector 410: if the input frequency band 231 and the output frequency band 241 are equal to each other, the parameter converter 110 uses the diffuseness parameter ψ 453 from the input frequency band 231 directly. Otherwise, a diffuseness parameter for the output frequency band 241 is derived from the diffuseness parameter ψ 453 for the input frequency band 231, and this derived diffuseness parameter is then used as the residual prediction parameter 456 for the output frequency band 241.

    [0065] In the DFT stereo processing, the residual of the prediction, using the residual selector 410, is assumed to be incoherent and is modelled by its energy and decorrelated residual signals going to the left channel L and the right channel R. The residual of the prediction of the side signal S with the mid signal M as the mono signal (channel) can be expressed as:


    R(b)=S(b)−sidegain[b]M(b)

    [0066] Its energy is modelled in the DFT stereo processing using a residual prediction gain using the following equation:


    ∥R(b)∥.sup.2=residual prediction[b]·∥M(b)∥.sup.2

    [0067] Since the residual gain represents the inter-channel incoherence component of the stereo signal and the spatial width, it is directly linked to the diffuse part modeled by the DirAC.

    [0068] Therefore, the residual energy can be rewritten as function of the DirAC diffuseness parameter:


    ∥R(b)∥² = ψ(b)·∥M(b)∥²

    [0069] FIG. 3 shows a parameter converter 110 for performing a weighted combination 310 of raw parameters 252 according to an embodiment. At least two raw parameters 252 are input to the weighted combination 310, and the weighting factors 324 for the weighted combination 310 are derived from an amplitude-related measure 320 of the transport signal 122 in the corresponding input time subframe 212. Furthermore, the parameter converter 110 is configured to use, as the amplitude-related measure 320, an energy or a power value of the transport signal 122 in the corresponding input time subframe 212 or 213. For example, the amplitude-related measure 320 measures the energy or power of the transport signal 122 in the corresponding input time subframe 212, so that the weighting factor 324 is greater for an input subframe 212 with a higher energy or power of the transport signal 122 than for an input subframe 212 with a lower energy or power.

    [0070] As described before, the directional parameters, azimuth parameters and elevation parameters, have corresponding value ranges. However, the directional parameters of the first set of parameters 112 usually have a higher time resolution than the second set of parameters 114, which means that two or more azimuth and elevation values have to be combined into one side gain value. According to an embodiment, the computation is based on energy-dependent weights, which can be obtained as an output of the amplitude-related measure 320. For example, for all K input time subframes 212 and 213, the energy nrg of each subframe is calculated using the following equation:

    [00005] nrg[k] = Σ_{i=0}^{N} x[k][i]²,  k = 0, …, K−1

    [0071] where x is the time domain input signal, N the number of samples in each subframe and i the sample index. Furthermore, for each output time frame l 230 weights 324 can then be computed for the contribution of each input time subframe k 212, 213 inside each output time frame l as:

    [00006] weight[k] = nrg[k] / Σ_{k∈l} nrg[k]

    [0072] The side gain parameters 455 are then ultimately computed using the following equation:

    [00007] side[l][b] = Σ_{k∈l} weight[k] · √(1 − diff(b)) · sin(azi[k][b]·π/180) · cos(ele[k][b]·π/180)
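    The energy-weighted mapping of equations [00005] to [00007] can be sketched as follows. This is a minimal illustration rather than IVAS code; the helper name `map_side_gains`, the list-based data layout, and the √(1 − diff(b)) weighting (reconstructed from the radical in the equation above) are assumptions.

```python
import math

def map_side_gains(x, azi, ele, diff, K, N):
    """Hypothetical helper: map DirAC subframe angles to one DFT Stereo
    side gain per band for the output frame, weighted by subframe energy."""
    # nrg[k]: energy of each of the K input time subframes (N samples each)
    nrg = [sum(s * s for s in x[k * N:(k + 1) * N]) for k in range(K)]
    total = sum(nrg)
    # weight[k]: energy-based contribution of subframe k inside the output frame
    weight = [e / total for e in nrg]
    side = []
    for b in range(len(diff)):
        g = 0.0
        for k in range(K):
            # directional gain of subframe k, scaled down by diffuseness
            g += (weight[k] * math.sqrt(1.0 - diff[b])
                  * math.sin(math.radians(azi[k][b]))
                  * math.cos(math.radians(ele[k][b])))
        side.append(g)
    return side
```

A fully directional source (diff = 0) placed at azimuth 90 degrees and elevation 0 yields a side gain of 1, i.e. full panning to one channel.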

    [0073] Due to the similarity between the parameters, the diffuseness parameter 453 per band is directly mapped to the residual prediction parameter 456 of all subframes in the same band. The similarity can be expressed with the following equation:


    residual prediction[l][b]=diffuseness[b]

    [0074] FIG. 5a shows an embodiment of a parameter converter 110 or parameter processor for calculating a smoothing factor 512 for each raw parameter 252 in accordance with a smoothing rule 514. The parameter converter 110 is configured to apply the smoothing factor 512 (the smoothing factor corresponding to one raw parameter) to the raw parameter 252 (the raw parameter corresponding to that smoothing factor) to derive the parameter of the second set of parameters 114 for the output time frame 220, namely the parameter of the output time frame.

    [0075] FIG. 5b shows an embodiment of a parameter converter 110 or parameter processor for calculating a smoothing factor 522 for a frequency band using a compression function 540. The compression function 540 may be different for different frequency bands, so that a compression strength of the compression function 540 is stronger for a lower frequency band than for a higher frequency band. The parameter converter 110 is further configured to calculate the smoothing factor 512, 522 using a maximum bound selection 550. In other words, the parameter converter 110 may obtain the smoothing factor 512, 522 by using different maximum bounds for different frequency bands, so that a maximum bound for a lower frequency band is higher than a maximum bound for a higher frequency band.

    [0076] Both the compression function 540 and the maximum bound selection 550 are input to the calculation 520, which obtains the smoothing factor 522 for the frequency band. The parameter converter 110 is, for example, not limited to two separate calculations 510 and 520 for calculating the smoothing factors 512 and 522; it may instead be configured with a single calculation block that outputs both smoothing factors 512 and 522. In other words, the smoothing factor is calculated bandwise (for each raw parameter 252) from the change of energies in the current frequency band. For example, by using the parameter smoothing process, the side gain parameter 455 and the residual prediction parameter 456 are smoothed over time to avoid strong fluctuations in the gains. As this requires a relatively strong smoothing most of the time but a quicker response whenever the audio scene 130 changes suddenly, the smoothing factor 512, 522 determining the strength of the smoothing is calculated adaptively.

    [0077] Therefore, bandwise energies nrg are computed in all subframes k using the following equation:

    [00008] nrg[k][b] = Σ_{i∈b} (x_real[i]² + x_imag[i]²)

    [0078] where x are the frequency bins of the DFT-transformed signal (real and imaginary) and i is the bin index over all bins in the current frequency band b.

    [0079] To capture the change of energies over time two averages, one short-term average 331 and one long-term average 332, are calculated using the amplitude-related measure 320 of the transport signal 122 as shown in FIG. 3.

    [0080] FIG. 6 illustrates a schematic diagram of the amplitude-related measure 320 averaging a transport signal 122 for the smoothing factor 512 according to an embodiment. The x-axis represents time and the y-axis represents the energy of the transport signal 122. The transport signal 122 is illustrated as a schematic part of a sine function. As depicted in FIG. 6, the second time portion 631 is shorter than the first time portion 632. The averages 331 and 332 of the energies are calculated for each band b according to the following equation:

    [00009] avg_short[b] = ( Σ_{k=0}^{N_short} nrg[k][b] ) / N_short  and  avg_long[b] = ( Σ_{k=0}^{N_long} nrg[k][b] ) / N_long

    [0081] where N_short and N_long are the numbers of previous time subframes k over which the individual averages are calculated. For example, in this particular embodiment, N_short is set to 3 and N_long is set to 10.

    [0082] Furthermore, the parameter converter or parameter processor 110 is configured to calculate, using the calculation 510, the smoothing factor 512, 522 based on a ratio between the long-term average 332 and the short-term average 331. In other words, the quotient of the two averages 331 and 332 is calculated, so that a higher short-term average, indicating a recent increase in energy, leads to a reduction of smoothing. The following equation shows the relation between the smoothing factor 512 and the two averages 331 and 332.

    [00010] fac_smooth[b] = min(1, avg_long[b] / avg_short[b])

    [0083] Since higher long-term averages 332, indicating decreasing energy, should not lead to reduced smoothing, the smoothing factor 512 is limited to a maximum of 1 (for now). As a result, the above formula limits the minimum of fac_smooth[b] to

    [00011] N_short / N_long

    (in this embodiment 0.3). It is, however, necessary for the factor to be close to 0 in extreme cases, which is why the value is transformed from the range

    [00012] [N_short/N_long; 1]

    to the range [0; 1] using the following equation:

    [00013] fac_smooth[b] = (fac_smooth[b] − N_short/N_long) · (N_long / (N_long − N_short))

    [0084] In an embodiment, the reduction of smoothing obtained so far would be excessive, so the factor is compressed with a root function towards the value of 1. As stability is particularly important in the lowest bands, the 4th root is used in the frequency bands b=0 and b=1. The equation for the lowest bands is:

    [00014] fac_smooth[b] = (fac_smooth[b])^(1/4)

    [0085] The equation for all other bands b >1 performs a compression by a square root function, using the following equation.


    fac_smooth[b] = √(fac_smooth[b])

    [0086] By applying a square root function for all other bands b>1, extreme cases in which the energy may increase exponentially become smaller, while a less rapid increase in energy does not decrease the smoothing as strongly.

    [0087] Furthermore, the maximum smoothing is set depending on the frequency band using the following equation. It is noted that a factor of 1 would simply repeat the previous value with no contribution of the current gain.


    fac_smooth[b] = min(fac_smooth[b], bounds[b])

    [0088] Here, bounds[b] represents, for a given implementation with 5 bands, the values set according to the following table:

    TABLE-US-00001
    b (frequency band)    bounds[b] (set value per band)
    0                     0.98
    1                     0.97
    2                     0.95
    3                     0.9
    4                     0.9

    [0089] The smoothing factor is calculated for each DFT Stereo subframe k in the current frame.
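    The adaptive smoothing-factor computation of paragraphs [0080] to [0088] can be summarized in a minimal sketch. The function name, the newest-first layout of the energy history, and the use of the bounds table above as a default argument are assumptions for illustration.

```python
import math

def smoothing_factor(nrg_hist, b, n_short=3, n_long=10,
                     bounds=(0.98, 0.97, 0.95, 0.9, 0.9)):
    """Hypothetical helper: adaptive smoothing factor for band b.
    nrg_hist[k][b] holds bandwise energies of previous subframes, newest first."""
    avg_short = sum(nrg_hist[k][b] for k in range(n_short)) / n_short
    avg_long = sum(nrg_hist[k][b] for k in range(n_long)) / n_long
    # recent energy increase -> ratio < 1 -> less smoothing; capped at 1
    fac = min(1.0, avg_long / avg_short)
    # rescale from [n_short/n_long, 1] to [0, 1]
    fac = (fac - n_short / n_long) * (n_long / (n_long - n_short))
    fac = max(0.0, fac)
    # compress towards 1: 4th root in the two lowest bands, square root above
    fac = fac ** 0.25 if b <= 1 else math.sqrt(fac)
    # band-dependent upper bound so the previous value is never merely repeated
    return min(fac, bounds[b])
```

With a stationary signal (constant bandwise energy), the factor rescales to 1 and is then clipped to the bound of the band, e.g. 0.98 for b = 0.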

    [0090] FIG. 7 shows the parameter converter 110 according to an embodiment using a recursive smoothing 710, in which both the side gain parameter g_side[k][b] 455 and the residual prediction gain parameter g_pred[k][b] 456 are recursively smoothed according to the following equations:


    g_side[k][b] = fac_smooth[k][b]·g_side[k−1][b] + (1 − fac_smooth[k][b])·g_side[k][b]


    And


    g_pred[k][b] = fac_smooth[k][b]·g_pred[k−1][b] + (1 − fac_smooth[k][b])·g_pred[k][b]

    [0091] A recursive smoothing 710 over time-subsequent output time frames is calculated for a current output time frame by combining the parameter for a preceding output time frame 532, weighted by a first weighting value, with a raw parameter 252 for the current output time frame 220, weighted by a second weighting value. In other words, the smoothed parameter for a current output time frame is calculated so that the first weighting value and the second weighting value are derived from the smoothing factor for the current time frame.
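    The recursive smoothing described here amounts to a one-pole filter per band, blending the previous smoothed gain with the current raw gain. A minimal sketch (the helper name is hypothetical) could look like this:

```python
def smooth_gains(raw, prev, fac):
    """Hypothetical helper: one step of recursive gain smoothing per band,
    new = fac * previous + (1 - fac) * raw."""
    return [f * p + (1.0 - f) * r for r, p, f in zip(raw, prev, fac)]
```

A factor near 1 holds the previous value almost unchanged; a factor near 0 follows the raw gain immediately, matching the adaptive behaviour described above.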

    [0092] These mapped and smoothed parameters (g_side, g_pred) are input to the DFT Stereo processing, namely the output interface 120, where the stereo signal (L/R) is generated from a downmix DMX, the residual prediction signal PRED and the mapped parameters g_side and g_pred. For example, the residual prediction signal PRED is obtained from the downmix by either Enhanced Stereo Filling, using allpass filters, or by stereo filling, using a delay. The upmix is described by the following equations:


    L[k][b][i] = (1 + g_side[k][b])·DMX[k][b][i] + g_pred[k][b]·g_norm·PRED[k][b][i]


    And


    R[k][b][i] = (1 − g_side[k][b])·DMX[k][b][i] − g_pred[k][b]·g_norm·PRED[k][b][i]

    [0093] The upmix is processed for each subframe k in all bins i of the frequency bands b described in the previously shown table. Additionally, each side gain g_side is weighted by an energy normalization factor g_norm computed from the energies of the downmix DMX and the residual prediction signal PRED, weighted by g_pred[k][b] as named above.
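    The per-bin upmix of paragraph [0092] can be sketched as follows; the helper name is hypothetical, the inputs stand for complex spectral bins, and g_norm defaults to 1 for simplicity:

```python
def upmix_bin(dmx, pred, g_side, g_pred, g_norm=1.0):
    """Hypothetical helper: DFT Stereo upmix of one spectral bin into
    left and right from downmix and residual prediction values."""
    left = (1.0 + g_side) * dmx + g_pred * g_norm * pred
    right = (1.0 - g_side) * dmx - g_pred * g_norm * pred
    return left, right
```

The side gain pans the downmix between the channels, while the residual prediction term is added with opposite signs, widening the stereo image.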

    [0094] The mapped and smoothed side gain 755 and the mapped and smoothed residual gain 756 are input to the output interface 120 for obtaining a smoothed audio scene. Processing an encoded audio scene using a smoothing parameter as described above therefore results in an improved compromise between the achievable audio quality and the implementation effort.

    [0095] FIG. 8 shows an apparatus for decoding a transport signal 122 according to an embodiment. The core encoded audio signal 816 is input to the transport signal core decoder 810, which core-decodes the audio signal 816 to obtain the decoded raw transport signal 812. For example, the transport signal 122 may be received in encoded form, and the transport signal core decoder 810 outputs the decoded transport signal 812. The decoded transport signal 812 is input into the output interface 120, which is configured to generate the raw representation 818 of the two or more channels, for example of the left channel and the right channel, using the set of parameters 814 comprising the second set of parameters 114. For example, the transport signal core decoder 810 for decoding the core encoded audio signal to obtain the transport signal 122 is an ACELP decoder. Furthermore, the core decoder 810 is configured to feed the decoded raw transport signal 812 into two parallel branches, a first branch of the two parallel branches comprising the output interface 120, and a second branch of the two parallel branches comprising the transport signal enhancer 820 or the multichannel enhancer 990 or both. The signal combiner 940 is configured to receive a first input to be combined from the first branch and a second input to be combined from the second branch.

    [0096] As shown in FIG. 9, the apparatus for processing an encoded audio scene 130 may use a bandwidth extension processor 910. The lowband transport signal 901 is input to the output interface 120 for obtaining the two-channel lowband representation 972 of the transport signal. It is to be noted that the output interface 120 processes the transport signal 901 in the frequency domain 955, for example during the upmixing process 960, and converts the resulting two-channel signal into the time domain 966. This is done by the converter 970, which converts the upmixed spectral representation 962, residing in the frequency domain 955, into the time domain to obtain the two-channel lowband representation 972 of the transport signal.

    [0097] As shown in FIG. 8, the single channel lowband transport signal 901 is input to the converter 950, performing, for example, a conversion of a time portion of the transport signal 901 corresponding to an output time frame 220 into a spectral representation 952 of the transport signal 901, i.e., from the time-domain 966 into the frequency domain 955. For example, as described in FIG. 2 the portion (of the output time frame) is shorter than the input time frame 210, in which the parameters 252 of the first set of parameters 112 are organized.

    [0098] The spectral representation 952 is input to the upmixer 960 to upmix the spectral representation 952, using, for example, the second set of parameters 114, to obtain the upmixed spectral representation 962, which is (still) processed in the frequency domain 955. As indicated before, the upmixed spectral representation 962 is input into the converter 970 for converting the upmixed spectral representation 962, namely each channel of the two or more channels, from the frequency domain 955 into the time domain 966 (time representation) to obtain the lowband representation 972. Thus, the two or more channels in the upmixed spectral representation 962 are calculated. Advantageously, the output interface 120 is configured to operate in a complex discrete Fourier transform domain, wherein the upmixing operation is performed in the complex discrete Fourier transform domain. The conversion from the complex discrete Fourier transform domain back into the real-valued time domain representation is done using the converter 970. In other words, the output interface 120 is configured to generate a raw representation of the two or more channels using the upmixer 960 in a second domain, namely the frequency domain 955, wherein the first domain represents the time domain 966.

    [0099] In an embodiment, the upmixing operation of the upmixer 960 is based on the following equation:

    [00015] L̃_{t,k} = ( M̃_{t,k}·(1 + g̃_{t,b}) + r̃_{t,b}·g_norm·ρ̃_{t,k} ) / 2  and  R̃_{t,k} = ( M̃_{t,k}·(1 − g̃_{t,b}) − r̃_{t,b}·g_norm·ρ̃_{t,k} ) / 2,

    [0100] wherein M̃_{t,k} is the transport signal 901 for the frame t and the frequency bin k, wherein g̃_{t,b} is the side gain parameter 455 for the frame t and the subband b, wherein r̃_{t,b} is the residual prediction gain parameter 456 for the frame t and the subband b, wherein g_norm is an energy adjusting factor that may or may not be present, and wherein ρ̃_{t,k} is a raw residual signal for the frame t and the frequency bin k.

    [0101] The transport signal 902, 122 is processed in the time domain 966, in contrast to the lowband transport signal 901. The transport signal 902 is input to the bandwidth extension processor (BWE processor) 910 for generating a highband signal 912, and is input to the multichannel filler 930 for applying a multichannel filling operation. The highband signal 912 is input to the upmixer 920 for upmixing the highband signal 912 into an upmixed highband signal 922 using the second set of parameters 114, namely the parameter of the output time frame 262, 532. For example, the upmixer 920 may apply a broad band panning process in the time domain 966 to the highband signal 912 using at least one parameter from the second set of parameters 114.

    [0102] The lowband representation 972, the upmixed highband signal 922 and the multichannel filled transport signal 932 are input to the signal combiner 940 for combining, in the time domain 966, a result of the broad band panning 922, a result of the stereo filling 932 and the lowband representation of the two or more channels 972. This combining results in a full-band multichannel signal 942 in the time domain 966 as the channel representation. As outlined earlier, the converter 970 converts each channel of the two or more channels in the spectral representation 962 into a time representation to obtain a raw time representation of the two or more channels 972. Hence, the signal combiner 940 combines the raw time representation of the two or more channels and the enhancement time representation of the two or more channels.

    [0103] In an embodiment, only the lowband (LB) transport signal 901 is input into the output interface 120 (the DFT Stereo processing), while the highband (HB) transport signal 912 is upmixed separately in the time domain (using the upmixer 920). Such a process is implemented via a panning operation using the BWE processor 910 plus a time-domain stereo filling, using the multichannel filler 930, for generating an ambience contribution. The panning process comprises a broad-band panning that is based on the mapped side gains, for example a mapped and smoothed side gain 755 per frame. Here, there is only a single gain per frame covering the complete highband frequency region, which simplifies the calculation of the left and right highband channels from the downmix channel based on the following equations:


    HB_left[k][i] = HB_dmx[k][i] + g_side,hb[k]·HB_dmx[k][i]


    and


    HB_right[k][i] = HB_dmx[k][i] − g_side,hb[k]·HB_dmx[k][i]

    [0104] for every sample i in each subframe k.
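    The broad-band highband panning of the two equations above can be sketched with a single side gain applied to every sample of the subframe; the helper name is hypothetical:

```python
def pan_highband(hb_dmx, g_side_hb):
    """Hypothetical helper: broad-band panning of the time-domain highband
    downmix with one side gain for the whole subframe."""
    left = [(1.0 + g_side_hb) * s for s in hb_dmx]
    right = [(1.0 - g_side_hb) * s for s in hb_dmx]
    return left, right
```

Because a single gain covers the complete highband region, no per-band processing is needed, which keeps the time-domain path delay-less.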

    [0105] The highband stereo filling signal PRED_hb, namely the multichannel filled transport signal 932, is obtained by delaying HB_dmx, weighting it by g_pred,hb, and additionally applying an energy normalization factor g_norm, as described in the following equations:


    PRED_hb,left[i] = g_pred,hb·g_norm·HB_dmx[i−d]


    and


    PRED_hb,right[i] = −g_pred,hb·g_norm·HB_dmx[i−d]

    [0106] for every sample i in the current time frame (done on a full time frame 210, not on time subframes 212 and 213). d is the number of samples by which the highband downmix is delayed for generating the filling signal 932 obtained by the multichannel filler 930. The filling signal can also be generated in ways other than delaying, such as a more advanced decorrelation processing, or the use of a noise signal or any other signal derived from the transport signal in a way different from a delay.
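    The delay-based stereo filling of paragraph [0105] can be sketched as follows; the helper name is hypothetical and, as an assumption for this sketch, samples before the frame start are taken as zero:

```python
def highband_filling(hb_dmx, g_pred_hb, g_norm, d):
    """Hypothetical helper: highband stereo filling by delaying the downmix
    by d samples and weighting with the residual gain and normalization."""
    # delayed downmix; samples before the frame start assumed to be zero
    delayed = [0.0] * d + list(hb_dmx[:len(hb_dmx) - d])
    pred_left = [g_pred_hb * g_norm * s for s in delayed]
    pred_right = [-g_pred_hb * g_norm * s for s in delayed]
    return pred_left, pred_right
```

The opposite signs on the two channels decorrelate left and right, producing the ambience contribution that is later mixed back by the signal combiner.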

    [0107] Both the panned stereo signal 972, 922 and the generated stereo filling signal 932 are combined (mixed back) with the core signal after the DFT synthesis, using the signal combiner 940.

    [0108] This described process of the ACELP highband is also in contrast to the higher-delay DirAC processing, where the ACELP core and the TCX frames are artificially delayed so as to be aligned with the ACELP highband. There, the CLDFB analysis is performed on the complete signal, which means that the upmix of the ACELP highband is also done in the CLDFB domain (frequency domain).

    [0109] FIG. 10 shows an embodiment of an apparatus for obtaining a processed audio scene 124. The transport signal 122 is input to the output interface 120 for generating the raw representation of two or more channels 972, using the second set of parameters 114 and the multichannel enhancer 990 for generating an enhancement representation 992 of the two or more channels. For example, the multichannel enhancer 990 is configured to perform at least one operation of a group of operations comprising a bandwidth extension operation, a gap filling operation, a quality enhancement operation, or an interpolation operation. Both, the raw representation of the two or more channels 972 and the enhancement representation 992 of the two or more channels are input to the signal combiner 940 to obtain the processed audio scene 124.

    [0110] FIG. 11 shows a block diagram of an embodiment of a multichannel enhancer 990 for generating the enhancement representation 992 of the two or more channels, comprising a transport signal enhancer 820, an upmixer 830 and the multichannel filler 930. The transport signal 122 and/or the decoded raw transport signal 812 is input to the transport signal enhancer 820, which generates an enhancement transport signal 822 that is input to the upmixer 830 and the multichannel filler 930. For example, the transport signal enhancer 820 is configured to perform at least one operation of a group of operations comprising a bandwidth extension operation, a gap filling operation, a quality enhancement operation, or an interpolation operation.

    [0111] As seen in FIG. 9, the multichannel filler 930 generates the multichannel filled transport signal 932 using the transport signal 902 and at least one parameter 532. In other words, the multichannel enhancer 990 is configured for generating an enhancement representation 992 of the two or more channels using the enhancement transport signal 822 and the second set of parameters 114, or using the enhancement transport signal 822 and the upmixed enhancement transport signal 832. For example, the multichannel enhancer 990 comprises the upmixer 830 or the multichannel filler 930 or both for generating the enhancement representation 992 of the two or more channels using the transport signal 122 or the enhancement transport signal 822 and at least one parameter 532 of the second set of parameters. In an embodiment, the transport signal enhancer 820 or the multichannel enhancer 990 is configured to operate in parallel with the output interface 120 when generating the raw representation 972, or the parameter converter 110 is configured to operate in parallel with the transport signal enhancer 820.

    [0112] In FIG. 13, the bitstream 1312 that is transmitted from an encoder to a decoder may be the same as in the DirAC-based upmixing scheme shown in FIG. 12. The single transport channel derived from a DirAC-based spatial downmixing process is input into the core decoder 1310 and decoded with the core decoder, for example an EVS or IVAS mono decoder, and is transmitted along with the corresponding DirAC side parameters 1313.

    [0113] In this DFT Stereo approach for processing an audio scene with no extra delay, the initial decoding of the transport channel in the mono core decoder (IVAS mono decoder) also remains unchanged. Instead of going through a CLDFB filterbank 1220 from FIG. 12, the decoded downmix signal 1314 is input to the DFT analysis 1320 for transforming the decoded mono signal 1314 into the DFT domain (frequency domain), such as by using windows with very short overlap. Thus, the DFT analysis 1320 does not cause any additional delay with respect to the targeted system delay of 32 ms, only using the remaining headroom between the overall delay and what is already caused by the MDCT analysis/synthesis of the core decoder.

    [0114] The DirAC side parameters 1313 or the first set of parameters 112 are input to the parameter mapping 1360, which can, for example, comprise the parameter converter 110 or parameter processor for obtaining the DFT Stereo side parameters, namely the second set of parameters 114. The frequency domain signal 1322 and the DFT side parameters 1362 are input to the DFT Stereo decoder 1330 for generating a stereo upmix signal 1332, for example by using the upmixer 960 described in FIG. 9. The two channels of the stereo upmix 1332 are input to the DFT synthesis for converting the stereo upmix 1332 from the frequency domain into the time domain, for example using the converter 970 described in FIG. 9, resulting in the output signal 1342, which may represent the processed audio scene 124.

    [0115] FIG. 14 shows an embodiment for processing an encoded audio scene using a bandwidth extension 1470. The bitstream 1412 is input into an ACELP core or lowband decoder 1410, instead of an IVAS mono decoder as described in FIG. 13, for generating a decoded lowband signal 1414. The decoded lowband signal 1414 is input to the DFT analysis 1420 for converting the signal 1414 into a frequency domain signal 1422, for example the spectral representation 952 of the transport signal 901 from FIG. 9. The DFT Stereo decoder 1430 may represent the upmixer 960, which generates an LB stereo upmix 1432 using the decoded lowband signal 1422 in the frequency domain and the DFT Stereo side parameters 1462 from the parameter mapping 1460. The generated LB stereo upmix 1432 is input to the DFT synthesis block 1440 for performing a conversion into the time domain, using for example the converter 970 of FIG. 9. The lowband representation 972 of the transport signal 122, namely the output signal 1442 of the DFT synthesis stage 1440, is input to the signal combiner 940, which combines the upmixed highband stereo signal 922, the multichannel filled highband transport signal 932 and the lowband representation 972 of the transport signal, resulting in the fullband multichannel signal 942.

    [0116] The decoded LB signal 1414 and the parameters 1415 for the BWE 1470 are input into the ACELP BWE decoder 910 for generating a decoded highband signal 912. The mapped side gains 1462, for example the mapped and smoothed side gains 755 for the lowband spectral region, are input to the DFT Stereo block 1430, and the mapped and smoothed single side gain for the whole highband is forwarded to the highband upmix block 920 and the stereo filling block 930. The HB upmix block 920 generates the upmixed highband signal 922 by upmixing the decoded HB signal 912 using the highband side gain 1472, such as the parameter 532 of the output time frame 262 from the second set of parameters 114. The stereo filling block 930 fills the decoded highband transport signal 912, 902 using the parameters 532, 456 of the output time frame 262 from the second set of parameters 114 and generates the highband filled transport signal 932.

    [0117] To conclude, embodiments according to the invention create a concept for processing an encoded audio scene using a parameter conversion, and/or using a bandwidth extension and/or using a parameter smoothing that result in an improved compromise between an overall delay, achievable audio quality, and implementation effort.

    [0118] Subsequently, further embodiments of the inventive aspects, and particularly of a combination of the inventive aspects, are illustrated. The proposed solution to achieve a low-delay upmix uses a parametric stereo approach, e.g. the approach described in [4] using Short-Time Fourier Transform (STFT) filterbanks, rather than the DirAC renderer. This "DFT Stereo" approach describes an upmix of one downmix channel into a stereo output. The advantage of this method is that windows with very short overlaps are used for the DFT analysis at the decoder, which allows staying within the much lower overall delay needed for communications codecs like EVS [3] or the upcoming IVAS codec (32 ms). Also, unlike the DirAC CLDFB, the DFT Stereo processing is not a post-processing step to the core coder but runs in parallel with a part of the core processing, namely the bandwidth extension (BWE) of the Algebraic Code-Excited Linear Prediction (ACELP) speech coder, without exceeding this already given delay. In relation to the 32 ms delay of EVS, the DFT Stereo processing can therefore be called delay-less, as it operates at the same overall coder delay. DirAC, on the other hand, can be seen as a post-processor that causes 5 additional ms of delay due to the CLDFB, extending the overall delay to 37 ms.

    [0119] Generally, a gain in delay is achieved. The low delay comes from a processing step that happens in parallel with the core processing, whereas an exemplary CLDFB version is a post-processing step that performs the needed rendering after the core coding.

    [0120] Unlike DirAC, DFT Stereo makes no use of the artificial delay of 3.25 ms for all components except the ACELP BWE, by only transforming those components into the DFT domain using windows with a very short overlap of 3.125 ms that fit into the available headroom without causing more delay. Thus, only TCX and ACELP without BWE are upmixed in the frequency domain, while the ACELP BWE is upmixed in the time domain by a separate delay-less processing step called Inter-Channel Bandwidth Extension (ICBWE) [5]. In the special stereo output case of the given embodiment, this time-domain BWE processing is slightly altered, which will be described towards the end of the embodiment.

    [0121] The transmitted DirAC parameters cannot be used directly for a DFT Stereo upmix. A mapping of the given DirAC parameters to corresponding DFT Stereo parameters therefore becomes necessary. While DirAC uses azimuth and elevation angles for spatial placement along with a diffuseness parameter, DFT Stereo has a single side gain parameter used for panning and a residual prediction parameter that is closely related to the stereo width and therefore to the diffuseness parameter of DirAC. In terms of parameter resolution, each frame is divided into two subframes and several frequency bands per subframe. Side and residual gains as used in DFT Stereo are described in [6].

    [0122] DirAC parameters are derived from the band-wise analysis of the audio scene, originally in B-format or FOA. For each band b and time instant n, a predominant direction of arrival with azimuth θ(b,n) and elevation φ(b,n) and a diffuseness factor ψ(b,n) are derived. For directional components, the first order spherical harmonics at the center position can be derived from the omni-directional component w(b,n) and the DirAC parameters:


    W(b,n) = √(1 − ψ(b,n))·w(b,n)


    X(b,n) = √(1 − ψ(b,n))·w(b,n)·cos(θ(b,n))·cos(φ(b,n))


    Y(b,n) = √(1 − ψ(b,n))·w(b,n)·sin(θ(b,n))·cos(φ(b,n))


    Z(b,n) = √(1 − ψ(b,n))·w(b,n)·sin(φ(b,n))

    [0123] Moreover, from the FOA channels it is possible to obtain a stereo version by a decoding involving W and Y, which leads to two cardioids pointing to the azimuth angles +90 and −90 degrees:

    [00016] L = W + Y,  R = W − Y

    [0124] This decoding corresponds to a first-order beamforming pointed in the two directions:


    L/R = W + cos(θ)cos(φ)·X + sin(θ)cos(φ)·Y + sin(φ)·Z
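The FOA synthesis from the DirAC parameters and the subsequent W/Y cardioid decode can be sketched as follows. This is an illustrative Python/NumPy sketch; the function names (`foa_from_dirac`, `cardioid_stereo`) and the scalar per-band, per-time-instant signal model are assumptions, not part of the codec:

```python
import numpy as np

def foa_from_dirac(w, azi_deg, ele_deg, diffuseness):
    """Directional FOA components W, X, Y, Z from the omni signal w and
    the DirAC parameters, per the formulas above (one band/time instant)."""
    theta = np.deg2rad(azi_deg)
    phi = np.deg2rad(ele_deg)
    g = np.sqrt(1.0 - diffuseness)          # directional energy fraction
    W = g * w
    X = g * w * np.cos(theta) * np.cos(phi)
    Y = g * w * np.sin(theta) * np.cos(phi)
    Z = g * w * np.sin(phi)
    return W, X, Y, Z

def cardioid_stereo(W, Y):
    """Stereo decode with two cardioids pointing to +90/-90 degrees azimuth."""
    return W + Y, W - Y
```

For a dry source (diffuseness 0) at azimuth +90 degrees and elevation 0, the decode places the signal entirely in the left channel (L = 2w, R = 0), matching the cardioid interpretation above.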

    [0125] Consequently, there is a direct link between the stereo output and the DirAC parameters. The DFT Stereo parameters, on the other hand, rely on a model of the L and R channels based on a mid signal M and a side signal S.

    [00017] L = M + S,  R = M − S

    [0126] M is transmitted as the mono channel and corresponds to the omni-directional channel W in case of the SBA mode. In DFT Stereo, S is predicted from M using a side gain, which can then be expressed using the DirAC parameters as follows:

    [00018] sidegain[b] = sin(azimuth[b]·π/180) · cos(elevation[b]·π/180)

    [0127] In DFT Stereo, the residual of the prediction is expected to be incoherent and is modelled by its energy and decorrelated residual signals going to the left and right channels. The residual of the prediction of S from M can be expressed as:


    R(b) = S(b) − sidegain[b] · M(b)

    [0128] Its energy is modelled in DFT Stereo using a prediction gain as follows:


    ‖R(b)‖² = respred[b] · ‖M(b)‖²

    [0129] Since the residual gain represents the inter-channel incoherence component of the stereo signal and the spatial width, it is directly linked to the diffuse part modeled by DirAC. Therefore, the residual energy can be rewritten as function of the DirAC diffuseness parameter:


    ‖R(b)‖² = ψ(b) · ‖M(b)‖²

    [0130] As the band configuration normally used in DFT Stereo is not the same as for DirAC, it has to be adapted to cover the same frequency ranges as the DirAC bands. For those bands, the directional angles of DirAC can then be mapped to a side gain parameter of DFT Stereo via

    [00019] sidegain[b] = √(1 − diff(b)) · sin(azimuth[b]·π/180) · cos(elevation[b]·π/180)

    [0131] where b is the current band, and the parameter ranges are [0; 360] for the azimuth, [0; 180] for the elevation and [−1; 1] for the resulting side gain value. However, the directional parameters of DirAC usually have a higher time resolution than DFT Stereo, which means that two or more azimuth and elevation values have to be used for the computation of one side gain value. One way would be to average over the subframes, but in this implementation the computation is based on energy-dependent weights. For all K DirAC subframes, the energy of the subframe is calculated as

    [00020] nrg[k] = Σ_{i=0..N−1} x[k][i]²,  k = 0, …, K−1

    [0132] where x is the time-domain input signal, N the number of samples in each subframe, and i the sample index. For each DFT Stereo subframe l, weights can then be computed for the contribution of each DirAC subframe k inside l as

    [00021] weight[k] = nrg[k] / Σ_{k′ in l} nrg[k′]

    [0133] The side gains are then ultimately computed as

    [00022] side[l][b] = Σ_{k in l} weight[k] · √(1 − diff(b)) · sin(azi[k][b]·π/180) · cos(ele[k][b]·π/180)
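The energy-weighted side-gain mapping above can be sketched in Python/NumPy as follows. The function name `map_side_gains`, the array shapes, and the default of two DirAC subframes per DFT Stereo subframe are assumptions for illustration:

```python
import numpy as np

def map_side_gains(azi, ele, diff, x, dirac_per_dft=2):
    """Energy-weighted mapping of DirAC angles/diffuseness to DFT Stereo
    side gains. azi, ele: [K][B] angles in degrees for K DirAC subframes
    and B bands; diff: [B] diffuseness; x: [K][N] time-domain samples.
    Returns one side gain per DFT Stereo subframe l and band b."""
    K, B = azi.shape
    nrg = np.sum(np.asarray(x, dtype=float) ** 2, axis=1)   # nrg[k]
    side = np.zeros((K // dirac_per_dft, B))
    for l in range(side.shape[0]):
        ks = range(l * dirac_per_dft, (l + 1) * dirac_per_dft)
        total = sum(nrg[k] for k in ks) or 1.0              # avoid div by 0
        for k in ks:
            weight = nrg[k] / total                         # weight[k]
            side[l] += weight * np.sqrt(1.0 - diff) * \
                np.sin(np.deg2rad(azi[k])) * np.cos(np.deg2rad(ele[k]))
    return side
```

With equal subframe energies the weights reduce to a plain average, so a constant direction of azimuth +90 degrees, elevation 0 and zero diffuseness yields a side gain of 1 (full left panning).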

    [0134] Due to the similarity between the parameters, the single diffuseness value per band is directly mapped to the residual prediction parameter of all subframes in the same band:


    respred[l][b]=diffuseness[b]

    [0135] Additionally, the parameters are smoothed over time to avoid strong fluctuations in the gains. As this calls for relatively strong smoothing most of the time but a quicker response whenever the scene changes suddenly, the smoothing factor determining the strength of the smoothing is calculated adaptively. This adaptive smoothing factor is calculated band-wise from the change of energies in the current band. Therefore, band-wise energies have to be computed in all subframes k first:

    [00023] nrg[k][b] = Σ_{i in b} (x_real[i]² + x_imag[i]²)

    [0136] where x are the frequency bins of the DFT-transformed signal (real and imaginary) and i is the bin index over all bins in the current band b.

    [0137] To capture the change of energies over time, two averages, one short-term and one long-term, are then computed for each band b according to

    [00024] avg_short[b] = (Σ_{k=0..N_short−1} nrg[k][b]) / N_short  and  avg_long[b] = (Σ_{k=0..N_long−1} nrg[k][b]) / N_long

    [0138] where N_short and N_long are the numbers of previous subframes k over which the individual averages are calculated. In this particular implementation, N_short is set to 3 and N_long to 10. The smoothing factor is then calculated from the quotient of the averages, so that a higher short-term average, indicating a recent increase in energy, leads to a reduction of the smoothing:

    [00025] fac_smooth[b] = min(1, avg_long[b] / avg_short[b])

    [0139] Higher long-term averages, indicating decreasing energy, do not lead to reduced smoothing, so the smoothing factor is capped at its maximum of 1 for now.

    [0140] The above formula limits the minimum of fac_smooth[b] to N_short/N_long (in this implementation 0.3). It is, however, necessary for the factor to be close to 0 in extreme cases, which is why the value is transformed from the range [N_short/N_long; 1] to the range [0; 1] via

    [00028] fac_smooth[b] = (fac_smooth[b] − N_short/N_long) · N_long/(N_long − N_short)

    [0141] For less extreme cases, however, this transformation reduces the smoothing excessively, so the factor is compressed towards the value 1 with a root function. As stability is particularly important in the lowest bands, the 4th root is used in bands b = 0 and b = 1:

    [00029] fac_smooth[b] = (fac_smooth[b])^(1/4)

    [0142] while all other bands b > 1 are compressed by a square root:

    fac_smooth[b] = √(fac_smooth[b])

    [0143] This way, extreme cases remain close to 0, while a less rapid increase in energy does not decrease the smoothing as strongly.

    [0144] Finally, the maximum smoothing is set depending on the band (a factor of 1 would simply repeat the previous value with no contribution of the current gain):


    fac_smooth[b] = min(fac_smooth[b], bounds[b])

    [0145] where bounds[b] in the given implementation with 5 bands are set according to the following table

    TABLE-US-00002
    b    bounds[b]
    0    0.98
    1    0.97
    2    0.95
    3    0.9
    4    0.9

    [0146] The smoothing factor is calculated for each DFT Stereo subframe k in the current frame.
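The full adaptive smoothing-factor computation (quotient of short- and long-term averages, range transformation, root compression, and band-wise bounds) can be sketched as follows. This is a Python/NumPy sketch; the function name `smoothing_factor`, the array layout, and the most-recent-first ordering of the energy history are assumptions:

```python
import numpy as np

def smoothing_factor(nrg_hist, n_short=3, n_long=10,
                     bounds=(0.98, 0.97, 0.95, 0.9, 0.9)):
    """Band-wise adaptive smoothing factor from previous band energies.
    nrg_hist: [subframes][bands] energies, most recent subframe first."""
    h = np.asarray(nrg_hist, dtype=float)
    avg_short = h[:n_short].mean(axis=0)
    avg_long = h[:n_long].mean(axis=0)
    # higher short-term average (recent energy increase) -> less smoothing
    fac = np.minimum(1.0, avg_long / np.maximum(avg_short, 1e-12))
    # transform from [n_short/n_long; 1] to [0; 1]
    ratio = n_short / n_long
    fac = np.clip((fac - ratio) * n_long / (n_long - n_short), 0.0, 1.0)
    # compress towards 1: 4th root in the two lowest bands, square root above
    fac[:2] = fac[:2] ** 0.25
    fac[2:] = np.sqrt(fac[2:])
    # band-dependent upper bounds on the smoothing
    return np.minimum(fac, np.asarray(bounds)[:h.shape[1]])
```

For a stationary signal (constant band energies), the factor reaches its maximum and is clipped to the band-wise bounds from the table above.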

    [0147] In the last step, both side gain and residual prediction gain are recursively smoothed according to


    g_side[k][b] = fac_smooth[k][b] · g_side[k−1][b] + (1 − fac_smooth[k][b]) · g_side[k][b]

    and

    g_pred[k][b] = fac_smooth[k][b] · g_pred[k−1][b] + (1 − fac_smooth[k][b]) · g_pred[k][b]

    [0148] These mapped and smoothed parameters are now fed to the DFT Stereo processing, where a stereo signal L/R is generated from the downmix DMX, the residual prediction signal PRED (obtained from the downmix either by “Enhanced Stereo Filling” using allpass filters [7] or by regular stereo filling using a delay), and the mapped parameters g_side and g_pred. The upmix is described in general by the following formulas [6]:


    L[k][b][i] = (1 + g_side[k][b]) · DMX[k][b][i] + g_pred[k][b] · g_norm · PRED[k][b][i]

    and

    R[k][b][i] = (1 − g_side[k][b]) · DMX[k][b][i] − g_pred[k][b] · g_norm · PRED[k][b][i]

    [0149] for each subframe k and all bins i in bands b. Additionally, each prediction gain g_pred is weighted by an energy normalization factor g_norm computed from the energies of DMX and PRED.
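A minimal sketch of the band-wise upmix formulas, assuming NumPy arrays of DFT bins per band and treating `g_norm` as a given scalar (the function name `dft_stereo_upmix` is illustrative):

```python
import numpy as np

def dft_stereo_upmix(dmx, pred, g_side, g_pred, g_norm=1.0):
    """Band-wise DFT Stereo upmix: the downmix DMX is panned with the side
    gain, and the residual prediction signal PRED is added/subtracted with
    the prediction gain, per the L/R formulas above."""
    left = (1.0 + g_side) * dmx + g_pred * g_norm * pred
    right = (1.0 - g_side) * dmx - g_pred * g_norm * pred
    return left, right
```

A positive side gain steers energy to the left channel, while the PRED term enters the two channels with opposite signs, widening the image without changing the mid signal.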

    [0150] Finally, the upmixed signal is transformed back to time domain via IDFT to be played back on the given stereo setup.

    [0151] As the “time domain bandwidth extension” (TBE) [8] used in ACELP generates its own delay (exactly 2.3125 ms in the implementation this embodiment is based on), it cannot be transformed to the DFT domain while staying within the 32 ms overall delay (where 3.25 ms are left for the stereo decoder, of which the STFT already uses 3.125 ms). Thus, only the low band (LB) is fed into the DFT Stereo processing indicated by 1450 in FIG. 14, while the high band (HB) has to be upmixed separately in the time domain, as shown in block 920 in FIG. 14. In regular DFT Stereo, this is done via Inter-Channel Bandwidth Extension (ICBWE) [5] for panning plus time-domain stereo filling for ambience. In the given case, the stereo filling in block 930 is computed in the same way as in regular DFT Stereo. The ICBWE processing, however, is skipped entirely due to the missing parameters and replaced by a low-complexity broad-band panning in block 920 based on the mapped side gains 1472. In the given embodiment, there is only a single gain covering the complete HB region, which simplifies the calculation of the left and right HB channels in block 920 from the downmix channel to


    HB_left[k][i] = HB_dmx[k][i] + g_side,hb[k] · HB_dmx[k][i]

    and

    HB_right[k][i] = HB_dmx[k][i] − g_side,hb[k] · HB_dmx[k][i]

    [0152] for every sample i in each subframe k.

    [0153] The HB stereo filling signal PRED_hb is obtained in block 930 by delaying HB_dmx and weighting it by g_pred,hb and an energy normalization factor g_norm as


    PRED_hb,left[i] = g_pred,hb · g_norm · HB_dmx[i − d]

    and

    PRED_hb,right[i] = −g_pred,hb · g_norm · HB_dmx[i − d]

    [0154] for every sample i in the current frame (done on full frame, not on subframes) and where d is the number of samples by which the HB downmix is delayed for the filling signal.
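The time-domain HB panning and stereo filling steps can be sketched together as follows (Python/NumPy; the function name, the default delay `d`, and the single-frame processing are assumptions for illustration):

```python
import numpy as np

def upmix_acelp_hb(hb_dmx, g_side_hb, g_pred_hb, g_norm=1.0, d=4):
    """Delay-less time-domain upmix of the ACELP high band: broad-band
    panning with a single side gain, plus a stereo filling signal obtained
    by delaying the HB downmix by d > 0 samples and weighting it."""
    hb_dmx = np.asarray(hb_dmx, dtype=float)
    left = hb_dmx + g_side_hb * hb_dmx               # panned HB channels
    right = hb_dmx - g_side_hb * hb_dmx
    delayed = np.concatenate([np.zeros(d), hb_dmx[:-d]])
    fill = g_pred_hb * g_norm * delayed              # stereo filling signal
    return left + fill, right - fill
```

As in the frequency-domain upmix, the filling signal enters the left and right HB channels with opposite signs, decorrelating them without a separate decorrelator.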

    [0155] Both the panned stereo signal and the generated stereo filling signal are eventually mixed back to the core signal after the DFT synthesis in combiner 940.

    [0156] This special treatment of the ACELP HB is also in contrast to the higher-delay DirAC processing where the ACELP core and TCX frames are artificially delayed so as to be aligned with the ACELP HB. There, the CLDFB is performed on the complete signal, i.e. the upmix of the ACELP HB is also done in the CLDFB domain.

    Advantages of the Proposed Method

    [0157] Requiring no additional delay allows the IVAS codec to stay within the same overall delay as EVS (32 ms) for this particular case of SBA input to stereo output.

    [0158] The parametric stereo upmix via DFT has a much lower complexity than the spatial DirAC rendering due to an overall simpler, more straightforward processing.

    Further Embodiments

    [0159] 1. Apparatus, method or computer program for encoding or decoding as described before.

    [0160] 2. Apparatus or method for encoding or decoding or related computer program, comprising:
    [0161] a system where the input is encoded with a model based on the spatial audio representation of a sound scene with a first set of parameters and is decoded at the output with a stereo model for 2 output channels or a multichannel model for more than 2 output channels with a second set of parameters; and/or
    [0162] a mapping of spatial parameters to stereo parameters; and/or
    [0163] a conversion from an input representation/parameters based on one frequency domain to an output representation/parameters based on another frequency domain; and/or
    [0164] a conversion of parameters with higher time resolution to lower time resolution; and/or
    [0165] a lower output delay due to the shorter window overlap of the second frequency transform; and/or
    [0166] a mapping of DirAC parameters (directional angles, diffuseness) to DFT Stereo parameters (side gain, residual prediction gain) to output SBA DirAC encoded content as stereo; and/or
    [0167] a conversion from a CLDFB-based input representation/parameters to a DFT-based output representation/parameters; and/or
    [0168] a conversion of parameters with 5 ms resolution to parameters with 10 ms resolution; and/or
    [0169] benefit: lower output delay due to the shorter window overlap of the DFT compared to the CLDFB.

    [0170] It is to be mentioned here that all alternatives or aspects as discussed before and all aspects as defined by independent claims in the following claims can be used individually, i.e., without any other alternative or object than the contemplated alternative, object or independent claim. However, in other embodiments, two or more of the alternatives or the aspects or the independent claims can be combined with each other and, in other embodiments, all aspects, or alternatives and all independent claims can be combined to each other.

    [0171] It is to be outlined that different aspects of the invention relate to a parameter conversion aspect, a smoothing aspect, and a bandwidth extension aspect. These aspects can be implemented separately or independently from each other, or any two of the at least three aspects can be combined, or all three aspects can be combined in an embodiment as described above.

    [0172] An inventively encoded signal can be stored on a digital storage medium or a non-transitory storage medium or can be transmitted on a transmission medium such as a wireless transmission medium or a wired transmission medium such as the Internet.

    [0173] Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus.

    [0174] Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed.

    [0175] Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.

    [0176] Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.

    [0177] Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier or a non-transitory storage medium.

    [0178] In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.

    [0179] A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein.

    [0180] A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.

    [0181] A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.

    [0182] A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.

    [0183] In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus.

    [0184] While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.

    BIBLIOGRAPHY OR REFERENCES

    [0185] [1] V. Pulkki, M.-V. Laitinen, J. Ahonen, T. Lokki and T. Pihlajamäki, “Directional audio coding: perception-based reproduction of spatial sound,” in International Workshop on the Principles and Applications of Spatial Hearing, 2009.
    [0186] [2] G. Fuchs, O. Thiergart, S. Korse, S. Döhla, M. Multrus, F. Küch, Bouthéon, A. Eichenseer and S. Bayer, “Apparatus, method and computer program for encoding, decoding, scene processing and other procedures related to DirAC based spatial audio coding using low-order, mid-order and high-order components generators,” WO Patent 2020115311A1, 11 Jun. 2020.
    [0187] [3] 3GPP TS 26.445, Codec for Enhanced Voice Services (EVS); Detailed algorithmic description.
    [0188] [4] S. Bayer, M. Dietz, S. Döhla, E. Fotopoulou, G. Fuchs, W. Jaegers, G. Markovic, M. Multrus, E. Ravelli and M. Schnell, “Apparatus and method for estimating an inter-channel time difference,” WO Patent 17125563, 27 Jul. 2017.
    [0189] [5] V. S. C. S. Chebiyyam and V. Atti, “Inter-channel bandwidth extension,” WO Patent 2018187082A1, 11 Oct. 2018.
    [0190] [6] J. Büthe, G. Fuchs, W. Jägers, F. Reutelhuber, J. Herre, E. Fotopoulou, M. Multrus and S. Korse, “Apparatus and method for encoding or decoding a multichannel signal using a side gain and a residual gain,” WO Patent 2018086947A1, 17 May 2018.
    [0191] [7] J. Büthe, F. Reutelhuber, S. Disch, G. Fuchs, M. Multrus and R. Geiger, “Apparatus for encoding or decoding an encoded multichannel signal using a filling signal generated by a broad band filter,” WO Patent 2019020757A2, 31 Jan. 2019.
    [0192] [8] V. Atti et al., “Super-wideband bandwidth extension for speech in the 3GPP EVS codec,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brisbane, 2015.