Apparatus and method for reducing noise in an audio signal

11664040 · 2023-05-30

Abstract

An apparatus for processing an audio signal includes an audio signal analyzer and a filter. The audio signal analyzer is configured to analyze an audio signal to determine a plurality of noise suppression filter values for a plurality of bands of the audio signal, wherein the analyzer is configured to determine the noise suppression filter values so that a noise suppression filter value is greater than or equal to a minimum noise suppression filter value and so that the minimum noise suppression filter value depends on a characteristic of the audio signal. The filter is configured for filtering the audio signal, wherein the filter is adjusted based on the noise suppression filter values.

Claims

1. Apparatus for processing an audio signal, comprising: an audio signal analyzer configured for analyzing an audio signal to determine a plurality of noise suppression filter values for a plurality of bands of the audio signal, wherein the audio signal analyzer is configured to determine the noise suppression filter values so that a noise suppression filter value is greater than or equal to a minimum noise suppression filter value, and so that the minimum noise suppression filter value depends on a characteristic of the audio signal; and a filter configured for filtering the audio signal, wherein the filter is adjusted based on the noise suppression filter values, wherein the audio signal analyzer is configured to calculate a gain value from a frame of the audio signal as the characteristic of the audio signal, and wherein the audio signal analyzer is configured to calculate the minimum noise suppression filter value so that the minimum noise suppression filter value decreases with an increasing gain value.

2. Apparatus according to claim 1, wherein the audio signal analyzer is configured to determine the noise suppression filter values using a maximum decision based on a plurality of unconstrained noise suppression filter values and the minimum noise suppression filter value, the minimum noise suppression filter value being equal for the plurality of bands of the audio signal.

3. Apparatus according to claim 1, wherein the audio signal analyzer is configured to calculate the minimum noise suppression filter value based on a predetermined noise suppression filter value, and the gain value.

4. Apparatus according to claim 1, wherein the audio signal analyzer is configured to calculate the minimum noise suppression filter value using a minimum decision dependent on a predetermined noise suppression filter value and a quotient of a predetermined noise suppression filter value and the gain value.

5. Apparatus according to claim 1, wherein the audio signal analyzer is configured to analyze a band of the plurality of bands of the audio signal to determine whether the band exhibits a first characteristic of the audio signal or a second characteristic of the audio signal, wherein the first characteristic of the audio signal is different from the second characteristic of the audio signal, and to determine the noise suppression filter values, when the second characteristic of the audio signal has been determined for the band, so that the noise suppression filter values are equal to a product of a predetermined noise suppression filter value and the gain value, when the gain value is between 0 and 1, or so that the noise suppression filter values are equal to the predetermined noise suppression filter value, when the gain value is between 1 and the product of the predetermined noise suppression filter value and a predetermined distortion limit, or so that the noise suppression filter values are equal to the quotient of the gain value and the predetermined distortion limit, when the gain value is between the product of the predetermined noise suppression filter value and the predetermined distortion limit, and the predetermined distortion limit, or so that the noise suppression filter values are equal to 1, when the gain value is greater than the predetermined distortion limit.

6. Apparatus according to claim 1, wherein the audio signal analyzer is configured to calculate, for a first frame of the audio signal, the gain value resulting in a first minimum noise suppression filter value, to calculate, for a second frame of the audio signal, a second gain value resulting in a non-smoothed second minimum noise suppression filter value, wherein the second frame follows the first frame in time, and to calculate a smoothed minimum noise suppression filter value for the second frame using the non-smoothed second minimum noise suppression filter value and the first minimum noise suppression filter value.

7. Apparatus according to claim 1, comprising: a time-frequency converter providing a frequency-domain representation of the audio signal providing the plurality of bands of the audio signal, and wherein the audio signal analyzer is configured to calculate a noise suppression filter value for a band of the plurality of bands of the audio signal based on one or more bands of the plurality of bands of the audio signal, and the minimum noise suppression filter value, wherein the minimum noise suppression filter value is based on a predetermined noise suppression filter value which is equal for each band of the plurality of bands of the audio signal, or a predetermined distortion limit which is equal for a plurality of bands of the audio signal, and a value derived from the characteristic of the audio signal, the value being equal for each band of the plurality of bands of the audio signal.

8. Apparatus according to claim 1, wherein the audio signal analyzer is configured to calculate an amplitude information of the audio signal, and to calculate a gain value, as a characteristic of the audio signal, based on the amplitude information and a predetermined target value to which the audio signal is adjusted by means of the gain value.

9. Apparatus according to claim 1, wherein the audio signal analyzer comprises a voice activity detection unit providing a first voice activity information of a first frame of the audio signal, and a second voice activity information of a second frame of the audio signal, and a memory unit to store a previous gain value, and wherein the audio signal analyzer is configured to: estimate the gain value based on the second frame of the audio signal in which voice has been detected according to the second voice activity information, or keep the gain value of the first frame if no voice activity has been detected in the second frame according to the second voice activity information, when voice has been detected in the first frame based on the first voice activity information, wherein the second frame follows the first frame in time.

10. Apparatus according to claim 1, wherein the audio signal analyzer is configured to calculate the minimum noise suppression filter value for a current frame based on a value derived from the characteristic of the audio signal calculated for a current frame, and to analyze the audio signal for determining the value derived from the characteristic of the audio signal, and wherein the filter comprises a first filter stage and a second filter stage, and wherein the first filter stage is adjusted using the value derived from the characteristic of the audio signal, and wherein the second filter stage is adjusted according to the noise suppression filter values.

11. Apparatus according to claim 1, wherein the audio signal analyzer is configured to calculate the minimum noise suppression filter value for a second frame based on a value derived from the characteristic of the audio signal, calculated for a first frame, and wherein the filter comprises a first filter stage and a second filter stage, wherein the first filter stage is adjusted according to the noise suppression filter values, and wherein the second filter stage is adjusted using the value derived from the characteristic of the audio signal, and wherein the audio signal analyzer is configured to analyze an output of the first filter stage for determining the value derived from the characteristic of the audio signal, wherein the second frame follows the first frame in time.

12. Apparatus according to claim 1, wherein the audio signal analyzer is configured to determine the gain value based on a voice activity information and the audio signal, or to determine the gain value based on a voice activity information and the audio signal after being filtered using the noise suppression filter values, and to acquire the voice activity information based on the audio signal, or to acquire the voice activity information based on the audio signal after being filtered by the filter, or so that a voice activity information indicating no speech present is used to decrease the gain value.

13. Apparatus according to claim 1, wherein the audio signal analyzer is configured to analyze the audio signal in a sequence of frames comprising a first frame and a second frame following the first frame in time, to determine, for the first frame, a first plurality of noise suppression filter values, and to determine, for the second frame, a second plurality of noise suppression filter values, to determine the first plurality of noise suppression filter values so that the noise suppression filter values of the first plurality of noise suppression filter values are greater than or equal to a first minimum noise suppression filter value determined for the first frame, and so that the first minimum noise suppression filter value depends on a first characteristic of the first frame of the audio signal, to determine the second plurality of noise suppression filter values so that the noise suppression filter values of the second plurality of noise suppression filter values are greater than or equal to a second minimum noise suppression filter value determined for the second frame, and so that the second minimum noise suppression filter value depends on a second characteristic of the second frame of the audio signal; and wherein the filter is configured for filtering the audio signal in the sequence of frames, wherein a first filter for the first frame is adjusted based on the first plurality of noise suppression filter values, and wherein a second filter for the second frame is adjusted based on the second plurality of noise suppression filter values, and to filter the first frame of the audio signal with the first filter and to filter the second frame of the audio signal with the second filter.

14. Method for processing an audio signal, comprising: analyzing an audio signal to determine a plurality of noise suppression filter values for a plurality of bands of the audio signal, determining the noise suppression filter values so that a noise suppression filter value is greater than or equal to a minimum noise suppression filter value, and so that the minimum noise suppression filter value depends on a characteristic of the audio signal; and filtering the audio signal based on the noise suppression filter values, wherein the analyzing comprises calculating a gain value from a frame of the audio signal as the characteristic of the audio signal, and calculating the minimum noise suppression filter value so that the minimum noise suppression filter value decreases with an increasing gain value.

15. A non-transitory digital storage medium having a computer program stored thereon to perform, when said computer program is run by a computer, a method for processing an audio signal, said method comprising: analyzing an audio signal to determine a plurality of noise suppression filter values for a plurality of bands of the audio signal, determining the noise suppression filter values so that a noise suppression filter value is greater than or equal to a minimum noise suppression filter value, and so that the minimum noise suppression filter value depends on a characteristic of the audio signal; and filtering the audio signal based on the noise suppression filter values, wherein the analyzing comprises calculating a gain value from a frame of the audio signal as the characteristic of the audio signal, and calculating the minimum noise suppression filter value so that the minimum noise suppression filter value decreases with an increasing gain value.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:

(2) FIG. 1 shows a block diagram of an embodiment according to the invention;

(3) FIG. 2 shows a block diagram of an audio signal analyzer of the embodiment of the apparatus in accordance with FIG. 1;

(4) FIG. 3 shows a block diagram of an embodiment of an apparatus according to the invention;

(5) FIG. 4 shows a block diagram of an embodiment of an apparatus according to the invention;

(6) FIG. 5 shows a block diagram of the filter value selection stage of the audio signal analyzer according to FIG. 2;

(7) FIG. 6 shows a block diagram of the filter value selection stage of the audio signal analyzer according to FIG. 2;

(8) FIG. 7 shows a block diagram of the filter value selection stage of the audio signal analyzer according to FIG. 2;

(9) FIG. 8 shows a block diagram of an advantageous embodiment according to the invention;

(10) FIG. 9 shows a block diagram of an advantageous embodiment according to the invention;

(11) FIG. 10 shows a diagram of an overall system response;

(12) FIG. 11 shows a diagram of the minimum noise suppression filter value in dependence on a gain value;

(13) FIG. 12 shows graphs of a signal before and after signal processing;

(14) FIG. 13 shows a block diagram of a full-duplex speech communication scenario;

(15) FIG. 14 shows a block diagram of the receiver or the transmitter side of a full-duplex speech communication scenario;

(16) FIG. 15 shows a block diagram according to an aspect of the invention;

(17) FIG. 16 shows a block diagram according to an aspect of the invention;

(18) FIG. 17 shows a block diagram according to an aspect of the invention;

(19) FIG. 18 shows a block diagram according to an advantageous embodiment according to the invention; and

(20) FIG. 19 shows a block diagram according to an advantageous embodiment according to the invention.

DETAILED DESCRIPTION OF THE INVENTION

(21) FIG. 1 depicts a block diagram of an apparatus 100 according to an embodiment of the invention for processing an audio signal 110, wherein the audio signal 110 can be provided in a spectral representation, with a filter 120 adjusted according to noise suppression filter values provided by an audio signal analyzer 130. The noise suppression filter values are determined 130a in the audio signal analyzer so that they are greater than or equal to a minimum noise suppression filter value 130b′. The minimum noise suppression filter value 130b′ is determined in 130b based on a characteristic of the audio signal 130c′, which is determined in the audio signal analyzer 130 in 130c. In addition, the estimation is based on unconstrained noise suppression filter values 130d′, which are estimated in 130d for a plurality of bands of the audio signal. Furthermore, the characteristic of the audio signal 130c′ is equal for a plurality of bands of the audio signal. The unconstrained noise suppression filter values 130d′ can be estimated, for example, according to an optimal filter such as the Wiener filter, based on the power spectral density (PSD) σ.sub.x.sup.2(m, k) of the audio signal 110, for example the input audio signal, and the PSD σ.sub.n.sup.2(m, k) of the noise contained in the audio signal 110:

(22) H.sub.NR,Wiener(m,k)=(σ.sub.x.sup.2(m,k)−σ.sub.n.sup.2(m,k))/σ.sub.x.sup.2(m,k),
where, for example, m is the time frame index and k is the spectral subband index. The Wiener filter H.sub.NR,Wiener(m, k), computed as described above, extracts a desired signal from a noisy signal. In practice, the PSDs have to be estimated for a Wiener filter.

(23) An enhanced signal can be obtained in the frequency domain by multiplying the plurality of bands of the audio signal, for example an input spectrum, with the above filter H.sub.NR,Wiener(m, k), for example on a frame-by-frame basis.

(24) By observing that the SNR can be defined as

(25) SNR(m,k)=(σ.sub.x.sup.2(m,k)−σ.sub.n.sup.2(m,k))/σ.sub.n.sup.2(m,k),
the equation for the Wiener filter H.sub.NR,Wiener(m, k) can be reformulated as

(26) H.sub.NR,Wiener(m,k)=SNR(m,k)/(1+SNR(m,k)).

(27) Hence, the Wiener filter H.sub.NR,Wiener(m, k) takes the value zero for SNR(m, k)=0 and converges to one for large SNR values, which is the desired behavior to attenuate the noise while preserving the desired signal components. Alternatively, filters of different types, such as the spectral amplitude estimator [4], can be used for estimating the unconstrained noise suppression filter values. Moreover, the unconstrained noise suppression filter values can be based on a heuristic function.
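The two equivalent forms of the Wiener gain above can be sketched numerically. This is a minimal illustration (function names are ours, not from the patent), assuming per-band PSD estimates are already available:

```python
def wiener_gain_from_psd(sigma_x2, sigma_n2):
    """Wiener gain H = (sigma_x^2 - sigma_n^2) / sigma_x^2, clamped to [0, 1]."""
    return max(0.0, min(1.0, (sigma_x2 - sigma_n2) / sigma_x2))

def wiener_gain_from_snr(snr):
    """Equivalent SNR form: H = SNR / (1 + SNR)."""
    return snr / (1.0 + snr)

# Both forms agree: e.g. sigma_x^2 = 2, sigma_n^2 = 1 gives SNR = 1 and H = 0.5.
```

The gain is zero in noise-only bands (SNR = 0) and approaches one where the desired signal dominates, as the paragraph above describes.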

(28) The audio signal 110 can comprise a desired component, for example speech, and some undesired component, for example background noise. The filter 120 is adjusted by the signal analyzer 130 so that, for example, a speech signal component of the audio signal 110 will be more intelligible after filtering the audio signal 110 with the filter 120. In addition, an undesired component of the audio signal 110 can be suppressed after filtering the audio signal 110 with the filter 120. A minimum noise suppression filter value, which acts as a constraint on the unconstrained noise suppression filter values, enables signal enhancement while avoiding speech distortion or musical tones.

(29) The apparatus 100 facilitates enhancement of a desired signal component of an audio signal 110 while offering a tradeoff between signal enhancement and noise suppression. This tradeoff is characterized by the minimum noise suppression filter value acting as a constraint: it can be adjusted either to remove more of the undesired signal component or to remove less of it in order to avoid signal distortions.

(30) FIG. 2 shows a block diagram of the audio signal analyzer 130 of an embodiment of the invention according to apparatus 100 as depicted in FIG. 1. The audio signal analyzer 130 performs an unconstrained noise suppression filter value estimation 210, based on a plurality of bands of the audio signal 215. For each band of the plurality of bands of the audio signal 215, an unconstrained noise suppression filter value 220 is estimated in the audio signal analyzer 130. In addition, a minimum noise suppression filter value estimation 230 is performed, based on a value derived from a characteristic of the audio signal 232 (e.g. a gain value) and a predetermined noise suppression value 234. The unconstrained noise suppression filter values 220 and the minimum noise suppression filter value 240 are used to determine the noise suppression filter values 250. This can be done, for example, by performing a maximum operation, so that a plurality of noise suppression filter values 260 for a plurality of bands of the audio signal 215 are obtained. The noise suppression filter values 260 obtained by the maximum operation 250 are ensured to be greater than or equal to the minimum noise suppression filter value 240, so that noise suppression filter values that are small or equal to 0 are avoided. By avoiding such values, the achievable noise suppression is limited by the minimum noise suppression filter value 240, avoiding potential distortion due to aggressive noise suppression.
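The per-band maximum operation described above can be sketched as follows; a minimal illustration with hypothetical band values (names are ours):

```python
def constrain_filter_values(h_unconstrained, g_min):
    """Per-band maximum decision: G(k) = max{H(k), g_min}.

    Bands whose unconstrained gain falls below the minimum noise
    suppression filter value are lifted to that minimum, limiting the
    achievable noise suppression.
    """
    return [max(h, g_min) for h in h_unconstrained]

# Example: the first two bands are lifted to the minimum value 0.1.
gains = constrain_filter_values([0.0, 0.05, 0.4, 0.9], 0.1)
```

The resulting list never contains values below `g_min`, which is exactly the guarantee the maximum operation 250 provides.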

(31) FIG. 3 shows a block diagram of an apparatus 300 according to an advantageous embodiment of the invention. The apparatus 300 comprises an audio signal analyzer 130 and a filter 310. Furthermore, the apparatus 300 comprises a first time-frequency converter 320a and a second time-frequency converter 320b. Moreover, the apparatus 300 allows application of a gain value to the audio signal 110 before or after filtering the audio signal 110 with the filter 310; this optionality is indicated by the switches 330a and 330b. Moreover, the apparatus 300 comprises another switch 330c, which enables calculation of a value derived from a characteristic of the audio signal 110 (e.g. a gain value) before or after filtering the audio signal 110 with the filter 310. Furthermore, the audio signal analyzer 130 comprises a voice activity detection 340, a psychoacoustic filter 342 and a memory unit 346. Depending on the result of the voice activity detection 340, a characteristic of the audio signal 348a, for example an amplitude information, is computed 348 based on the audio signal 110 filtered by the psychoacoustic filter 342 when voice has been detected.

(32) Moreover, when voice has been detected by the voice activity detection 340, a new gain value is computed 350 based on the amplitude information 348a and a target value. Furthermore, a switch 352 enables usage of an old gain value, kept in the memory unit 346, when no voice has been detected by the voice activity detection 340. In contrast, when voice has been detected by the voice activity detection 340, the old gain value in the memory 346 will be overwritten by the gain value of the current frame 350a.

(33) Moreover, the audio signal analyzer 130 is configured to compute unconstrained noise suppression filter values 356, based on a plurality of bands of the audio signal 354, for example based on a Wiener filter. In addition, the audio signal analyzer 130 is configured to estimate a minimum noise suppression filter value 358, which is based on a predetermined noise suppression value g.sub.des 234, for example a noise attenuation limit g.sub.lim, or a predetermined distortion limit 358a, and a value derived from a characteristic of the audio signal, for example the gain value. If no voice activity has been detected by the voice activity detection 340 in the current frame, the minimum noise suppression filter value estimation 358 can rely on a gain value stored in the memory unit 346 for computation of the minimum noise suppression filter value 358c. If voice is active in the current frame, the current gain value can be employed for the minimum noise suppression filter value estimation 358; the choice between the old and the new gain value is facilitated by a switch 358b.

(34) The minimum noise suppression filter value 358c, obtained in the minimum noise suppression filter value estimation 358, can be subject to optional smoothing 360. The smoothed or non-smoothed minimum noise suppression filter value 360a, which is equal for a plurality of bands of the audio signal 354, and a plurality of unconstrained noise suppression filter values 356a, obtained by the unconstrained noise suppression filter value estimation 356, are subject to a maximum operation 362. The maximum operation 362 provides noise suppression filter values 364, for a plurality of bands of the audio signal 354, for adjusting the filter 310.
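The optional smoothing 360 is not specified in detail in the passage above. One common choice, shown here purely as an assumed example, is a first-order recursive average across consecutive frames (the smoothing constant `alpha` is our assumption, not taken from the patent):

```python
def smooth_min_gain(g_prev, g_current, alpha=0.9):
    """First-order recursive smoothing of the minimum noise suppression
    filter value across consecutive frames: closer to 1, alpha reacts
    more slowly to changes of the per-frame value."""
    return alpha * g_prev + (1.0 - alpha) * g_current
```

For example, with a previous smoothed value of 0.1 and a new non-smoothed value of 0.5, the smoothed result moves only slightly toward the new value, avoiding abrupt frame-to-frame jumps of the noise floor.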

(35) In its simplest form, a constant minimum noise suppression filter value is applied. The Wiener filter H.sub.NR,Wiener(m, k) takes the value zero for SNR(m, k)=0 and converges to one for large SNR values, which is the desired behavior to attenuate the undesired signal components, e.g. noise, while preserving the desired signal components, e.g. speech, of the audio signal. A constant minimum noise suppression filter value g.sub.lim=g.sub.des can be employed to avoid aggressive noise reduction. Therefore, the noise suppression filter values are limited to a maximum noise attenuation amount as follows:

(36) G.sub.NR,Wiener(m,k)=max{H.sub.NR,Wiener(m,k); g.sub.lim}=max{H.sub.NR,Wiener(m,k); g.sub.des},
described here for the Wiener-filter-based unconstrained noise suppression filter values H.sub.NR,Wiener(m, k); the limiting can accordingly also be applied to differently obtained unconstrained noise suppression filter values H.sub.NR(m, k). The noise attenuation limit g.sub.lim is defined such that 0≤g.sub.lim≤1. It corresponds to the maximum noise attenuation of the filter G.sub.NR,Wiener(m, k), which can also be interpreted as the desired amount of noise attenuation during speech pauses, i.e. g.sub.lim=g.sub.des. It is typically chosen between −20 dB and −10 dB. As other filtering rules can also be employed instead of the Wiener filter, the above equation can be generalized as follows:
G.sub.NR(m,k)=max{H.sub.NR(m,k);g.sub.des},
where H.sub.NR(m, k) refers to arbitrary unconstrained noise suppression filter values, based on an arbitrary noise reduction rule.
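The noise attenuation limit is typically stated in dB, as above; converting it to the linear value used in the maximum operation can be sketched with an illustrative helper (not from the patent):

```python
def attenuation_db_to_gain(limit_db):
    """Convert an amplitude attenuation limit in dB (e.g. -20 dB)
    to the linear noise attenuation limit g_lim = 10^(dB/20)."""
    return 10.0 ** (limit_db / 20.0)

# The typical range of -20 dB to -10 dB corresponds to
# g_lim between 0.1 and roughly 0.316, i.e. 0 <= g_lim <= 1 as required.
```

This makes the stated typical range concrete: a −20 dB limit means the noise floor is never attenuated below one tenth of its input amplitude.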

(37) The filter 310 applies to each band of the audio signal 354a-d an appropriate value of the noise suppression filter values 364a-d. By filtering the plurality of bands of the audio signal 354 with the filter 310 a second plurality of bands 366 is obtained. The second plurality of bands 366 can be transformed into the time domain with the second time-frequency converter 320b, so that an audible signal is obtained.

(38) In addition, the multiplication with a gain value before or after filtering the audio signal 110, indicated with the switches 330a and 330b, enables the apparatus 300 to compensate for a lower level of a desired signal component in the audio signal 110. Furthermore, by filtering the audio signal 110 in the frequency domain with the filter 310, the apparatus 300 provides a power saving compared to a time-domain convolution.

(39) For a given AGC gain value G.sub.AGC(m), as a value derived from a characteristic of the audio signal 110, the joint NR+AGC task is considered as a filtering problem where the desired signal is no longer the desired signal component of the audio signal 110 itself, for example a speech signal, but the desired signal component scaled by the AGC gain. Deriving, for example, the Wiener filter that extracts the scaled desired signal component from a noisy input signal, we obtain the following filtering rule:

(40) H.sub.NR+AGC,Wiener(m,k)=((σ.sub.x.sup.2(m,k)−σ.sub.n.sup.2(m,k))/σ.sub.x.sup.2(m,k))G.sub.AGC(m),
which can be reformulated as function of the Wiener filter H.sub.NR,Wiener(m, k), as described above for noise reduction:
H.sub.NR+AGC,Wiener(m,k)=H.sub.NR,Wiener(m,k)G.sub.AGC(m),
where G.sub.AGC(m) is a gain value, e.g. an AGC scaling factor.

(41) As described before, a noise attenuation limit g.sub.lim=g.sub.des is introduced to limit the signal distortion:

(42) G.sub.NR+AGC,Wiener(m,k)=max{H.sub.NR+AGC,Wiener(m,k); g.sub.des}=max{H.sub.NR,Wiener(m,k)G.sub.AGC(m); g.sub.des}=max{H.sub.NR,Wiener(m,k); g.sub.des/G.sub.AGC(m)}×G.sub.AGC(m).

(43) Therefore, from inspection of G.sub.NR+AGC,Wiener(m, k), it is visible that performing NR and AGC jointly is equivalent to applying the AGC scaling factor G.sub.AGC(m) at the output of a Wiener filter (or equivalently at its input), subject to a minimum noise suppression value, e.g. a noise attenuation limit, which is proportional to the AGC gain.

(44) Moreover, the above described equation for G.sub.NR+AGC,Wiener(m, k) can be generalized to arbitrary optimum or heuristic filtering rules, yielding

(45) G.sub.NR+AGC(m,k)={tilde over (G)}.sub.NR(m,k;G.sub.AGC)×G.sub.AGC(m), where {tilde over (G)}.sub.NR(m,k;G.sub.AGC)=max{H.sub.NR(m,k); g.sub.des/G.sub.AGC(m)}.
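The factorization in equations (42) and (45) can be checked numerically; a minimal sketch with assumed values (names are ours):

```python
def joint_nr_agc_gain(h_nr, g_agc, g_des):
    """Factored joint NR+AGC gain: max{H_NR, g_des / G_AGC} * G_AGC."""
    return max(h_nr, g_des / g_agc) * g_agc

# Identical to the un-factored form max{H_NR * G_AGC, g_des}:
h_nr, g_agc, g_des = 0.3, 2.0, 0.1
assert joint_nr_agc_gain(h_nr, g_agc, g_des) == max(h_nr * g_agc, g_des)
```

The equality holds because multiplying by a positive `g_agc` commutes with the maximum, which is exactly why the joint task can be realized as an NR filter with an AGC-dependent limit followed by the AGC scaling.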

(46) In addition, the minimum noise suppression filter value and therefore the noise suppression filter values can be estimated by performing AGC and NR processing in a joint manner, as this allows better control of the level of a desired signal component of the audio signal 110, for example speech, and of the noise level at the output. A VAD (voice activity detection) is exploited to trigger the level estimation and gain computation steps, but multiplication of the NR output signal with the AGC gain is carried out for each frame, regardless of the speech activity. According to an aspect of the invention, the filtering does not rely on a fixed minimum noise suppression filter value, e.g. a fixed noise attenuation limit. In contrast, a minimum noise suppression filter value that depends on a value derived from a characteristic of the audio signal 110, e.g. an AGC gain, and is hence, for example, time-varying, e.g. the noise attenuation limit {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC), is applied, yielding the NR filter
{tilde over (G)}.sub.NR.sup.[UC](m,k;G.sub.AGC)=max{H.sub.NR(m,k);{tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)},
where {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC) is adapted on a frame-by-frame basis as a function of the desired noise attenuation g.sub.des (0≤g.sub.des≤1) and the AGC gain. The superscript [UC] refers to the unconstrained case, in contrast to the constrained case presented later on.

(47) According to an aspect of the invention, the gain value dependent minimum noise suppression filter value, e.g. a noise attenuation limit, {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC) can be obtained according to g.sub.des/G.sub.AGC(m). According to a further aspect, the minimum noise suppression filter value, e.g. an unconstrained noise attenuation limit, is defined in a different way to obtain a better attenuation of the noise when the AGC attenuates the signal (i.e. G.sub.AGC(m)<1):

(48) {tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)=min{g.sub.des; g.sub.des/G.sub.AGC(m)}.

(49) The NR gains are not simply scaled as a function of the AGC gain. Instead, the AGC gain is directly included in the design of the NR filter {tilde over (G)}.sub.NR.sup.[UC](m, k; G.sub.AGC) via the minimum noise suppression filter value {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC), e.g. the noise attenuation limit.
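The gain-dependent limit of equation (48) can be sketched as follows (illustrative names, not from the patent):

```python
def noise_attenuation_limit_uc(g_des, g_agc):
    """Unconstrained time-varying limit: min{g_des, g_des / G_AGC(m)}."""
    return min(g_des, g_des / g_agc)

# For an amplifying AGC (G_AGC > 1) the limit drops below g_des,
# so the noise is suppressed more strongly before amplification.
# For an attenuating AGC (G_AGC < 1) the limit stays at g_des.
```

With, say, `g_des = 0.1`, an AGC gain of 2 lowers the limit to 0.05, while an AGC gain of 0.5 leaves it at 0.1, matching the two cases discussed in the following derivation.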

(50) To illustrate the advantage of using the time-varying noise attenuation limit {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC) instead of a fixed limit g.sub.lim=g.sub.des, the response of the overall filter
{tilde over (G)}.sub.NR+AGC.sup.[UC](m,k)={tilde over (G)}.sub.NR.sup.[UC](m,k;G.sub.AGC)×G.sub.AGC(m)
is derived for time-frequency regions dominated either by the speech (high SNR) or by the noise (low SNR).

Case G.sub.AGC(m)≥1: In low-SNR time-frequency regions dominated by the noise, we can assume that the NR filter {tilde over (G)}.sub.NR.sup.[UC](m, k; G.sub.AGC) reaches its minimum {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC), and hence the overall system response {tilde over (G)}.sub.NR+AGC.sup.[UC](m, k) becomes:
{tilde over (G)}.sub.NR+AGC.sup.[UC](m,k)|.sub.SNR≈0≈{tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)×G.sub.AGC(m)=g.sub.des,
which shows that segments dominated by the noise are scaled by the desired amount of noise reduction, regardless of the AGC gain. In high-SNR time-frequency regions dominated by the speech, we can assume that the NR filter leaves the speech mostly unchanged, i.e. {tilde over (G)}.sub.NR.sup.[UC](m, k; G.sub.AGC)≈1, and hence the total response becomes:
{tilde over (G)}.sub.NR+AGC.sup.[UC](m,k)|.sub.SNR>>1≈G.sub.AGC(m),
which shows that segments dominated by the speech are scaled by the AGC gain as desired, regardless of the desired amount of noise reduction.

Case G.sub.AGC(m)<1: Using the same reasoning as above, we can write
{tilde over (G)}.sub.NR+AGC.sup.[UC](m,k)|.sub.SNR≈0≈{tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)×G.sub.AGC(m)=g.sub.des×G.sub.AGC(m),
{tilde over (G)}.sub.NR+AGC.sup.[UC](m,k)|.sub.SNR>>1≈G.sub.AGC(m),
which shows that the speech segments are scaled by the AGC gain G.sub.AGC(m) as expected, and the noise is at least attenuated by the desired noise attenuation amount g.sub.des.

(51) It is therefore visible that adapting the noise attenuation limit as a function of the desired noise attenuation and the AGC gain according to

(52) {tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)=min{g.sub.des; g.sub.des/G.sub.AGC(m)}
provides full control over the speech and noise levels at the system output for positive AGC gains. Therefore, consistent speech and noise levels can be achieved and noise pumping effects can be avoided, as depicted in graph 1250.
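The case analysis above can be verified numerically; a small sketch (names are ours) evaluating the overall response in noise- and speech-dominated bands:

```python
def overall_response(h_nr, g_agc, g_des):
    """Overall response max{H_NR, g_lim_uc} * G_AGC with the
    time-varying limit g_lim_uc = min{g_des, g_des / G_AGC}."""
    g_lim = min(g_des, g_des / g_agc)
    return max(h_nr, g_lim) * g_agc

# Noise-dominated band (H_NR -> 0), amplifying AGC: noise scaled by g_des only.
assert abs(overall_response(0.0, 4.0, 0.1) - 0.1) < 1e-12
# Speech-dominated band (H_NR -> 1): speech scaled by the AGC gain.
assert abs(overall_response(1.0, 4.0, 0.1) - 4.0) < 1e-12
# Attenuating AGC (G_AGC < 1): noise scaled by g_des * G_AGC.
assert abs(overall_response(0.0, 0.5, 0.1) - 0.05) < 1e-12
```

The assertions reproduce the three closed-form limits of the derivation: the output noise level is pinned to g.sub.des for amplifying AGC gains, the speech follows the AGC gain, and attenuating AGC gains attenuate the noise even further.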

(53) When the AGC attenuates the input signal, i.e. G.sub.AGC(m)<1, we see from
{tilde over (G)}.sub.NR+AGC.sup.[UC](m,k)|.sub.SNR≈0≈{tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)×G.sub.AGC(m)=g.sub.des×G.sub.AGC(m),
that the noise is not amplified at the output compared to the input, and a minimum amount of noise attenuation is ensured. Note that this introduces a low-level but time-varying noise floor caused by the time-varying AGC attenuation. However, it can be assumed in practice that the input speech level remains relatively constant. Provided that the VAD can detect the speech presence accurately, the AGC gain will hence fluctuate only slowly after convergence, and the absolute noise level at the system output will vary only slowly, which avoids the noise pumping effect.

(54) As presented before, the minimum noise suppression filter value 360a is derived as a function of the desired noise attenuation and the AGC gain. This can be achieved for instance based on

(55) {tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)=min{g.sub.des; g.sub.des/G.sub.AGC(m)}.

(56) This approach can produce an arbitrarily small noise attenuation limit for large AGC gains G.sub.AGC(m). When aggressive noise reduction is applied, audible artefacts can occur in practice. Typical artefacts are speech distortions, especially at high frequencies where the speech is weakest, and musical tones, characterized by a highly non-stationary coloration of the background noise.

(57) To obtain a less aggressive noise reduction, i.e. a moderate noise reduction, for large AGC gains and hence to mitigate the noise reduction artefacts, a constraint can be imposed on the noise attenuation limit. According to one aspect of the invention, the minimum noise suppression filter value 360a is computed as a function of the AGC gain G.sub.AGC(m), the predetermined noise suppression value g.sub.des 234, e.g. the desired amount of noise attenuation, and a distortion limit g.sub.DL 358a, yielding

(58) {tilde over (g)}.sub.lim.sup.[DC](m;G.sub.AGC)=min{g.sub.des; min{1/G.sub.AGC(m); max{1/g.sub.DL; g.sub.des/G.sub.AGC(m)}}},
where the superscript [DC] denotes the distortion-constrained case, in contrast to the aforementioned unconstrained case denoted by the superscript [UC]. This approach is illustrated in more detail in FIG. 7 and FIG. 9.

(59) The NR filter in the distortion-constrained case is obtained in a similar way as described before, i.e.
{tilde over (G)}.sub.NR.sup.[DC](m,k;G.sub.AGC)=max{H.sub.NR(m,k);{tilde over (g)}.sub.lim.sup.[DC](m;G.sub.AGC)},
which leads to the overall filter performing NR and AGC:
G.sub.NR+AGC.sup.[DC](m,k)={tilde over (G)}.sub.NR.sup.[DC](m,k;G.sub.AGC)×G.sub.AGC(m).

(60) The distortion limit g.sub.DL 358a is a constant which may satisfy g.sub.DL≥1/g.sub.des≥1. It can also be understood as the amount of SNR improvement allowed by the system. A low g.sub.DL provides good protection against noise reduction artefacts, but at the cost of a poorer attenuation of the noise. This is depicted in graph 1260, where the noise level increases as the speech is amplified. It can easily be verified that a very large distortion limit g.sub.DL 358a essentially removes the constraint, and {tilde over (g)}.sub.lim.sup.[DC](m; G.sub.AGC) becomes equivalent to its unconstrained counterpart {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC). The distortion limit is typically chosen between 15 dB and 25 dB.

(61) Additionally, processing tools like temporal smoothing can be used for {tilde over (g)}.sub.lim.sup.[DC](m; G.sub.AGC) or

(62) {tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)=min{g.sub.des; g.sub.des/G.sub.AGC(m)}
to smooth the noise attenuation limit, i.e. the minimum noise suppression filter value, over time.
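As one possible realization of such temporal smoothing (the first-order recursive form and the smoothing factor are assumptions; the text does not prescribe a particular smoother), the limit can be averaged over frames as follows:

```python
# Hypothetical first-order (exponential) smoothing of the noise attenuation
# limit across frames. The smoothing factor alpha is an illustrative value.

def smooth_limit(prev_limit: float, new_limit: float, alpha: float = 0.9) -> float:
    """Recursively smoothed minimum noise suppression filter value."""
    return alpha * prev_limit + (1.0 - alpha) * new_limit

# A step change in the raw limit is approached only gradually over the frames,
# which avoids abrupt changes of the noise floor.
limit = 1.0
for _ in range(100):
    limit = smooth_limit(limit, 0.25)
assert abs(limit - 0.25) < 1e-4
```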

(63) FIG. 4 illustrates a block diagram of an apparatus 400 according to an advantageous embodiment of the invention. The apparatus 400 comprises an audio signal analyzer 130 as described in FIG. 3 for apparatus 300. In addition, the apparatus 400 comprises a first time-frequency converter 320a, which is configured to provide a plurality of bands of the audio signal 354 to the audio signal analyzer 130. Furthermore, the apparatus 400 comprises a second time-frequency converter 320b, which is configured to convert the noise suppression filter values 364 into a time domain representation of the noise suppression filter values 464. Moreover, the apparatus 400 comprises a filter 410, which is adjusted according to the time domain representation of the noise suppression filter values 464.

(64) The filter 410 is configured to perform a time domain convolution of the audio signal 110 and the time domain representation of the noise suppression filter values 464. Similar to the apparatus 300, the apparatus 400 offers the possibility to apply voice activity detection 340 in the audio signal analyzer based on the audio signal 110 before filtering with the filter 410 or after filtering with the filter 410, indicated by the switch 320c. In addition, the gain value can be applied to the audio signal before filtering with the filter 410 or after filtering with the filter 410, indicated by the switches 330a and 330b. The apparatus 400 offers, through its time domain based filtering, a lower delay compared to the frame-wise processing in the frequency domain as described for apparatus 300.

(65) FIG. 5 illustrates the noise suppression filter value determination of the audio signal analyzer 130. In a first step 510, a quotient between the predetermined noise suppression value g.sub.des 234 and the gain value G.sub.AGC(m) is computed, thereby determining the minimum noise suppression filter value 358c. In a next step 520, the unconstrained noise suppression filter values H.sub.NR(m, k) 356a are each compared to the minimum noise suppression filter value 358c, so that values of the unconstrained noise suppression filter values 356a that are smaller than the minimum noise suppression filter value 358c are set to the minimum noise suppression filter value 358c. This can be described by:

(66) {tilde over (G)}.sub.NR(m,k;G.sub.AGC)=max{H.sub.NR(m,k); g.sub.des/G.sub.AGC(m)}
whereby the noise suppression filter values 364 are obtained. The described lower bounding of the noise suppression filter values can be advantageous in avoiding distortions due to overly aggressive noise reduction.

(67) FIG. 6 illustrates the noise suppression filter value selection in the audio signal analyzer 130 according to an advantageous embodiment of the invention. In a first step 510, a quotient between the predetermined noise suppression value 234 and the gain value is computed. In a next step, a minimum decision 620 is made between this quotient and the predetermined noise suppression value 234. Thereby, a large minimum noise suppression filter value 358c is avoided when the gain value is small, since the minimum decision upper-bounds the minimum noise suppression filter value 358c by the predetermined noise suppression value 234. In other words, a minimum noise suppression filter value 358c is obtained which is upper bounded by the predetermined noise suppression value 234. The selection of the minimum noise suppression filter value 358c can be summarized in the following equation:

(68) {tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)=min{g.sub.des; g.sub.des/G.sub.AGC(m)}.

(69) In a final step, the minimum noise suppression filter value 358c is compared to the unconstrained noise suppression filter values 356a, so that based on a maximum decision 630, noise suppression filter values 364 are obtained which are lower bounded by the minimum noise suppression filter value 358c. The described estimation ensures noise suppression even when a small gain value G.sub.AGC(m) is provided; thereby, a noise reduction is obtained beyond the overall signal attenuation achieved by the small gain value.
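The min-then-max selection of FIG. 6 can be sketched as follows. This is a simplified illustration with made-up band values and function names that are not part of the patent:

```python
# Illustrative sketch of the FIG. 6 selection: a minimum decision upper-bounds
# the limit by g_des (eq. (68)), then a per-band maximum decision lower-bounds
# the unconstrained filter values H_NR(m, k). Linear gains throughout.

def noise_suppression_filter(h_nr, g_des, g_agc):
    g_lim = min(g_des, g_des / g_agc)      # minimum decision (620)
    return [max(h, g_lim) for h in h_nr]   # maximum decision (630), per band

g_des = 0.25
# Small AGC gain (attenuation): the quotient g_des/g_agc exceeds g_des, so the
# limit is capped at g_des and smaller filter values are raised to it.
assert noise_suppression_filter([0.1, 0.5, 1.0], g_des, g_agc=0.5) == [0.25, 0.5, 1.0]
# Large AGC gain: the limit becomes g_des/g_agc = 0.125, below g_des.
assert noise_suppression_filter([0.1, 0.5, 1.0], g_des, g_agc=2.0) == [0.125, 0.5, 1.0]
```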

(70) In FIG. 7 a minimum noise suppression filter value determination is described as performed in the audio signal analyzer 130 according to an advantageous embodiment of the invention. In a first step, a quotient of the predetermined noise suppression value 234 and the gain value is computed. The quotient between the predetermined noise suppression value 234 and the gain value is subject to a first maximum decision 710 with an inverse of a predetermined distortion limit 358a. The result of the first maximum decision 710 is subject to a first minimum decision 720 with respect to an inverse of the gain value 705. Furthermore, the result of the first minimum decision 720 is subject to a second minimum decision 730 with respect to the predetermined noise suppression value 234. Thereby, the minimum noise suppression filter value 358c is obtained as the result of the second minimum decision 730. This procedure yields the so-called distortion-constrained minimum noise suppression filter value, e.g. a distortion-constrained noise attenuation limit. To better understand the meaning of the constraint, it can be reformulated as follows:

(71) {tilde over (g)}.sub.lim.sup.[DC](m;G.sub.AGC)=
g.sub.des if 0<G.sub.AGC(m)≤1,
g.sub.des/G.sub.AGC(m) if 1<G.sub.AGC(m)≤g.sub.des×g.sub.DL,
1/g.sub.DL if g.sub.des×g.sub.DL<G.sub.AGC(m)≤g.sub.DL,
1/G.sub.AGC(m) otherwise.

(72) The update rule for the noise attenuation limit computed as described above can be formulated equivalently as

(73) {tilde over (g)}.sub.lim.sup.[DC](m;G.sub.AGC)=min{g.sub.des; min{1/G.sub.AGC(m); max{1/g.sub.DL; g.sub.des/G.sub.AGC(m)}}},
and is illustrated in the graph in FIG. 11 with the solid line labeled as “Distortion-constrained”.
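The stated equivalence between the piecewise formulation and the nested min/max rule for the distortion-constrained limit can be checked numerically with a short sketch (illustrative values; all gains linear; names are not from the patent):

```python
# Numerical check that the nested min/max rule equals the piecewise
# formulation of the distortion-constrained noise attenuation limit.

def dc_limit_minmax(g_des: float, g_dl: float, g_agc: float) -> float:
    return min(g_des, min(1.0 / g_agc, max(1.0 / g_dl, g_des / g_agc)))

def dc_limit_piecewise(g_des: float, g_dl: float, g_agc: float) -> float:
    if 0.0 < g_agc <= 1.0:
        return g_des
    if g_agc <= g_des * g_dl:
        return g_des / g_agc
    if g_agc <= g_dl:
        return 1.0 / g_dl
    return 1.0 / g_agc

g_des, g_dl = 10 ** (-12 / 20), 10 ** (20 / 20)  # 12 dB attenuation, 20 dB limit
for i in range(1, 400):
    g_agc = i * 0.05  # sweep the AGC gain over all four regions
    diff = dc_limit_minmax(g_des, g_dl, g_agc) - dc_limit_piecewise(g_des, g_dl, g_agc)
    assert abs(diff) < 1e-12
```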

(74) In a second maximum decision 740, the minimum noise suppression filter value 358c is compared to each individual unconstrained noise suppression filter value 356a, so that unconstrained noise suppression values smaller than the minimum noise suppression filter value 358c are set to the minimum noise suppression filter value 358c, thereby obtaining the noise suppression filter values 364. The noise suppression filter value determination as described above is beneficial for avoiding signal distortions due to aggressive noise reduction.

(75) FIG. 8 illustrates a block diagram of an apparatus 800 according to an advantageous embodiment of the invention, offering joint NR/AGC processing with automatic gain control of the unconstrained noise attenuation limit {tilde over (g)}.sub.lim.sup.[UC](m; G.sub.AGC).

(76) The apparatus 800 comprises an audio signal analyzer 830 and a filter 820. Furthermore, an input signal is provided to the filter 820 and processed by a first filter stage 822 to apply noise reduction. Moreover, the output of the first filter stage 822 is provided to the audio signal analyzer 830 and a second filter stage 824 of the filter 820, wherein a gain value is applied.

(77) Furthermore, the filter provides an output signal. The output signal of the first filter stage 822 is used in the audio signal analyzer 830 to compute a voice activity detection 840. Based on the result of the voice activity detection 840, a decision 842 is made either to forward the signal to a signal level computation, which determines the signal level as a characteristic of the audio signal and is used to compute a new AGC gain 844 based on the signal level and a target level, or to keep an old AGC gain 846. The decision whether to compute a new gain or keep an old gain is based on the speech presence in the signal provided to the voice activity detector 840.

(78) The determined gain value is then provided to the second filter stage 824 where it is applied to the signal. Furthermore, the gain value is used in the audio signal analyzer 830 for computing an unconstrained noise attenuation limit, i.e. a minimum noise suppression filter value, based on the gain value and a desired noise attenuation, i.e. a predetermined noise suppression value 234. In addition, using the unconstrained noise attenuation limit, the input signal and the AGC gain, the noise suppression filter values are determined 862 and provided to the first filter stage 822 of the filter 820.

(79) When the AGC triggers a signal amplification (rather than an attenuation), it is also possible to apply the AGC gain during speech periods only, similar to FIG. 17. The AGC gain is then temporarily decreased or directly set to unity during speech pauses. Since the AGC gain is taken into account in the computation of the noise attenuation limit

(80) {tilde over (g)}.sub.lim.sup.[UC](m;G.sub.AGC)=min{g.sub.des; g.sub.des/G.sub.AGC(m)},
it is guaranteed that the noise pumping effect is avoided, even when the AGC gain fluctuates strongly. The described approach has the advantage of ensuring noise reduction even for large AGC gains. Moreover, the described approach avoids the noise pumping effect from which other approaches suffer, namely a quick increase of the noise floor at speech onsets and a rapid decrease at speech offsets.

(81) The estimation of the noise suppression filter values 862 can for example be performed according to FIG. 5 or as described in FIG. 6. The described apparatus 800 is suitable for achieving a predetermined noise suppression and amplifying or attenuating a signal as needed to increase intelligibility.

(82) FIG. 9 illustrates a block diagram of an apparatus 900 according to an advantageous embodiment of the invention, where joint NR and AGC processing with automatic control of the noise attenuation limit under a distortion constraint is performed. Alternatively, the computation of the AGC gain can be carried out based on the unprocessed audio input signal, i.e. before applying noise reduction. The apparatus 900 comprises much of the same functionality as the apparatus 800 in FIG. 8, but for the estimation of the noise suppression filter values 862 an additional parameter is considered: a distortion limit 358a, or more generally a predetermined distortion limit. The apparatus 900 is especially suitable for avoiding signal distortions like speech distortion or musical tones due to aggressive noise suppression introduced by a small minimum noise suppression filter value, potentially caused by a large AGC gain.

(83) FIG. 10 shows a diagram of system responses when the input signal to the system is characterized primarily as noise. In other words, the total noise response as a function of the AGC gain when applying NR and AGC with constrained or unconstrained noise attenuation limit (solid and dashed lines, respectively) is shown.

(84) The line labeled "unconstrained" relates for example to apparatus 800 as an advantageous embodiment of the invention as described in FIG. 8. Furthermore, the line labeled "distortion-constrained" relates for example to apparatus 900 as an advantageous embodiment of the invention as described in FIG. 9. The system responses in FIG. 10 are displayed in logarithmic values in dependence on a gain value given in logarithmic values. FIG. 10 shows that for low gain values (gain values smaller than 0 dB) an attenuation is realized for the overall system response, due to the joint noise reduction and gain control. When the gain value is between 0 dB and the product of the predetermined noise suppression value and the predetermined distortion limit, a constant noise suppression is realized by the unconstrained and distortion-constrained apparatuses equally, for example apparatus 800 and apparatus 900, respectively. When the gain value is between the product of the predetermined noise suppression value and the predetermined distortion limit, and the predetermined distortion limit, the overall system response of the graph labeled "distortion-constrained" increases to 0 dB, for example linearly, while the graph labeled "unconstrained" remains constant at the value of the predetermined noise suppression value. Moreover, for gain values larger than the predetermined distortion limit, the graph labeled "distortion-constrained" remains constant at 0 dB, while the graph labeled "unconstrained" remains constant at the value of the predetermined noise suppression value. In other words, for the distortion-constrained case the overall system response, for an audio signal primarily characterized as noise, can be written as:

(85) G.sub.NR+AGC.sup.[DC](m,k)|.sub.SNR≈0=
G.sub.AGC(m)×g.sub.des if 0<G.sub.AGC(m)≤1,
g.sub.des if 1<G.sub.AGC(m)≤g.sub.des×g.sub.DL,
G.sub.AGC(m)/g.sub.DL if g.sub.des×g.sub.DL<G.sub.AGC(m)≤g.sub.DL,
1 otherwise.

(86) In summary, FIG. 10 shows with the graph labeled "unconstrained", relating for example to apparatus 800, and with the graph labeled "distortion-constrained", relating for example to apparatus 900, that noise is amplified by neither apparatus in situations where the input signal is characterized by noise only. Thereby, an uncomfortable noise amplification can be avoided.
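That the total noise response never exceeds unity can also be verified numerically. The sketch below (illustrative values, linear gains, names not from the patent) multiplies the distortion-constrained limit by the AGC gain and compares the result against the piecewise noise response given above:

```python
# Numerical check: the piecewise total noise response equals the
# distortion-constrained limit times the AGC gain, and never exceeds one.

def dc_limit(g_des: float, g_dl: float, g_agc: float) -> float:
    return min(g_des, min(1.0 / g_agc, max(1.0 / g_dl, g_des / g_agc)))

def noise_response_piecewise(g_des: float, g_dl: float, g_agc: float) -> float:
    if 0.0 < g_agc <= 1.0:
        return g_agc * g_des
    if g_agc <= g_des * g_dl:
        return g_des
    if g_agc <= g_dl:
        return g_agc / g_dl
    return 1.0

g_des, g_dl = 0.25, 16.0  # roughly -12 dB attenuation and 24 dB limit (linear)
for i in range(1, 500):
    g_agc = i * 0.1
    r = dc_limit(g_des, g_dl, g_agc) * g_agc
    assert abs(r - noise_response_piecewise(g_des, g_dl, g_agc)) < 1e-9
    assert r <= 1.0 + 1e-12  # the noise is never amplified at the output
```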

(87) FIG. 11 illustrates a graph with two lines, one labeled “unconstrained” and another labeled “distortion-constrained”, which relate to the minimum noise suppression filter value as described in FIG. 6 or FIG. 7, respectively. In other words, a noise attenuation limit as a function of the AGC gain for the constrained and unconstrained cases (solid and dashed lines, respectively), is shown.

(88) The minimum noise suppression filter value can for example be a noise attenuation limit, given here in logarithmic values. Furthermore, the graphs are depicted in dependence on the gain value in logarithmic values. The graph labeled "unconstrained" is constant at the predetermined noise suppression value for gain values smaller than 0 dB. Moreover, the graph labeled "unconstrained" decreases for gain values greater than 0 dB, for example linearly. Moreover, the graph labeled "distortion-constrained" is constant at the value of the predetermined noise suppression value for gain values smaller than 0 dB, and decreases, for example linearly, for gain values greater than 0 dB and smaller than the product of the predetermined noise suppression value and the predetermined distortion limit, from the predetermined noise suppression value to the inverse of the predetermined distortion limit. Furthermore, the graph labeled "distortion-constrained" remains constant at the value of the inverse of the predetermined distortion limit for gain values between the product of the predetermined noise suppression value and the predetermined distortion limit, and the predetermined distortion limit. In addition, the graph labeled "distortion-constrained" decreases, for example linearly, for gain values greater than the predetermined distortion limit. For the distortion-constrained case, this can be described equivalently as:

(89) {tilde over (g)}.sub.lim.sup.[DC](m;G.sub.AGC)=
g.sub.des if 0<G.sub.AGC(m)≤1,
g.sub.des/G.sub.AGC(m) if 1<G.sub.AGC(m)≤g.sub.des×g.sub.DL,
1/g.sub.DL if g.sub.des×g.sub.DL<G.sub.AGC(m)≤g.sub.DL,
1/G.sub.AGC(m) otherwise.

(90) For comparison, the unconstrained case and the constrained case are shown as a dashed and a solid line, respectively. It can be observed that the distortion-constrained noise attenuation limit behaves like its unconstrained counterpart for low to moderate AGC gains G.sub.AGC(m)≤g.sub.des×g.sub.DL. As the AGC gain increases, {tilde over (g)}.sub.lim.sup.[DC](m; G.sub.AGC) decreases down to 1/g.sub.DL and remains at this level as long as G.sub.AGC(m)≤g.sub.DL. Therefore the distortion constraint is met for AGC gains up to the distortion limit g.sub.DL only. Above that, the noise reduction limit starts again to decrease. This is to ensure that the noise is not amplified at the output compared to the input, which becomes apparent if we derive the overall system response G.sub.NR+AGC.sup.[DC](m, k) depicted in FIG. 10 for noise segments characterized by a low SNR. In this case, we can assume that the NR filter {tilde over (G)}.sub.NR.sup.[DC](m, k; G.sub.AGC) reaches its minimum {tilde over (g)}.sub.lim.sup.[DC](m; G.sub.AGC). Hence, the total noise response can be written as:

(91) G.sub.NR+AGC.sup.[DC](m,k)|.sub.SNR≈0=
G.sub.AGC(m)×g.sub.des if 0<G.sub.AGC(m)≤1,
g.sub.des if 1<G.sub.AGC(m)≤g.sub.des×g.sub.DL,
G.sub.AGC(m)/g.sub.DL if g.sub.des×g.sub.DL<G.sub.AGC(m)≤g.sub.DL,
1 otherwise,
where it is visible that the total noise response increases for increasing AGC gains, but does not exceed one, ensuring that the noise is not amplified. The total noise response is represented as a function of the AGC gain in FIG. 10 as a solid line. The unconstrained noise response is shown as a dashed line for comparison in FIG. 10.

(92) FIG. 11 illustrates an advantageous dependence of the minimum noise suppression filter value on the gain value to enable a flexible noise reduction according to an applied (AGC) gain. Furthermore, the graph labeled “distortion-constrained” and the graph labeled “unconstrained” exhibit the ability to keep the minimum noise suppression filter value substantially above 0, therefore avoiding signal distortions.

(93) FIG. 12 illustrates signal levels after various processing for example with the apparatuses 100, 300, 400, 800 or 900. Moreover, speech and noise levels before NR+AGC (1210) and after NR/AGC processing (1220, 1230, 1240, 1250, 1260) are depicted.

(94) Graph 1210 is an illustrative example of an audio signal, describing for example the audio signal 110. Furthermore, graph 1210 shows a constant noise level over time and two phases in which speech is active. The speech, when active, has a higher signal level than the noise, resulting in a positive signal-to-noise ratio (SNR). In addition, graph 1210 shows a dashed line labeled target level, to which for example a speech signal is supposed to be adjusted to enable a comfortable listening experience.

(95) Graph 1220 shows the signal as displayed in graph 1210 after being processed by some noise reduction and gain control, for example some basic automatic gain control scheme. A higher SNR is obtained in periods of speech activity. In addition, the noise level is also amplified towards the target level, resulting in an uncomfortable noise amplification.

(96) Graph 1230 displays the output levels of a signal, for example a signal as described in graph 1210, after processing, where for example the automatic gain control exploits a voice activity detection to assist the automatic gain control update. Therefore, in the first time interval the noise level is not amplified towards the target level; an amplification only starts after speech activity is detected.

(97) Graph 1240 shows the output levels of a signal, for example the input signal as described in graph 1210, after signal processing, where the processing for example comprises a noise reduction and an automatic gain control, wherein the automatic gain control exploits a voice activity detection to apply the automatic gain control in speech-only phases.

(98) Graph 1250 shows output levels of an input signal, for example as depicted in graph 1210, after signal processing, where the signal processing for example comprises unconstrained noise reduction and automatic gain control as described for example in FIG. 8 for apparatus 800. Thereby, a large increase in SNR is observable in phases of speech activity. Furthermore, the noise level is at a substantially constant level and reduced when compared to graph 1210.

(99) Graph 1260 shows output levels, for example of an input signal as depicted in graph 1210, after signal processing, wherein the signal processing comprises joint noise reduction and automatic gain control under a distortion constraint as described for example in FIG. 9 for apparatus 900. A large signal-to-noise ratio increase compared to graph 1210 can be obtained. Furthermore, the noise level is at a substantially constant level. Moreover, the distortion constraint avoids uncomfortable signal distortions in the output of the processing.

(100) FIG. 13 illustrates a block diagram of a two-way, full-duplex speech communication system 1300 according to an advantageous embodiment of the invention. The system comprises a near-end and a far-end side, and a transmission in between. Furthermore, the near-end side and the far-end side each comprise a loudspeaker and a microphone, as well as an audio signal processing unit, wherein the audio signal processing unit can comprise one of the apparatuses 100, 300, 400, 800, 900.

(101) On the near-end side a person speaks into the microphone and receives audio information through the loudspeaker. Additionally, on the far-end side another person speaks into the microphone and receives audio information transmitted from the near-end side through the loudspeaker, potentially concurrently, since it is a full-duplex system. The system 1300 facilitates a comfortable listening experience and improves speech intelligibility of a speech communication taking place between the near-end and far-end sides. Especially for a hands-free scenario, where the distance between a user and the microphone can vary, the described embodiment can be suitable to improve intelligibility.

(102) FIG. 14 illustrates a block diagram of a signal processing chain that can be employed as a near-end or far-end side of a speech communication system, e.g. the speech communication system 1300.

(103) FIG. 15 illustrates a block diagram of a signal processing chain; it shows a basic configuration in which NR and AGC processing are applied independently. First, an input signal is subject to a noise reduction which is based on a predetermined noise suppression value, here a desired noise attenuation. The resulting signal after noise reduction is used to compute a signal level and to compute a gain value, for example an automatic gain control gain, based on the computed signal level and a predetermined target level. In a next step, the computed gain value, for example the computed AGC gain, is applied to the signal after the noise reduction is performed.

(104) Automatic gain control can be applied at the output of the noise reduction module, for example on a frame-by-frame basis, using the three-step procedure depicted in FIG. 15 and detailed below:
1. Level computation: The signal level, denoted by L(m), is computed at the AGC input (here the noise reduction (NR) output). A measure for the signal level can be the signal variance. Alternatively, spectral weighting can be applied to mimic the human auditory system, yielding a measure of the perceived loudness.
2. Gain computation: A scalar gain is derived by comparing the current input signal level L(m) with a pre-defined target speech level L.sub.tar, described for apparatuses 300 and 400 as the target value. This can be achieved as follows:

(105) G.sub.AGC(m)=β×G.sub.AGC(m−1)+(1−β)×L.sub.tar/L(m),  (6)
where G.sub.AGC(m) is the AGC gain computed at frame m and β is a forgetting factor used to temporally smooth the AGC gain (with 0≤β<1).
3. Gain multiplication: The last step comprises a multiplication of the input signal with the AGC gain. This can be done equivalently either in the time domain or in the frequency domain.

(106) The above procedure results in an amplification of the input audio signal when the AGC input level L(m) is below the target level L.sub.tar. In contrast, some attenuation is applied when the signal level L(m) is above the target level L.sub.tar. Hence, the AGC gain is automatically adjusted over time and is therefore time varying. Furthermore, the described gain computation can be used, in part or completely, in the corresponding modules of the described apparatuses 300, 400, 800 and 900. Moreover, for usage in the mentioned apparatuses, modifications to the described method can also be applied, for example employing a voice activity detection. Furthermore, note the absence of interaction between the AGC and NR modules, which is emphasized by the dashed horizontal line in FIG. 15.
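The recursive gain update of equation (6) can be sketched as follows. The frame levels, target level and forgetting factor are illustrative values, not parameters from the patent:

```python
# Sketch of the frame-wise AGC gain update of eq. (6), with a constant
# input level as a simple illustrative scenario.

def agc_gain(prev_gain: float, level: float, target: float, beta: float = 0.9) -> float:
    """G_AGC(m) = beta * G_AGC(m-1) + (1 - beta) * L_tar / L(m)."""
    return beta * prev_gain + (1.0 - beta) * target / level

g = 1.0
for _ in range(200):                 # constant input level at half the target ...
    g = agc_gain(g, level=0.5, target=1.0)
assert abs(g - 2.0) < 1e-6           # ... the gain converges to L_tar / L(m) = 2
```

With a level above the target, the same recursion converges to a gain below one, i.e. an attenuation, matching the behavior described above.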

(107) The drawback of this approach is that, when for instance the background noise level after NR filtering is lower than the speech level, the measured level L(m) at the AGC input decreases during speech pauses, which in turn causes an increase of the AGC gain during speech pauses, followed by a decrease of the AGC gain at the speech onsets. This phenomenon is illustrated in FIG. 12, where the graph 1210 shows the level of the speech and noise components in an input audio signal (before NR). Graph 1220 shows the speech and noise levels after applying NR and AGC according to the above procedure. Despite the constant speech and noise levels at the input, we see that this method produces a time-varying speech level, which is not the desired behavior for an AGC. Moreover, it produces a time-varying noise level, which results in a very unpleasant noise pumping effect in the output signal. To solve these problems, a Voice Activity Detection (VAD) is used, as explained for FIGS. 3, 4, 8, 9, 16 and 17.

(108) FIG. 16 illustrates a block diagram of an apparatus for processing a signal; it shows separate NR and AGC processing with a voice activity detection triggering the gain update. In a first step, an input signal is subject to a noise reduction, which is based on a predetermined noise suppression value, for example a desired noise attenuation. In a next step, the input signal after being subject to noise reduction is used to perform a voice activity detection, on which a speech activity decision is based. When speech has been detected, a signal level is computed based on the input signal after noise reduction. In a further step, assuming speech activity, a gain value, for example a new automatic gain control gain, is determined based on the computed signal level and a predetermined target level. When no speech has been detected by the voice activity detection, a gain value from a previous time instance is employed. In a final step, the gain value, either the gain value from a previous time instance or the gain value computed for the current time instance, is applied to the signal after noise reduction, thereby providing an output signal.

(109) To avoid a noise pumping effect and provide a consistent speech level, a VAD (Voice Activity Detection) can be applied to bypass the gain update during speech pauses, as shown in FIG. 16. Provided that speech activity can be detected reliably, the AGC gain can then be adjusted during active speech segments only, while keeping the AGC gain constant during speech pauses. As depicted in graph 1230, this method produces a consistent speech level and avoids the noise pumping effect (constant noise level after convergence). However, it can cause a significant increase of the absolute noise level for large AGC gains, which becomes especially noticeable during speech pauses in practice.
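The VAD-gated gain update can be sketched as follows. This is a minimal illustration; representing the VAD decision as a boolean flag, and all numeric values, are assumptions made for the example:

```python
# Sketch of the gated update of FIG. 16: the AGC gain is recomputed only
# when speech is detected; during speech pauses the previous gain is kept.

def update_gain(prev_gain: float, level: float, target: float,
                speech_active: bool, beta: float = 0.9) -> float:
    if not speech_active:
        return prev_gain                  # bypass the update in speech pauses
    return beta * prev_gain + (1.0 - beta) * target / level

g = 1.0
g = update_gain(g, level=0.25, target=1.0, speech_active=False)
assert g == 1.0                           # pause: gain frozen, no noise pumping
g = update_gain(g, level=0.5, target=1.0, speech_active=True)
assert abs(g - (0.9 * 1.0 + 0.1 * 2.0)) < 1e-12  # speech: gain adapts toward target
```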

(110) FIG. 17 illustrates a block diagram for signal processing which is analog to the block diagram described in FIG. 16 with separate NR and AGC processing with VAD triggering the entire AGC processing. Furthermore, the block diagram in FIG. 17 describes setting a gain value to 1 when no speech has been detected. To void noise amplification during speech pauses, as depicted in Graph 1230, the AGC gain is only applied during speech periods, as presented in FIG. 17 (applying unity during speech pauses is equivalent to not apply an AGC gain). This approach provides a low speech level and prevents amplification of the noise during speech pauses. However, it results again in a time-varying scaling of the noise (see graph 1240), which is perceived as an annoying noise pumping effect in practice.

(111) FIG. 18 illustrates a near-end or a far-end side of a communication system according to an advantageous embodiment of the invention, e.g. the speech communication system 1300 as described in FIG. 13. The far-end or the near-end side can be realized with a similar structure. Therefore, only one side is described but all functionalities can also be available on the other side.

(112) The considered side comprises a loudspeaker 1810 for delivering audio content to a listener and a microphone 1820 for picking up a desired signal, e.g. a speech signal from a talking person. In addition, an echo control system 1830 suppresses echoes in the microphone signal based on the loudspeaker signal. After echo control 1830, a joint noise reduction and gain control 1840 processes the signal. The joint noise reduction and gain control 1840 can be realized, for example, by apparatuses 100, 300, 400, 800 and 900.

(113) In addition, a comfort noise system 1850 applies a comfort noise to the signal after joint noise reduction and gain control 1840, to enable a comfortable listening experience for a user on the far-end, for example when no desired signal component is present in the acquired microphone signal (i.e. far-end only activity). In summary, the system described in FIG. 18 processes the signal so that a signal which is, for example, transmitted to a far-end side carries an intelligible speech component and provides a comfortable listening experience for a user on the far-end side.

(114) FIG. 19 illustrates a block diagram of a far-end side of a communication system according to an advantageous embodiment of the invention. The system in FIG. 19 comprises a loudspeaker 1810 configured to deliver audio content to a listener and a microphone 1820 configured to record an audio signal which, for example, contains speech content. Furthermore, the system described in FIG. 19 comprises a joint noise reduction and gain control 1840 for the signal delivered to the loudspeaker 1810. In addition, the signal recorded by the microphone 1820 is subject to an echo control 1830, which is based on the signal delivered to the loudspeaker 1810, and to a comfort noise system 1850. The echo control 1830 and the comfort noise system 1850 comprise the same functionality as described in FIG. 18. Moreover, the joint noise reduction and gain control 1840 can, for example, be realized by apparatuses 100, 300, 400, 800 or 900. Thereby, the system described in FIG. 19 provides an intelligible speech signal when the audio signal delivered to the loudspeaker comprises a speech component. Furthermore, the noise reduction part provides a comfortable listening experience.

(115) Further embodiments rely on a processing of the audio signal in a sequence of frames. The audio signal analyzer (130; 830; 930) is configured to analyze the audio signal in the sequence of frames comprising a first frame and a second frame following the first frame in time, to determine, for the first frame, a first plurality of noise suppression filter values, and for the second frame, a second plurality of noise suppression filter values. The analyzer is configured to determine the first plurality of noise suppression filter values so that the noise suppression filter values of the first plurality of noise suppression filter values are greater than or equal to a first minimum noise suppression filter value (130b′; 240; 358c, 360a) determined for the first frame, and so that the first minimum noise suppression filter value depends on a first characteristic of the first frame of the audio signal (130c′). The analyzer is furthermore configured to determine the second plurality of noise suppression filter values so that the noise suppression filter values of the second plurality of noise suppression filter values are greater than or equal to a second minimum noise suppression filter value (130b′; 240; 358c, 360a) determined for the second frame, and so that the second minimum noise suppression filter value depends on a second characteristic of the second frame of the audio signal (130c′). The filter (120; 310; 410; 820) is configured for filtering the audio signal in the sequence of frames, wherein a first filter for the first frame is adjusted based on the first plurality of noise suppression filter values, and wherein a second filter for the second frame is adjusted based on the second plurality of noise suppression filter values. The filter (120; 310; 410; 820) is furthermore configured to filter the first frame of the audio signal with the first filter and to filter the second frame of the audio signal with the second filter.
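The per-frame constraint described above, together with the maximum decision of claim 2 and the gain dependence of claim 1, can be sketched as follows; the 1/gain rule and all names are illustrative assumptions of this sketch, not the claimed method:

```python
def constrained_filter_values(unconstrained, minimum):
    """Per-band maximum decision (cf. claim 2): each noise suppression
    filter value is floored at the frame's minimum value, which is equal
    for all bands of that frame."""
    return [max(h, minimum) for h in unconstrained]

def min_value_from_gain(agc_gain, desired_floor=0.1):
    """Illustrative rule for claim 1: the minimum noise suppression
    filter value decreases with an increasing gain value. The simple
    desired_floor / gain law is an assumption of this sketch."""
    return min(1.0, desired_floor / agc_gain)
```

Recomputing `minimum` per frame from that frame's gain value yields the first and second minimum noise suppression filter values of the two consecutive frames.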

(116) Concluding, some embodiments of the present invention can be summarized in a list. In an advantageous embodiment, NR is applied first and comprises the following steps: 1. Receive an audio input signal. 2. Determine a noise attenuation limit based on an AGC gain determined in the previous time frame, a desired noise attenuation amount, and optionally also based on a distortion limit. 3. Determine a noise reduction filter based on the audio input signal and the noise attenuation limit. 4. Determine an AGC gain based on a target signal level, optional voice activity information, and an audio signal, a) the audio signal being the audio input signal, or b) the audio signal being a noise-reduced audio signal obtained by applying the noise reduction filter to the audio input signal, the optional voice activity information being used to optionally decrease the AGC gain during speech pauses. 5. Generate an output audio signal by applying the noise reduction filter and the AGC gain obtained in the previous frame to the audio input signal.
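The five steps of the NR-first embodiment can be sketched as below; the Wiener-like spectral gain, the RMS level estimate, and all function and parameter names are assumptions made for illustration only:

```python
import numpy as np

def nr_first(frame, prev_agc_gain, desired_attenuation, target_level,
             speech_active=True):
    """Sketch of the NR-first embodiment (steps 1-5).

    `desired_attenuation` is a linear noise attenuation floor
    (e.g. 0.1 for -20 dB); the spectral-gain rule is a toy stand-in.
    """
    # Step 2: noise attenuation limit from the previous frame's AGC gain
    limit = min(1.0, desired_attenuation / prev_agc_gain)
    # Step 3: noise reduction filter, floored at the limit (one gain per band)
    spectrum = np.fft.rfft(frame)
    snr_gain = np.abs(spectrum) / (np.abs(spectrum) + 1.0)  # toy Wiener-like gain
    nr_filter = np.maximum(snr_gain, limit)
    nr_frame = np.fft.irfft(nr_filter * spectrum, n=len(frame))
    # Step 4 (variant b): AGC gain from the noise-reduced signal,
    # frozen during speech pauses
    if speech_active:
        level = np.sqrt(np.mean(nr_frame ** 2))
        agc_gain = target_level / max(level, 1e-12)
    else:
        agc_gain = prev_agc_gain
    # Step 5: apply the NR filter and the previous frame's AGC gain
    return prev_agc_gain * nr_frame, agc_gain
```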

(117) Furthermore, another advantageous embodiment according to the invention is characterized by applying AGC first, according to: 1. Receive an audio input signal. 2. Determine an AGC gain based on a target signal level, optional voice activity information, and the audio input signal, the optional voice activity information being used to optionally decrease the AGC gain during speech pauses. 3. Determine a noise attenuation limit a) based on a desired noise attenuation amount and the current AGC gain, or b) based on a desired noise attenuation amount, a distortion limit, and the current AGC gain. 4. Determine a noise reduction filter based on the audio input signal and the noise attenuation limit. 5. Generate an output audio signal by applying the noise reduction filter and the current AGC gain to the audio input signal.
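The AGC-first variant differs in that the attenuation limit depends on the current, not the previous, gain; a sketch under the same illustrative assumptions (toy spectral gain, RMS level, assumed names):

```python
import numpy as np

def agc_first(frame, target_level, desired_attenuation, speech_active=True):
    """Sketch of the AGC-first embodiment: the AGC gain is computed on
    the unprocessed input, and the noise attenuation limit then depends
    on that current gain (step 3, variant a)."""
    # Step 2: AGC gain from the input frame; unity during speech pauses
    level = np.sqrt(np.mean(np.square(frame)))
    agc_gain = target_level / max(level, 1e-12) if speech_active else 1.0
    # Step 3a: limit from desired attenuation and the current AGC gain
    limit = min(1.0, desired_attenuation / agc_gain)
    # Steps 4-5: floor the NR filter at the limit, apply filter and gain
    spectrum = np.fft.rfft(frame)
    nr_filter = np.maximum(np.abs(spectrum) / (np.abs(spectrum) + 1.0), limit)
    nr_frame = np.fft.irfft(nr_filter * spectrum, n=len(frame))
    return agc_gain * nr_frame
```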

(118) Although the present invention has been described in the context of block diagrams where the blocks represent actual or logical hardware components, the present invention can also be implemented by a computer-implemented method. In the latter case, the blocks represent corresponding method steps where these steps stand for the functionalities performed by corresponding logical or physical hardware blocks.

(119) Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.

(120) Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disc, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.

(121) Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.

(122) Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may, for example, be stored on a machine readable carrier.

(123) Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.

(124) In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.

(125) A further embodiment of the inventive method is, therefore, a data carrier (or a non-transitory storage medium such as a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.

(126) A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may, for example, be configured to be transferred via a data communication connection, for example, via the internet.

(127) A further embodiment comprises a processing means, for example, a computer or a programmable logic device, configured to, or adapted to, perform one of the methods described herein.

(128) A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.

(129) A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.

(130) In some embodiments, a programmable logic device (for example, a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.

(131) While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.

REFERENCES

(132)
[1] E. Hänsler and G. Schmidt, “Hands-free telephones—Joint Control of Echo Cancellation and Postfiltering,” Signal Processing, vol. 80, no. 11, pp. 2295-2305, September 2000.
[2] F. Küch, E. Mabande and G. Enzner, “State-space architecture of the partitioned-block-based acoustic echo controller,” in Proc. IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2014.
[3] A. Favrot, C. Faller, M. Kallinger, F. Küch, and M. Schmidt, “Acoustic Echo Control Based on Temporal Fluctuations of Short-Time Spectra,” in Proc. International Workshop on Acoustic Echo and Noise Control (IWAENC), September 2008.
[4] Y. Ephraim and D. Malah, “Speech enhancement using a minimum mean-square error short-time spectral amplitude estimator,” IEEE Trans. Acoust., Speech, Signal Process., vol. 32, pp. 1109-1121, December 1984.
[5] Guangji Shi and Changxue Ma, “Subband Comfort Noise Insertion for an Acoustic Echo Suppressor,” in Proc. 133rd Audio Engineering Society Convention, October 2012.
[6] M. Matsubara and K. Nomoto, “Audio signal processing device and noise suppression processing method in automatic gain control device,” Patent Publication No. US 2008/0147387 A1.