Dual-microphone methods for reverberation mitigation

11322168 · 2022-05-03

Abstract

A dual microphone signal processing arrangement for reducing reverberation is described. Time domain microphone signals are developed from a pair of sensing microphones and converted to the time-frequency domain to produce complex valued spectra signals. A binary gain function based on frequency-specific energy ratios between the spectra signals is applied to the spectra signals to produce transformed spectra signals. A sigmoid gain function based on an inter-microphone coherence value between the transformed spectra signals is then applied to the transformed spectra signals to produce coherence adapted spectra signals. Finally, an inverse time-frequency transformation is applied to the coherence adapted spectra signals to produce time-domain reverberation-compensated microphone signals with reduced reverberation components.

Claims

1. A method of dual microphone signal processing to reduce reverberation, the method comprising: developing time domain microphone signals from a pair of sensing microphones, the microphone signals having sound source components and reverberation components; converting the microphone signals to time-frequency domain to produce complex value spectra signals; determining frequency-specific energy ratios between the spectra signals; applying a binary gain function to the spectra signals based on the energy ratios to produce transformed spectra signals; determining an inter-microphone coherence value between the transformed spectra signals; applying a sigmoid gain function to the transformed spectra signals based on the inter-microphone coherence value to produce coherence adapted spectra signals; and applying an inverse time-frequency transformation to the coherence adapted spectra signals to produce time-domain reverberation-compensated microphone signals with reduced reverberation components.

2. The method according to claim 1, wherein applying a sigmoid gain function includes using a two-dimensional enhancement image filter to produce an edge image enhanced sigmoid gain function that is applied to the transformed spectra signals.

3. The method according to claim 1, wherein applying the binary gain function includes comparing individual frequency-specific energy ratios to a selected threshold value to reduce the reverberation components.

4. The method according to claim 1, wherein the inter-microphone coherence value is a function of power spectral densities of the transformed spectra signals.

5. The method according to claim 1, wherein the sigmoid gain function is a tunable two-parameter logistic sigmoid function.

6. The method according to claim 1, wherein the sigmoid gain function is adaptively determined for specific frequencies in the transformed spectra signals.

7. The method according to claim 1, further comprising: generating an audio output signal from the reverberation-compensated microphone signals.

8. A signal processing system for processing dual microphone signals to reduce reverberation, the system comprising: a pair of sensing microphones that develop time domain microphone signals having sound source components and reverberation components; a spectral converter that converts the microphone signals to time-frequency domain to produce complex value spectra signals; a binary masking module that determines frequency-specific energy ratios between the spectra signals and applies a binary gain function to the spectra signals based on the energy ratios to produce transformed spectra signals; a soft masking module that determines an inter-microphone coherence value between the transformed spectra signals and applies a sigmoid gain function to the transformed spectra signals based on the inter-microphone coherence value to produce coherence adapted spectra signals; and a time domain transform module that applies an inverse time-frequency transformation to the coherence adapted spectra signals to produce time-domain reverberation-compensated microphone signals with reduced reverberation components.

9. The system according to claim 8, wherein the soft masking module uses a two-dimensional enhancement image filter to produce an edge image enhanced sigmoid gain function that is applied to the transformed spectra signals.

10. The system according to claim 8, wherein the binary masking module compares individual frequency-specific energy ratios to a selected threshold value to reduce the reverberation components.

11. The system according to claim 8, wherein the soft masking module determines the inter-microphone coherence value as a function of power spectral densities of the transformed spectra signals.

12. The system according to claim 8, wherein the soft masking module determines the sigmoid gain function as a tunable two-parameter logistic sigmoid function.

13. The system according to claim 8, wherein the soft masking module adaptively determines the sigmoid gain function for specific frequencies in the transformed spectra signals.

14. The system according to claim 8, further comprising: an audio output module that generates an audio output signal from the reverberation-compensated microphone signals.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a schematic illustration of an embodiment of the present invention based on dual microphone arrangements.

(2) FIG. 2 shows instantaneous ER values (dB) calculated between the two microphones along with the preset threshold value fixed at −10 dB (dashed line) for the frequency bin centered at 1,500 Hz.

(3) FIG. 3 shows a functional block diagram of the signal processing arrangement according to an embodiment of the present invention.

(4) FIG. 4 shows a functional block diagram of the signal processing arrangement according to another embodiment of the present invention.

(5) FIG. 5A shows magnitude of coherence values obtained inside an anechoic environment.

(6) FIG. 5B shows magnitude of coherence values obtained inside a reverberant environment.

(7) FIG. 6 is a plot of the gain curve representing a typical sigmoidal function defined for different values of parameter γ which controls the sigmoidal slope.

(8) FIG. 7 is an illustration of the fit of the sigmoid function to the cumulative distribution function calculated from the coherence values between the two microphone sensors.

(9) FIG. 8 shows a functional block diagram of the signal processing arrangement according to another embodiment of the present invention.

(10) FIG. 9 is a spectrogram of a 1.0 second phrase recorded inside an acoustical enclosure with reverberation time equal to 0.6 seconds.

(11) FIG. 10 is a spectrogram of the same phrase after processing with the reverberation mitigation signal processing scheme.

DETAILED DESCRIPTION

(12) Various embodiments of the present invention are directed to techniques for dual microphone signal processing to reduce reverberation. Although the following is described in the specific context of two microphone signals, it will be understood that the invention is not limited in that regard, and may equally be applied to contexts with more than two microphones. For example, as shown in FIG. 1:
Mic A=Mic 1 and Mic B=Mic 3, referred to as the front-left and rear-left configuration;
Mic A=Mic 2 and Mic B=Mic 4, referred to as the front-right and rear-right configuration;
Mic A=Mic 1 and Mic B=Mic 2, referred to as the front-left and front-right configuration;
Mic A=Mic 3 and Mic B=Mic 4, referred to as the rear-left and rear-right configuration;
Mic A=Mic 1 and Mic B=Mic 4, referred to as the front-left and rear-right configuration;
Mic A=Mic 2 and Mic B=Mic 3, referred to as the front-right and rear-left configuration.

(13) Two time domain microphone signals x_A(n) and x_B(n) from the first and second sensing microphones are transformed to the time-frequency (T-F) domain using the short-time Fourier transform (STFT) to produce complex valued spectra X_A(ω,k) and X_B(ω,k), where ω represents the frequency band and k denotes the time frame. The concept of time-frequency analysis is well-known within the art.
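The STFT conversion described above can be sketched as follows. The 16 kHz sampling rate, 512-sample frame length, and 50% overlap are illustrative assumptions, not values specified in the patent.

```python
import numpy as np
from scipy.signal import stft

def to_time_frequency(x_a, x_b, fs=16000, n_fft=512):
    """Convert a pair of time-domain microphone signals to complex
    STFT spectra X_A(w, k) and X_B(w, k).  Frame length, overlap,
    and sampling rate are illustrative choices."""
    _, _, X_a = stft(x_a, fs=fs, nperseg=n_fft, noverlap=n_fft // 2)
    _, _, X_b = stft(x_b, fs=fs, nperseg=n_fft, noverlap=n_fft // 2)
    # Each spectrum has shape (n_fft // 2 + 1 frequency bins, n_frames).
    return X_a, X_b
```

The one-sided STFT is used since the microphone signals are real-valued.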

(14) A selection criterion can be based on frequency-specific energy ratios (ER) between the two sensing microphones (which may be, for example, placed on each side of the head in the case of hearing devices) and can be defined by utilizing the time-frequency transformations of the inputs to the first and second microphone signals. This criterion can be computed separately for each specific frequency bin as follows:

(15) ER(ω,k) = 10 log₁₀( |X_A(ω,k)|² / |X_B(ω,k)|² )  (1)
In the specific case of two sensing microphones placed on opposite sides of the head, the ER criterion exploits the energy difference between the two sides arising due to the acoustic shadow of the head. In the case of the two sensor elements being placed in the same audio processor (e.g., end-fire array), the ER criterion relies on the energy difference due to sound propagation, meaning that the signal from the rear microphone needs to be appropriately delayed by the time that it takes the sound to travel between the two microphone elements. In both cases, by utilizing the ER metric described in Eq. (1), only T-F regions corresponding to signals originating from the front of the listener are retained. In consequence, this produces T-F units with a higher overall SNR.
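The ER criterion of Eq. (1) takes only a few lines of NumPy; the `eps` regularizer guarding against empty bins is an implementation choice, not part of the patent.

```python
import numpy as np

def energy_ratio_db(X_a, X_b, eps=1e-12):
    """Frequency-specific energy ratio of Eq. (1), computed per T-F bin:
    ER(w, k) = 10 log10(|X_A(w, k)|^2 / |X_B(w, k)|^2).
    eps avoids division by zero in silent bins (an added safeguard)."""
    return 10.0 * np.log10((np.abs(X_a) ** 2 + eps) /
                           (np.abs(X_b) ** 2 + eps))
```

For an end-fire pair, the rear spectrum would be delayed to time-align the front-arriving sound before this ratio is formed, as the text notes.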

(16) To remove the additive reverberation components present in the microphone signal recorded from the two sensing microphones, a comparison of the individual frequency-specific ER values against an empirically determined threshold value, T, can be carried out. FIG. 2 shows a specific example of the instantaneous ER values (dB) calculated between the two microphones along with the preset threshold value fixed at −10 dB (dashed line) for the frequency bin centered at 1500 Hz.

(17) According to an embodiment as shown in FIG. 3, the two complex valued spectra signals from the two sensing microphones (microphones A and B) are processed by a binary time-frequency mask, or equivalently a binary gain function, denoted by G_1(ω,k). This mask (or gain) takes the value of one when ER(ω,k) > T and zero otherwise:

(18) G_1(ω,k) = { 1, if ER(ω,k) > T; 0, otherwise }  (2)

where T represents the threshold value, expressed in dB. The threshold parameter T may be a scalar or a vector containing frequency-specific thresholds. In one specific embodiment shown in FIG. 3, the calculated gain G_1(ω,k) is applied to the time-frequency distribution of the first microphone spectra signal X_A(ω,k) and the time-frequency distribution of the second microphone spectra signal X_B(ω,k). This produces a new set of transformed spectra signals that can subsequently be transformed back to the time-domain using an inverse time-frequency transformation.
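The binary masking stage of Eq. (2) can be sketched as below; the −10 dB default threshold mirrors the example value shown in FIG. 2, and the `eps` regularizer is an implementation choice.

```python
import numpy as np

def apply_binary_mask(X_a, X_b, threshold_db=-10.0, eps=1e-12):
    """Eq. (2): G1(w, k) = 1 where ER(w, k) > T, else 0, applied to
    both spectra.  threshold_db may be a scalar or a per-frequency
    column vector of thresholds."""
    er_db = 10.0 * np.log10((np.abs(X_a) ** 2 + eps) /
                            (np.abs(X_b) ** 2 + eps))
    g1 = (er_db > threshold_db).astype(float)
    # The same binary gain multiplies both microphone spectra.
    return g1 * X_a, g1 * X_b
```

T-F units failing the threshold are zeroed outright; the retained units feed the soft masking stage described next.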

(19) In another embodiment of the present invention illustrated in FIG. 4, the inter-microphone coherence can be obtained from the first microphone signal and the second microphone signal. From the time-frequency complex valued spectra X_A(ω,k) and X_B(ω,k) of the first and second sensing microphones (microphones A and B), the inter-microphone coherence function can be written as a function of the power spectral densities:

(20) Γ_AB(ω,k) = |Φ_AB(ω,k)| / √( Φ_AA(ω,k) Φ_BB(ω,k) )  (3)

where Φ_AA(ω,k), Φ_BB(ω,k) and Φ_AB(ω,k) are the exponentially weighted short-term auto-power and cross-power spectral density functions defined as:

Φ_AA(ω,k) = α Φ_AA(ω,k−1) + (1−α) |X_A(ω,k)|²  (4)
Φ_BB(ω,k) = α Φ_BB(ω,k−1) + (1−α) |X_B(ω,k)|²  (5)
Φ_AB(ω,k) = α Φ_AB(ω,k−1) + (1−α) X_B(ω,k) X_A*(ω,k)  (6)

where 0 ≤ α ≤ 1 is the smoothing parameter and * denotes the complex conjugate.

(21) The coherence is a function of frequency with values between 0 (for fully incoherent signals) and 1 (for fully coherent signals); it indicates how well the signal recorded at microphone A corresponds to the signal recorded at microphone B in each separate frequency bin. An example of the magnitude of coherence values between the first and second microphone signals recorded inside an anechoic environment is provided in FIG. 5A; a corresponding example recorded inside a reverberant environment with a reverberation time of 0.6 seconds is provided in FIG. 5B.
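Eqs. (3)-(6) amount to a frame-recursive computation, sketched below. The smoothing constant α = 0.9 and the small denominator regularizer are assumed values, not specified in the patent.

```python
import numpy as np

def intermic_coherence(X_a, X_b, alpha=0.9):
    """Inter-microphone coherence of Eq. (3) using the exponentially
    weighted auto-/cross-PSD recursions of Eqs. (4)-(6).
    alpha is an assumed smoothing constant."""
    n_freq, n_frames = X_a.shape
    phi_aa = np.zeros(n_freq)
    phi_bb = np.zeros(n_freq)
    phi_ab = np.zeros(n_freq, dtype=complex)
    gamma = np.zeros((n_freq, n_frames))
    for k in range(n_frames):
        phi_aa = alpha * phi_aa + (1 - alpha) * np.abs(X_a[:, k]) ** 2
        phi_bb = alpha * phi_bb + (1 - alpha) * np.abs(X_b[:, k]) ** 2
        phi_ab = alpha * phi_ab + (1 - alpha) * X_b[:, k] * np.conj(X_a[:, k])
        # Eq. (3); values lie in [0, 1] by the Cauchy-Schwarz inequality.
        gamma[:, k] = np.abs(phi_ab) / np.sqrt(phi_aa * phi_bb + 1e-12)
    return gamma
```

Identical inputs drive the coherence toward 1, while diffuse reverberant energy decorrelates the two channels and pulls it toward 0, which is what the mapping stage below exploits.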

(22) After the coherence values have been obtained, a sigmoidal mapping stage is subsequently applied to the coherence values to construct a coherence-to-gain mapping through the use of a tunable two-parameter logistic sigmoid gain function denoted by G_2(ω,k):

(23) G_2(ω,k) = 1 / ( 1 + exp[ −γ(ω) ( Γ_AB(ω,k) − β(ω) ) ] )  (7)
where parameter γ controls the sigmoidal slope and parameter β denotes the offset along the horizontal axis. A steep sigmoidal function, characterized by a large value of γ, will suppress samples with low coherence by applying a low gain factor while retaining samples with high coherence values. A less aggressive mapping, defined by a relatively small value such as γ = 1, applies a more nearly linear mapping, so that samples with low coherence values are attenuated only mildly relative to samples with high coherence values. FIG. 6 plots the gain curves of the sigmoid function described in Eq. (7) for different values of parameter γ. Note that in this example parameter β, which represents the inflection point of the sigmoid, is kept constant at 0.5.
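The coherence-to-gain mapping of Eq. (7) is a one-liner; the default slope and offset below are illustrative, with β = 0.5 matching the inflection point held constant in FIG. 6.

```python
import numpy as np

def sigmoid_gain(gamma_ab, slope=10.0, offset=0.5):
    """Coherence-to-gain mapping of Eq. (7).  slope (the patent's
    gamma parameter) and offset (beta) may be scalars or
    per-frequency vectors; the defaults are illustrative."""
    return 1.0 / (1.0 + np.exp(-slope * (gamma_ab - offset)))
```

Passing a per-frequency vector for `slope` and `offset` yields the frequency-adaptive variant described next.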

(24) Both parameters γ and β may be determined adaptively for each frequency band ω. First, to adaptively determine the optimal values for γ and β in Eq. (7), the upper third quartile (Q_3) of the inter-microphone coherence values is calculated for each frequency band:

Γ̂_AB(ω) = Q_3[Γ_AB(ω,k)]  (8)

Next, nonlinear least squares regression is used to fit the shape of a Gaussian cumulative distribution function to the upper third quartile of the inter-microphone coherence. FIG. 7 illustrates the fit of the sigmoid function to the cumulative distribution function calculated from the inter-microphone coherence values.
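A simplified sketch of the adaptive fit: per frequency band, a logistic curve of the Eq. (7) form is fitted by nonlinear least squares to the empirical cumulative distribution of the coherence values. Fitting the logistic directly (rather than a Gaussian CDF) and omitting the Q_3 restriction are simplifications for brevity; the function and initial guesses are assumptions, not from the patent.

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_sigmoid_params(coh_one_band):
    """For one frequency band, fit a logistic curve to the empirical
    CDF of the inter-microphone coherence samples, yielding a
    per-band slope (gamma) and offset (beta) for Eq. (7)."""
    coh = np.sort(np.asarray(coh_one_band))
    ecdf = np.arange(1, coh.size + 1) / coh.size  # empirical CDF
    logistic = lambda x, slope, offset: 1.0 / (1.0 + np.exp(-slope * (x - offset)))
    (slope, offset), _ = curve_fit(logistic, coh, ecdf,
                                   p0=[10.0, 0.5], maxfev=5000)
    return slope, offset
```

The fitted offset lands near the median coherence of the band, so bands dominated by reverberant (low-coherence) energy automatically receive a more aggressive mapping.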

(25) In the specific embodiment illustrated in FIG. 4, the calculated gain G_2(ω,k) is applied to the time-frequency distribution of the first microphone spectra signal X_A(ω,k) and the time-frequency distribution of the second microphone spectra signal X_B(ω,k). This produces a new set of transformed spectra signals that are subsequently transformed back to the time-domain using an inverse time-frequency transformation.

(26) FIG. 8 shows a functional block diagram of the signal processing arrangement according to another embodiment of the present invention. A pair of sensing microphones develop time domain microphone signals x_A(n) and x_B(n) having sound source components and reverberation components. A spectral converter converts the microphone signals to the time-frequency domain to produce complex valued spectra signals X_A(ω,k) and X_B(ω,k), where ω represents the frequency band and k denotes the time frame. A binary masking module determines frequency-specific energy ratios ER between the spectra signals and applies a binary gain function G_1(ω,k) (as described in Eq. (2)) to the microphone spectra signals based on the energy ratios to produce transformed spectra signals. That is, the binary gain function G_1(ω,k) is applied to the T-F distribution of the first microphone spectra signal X_A(ω,k) and the T-F distribution of the second microphone spectra signal X_B(ω,k). This provides a new set of transformed spectra signals, subsequently referred to as Y_A(ω,k) and Y_B(ω,k), whose spectrogram is depicted in FIG. 9.

(27) A soft masking module then determines an inter-microphone coherence value of the transformed spectra signals Y_A(ω,k) and Y_B(ω,k) (as described in Eqs. (3)-(6) above) and, for each separate T-F unit, applies a sigmoid gain function G_2(ω,k) (as described in Eqs. (7)-(8) above) to the transformed spectra signals Y_A(ω,k) and Y_B(ω,k) based on the inter-microphone coherence value to produce coherence adapted spectra signals. The soft decision mask denoted by G_2(ω,k) may further be post-processed via a 2-D enhancement image filter derived from a basic image processing technique called "spatial sharpening" or unsharp masking. Unsharp masking produces an edge image E(ω,k) from the input image G_2(ω,k) via the transformation:
E(ω,k) = G_2(ω,k) − G_smooth(ω,k)  (9)

where G_smooth(ω,k) is a smoothed version of the original image G_2(ω,k). The edge image can then be used for sharpening by adding it back into the original image, such that:

G̃_2(ω,k) = G_2(ω,k) + κ·E(ω,k)  (10)

where κ denotes a scaling constant, distinct from the time frame index k, which typically varies between 0.2 and 0.7, with larger values providing increasing amounts of sharpening.
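The unsharp-masking post-processing of Eqs. (9)-(10) can be sketched as follows. The moving-average smoother, its 3×3 window, and the clipping of the result to [0, 1] are assumptions (the patent does not specify the smoothing filter); the scale default sits inside the stated 0.2-0.7 range.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sharpen_gain(g2, scale=0.4, size=3):
    """Unsharp masking of Eqs. (9)-(10): subtract a smoothed copy of
    the gain 'image' to obtain an edge image, then add the edge image
    back, scaled.  The result is clipped to [0, 1] so it remains a
    valid gain (an added safeguard)."""
    g_smooth = uniform_filter(g2, size=size)   # assumed smoother
    edge = g2 - g_smooth                       # Eq. (9)
    return np.clip(g2 + scale * edge, 0.0, 1.0)  # Eq. (10)
```

Treating the T-F gain matrix as an image, this sharpens the boundaries between retained speech regions and suppressed reverberant regions of the mask.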

(28) The post-processed gain G̃_2(ω,k) is applied to the T-F distribution of the first microphone spectra signal Y_A(ω,k) and the T-F distribution of the second microphone spectra signal Y_B(ω,k) to produce another set of signals that are subsequently transformed by a time domain transform module back to the time-domain using an inverse time-frequency transformation. The estimated enhanced signals, such as the one plotted in FIG. 10, can then be routed to an audio output module that generates an output perceivable to the wearer of a hearing instrument or any other audio device.
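The final inverse transform stage can be sketched as below; the parameters must match whatever forward STFT was used, and the 16 kHz / 512-sample / 50%-overlap values are the same illustrative assumptions as before, not values from the patent.

```python
import numpy as np
from scipy.signal import istft

def to_time_domain(Y_a, Y_b, fs=16000, n_fft=512):
    """Inverse time-frequency transform returning the time-domain
    reverberation-compensated microphone signals.  STFT parameters
    must match those of the forward transform."""
    _, y_a = istft(Y_a, fs=fs, nperseg=n_fft, noverlap=n_fft // 2)
    _, y_b = istft(Y_b, fs=fs, nperseg=n_fft, noverlap=n_fft // 2)
    return y_a, y_b
```

The default Hann window with 50% overlap satisfies the overlap-add reconstruction condition, so unmasked spectra invert back to the original signals up to edge effects.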

(29) The gain estimation stages discussed herein do not require access to a theoretical clean or an uncorrupted signal, and therefore the present approach is ‘blind’ and generalizable to any acoustical environment. The statistical parameters necessary to form either the hard or soft decision masks can be easily adapted based on information extracted exclusively from the microphone signal outputs. Algorithms can be easily integrated in existing audio processors equipped with two spaced-apart external microphones and can operate in parallel or in conjunction with a beamforming module to enhance the acoustic input. Such embodiments provide a robust technique for suppression of room reverberation inherent in the signals recorded by two spatially separated microphones, and also can provide adequate suppression of background noise from a number of interfering speakers.

(30) Embodiments of the invention may be implemented in part in any conventional computer programming language such as VHDL, SystemC, Verilog, ASM, etc. Alternative embodiments of the invention may be implemented as pre-programmed hardware elements, other related components, or as a combination of hardware and software components.

(31) Embodiments can be implemented in part as a computer program product for use with a computer system. Such implementation may include a series of computer instructions fixed either on a tangible medium, such as a computer readable medium (e.g., a diskette, CD-ROM, ROM, or fixed disk) or transmittable to a computer system, via a modem or other interface device, such as a communications adapter connected to a network over a medium. The medium may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., microwave, infrared or other transmission techniques). The series of computer instructions embodies all or part of the functionality previously described herein with respect to the system. Those skilled in the art should appreciate that such computer instructions can be written in a number of programming languages for use with many computer architectures or operating systems. Furthermore, such instructions may be stored in any memory device, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies. It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).

(32) Although various exemplary embodiments of the invention have been disclosed, it should be apparent to those skilled in the art that various changes and modifications can be made which will achieve some of the advantages of the invention without departing from the true scope of the invention.