Signal processing apparatus, method and computer program for dereverberating a number of input audio signals
09830926 · 2017-11-28
Assignee
Inventors
CPC classification
G10L19/008
PHYSICS
International classification
H04B3/20
ELECTRICITY
Abstract
A signal processing apparatus for dereverberating a number of input audio signals, where the signal processing apparatus includes a processor configured to transform the number of input audio signals into a transformed domain to obtain input transformed coefficients, the input transformed coefficients being arranged to form an input transformed coefficient matrix, determine filter coefficients upon the basis of eigenvalues of a signal space, the filter coefficients being arranged to form a filter coefficient matrix, and convolve input transformed coefficients of the input transformed coefficient matrix by filter coefficients of the filter coefficient matrix to obtain output transformed coefficients, the output transformed coefficients being arranged to form an output transformed coefficient matrix.
Claims
1. A signal processing apparatus for dereverberating a number of input audio signals, comprising: a memory; and a processor coupled to the memory and configured to: transform the number of input audio signals into a transformed domain to obtain input transformed coefficients, wherein the input transformed coefficients are arranged to form an input transformed coefficient matrix; determine filter coefficients upon the basis of eigenvalues of a signal space, wherein the filter coefficients are arranged to form a filter coefficient matrix; convolve the input transformed coefficients of the input transformed coefficient matrix by the filter coefficients of the filter coefficient matrix to obtain output transformed coefficients, wherein the output transformed coefficients are arranged to form an output transformed coefficient matrix; and inversely transform the output transformed coefficient matrix from the transformed domain to obtain a number of output audio signals.
2. The signal processing apparatus of claim 1, wherein the processor is further configured to determine the signal space upon the basis of an input auto correlation matrix of the input transformed coefficient matrix.
3. The signal processing apparatus of claim 1, wherein the processor is further configured to transform the number of input audio signals into frequency domain to obtain the input transformed coefficients.
4. The signal processing apparatus of claim 1, wherein the processor is further configured to transform the number of input audio signals into the transformed domain for a number of past time intervals to obtain the input transformed coefficients.
5. The signal processing apparatus of claim 4, wherein the processor is further configured to: determine input auto coherence coefficients upon the basis of the input transformed coefficients, wherein the input auto coherence coefficients indicate a coherence of the input transformed coefficients associated to a current time interval and a past time interval, and wherein the input auto coherence coefficients are arranged to form an input auto coherence matrix; and determine the filter coefficients upon the basis of the input auto coherence matrix.
6. The signal processing apparatus of claim 1, wherein the processor is further configured to determine the filter coefficient matrix according to the equation H=Φ_xx^−1·Γ_xS0·(Γ_xS0^H·Φ_xx^−1·Γ_xS0)^−1, wherein the H denotes the filter coefficient matrix, wherein the Φ_xx denotes an input auto correlation matrix of the input transformed coefficient matrix, wherein the Γ_xS0 denotes a cross coherence matrix between the input transformed coefficient matrix and an auxiliary transformed coefficient matrix, and wherein the Γ_xS0^H denotes a Hermitian transpose of the Γ_xS0.
7. The signal processing apparatus of claim 6, wherein the processor is further configured to: generate a number of auxiliary audio signals upon the basis of the number of input audio signals; and transform the number of auxiliary audio signals into the transformed domain to obtain auxiliary transformed coefficients, wherein the auxiliary transformed coefficients are arranged to form the auxiliary transformed coefficient matrix.
8. The signal processing apparatus of claim 1, wherein the processor is further configured to determine the filter coefficient matrix according to the equation H=Φ_xx^−1·Γ̂_sS·(Γ̂_sS^H·Φ_xx^−1·Γ̂_sS)^−1, wherein the H denotes the filter coefficient matrix, wherein the x denotes the input transformed coefficient matrix, wherein the Φ_xx denotes an input auto correlation matrix of the input transformed coefficient matrix, wherein the Γ̂_sS denotes an estimate auto coherence matrix, and wherein the Γ̂_sS^H denotes a Hermitian transpose of the Γ̂_sS.
9. The signal processing apparatus of claim 8, wherein the processor is further configured to determine the estimate auto coherence matrix according to the equation Γ̂_sS(k,n):=(I_M⊗U^−1)·Γ_xX·U, wherein the Γ̂_sS denotes the estimate auto coherence matrix, wherein the x denotes the input transformed coefficient matrix, wherein the Γ_xX denotes an input auto coherence matrix of the input transformed coefficient matrix, wherein the I_M denotes an identity matrix of matrix dimension M, wherein the U denotes an eigenvector matrix of an eigenvalue decomposition performed upon the basis of the input auto coherence matrix, and wherein the ⊗ denotes a Kronecker product.
10. The signal processing apparatus of claim 1, wherein the processor is further configured to determine channel transformed coefficients upon the basis of the input transformed coefficients of the input transformed coefficient matrix and the filter coefficients of the filter coefficient matrix, wherein the channel transformed coefficients are arranged to form a channel transformed matrix.
11. The signal processing apparatus of claim 10, wherein the processor is further configured to determine the channel transformed matrix according to the equation Ĝ(k,n)=(H^H·x(k,n)·diag{X_1(k,n), X_2(k,n), . . . , X_P(k,n)}^−1)^−1, wherein the Ĝ denotes the channel transformed matrix, wherein the x denotes the input transformed coefficient matrix, wherein the H denotes the filter coefficient matrix, wherein the H^H denotes a Hermitian transpose of the H, and wherein the X_1 to X_P denote the input transformed coefficients.
12. The signal processing apparatus of claim 1, wherein the number of input audio signals comprise audio signal portions being associated to a number of audio signal sources, and wherein the signal processing apparatus is configured to separate the number of audio signal sources upon the basis of the number of input audio signals.
13. A signal processing method for dereverberating a number of input audio signals, comprising: transforming the number of input audio signals into a transformed domain to obtain input transformed coefficients, wherein the input transformed coefficients are arranged to form an input transformed coefficient matrix; determining filter coefficients upon the basis of eigenvalues of a signal space, wherein the filter coefficients are arranged to form a filter coefficient matrix; convolving the input transformed coefficients of the input transformed coefficient matrix by the filter coefficients of the filter coefficient matrix to obtain output transformed coefficients, wherein the output transformed coefficients are arranged to form an output transformed coefficient matrix; and inversely transforming the output transformed coefficient matrix from the transformed domain to obtain a number of output audio signals.
14. The signal processing method of claim 13, further comprising determining the signal space upon the basis of an input auto correlation matrix of the input transformed coefficient matrix.
15. A computer program, comprising a program code for performing a signal processing method when executed on a computer, wherein the signal processing method comprises: transforming a number of input audio signals into a transformed domain to obtain input transformed coefficients, wherein the input transformed coefficients are arranged to form an input transformed coefficient matrix; determining filter coefficients upon the basis of eigenvalues of a signal space, wherein the filter coefficients are arranged to form a filter coefficient matrix; convolving the input transformed coefficients of the input transformed coefficient matrix by the filter coefficients of the filter coefficient matrix to obtain output transformed coefficients, wherein the output transformed coefficients are arranged to form an output transformed coefficient matrix; and inversely transforming the output transformed coefficient matrix from the transformed domain to obtain a number of output audio signals.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) Further embodiments of the disclosure will be described with respect to the following figures.
DETAILED DESCRIPTION OF EMBODIMENTS
(11) The signal processing apparatus 100 comprises a transformer 101 being configured to transform the number of input audio signals into a transformed domain to obtain input transformed coefficients, the input transformed coefficients being arranged to form an input transformed coefficient matrix, a filter coefficient determiner 103 being configured to determine filter coefficients upon the basis of eigenvalues of a signal space, the filter coefficients being arranged to form a filter coefficient matrix, a filter 105 being configured to convolve input transformed coefficients of the input transformed coefficient matrix by filter coefficients of the filter coefficient matrix to obtain output transformed coefficients, the output transformed coefficients being arranged to form an output transformed coefficient matrix, and an inverse transformer 107 being configured to inversely transform the output transformed coefficient matrix from the transformed domain to obtain a number of output audio signals.
(13) The signal processing method 200 comprises the following steps.
(14) Step 201: Transforming the number of input audio signals into a transformed domain to obtain input transformed coefficients.
(15) Further, the input transformed coefficients are arranged to form an input transformed coefficient matrix.
(16) Step 203: Determining filter coefficients upon the basis of eigenvalues of a signal space.
(17) Further, the filter coefficients are arranged to form a filter coefficient matrix.
(18) Step 205: Convolving input transformed coefficients of the input transformed coefficient matrix by filter coefficients of the filter coefficient matrix to obtain output transformed coefficients.
(19) Further, the output transformed coefficients are arranged to form an output transformed coefficient matrix.
(20) Step 207: Inversely transforming the output transformed coefficient matrix from the transformed domain to obtain a number of output audio signals.
(21) The signal processing method 200 can be performed by the signal processing apparatus 100. Further features of the signal processing method 200 can directly result from the functionality of the signal processing apparatus 100 as described above and below in further detail.
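As a minimal illustration of steps 201 to 207, the following Python/NumPy sketch implements the transform and inverse transform as a windowed DFT with overlap-add; the function names, window choice, and parameters are illustrative assumptions, not taken from the patent, and the filtering of step 205 is left as an identity placeholder for the filter coefficient matrix H.

```python
import numpy as np

def stft_frames(x, win=256, hop=128):
    """Step 201: transform the input signal into a (bins x frames) matrix
    of transformed coefficients using a windowed DFT (illustrative)."""
    n_frames = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop:i * hop + win] for i in range(n_frames)], axis=1)
    return np.fft.rfft(frames * np.hanning(win)[:, None], axis=0)

def istft_frames(X, win=256, hop=128):
    """Step 207: inverse transform with overlap-add; dividing by the summed
    analysis windows makes the round trip exact away from the edges."""
    frames = np.fft.irfft(X, n=win, axis=0)
    n_frames = X.shape[1]
    out = np.zeros(win + hop * (n_frames - 1))
    norm = np.zeros_like(out)
    w = np.hanning(win)
    for i in range(n_frames):
        out[i * hop:i * hop + win] += frames[:, i]
        norm[i * hop:i * hop + win] += w
    return out / np.maximum(norm, 1e-8)

# Steps 203/205 placeholder: an identity filter leaves the coefficients
# unchanged; a real implementation would convolve them with the matrix H.
x = np.sin(2 * np.pi * 440 * np.arange(1024) / 16000)
Y = stft_frames(x)          # step 201
y = istft_frames(Y)         # steps 205 (identity) and 207
```

The sketch only fixes the transform pair; the substance of the method lies in how the filter coefficients of step 203 are determined, as derived in the detailed description below.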
(23) The transformer 101 can be a short-time Fourier transform (STFT) transformer. The filter coefficient determiner 103 can perform an algorithm. The filter 105 can be characterized by a filter coefficient matrix H. The inverse transformer 107 can be an inverse STFT (ISTFT) transformer. The auxiliary audio signal generator 301 can provide an initial guess, e.g. using a delay-and-sum technique and/or spot microphone audio signals. The other transformer 303 can be an STFT transformer. The post-processor 305 can provide post-processing capabilities, e.g. automatic speech recognition (ASR) and/or up-mixing.
(24) A number Q of input audio signals can be provided to the transformer 101 and the auxiliary audio signal generator 301. The auxiliary audio signal generator 301 can provide a number P of auxiliary audio signals to the other transformer 303. The other transformer 303 can provide a number P of rows or columns of an auxiliary transformed coefficient matrix to the filter coefficient determiner 103. The filter 105 can provide a number P of rows or columns of an output transformed coefficient matrix to the inverse transformer 107. The inverse transformer 107 can provide a number P of output audio signals to the post-processor 305 yielding a number P of post-processed audio signals.
(25) The diagram shows an overall architecture of the apparatus 100. The input to the apparatus 100 can be microphone signals. These can optionally be preprocessed by an algorithm offering spatial selectivity, e.g. a delay-and-sum beamformer. The preprocessed signals and/or microphone signals can be analyzed by an STFT. The microphone signals can then be stored in a buffer with optionally variable size for the different frequency bins. The algorithms can calculate filter coefficients based on the buffered audio signal time intervals or frames. The buffered signal can be filtered in each frequency bin with a calculated complex filter. The output of the filtering can be transformed back to the time domain. The processed audio signals can optionally be fed into the post-processor 305, such as for ASR or up-mixing.
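The frequency-dependent buffering described above can be sketched as follows; the class name and interface are illustrative assumptions. Each bin k keeps its own history of depth M(k), newest frame first, matching the frame ordering used later in Eq. (10).

```python
import numpy as np
from collections import deque

class BinBuffer:
    """Per-bin STFT frame buffer with an individually chosen depth M(k)."""
    def __init__(self, taps_per_bin):
        # taps_per_bin[k] is the number M of buffered frames for bin k
        self.bufs = [deque(maxlen=m) for m in taps_per_bin]

    def push(self, frame):
        # frame: one complex STFT coefficient per frequency bin
        for k, coeff in enumerate(frame):
            self.bufs[k].appendleft(coeff)   # newest first

    def history(self, k):
        # returns [X(k,n), X(k,n-1), ...] for bin k
        return np.array(self.bufs[k])
```

For example, the numerical example given later in the text uses a depth of 4 for the lower frequency bins and 2 for the upper ones.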
(26) Some implementation forms can relate to blind single-channel and/or multi-channel minimization of an acoustical influence of an unknown room. They can be employed in multi-channel acquisition systems in telepresence for enhancing the ability of the systems to focus onto a part of a captured acoustic scene, speech and signal enhancement for mobiles and tablets, in particular by dereverberation of signals in a hands-free mode, and also for up-mixing of mono signals.
(27) For this purpose, an approach for blind dereverberation and/or source separation can be used. The approach can be specialized to a single-channel case and can be used as a blind source separation post-processing stage.
(28) The propagation of sound waves from a sound source to a predefined measurement point under typical conditions can be described by convolving the sound source signal with a Green's function which can solve an inhomogeneous wave equation under given boundary conditions. The boundary conditions, however, may not be controllable and may result in undesired acoustic characteristics such as long reverberation time which can cause insufficient intelligibility. In advanced communication systems which are able to synthesize a user defined acoustic environment, it can be desirable to mitigate the influence of the recording room and to maintain only a clean excitation signal to integrate it properly in the desired virtual acoustic environment.
(29) In the case of multiple sound sources, e.g. speakers, captured by a distributed microphone array in a recording room, dereverberation can offer original clean source signals separated and free of the recording room influence, e.g. speech signals as would be recorded by a microphone next to the mouth of a single speaker in an anechoic chamber.
(30) Dereverberation techniques can aim at minimizing the effect of the late part of the room impulse response. However, a full deconvolution of the microphone signals can be challenging and the output can be a less reverberant mixture of the source signals but not separated source signals.
(31) Dereverberation techniques can be classified into single-channel and multi-channel techniques. Due to theoretical limits, an ideal deconvolution can typically be achieved in the multi-channel case where the number of recording microphones Q can be higher than the number of active sound sources P, e.g. speakers.
(32) Multi-channel dereverberation techniques can aim at inverting a multiple-input multiple-output (MIMO) finite impulse response (FIR) system between the sound sources and the microphones, wherein each acoustic path between a sound source and a microphone can be modelled by an FIR filter of length L. The MIMO system can be represented in the time domain as a matrix that can be invertible if it is square and regular. Hence, an ideal inversion can be performed if the following two conditions hold.
(33) First, the length L′ of a finite inverse filter fulfils the following equation:
(34) L′≥P·(L−1)/(Q−P).
(35) Second, the individual filters of the MIMO system do not exhibit common roots in the z-domain.
(36) An approach to estimate an ideal inverse system can be employed. The approach can be based on exploiting a non-Gaussianity, a non-whiteness, and a non-stationarity of the source signals. The approach can feature a minimum distortion at the cost of a high computational complexity for the computation of higher order statistics. Moreover, since it can aim at solving an ideal inversion problem, it may require the system to have more microphones than sound sources and may not be applicable to a single channel problem.
(37) Another approach to dereverberate a multi-channel recording can be based on estimating a signal subspace. Ambient and direct parts of the audio signal can be estimated separately. Late reverberations can be estimated and can be treated as noise. Therefore, the approach may require an accurate estimation of the ambient part, i.e. the late reverberations, to be able to cancel it. The approaches based on estimating a multi-channel signal subspace can be dedicated to reduce the reverberance and not to de-mix, i.e. to separate, the sound sources. The approaches are typically applied to multi-channel setups and may not be used to solve a single channel dereverberation problem. Additionally, heuristic statistical models to estimate the reverberation and to reduce the ambient part can be employed. These models may be based on training data and may suffer from a high complexity.
(38) A further approach to estimate diffuse and direct components in the spectral domain can be employed. The short-time spectra of a multi-channel signal can be down-mixed into X_1(k,n) and X_2(k,n), where k and n denote a frequency bin index and a time interval or frame index. A real coefficient H(k,n) can be derived to extract the direct components Ŝ_1(k,n) and Ŝ_2(k,n) from the down-mix according to the following equations:
Ŝ_1(k,n)=H(k,n)·X_1(k,n)
Ŝ_2(k,n)=H(k,n)·X_2(k,n).
(39) Under the assumption that direct and diffuse components in the down-mix are mutually uncorrelated and the diffuse components in the down-mix have equal power, the real coefficient H(k,n) can be calculated based on a Wiener optimization criterion according to the following equation:
(40) H(k,n)=P_S/(P_S+P_A),
where P_S and P_A are the sums of the short-time power spectral estimates of the direct and diffuse components in the down-mix. P_S and P_A can be derived based on the cross-correlation of the down-mix as Re(E{X_1X_2*}). These filters can further be applied to multi-channel audio signals to generate the corresponding direct and ambient components. This approach can be based on a multi-channel setup and may not solve a single channel dereverberation problem. Moreover, it may introduce a high amount of distortion and may not perform a de-mixing.
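The gain implied by the Wiener criterion above is the standard form H(k,n)=P_S/(P_S+P_A). The sketch below assumes this form, taking P_S from Re(E{X_1·X_2*}) as stated in the text; the split of the remaining power into P_A is an additional assumption, and all names are illustrative.

```python
import numpy as np

def direct_ambient_gain(X1, X2, eps=1e-12):
    """Wiener-style gain H = P_S / (P_S + P_A) for a two-channel down-mix."""
    # P_S from the real part of the cross-correlation (direct parts coherent)
    P_S = np.maximum(np.real(X1 * np.conj(X2)), 0.0)
    # Assumption: ambience power is the remaining average channel power
    P_tot = 0.5 * (np.abs(X1) ** 2 + np.abs(X2) ** 2)
    P_A = np.maximum(P_tot - P_S, 0.0)
    return P_S / (P_S + P_A + eps)

def extract_direct(X1, X2):
    """S_hat_i = H * X_i as in the equations above."""
    H = direct_ambient_gain(X1, X2)
    return H * X1, H * X2
```

Identical channels (fully direct) yield a gain near one, while fully incoherent channels yield a gain near zero, matching the intended direct/ambient behaviour.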
(41) Single channel dereverberation solutions can be based on the minimum statistics principle. Therefore, they may estimate the ambient and the direct part of the audio signal separately. An approach that incorporates a statistical system model can be employed which can be based on training data. Another approach can be applied on a single channel setup offering limited performance in complex sound scenes, especially with respect to the audio signal quality since the approach can be optimized for automatic speech recognition and not for a high quality listening experience.
(42) Some implementation forms can relate to single-channel and multi-channel dereverberation techniques. In order to obtain a dry output audio signal, an M-taps MIMO FIR filter in the STFT domain with P outputs, i.e. the number of audio signal sources, and Q inputs, i.e. the number of input audio signals, microphones, or outputs of a preprocessing stage such as a beamformer, e.g. a delay-and-sum beamformer, can be applied. The filter 105 can be designed in a way that each output audio signal can be coherent with its own history within a predefined set of consequent time intervals or frames and can be orthogonal to the history of the other audio source signals.
(43) In the following, the mathematical setup and signal model used to derive the dereverberation approach are introduced. The input audio signal x_q at a time instant t can be given as the convolution of the dry excitation audio source signal s(t):=[s_1(t), s_2(t), . . . , s_P(t)]^T with the Green's functions from the p-th source to the q-th input or microphone, g_q(t):=[g_1q(t), g_2q(t), . . . , g_Pq(t)]^T:
(44) x_q(t)=g_q^T(t)*s(t)=Σ_{p=1}^{P}(g_pq*s_p)(t).
(45) By considering this equation in the short time Fourier domain, it can be approximated as:
X_q(k,n)≈[S_1, S_2, . . . , S_P]·[G_1q, G_2q, . . . , G_Pq]^H, (3)
wherein k denotes a frequency bin index, the time interval or frame is indexed by n, [•]^H denotes a Hermitian transpose, and the dependencies of both the audio source signals and the Green's functions on (k,n) are omitted for clarity of notation. For a complete multi-channel representation, it can be written for the MIMO system:
(46) X(k,n)≈G^H(k,n)·S(k,n), (4)
wherein X(k,n):=[X_1(k,n), X_2(k,n), . . . , X_Q(k,n)]^T and G(k,n) collects the STFT-domain Green's functions G_pq(k,n).
(47) A dereverberation can be performed using an FIR filter in the STFT domain, for example based on applying an FIR filter according to:
(48) H(k,n):=[h_1(k,n), h_2(k,n), . . . , h_P(k,n)], with h_p(k,n):=[h_p1^T(k,n), h_p2^T(k,n), . . . , h_pQ^T(k,n)]^T,
with h_pq(k,n):=[H_pq(k,n), H_pq(k,n−1), . . . , H_pq(k,n−M+1)]^T in the STFT domain on the input audio signal
Ŝ(k,n):=H^H(k,n)·x(k,n), (9)
wherein a sequence of M consecutive STFT domain time intervals or frames of the input audio signal is defined as:
x_q(k,n):=[X_q(k,n), X_q(k,n−1), . . . , X_q(k,n−M+1)]^T (10)
and
x(k,n):=[x_1^T(k,n), x_2^T(k,n), . . . , x_q^T(k,n), . . . , x_Q^T(k,n)]^T, (11)
Ŝ(k,n):=[Ŝ_1(k,n), Ŝ_2(k,n), . . . , Ŝ_P(k,n)]^T. (12)
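The stacking of Eqs. (10) and (11) can be reproduced directly; the helper below is an illustrative sketch assuming the coefficients of one bin k are held in a (Q x frames) array.

```python
import numpy as np

def stack_history(X, n, M):
    """Build x(k,n) of Eq. (11): for each channel q, take the M most recent
    frames [X_q(k,n), X_q(k,n-1), ..., X_q(k,n-M+1)] (Eq. (10)) and
    concatenate the per-channel vectors into one QM-dimensional vector."""
    Q = X.shape[0]
    per_channel = [X[q, n - M + 1:n + 1][::-1] for q in range(Q)]  # newest first
    return np.concatenate(per_channel)
```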
(49) Note that M can be chosen individually for each frequency bin. For example, for a speech signal using a sampling frequency of 16 kilohertz (kHz), an STFT window size of 320, an STFT length of 512, an overlapping factor of 0.5, and a reverberation time of approximately 1 second, M can be set to 4 for the lower 129 bins, and can be set to 2 for the higher 128 bins.
(50) The filter coefficient matrix H can approximate the eigenvectors corresponding to the largest eigenvalues of the auto correlation matrix of the unknown dry audio source signal. It can be desirable to obtain a distortionless estimate of the dry audio source signal. This can mean that the FIR filter exhibits fidelity to the coherent part of the dry audio source signal.
(51) The input audio signal can be decomposed into a part x_c which is coherent with an initial estimation of the dry audio source signal, and an incoherent part x_i according to:
x(k,n)=x_c(k,n)+x_i(k,n), (13)
with
x_c(k,n):=Γ_xS(k,n)·S(k,n), (14)
wherein a cross coherence matrix of the dry audio source signal can be defined as a normalized correlation matrix by:
Γ_xS(k,n):=ε̂{x(k,n)S^H(k,n)}·(φ_SS(k,n))^−1, (15)
wherein ε̂{•} denotes an estimation of an expectation value, and with the estimated auto correlation matrix
φ_SS(k,n):=ε̂{S(k,n)S^H(k,n)}. (16)
(52) The cross coherence matrix Γ_xS can be understood as an enforced eigenvector matrix of the auto correlation matrix of the input audio signal.
(53) The estimation of the expectation value can be calculated iteratively by
ε̂{x(k,n)S^H(k,n)}=α·ε̂{x(k,n−1)S^H(k,n−1)}+(1−α)·x(k,n)S^H(k,n), (17)
ε̂{S(k,n)S^H(k,n)}=α·ε̂{S(k,n−1)S^H(k,n−1)}+(1−α)·S(k,n)S^H(k,n), (18)
wherein α denotes a forgetting factor.
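The recursive estimates (17) and (18) are plain exponential averaging of outer products; a sketch with illustrative names:

```python
import numpy as np

def recursive_outer(prev, a, b, alpha):
    """One step of E{a b^H} <- alpha * E{a b^H} + (1 - alpha) * a b^H,
    as in Eqs. (17) and (18); alpha is the forgetting factor."""
    return alpha * prev + (1.0 - alpha) * np.outer(a, np.conj(b))
```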
(54) Hence, a condition for the dereverberation filter can be set as:
H^H·ε̂{x(k,n)S^H(k,n)}=φ_SS. (19)
(55) By rearranging, the following expression can be obtained:
H^H·Γ_xS=I_P×P, (20)
wherein I_P×P denotes an identity matrix. Therefore, the filter coefficient matrix H can be coincident with the basis vectors Γ_xS of the signal subspace.
(56) An optimal dereverberation FIR filter in the STFT domain can be derived. To obtain an optimal filter, the following cost function which can be constrained by (20) can be set:
J=H^H·Φ_xx·H+λ·(H^H·Γ_xS−I_P×P), (21)
wherein
Φ_xx:=ε̂{x·x^H}, (22)
wherein λ denotes a Lagrange multiplier matrix. At a minimum of this cost function, the gradient can be zero, and the optimal expression of the filter can be obtained as:
H=Φ_xx^−1·Γ_xS·(Γ_xS^H·Φ_xx^−1·Γ_xS)^−1. (23)
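Equation (23) can be evaluated with standard linear algebra; the sketch below uses linear solves rather than an explicit inverse of Φ_xx, with the shapes from the derivation (QM x QM for Φ_xx, QM x P for Γ_xS) and illustrative names.

```python
import numpy as np

def dereverb_filter(Phi_xx, Gamma_xS):
    """H = Phi_xx^{-1} Gamma_xS (Gamma_xS^H Phi_xx^{-1} Gamma_xS)^{-1}, Eq. (23)."""
    A = np.linalg.solve(Phi_xx, Gamma_xS)      # Phi_xx^{-1} Gamma_xS
    B = Gamma_xS.conj().T @ A                  # Gamma_xS^H Phi_xx^{-1} Gamma_xS
    return A @ np.linalg.inv(B)
```

By construction the result satisfies the constraint H^H·Γ_xS=I of Eq. (20), i.e. the filter is distortionless with respect to the coherent signal part.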
(57) The filter can maximize the entropy of the dry audio signal under the given condition.
(58) The cross coherence matrix can be approximated. In the following, two possibilities to deal with the missing unknown dry audio source signal are proposed.
(60) The diagram shows the audio signal acquisition scenario 400 with three audio signal sources 401, 403, 405 or speakers, a microphone array 407 with the ability of achieving high sensitivity in dedicated directions, e.g. using beamforming, e.g. a delay-and-sum beamformer, and a spot microphone 413 next to one audio signal source. Separated audio sources 401, 403, 405 with a minimized room influence can be desired. The output of the beamformer and the auxiliary audio signal of the spot microphone 413 can be used to calculate or estimate the cross coherence matrix Γ.sub.xS.
(61) The algorithm can handle the output of the beamformer and of the spot microphone, i.e. the auxiliary audio signals, as an initial guess, enhance the separation and minimize the reverberation of the input audio signal or microphone array signal to provide a clean version of the three audio source signals or speech signals.
(62) For calculating the derived filter coefficient matrix, a computation of a cross coherence matrix can be performed. Therefore, a pre-processing stage can be employed, e.g. a source localization stage combined with beamforming, providing an initial guess s_0 of the dry audio source signals.
(63) For the filter, the following expression can be obtained:
H=Φ_xx^−1·Γ_xS0·(Γ_xS0^H·Φ_xx^−1·Γ_xS0)^−1, (24)
wherein Γ_xS0 denotes the cross coherence matrix between the input audio signal and the initial guess s_0 of the dry audio source signals.
(66) In the case P=Q, the condition in (20) can be modified for coherence of the output audio signals according to:
H^H·Γ_sS=I_P×P. (25)
(67) For the case P=Q, it can be assumed that each source of the dry audio source signal is coherent with regard to its own history. Based on the assumptions, Γ.sub.sS can be used instead of Γ.sub.xS. Reverberations and interfering signals can be incoherent.
(68) The auto coherence matrix of the audio source signal can be defined as
Γ_sS(k,n):=ε̂{s(k,n)S^H(k,n)}·(φ_SS(k,n))^−1, (26)
wherein the quantity φ_SS can have a similar definition as in (16):
φ_SS(k,n):=ε̂{S(k,n)S^H(k,n)}. (27)
(69) The auto coherence matrix Γ_sS of the audio sources can be block diagonal. Furthermore, in the spirit of Γ_xS, an auto coherence matrix of the input audio signal can be introduced as:
Γ_xX(k,n):=ε̂{x(k,n)X^H(k,n)}·(φ_XX(k,n))^−1, (28)
wherein the quantity φ_XX can have a similar definition as in (16):
φ_XX(k,n):=ε̂{X(k,n)X^H(k,n)}. (29)
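The input auto coherence matrix of Eq. (28) can be estimated from sample averages as a batch stand-in for the recursive estimates of Eqs. (17) and (18); names and shapes below are illustrative.

```python
import numpy as np

def input_auto_coherence(x_frames, X_frames):
    """Gamma_xX = E{x X^H} (phi_XX)^{-1}, Eqs. (28)-(29), with expectations
    replaced by sample averages over N observed frames.
    x_frames: (QM, N) stacked histories; X_frames: (Q, N) current frames."""
    N = x_frames.shape[1]
    E_xXH = x_frames @ X_frames.conj().T / N
    phi_XX = X_frames @ X_frames.conj().T / N
    return E_xXH @ np.linalg.inv(phi_XX)
```

As a sanity check, using the current frames as their own history (M = 1) yields the identity matrix, since the normalization removes the auto correlation.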
(70) By assuming the Green's functions in (4) to be constant for the considered M time intervals or frames, it can be seen that:
Γ_xX(k,n)=ε̂{x(k,n)S^H(k,n)}·(φ_SX(k,n))^−1, (30)
with
φ_SX:=ε̂{S(k,n)X^H(k,n)}. (31)
(71) In order to obtain an expression for Γ_sS, approximations can be made by assuming the audio source signals to be independent, i.e. φ_SS can be diagonal and ε̂{s(k,n)S^H(k,n)} can be block diagonal, and by taking into account the relation (30) for P=Q:
Γ_xX(k,n)=(I_M⊗G*)·ε̂{s(k,n)S^H(k,n)}·(φ_SX(k,n))^−1, (32)
wherein ⊗ denotes a Kronecker product. Hence, in order to approximate Γ_sS, Γ_xX can be used and the off diagonal blocks can be set to zero. This can be achieved by setting a square, not necessarily symmetric, intermediate matrix C whose rows are the (j·M+1)-th rows of the auto coherence matrix of the input audio signal, with j∈{0, . . . , P−1}. Note that the order may be maintained.
(72) An eigenvalue decomposition allows writing C as a product U·Λ·U^−1, wherein Λ can be diagonal. An estimate Γ̂_sS(k,n) of the block diagonal form of Γ_sS can be obtained as:
Γ̂_sS(k,n):=(I_M⊗U^−1)·Γ_xX·U. (33)
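A sketch of the estimate (33): build the intermediate matrix C from every M-th row of Γ_xX, diagonalize it, and apply the eigenvector basis. The reading of the row indices against the stacking of Eq. (11) is an assumption of this sketch, and names are illustrative.

```python
import numpy as np

def estimate_auto_coherence(Gamma_xX, M):
    """Gamma_hat_sS(k,n) := (I_M kron U^{-1}) . Gamma_xX . U, Eq. (33),
    where U diagonalizes the intermediate matrix C whose rows are the
    (j*M+1)-th rows of Gamma_xX, j = 0..P-1 (assumed row convention)."""
    C = Gamma_xX[::M, :]                 # rows 1, M+1, 2M+1, ... (1-based)
    _, U = np.linalg.eig(C)
    U_inv = np.linalg.inv(U)
    return np.kron(np.eye(M), U_inv) @ Gamma_xX @ U
```

For M = 1 the formula collapses to U^−1·C·U, which is diagonal by construction, matching the intended block-diagonalizing effect.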
(73) To obtain a filter coefficient matrix that provides the coherent part of the audio signal sources, the following can be set similarly to Eq. (24):
H=Φ_xx^−1·Γ̂_sS·(Γ̂_sS^H·Φ_xx^−1·Γ̂_sS)^−1. (34)
(74) In addition, a blind channel estimation can be performed. An expression of the estimated inverse channel can be obtained by the following considerations for X_p(k,n)≠0:
Ŝ(k,n)=H^H·x(k,n)·diag{X_1(k,n), X_2(k,n), . . . , X_P(k,n)}^−1·diag{X_1(k,n), X_2(k,n), . . . , X_P(k,n)}, (35)
wherein the operator diag{•} creates a diagonal square matrix with an argument vector on the main diagonal. Comparing this equation to the assumed channel model in the STFT domain in (3) leads to:
Ĝ(k,n)=(H^H·x(k,n)·diag{X_1(k,n), X_2(k,n), . . . , X_P(k,n)}^−1)^−1. (36)
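Reading the diagonal-matrix expression in Eq. (36) element-wise (an interpretive assumption of this sketch), the channel estimate reduces to Ĝ_p = X_p/Ŝ_p with Ŝ = H^H·x:

```python
import numpy as np

def estimate_channel(H, x, X):
    """Blind channel estimate in the spirit of Eq. (36): with S_hat = H^H x,
    G_hat = diag{X_p / S_hat_p}; requires X_p != 0 and S_hat_p != 0."""
    S_hat = H.conj().T @ x
    return np.diag(X / S_hat)
```

With a diagonal channel and a filter that recovers the dry signal exactly, the estimate reproduces the channel, as the assertion below checks for M = 1 (so the stacked x equals X).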
(76) The spectrogram 701 can further relate to a reverberant microphone signal and the spectrogram 703 can further relate to an estimated dry audio source signal. In this example for a single channel, the spectrogram 701 of the reverberant signal is smeared out. By comparison, the spectrogram 703 of the dry audio source signal estimated by applying the dereverberation algorithm exhibits the structure of a typical dry speech signal.
(78) The transformer 101 can be an STFT transformer. The filter coefficient determiner 103 can perform an algorithm. The filter 105 can be characterized by a filter coefficient matrix H. The inverse transformer 107 can be an ISTFT transformer. The auxiliary audio signal generator 301 can provide an initial guess, e.g. using a delay-and-sum technique and/or spot microphone audio signals. The post-processor 305 can provide post-processing capabilities, e.g. ASR and/or up-mixing.
(79) A number Q of input audio signals can be provided to the auxiliary audio signal generator 301. The auxiliary audio signal generator 301 can provide a number P of auxiliary audio signals to the transformer 101. The transformer 101 can provide a number P of rows or columns of an input transformed coefficient matrix to the filter coefficient determiner 103 and the filter 105. The filter 105 can provide a number P of rows or columns of an output transformed coefficient matrix to the inverse transformer 107. The inverse transformer 107 can provide a number P of output audio signals to the post-processor 305 yielding a number P of post-processed audio signals.
(80) Embodiments of the disclosure may have several advantages. They can be used for post-processing for audio source separation achieving an optimal separation even with a low complexity solution for an initial guess. This can be used for enhanced sound-field recordings. It can further be used even for a single-channel dereverberation which can be a benefit to speech intelligibility for hands-free application using mobiles and tablets. They can further be used for up-mixing for multi-channel reproduction even from a mono recording and for pre-processing for ASR.
(81) Some implementation forms can relate to a method to modify a multi- or single-channel audio signal obtained by recording one or multiple audio signal sources in a reverberant acoustic environment, wherein the method comprises minimizing the influence of the reverberations caused by the room and separating the recorded audio sound sources. The recording can be done by a combination of a microphone array with the ability to perform pre-processing, such as localization of the audio signal sources and beamforming, e.g. delay-and-sum, and distributed microphones, e.g. spot microphones, placed next to a subgroup of the audio signal sources.
(82) The non-preprocessed input audio signals or array signals and the pre-processed signals, together with available distributed spot microphones, can be analyzed using an STFT and can be buffered. The length of the buffer, e.g. length M, can be chosen individually for each frequency band. The buffered input audio signals can be combined in the short time Fourier transformation domain to obtain two-dimensional complex filters for each sub-band that can exploit the inter time interval or inter-frame statistics of the audio signals. The dry output audio signals, i.e. the separated and/or dereverberated input audio signals, can be obtained by performing a multi-dimensional convolution of the input audio signals or array microphone signals with those filters. The convolution can be performed in the short time Fourier transformation domain.
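The per-band buffering and multi-dimensional convolution described above can be sketched as follows. All names and shapes are assumptions for illustration: for each frequency bin k, the last M STFT frames of the P channels are stacked into a length-M·P buffer, and a multi-tap filter of shape (M·P)×P maps that buffer to one output frame.

```python
import numpy as np

# Minimal sketch (all names assumed): per-bin multi-tap filtering in
# the STFT domain. For each bin k and frame n, the M most recent frames
# of the P channels are flattened channel-major into a buffer, and the
# filter H[k] (shape M*P x P) produces the P output coefficients.

rng = np.random.default_rng(1)
P, M, K, N = 2, 4, 5, 20     # channels, taps, frequency bins, frames
X = rng.standard_normal((K, P, N)) + 1j * rng.standard_normal((K, P, N))
H = rng.standard_normal((K, M * P, P)) + 1j * rng.standard_normal((K, M * P, P))

Y = np.zeros((K, P, N), dtype=complex)
for k in range(K):           # sub-bands are decoupled, so loop per bin
    for n in range(M - 1, N):
        # buffer of the M most recent frames, flattened channel-major
        buf = X[k, :, n - M + 1 : n + 1].reshape(-1)
        Y[k, :, n] = H[k].conj().T @ buf
```

Because the buffer length M can be chosen per frequency band, the inner loop bound and the filter shape could differ between bins; a single M is used here only to keep the sketch short.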
(83) The filters can be designed to fulfill the condition of maximum entropy of the output audio signals in the STFT domain constrained by maintaining the coherence, e.g. normalized cross correlation, between the pre-processed audio signal and the distributed spot microphones on one side and the input audio signals or array microphone signals on the other side according to:
H = Φ_xx^−1 Γ_xS.
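The formula H = Φ_xx^−1 Γ_xS can be evaluated per frequency bin from sample statistics. The following is a hedged sketch under assumed shapes: Φ_xx is taken as a sample covariance of the buffered input coefficients and Γ_xS as a sample cross term toward the reference (pre-processed and/or spot-microphone) signals; the true estimators used by the apparatus may differ.

```python
import numpy as np

# Hedged sketch of H = Phi_xx^{-1} Gamma_xS for one frequency bin.
# X holds stacked (buffered) input STFT coefficients, S the reference
# signals; both are random stand-ins. Shapes are assumptions.

rng = np.random.default_rng(2)
MP, P, N = 8, 2, 200          # stacked buffer size M*P, outputs P, frames N
X = rng.standard_normal((MP, N)) + 1j * rng.standard_normal((MP, N))
S = rng.standard_normal((P, N)) + 1j * rng.standard_normal((P, N))

Phi_xx = X @ X.conj().T / N               # input covariance, MP x MP
Gamma_xS = X @ S.conj().T / N             # cross term,       MP x P
H = np.linalg.solve(Phi_xx, Gamma_xS)     # H = Phi_xx^{-1} Gamma_xS
```

Using `np.linalg.solve` instead of forming the explicit inverse is a standard numerical choice; it yields the same H with better conditioning behavior.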
(84) Some implementation forms can further relate to a method wherein a pre-processing stage can be unavailable and the filters can be designed to maintain the coherence of each audio source signal to its own history and the independence of the audio signal sources in the STFT domain according to:
H = Φ_xx^−1 Γ̂_sS (Γ̂_sS^H Φ_xx^−1 Γ̂_sS)^−1.
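The pre-processing-free design H = Φ_xx^−1 Γ̂_sS (Γ̂_sS^H Φ_xx^−1 Γ̂_sS)^−1 has the familiar linearly constrained minimum-variance structure: by construction it satisfies H^H Γ̂_sS = I, i.e. the assumed source-coherence directions pass through the filter undistorted. A hedged numerical sketch with assumed shapes:

```python
import numpy as np

# Hedged sketch of H = Phi_xx^{-1} G (G^H Phi_xx^{-1} G)^{-1} with
# G standing in for the estimated source coherence matrix Gamma_hat_sS.
# Phi_xx is made Hermitian positive definite so the inverse exists.

rng = np.random.default_rng(3)
MP, P = 8, 2
A = rng.standard_normal((MP, MP)) + 1j * rng.standard_normal((MP, MP))
Phi_xx = A @ A.conj().T + np.eye(MP)      # well-conditioned covariance
G = rng.standard_normal((MP, P)) + 1j * rng.standard_normal((MP, P))

PhiG = np.linalg.solve(Phi_xx, G)         # Phi_xx^{-1} G
H = PhiG @ np.linalg.inv(G.conj().T @ PhiG)

# the distortionless constraint H^H G = I_P holds by construction
assert np.allclose(H.conj().T @ G, np.eye(P))
```

The final assertion verifies the constraint algebraically implied by the formula, which is what distinguishes this blind design from the reference-driven one above.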
(85) An estimate of an auto coherence matrix of the audio source signals can be calculated by means of an eigenvalue decomposition of a square matrix whose rows can be selected from the rows of an auto coherence matrix of the input audio signals or microphone signals. The number of rows can be determined by the number of separable audio signal sources, which may maximally be the number of inputs or microphones. The matrix U containing in its columns the eigenvectors of the so-constructed matrix C can be inverted and the estimate of the audio source auto coherence matrix can be calculated by:
Γ̂_sS(k,n) := (I_M ⊗ U^−1) · Γ_xX · U.
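The eigen-decomposition step can be sketched as follows. This is a minimal illustration with assumed shapes and an assumed row-selection rule: C stands in for the square matrix built from selected rows of the input auto coherence, and only the diagonalisation by its eigenvector matrix U is demonstrated.

```python
import numpy as np

# Hedged sketch of the EVD step: build a stand-in Hermitian matrix C,
# take the eigenvectors U (columns) and verify that U^{-1} C U
# diagonalises C, which is the property the estimate above relies on.

rng = np.random.default_rng(4)
P = 3                                     # number of separable sources
B = rng.standard_normal((P, P)) + 1j * rng.standard_normal((P, P))
C = B @ B.conj().T                        # stand-in for the selected rows

eigvals, U = np.linalg.eig(C)             # columns of U are eigenvectors
U_inv = np.linalg.inv(U)

# diagonalisation check: U^{-1} C U recovers the eigenvalues
D = U_inv @ C @ U
assert np.allclose(D, np.diag(eigvals))
```

In the full estimate the inverted eigenvector matrix is then applied blockwise across the M buffer taps, which is what the Kronecker factor I_M ⊗ U^−1 in the formula expresses.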
(86) Some implementation forms can further relate to a method to estimate acoustic transfer functions based on the calculated optimal 2-dimensional filters according to:
Ĝ(k,n) = (H^H x(k,n) diag{X_1(k,n), X_2(k,n), . . . , X_P(k,n)}^−1)^−1.
(87) Some implementation forms can allow for processing in the STFT domain. This can provide high system tracking capabilities because of an inherent batch block processing, and high scalability, i.e. the resolution in the time and frequency domain can freely be chosen using suitable windows. The system can approximately be decoupled in the STFT domain. Therefore, the processing can be parallelized for each frequency bin. Furthermore, different sub-bands can be treated independently, e.g. different filter orders for dereverberation can be used for different sub-bands.
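The decoupling argument above can be made concrete with a small sketch: because the sub-bands are processed independently in the STFT domain, the per-bin work can be dispatched to a thread pool, and each bin can apply its own treatment. The per-bin gain used here is a deliberately trivial stand-in for the real per-bin filtering.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Sketch (assumed names/shapes): each frequency bin is processed
# independently, so the bins can run in parallel, and each bin may use
# different parameters -- here a toy per-bin gain instead of a filter.

rng = np.random.default_rng(5)
K, P, N = 8, 2, 30
X = rng.standard_normal((K, P, N))
gains = np.linspace(1.0, 0.5, K)          # different treatment per bin

def process_bin(k: int) -> np.ndarray:
    return gains[k] * X[k]

with ThreadPoolExecutor() as pool:
    Y = np.stack(list(pool.map(process_bin, range(K))))
```

In the same way, the real implementation could give each bin its own filter order M, since nothing couples the bins once the signals are in the STFT domain.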
(88) Some implementation forms can use a multi-tap approach in the STFT domain. Therefore, inter time interval or inter-frame statistics of the dry audio signals can be exploited. Each dry audio signal can be coherent to its own history. Therefore, it can be statistically represented over a predefined time by only one eigenvector. The eigenvectors of the audio source signals can be orthogonal.