Rendering system
10659901 · 2020-05-19
CPC classification (Section H, Electricity): H04S2400/09; H04S2420/01; H04S2420/13; H04S2400/15; H04S2400/11; H04S2420/11
Abstract
A rendering system including a plurality of loudspeakers, at least one microphone and a signal processing unit. The signal processing unit is configured to determine at least some components of a loudspeaker-enclosure-microphone transfer function matrix estimate describing acoustic paths between the plurality of loudspeakers and the at least one microphone, using a rendering filters transfer function matrix by means of which a number of virtual sources is reproduced with the plurality of loudspeakers.
Claims
1. A rendering system, comprising: a plurality of loudspeakers; at least one microphone; and a signal processing unit; wherein, using a rendering filters transfer function matrix, a number of virtual sources is reproduced with the plurality of loudspeakers; wherein the signal processing unit is configured to determine at least some components of a loudspeaker-enclosure-microphone transfer function matrix estimate describing acoustic paths between the plurality of loudspeakers and the at least one microphone using said rendering filters transfer function matrix; wherein the signal processing unit is configured to estimate at least some components of a source-specific transfer function matrix describing acoustic paths between the number of virtual sources and the at least one microphone; wherein the signal processing unit is configured to determine the loudspeaker-enclosure-microphone transfer function matrix estimate using the estimated source-specific transfer function matrix; and wherein the signal processing unit is configured to determine at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate based on the equation
Ĥ=Ĥ.sub.SH.sub.D.sup.+, wherein Ĥ represents the loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sub.S represents the estimated source-specific transfer function matrix, wherein H.sub.D represents the rendering filters transfer function matrix, and wherein H.sub.D.sup.+ represents an approximate inverse of the rendering filters transfer function matrix H.sub.D.
2. The rendering system according to claim 1, wherein the signal processing unit is configured to adaptively estimate the source-specific transfer function matrix by minimizing a cost function derived from a difference between a recorded signal of the at least one microphone and an estimated signal of the at least one microphone obtained using the estimated source-specific transfer function matrix.
3. The rendering system according to claim 1, wherein the signal processing unit is configured to determine the components of the loudspeaker-enclosure-microphone transfer function matrix estimate which are sensitive to a column space of the rendering filters transfer function matrix.
4. The rendering system according to claim 1, wherein in response to a change of at least one out of a number of virtual sources and a position of at least one of the virtual sources, the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate using a rendering filters transfer function matrix corresponding to the changed virtual sources.
5. The rendering system according to claim 1, wherein the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate based on the equation
Ĥ(κ|κ)=Ĥ.sup.⊥(κ|κ−1)+Ĥ.sub.S(κ|κ)H.sub.D.sup.+(κ), wherein κ−1 denotes a previous time interval, wherein κ denotes a current time interval, wherein between the previous time interval and the current time interval at least one out of a number of virtual sources and a position of at least one of the virtual sources is changed, wherein Ĥ(κ|κ) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sup.⊥(κ|κ−1) represents components of the loudspeaker-enclosure-microphone transfer function matrix estimate which are not sensitive to the column space of the rendering filters transfer function matrix, wherein Ĥ.sub.S(κ|κ) represents an estimated source-specific transfer function matrix, and wherein H.sub.D.sup.+(κ) represents an inverse rendering filters transfer function matrix.
6. The rendering system according to claim 4, wherein the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate based on the equation
Ĥ(κ|κ)=Ĥ(κ|κ−1)+(Ĥ.sub.S(κ|κ)−Ĥ.sub.S(κ|κ−1))H.sub.D.sup.+(κ) in order to reduce an average load of the signal processing unit; wherein κ−1 denotes a previous time interval, wherein κ denotes a current time interval, wherein between the current time interval and the previous time interval at least one out of a number of virtual sources and a position of at least one of the virtual sources is changed, wherein Ĥ(κ|κ) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ(κ|κ−1) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sub.S(κ|κ) represents an estimated source-specific transfer function matrix, wherein Ĥ.sub.S(κ|κ−1) represents an estimated source-specific transfer function matrix, and wherein H.sub.D.sup.+(κ) represents an inverse rendering filters transfer function matrix.
7. The rendering system according to claim 4, wherein the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate based on the distributedly evaluated equation
Ĥ(κ|κ−1)=Ĥ(κ−1|κ−2)+ΔĤ.sub.S.sup.(κ−1)H.sub.D.sup.+(κ−1) as part of an initialization of a following interval's estimated source-specific transfer function matrix by
Ĥ.sub.S(κ+1|κ)=(Ĥ(κ−1|κ−2)+ΔĤ.sub.S.sup.(κ−1)H.sub.D.sup.+(κ−1))H.sub.D(κ+1)+ΔĤ.sub.S.sup.(κ)H.sub.T.sup.(κ,κ+1) in order to reduce a peak load of the signal processing unit; wherein κ−2 denotes a second previous time interval, wherein κ−1 denotes a previous time interval, wherein κ denotes a current time interval, wherein κ+1 denotes a following time interval, wherein between the time intervals at least one out of a number of virtual sources and a position of at least one of the virtual sources is changed, wherein Ĥ(κ|κ−1) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sub.S(κ+1|κ) represents an estimated source-specific transfer function matrix, wherein Ĥ(κ−1|κ−2) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein ΔĤ.sub.S.sup.(κ−1) represents an update of an estimated source-specific transfer function matrix, wherein H.sub.D.sup.+(κ−1) represents an inverse rendering filters transfer function matrix, wherein H.sub.D(κ+1) represents a rendering filters transfer function matrix, wherein ΔĤ.sub.S.sup.(κ) represents an update of an estimated source-specific transfer function matrix, and wherein H.sub.T.sup.(κ,κ+1) represents a transition transform matrix which describes an update of an estimated source-specific transfer function matrix of the current time interval to the following time interval, such that only a contribution of ΔĤ.sub.S.sup.(κ)H.sub.T.sup.(κ,κ+1) is computed between two time intervals.
8. The rendering system according to claim 1, wherein a number of virtual sources is smaller than a number of loudspeakers.
9. The rendering system according to claim 1, wherein the signals of the virtual sources are statistically independent.
10. A method, comprising: determining at least some components of a loudspeaker-enclosure-microphone transfer function matrix estimate describing acoustic paths between a plurality of loudspeakers and at least one microphone using a rendering filters transfer function matrix, wherein, using said rendering filters transfer function matrix, a number of virtual sources is reproduced with the plurality of loudspeakers; and estimating at least some components of a source-specific transfer function matrix describing acoustic paths between the number of virtual sources and the at least one microphone, wherein the loudspeaker-enclosure-microphone transfer function matrix estimate is determined using the estimated source-specific transfer function matrix; wherein at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate are determined based on the equation
Ĥ=Ĥ.sub.SH.sub.D.sup.+, wherein Ĥ represents the loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sub.S represents the estimated source-specific transfer function matrix, wherein H.sub.D represents the rendering filters transfer function matrix, and wherein H.sub.D.sup.+ represents an approximate inverse of the rendering filters transfer function matrix H.sub.D.
11. A non-transitory digital storage medium having a computer program stored thereon to perform the method comprising: determining at least some components of a loudspeaker-enclosure-microphone transfer function matrix estimate describing acoustic paths between a plurality of loudspeakers and at least one microphone using a rendering filters transfer function matrix, wherein, using said rendering filters transfer function matrix, a number of virtual sources is reproduced with the plurality of loudspeakers; and estimating at least some components of a source-specific transfer function matrix describing acoustic paths between the number of virtual sources and the at least one microphone, wherein the loudspeaker-enclosure-microphone transfer function matrix estimate is determined using the estimated source-specific transfer function matrix; wherein at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate are determined based on the equation
Ĥ=Ĥ.sub.SH.sub.D.sup.+, wherein Ĥ represents the loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sub.S represents the estimated source-specific transfer function matrix, wherein H.sub.D represents the rendering filters transfer function matrix, and wherein H.sub.D.sup.+ represents an approximate inverse of the rendering filters transfer function matrix H.sub.D.
12. A rendering system, comprising: a plurality of loudspeakers; at least one microphone; and a signal processing unit; wherein, using a rendering filters transfer function matrix, a number of virtual sources is reproduced with the plurality of loudspeakers; wherein the signal processing unit is configured to determine at least some components of a loudspeaker-enclosure-microphone transfer function matrix estimate describing acoustic paths between the plurality of loudspeakers and the at least one microphone using said rendering filters transfer function matrix; and wherein the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate based on the equation
Ĥ(κ|κ)=Ĥ.sup.⊥(κ|κ−1)+Ĥ.sub.S(κ|κ)H.sub.D.sup.+(κ), wherein κ−1 denotes a previous time interval, wherein κ denotes a current time interval, wherein between the previous time interval and the current time interval at least one out of a number of virtual sources and a position of at least one of the virtual sources is changed, wherein Ĥ(κ|κ) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sup.⊥(κ|κ−1) represents components of the loudspeaker-enclosure-microphone transfer function matrix estimate which are not sensitive to the column space of the rendering filters transfer function matrix, wherein Ĥ.sub.S(κ|κ) represents an estimated source-specific transfer function matrix, and wherein H.sub.D.sup.+(κ) represents an inverse rendering filters transfer function matrix.
13. A rendering system, comprising: a plurality of loudspeakers; at least one microphone; and a signal processing unit; wherein, using a rendering filters transfer function matrix, a number of virtual sources is reproduced with the plurality of loudspeakers; wherein the signal processing unit is configured to determine at least some components of a loudspeaker-enclosure-microphone transfer function matrix estimate describing acoustic paths between the plurality of loudspeakers and the at least one microphone using said rendering filters transfer function matrix; wherein, in response to a change of at least one out of a number of virtual sources and a position of at least one of the virtual sources, the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate using a rendering filters transfer function matrix corresponding to the changed virtual sources; and wherein the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate based on the equation
Ĥ(κ|κ)=Ĥ(κ|κ−1)+(Ĥ.sub.S(κ|κ)−Ĥ.sub.S(κ|κ−1))H.sub.D.sup.+(κ) in order to reduce an average load of the signal processing unit; wherein κ−1 denotes a previous time interval, wherein κ denotes a current time interval, wherein between the current time interval and the previous time interval at least one out of a number of virtual sources and a position of at least one of the virtual sources is changed, wherein Ĥ(κ|κ) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ(κ|κ−1) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sub.S(κ|κ) represents an estimated source-specific transfer function matrix, wherein Ĥ.sub.S(κ|κ−1) represents an estimated source-specific transfer function matrix, and wherein H.sub.D.sup.+(κ) represents an inverse rendering filters transfer function matrix.
14. A rendering system, comprising: a plurality of loudspeakers; at least one microphone; and a signal processing unit; wherein, using a rendering filters transfer function matrix, a number of virtual sources is reproduced with the plurality of loudspeakers; wherein the signal processing unit is configured to determine at least some components of a loudspeaker-enclosure-microphone transfer function matrix estimate describing acoustic paths between the plurality of loudspeakers and the at least one microphone using said rendering filters transfer function matrix; wherein, in response to a change of at least one out of a number of virtual sources and a position of at least one of the virtual sources, the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate using a rendering filters transfer function matrix corresponding to the changed virtual sources; and wherein the signal processing unit is configured to update at least some components of the loudspeaker-enclosure-microphone transfer function matrix estimate based on the distributedly evaluated equation
Ĥ(κ|κ−1)=Ĥ(κ−1|κ−2)+ΔĤ.sub.S.sup.(κ−1)H.sub.D.sup.+(κ−1) as part of an initialization of a following interval's estimated source-specific transfer function matrix by
Ĥ.sub.S(κ+1|κ)=(Ĥ(κ−1|κ−2)+ΔĤ.sub.S.sup.(κ−1)H.sub.D.sup.+(κ−1))H.sub.D(κ+1)+ΔĤ.sub.S.sup.(κ)H.sub.T.sup.(κ,κ+1) in order to reduce a peak load of the signal processing unit; wherein κ−2 denotes a second previous time interval, wherein κ−1 denotes a previous time interval, wherein κ denotes a current time interval, wherein κ+1 denotes a following time interval, wherein between the time intervals at least one out of a number of virtual sources and a position of at least one of the virtual sources is changed, wherein Ĥ(κ|κ−1) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein Ĥ.sub.S(κ+1|κ) represents an estimated source-specific transfer function matrix, wherein Ĥ(κ−1|κ−2) represents a loudspeaker-enclosure-microphone transfer function matrix estimate, wherein ΔĤ.sub.S.sup.(κ−1) represents an update of an estimated source-specific transfer function matrix, wherein H.sub.D.sup.+(κ−1) represents an inverse rendering filters transfer function matrix, wherein H.sub.D(κ+1) represents a rendering filters transfer function matrix, wherein ΔĤ.sub.S.sup.(κ) represents an update of an estimated source-specific transfer function matrix, and wherein H.sub.T.sup.(κ,κ+1) represents a transition transform matrix which describes an update of an estimated source-specific transfer function matrix of the current time interval to the following time interval, such that only a contribution of ΔĤ.sub.S.sup.(κ)H.sub.T.sup.(κ,κ+1) is computed between two time intervals.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments of the present invention will be detailed subsequently referring to the appended drawings.
DETAILED DESCRIPTION OF THE INVENTION
(16) Equal or equivalent elements or elements with equal or equivalent functionality are denoted in the following description by equal or equivalent reference numerals.
(17) In the following description, a plurality of details are set forth to provide a more thorough explanation of embodiments of the present invention. However, it will be apparent to one skilled in the art that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form rather than in detail in order to avoid obscuring embodiments of the present invention. In addition, features of the different embodiments described hereinafter may be combined with each other unless specifically noted otherwise.
(19) In embodiments, the signal processing unit 106 can be configured to use the rendering filters transfer function matrix H.sub.D for calculating individual loudspeaker signals (or signals that are to be reproduced by the individual loudspeakers 102) from source signals associated with the virtual sources 108. Thereby, normally, more than one of the loudspeakers 102 is used for reproducing one of the source signals associated with the virtual sources 108. The signal processing unit 106 can be implemented, for example, by means of a stationary or mobile computer, smartphone, or tablet, or as a dedicated signal processing unit.
(20) The rendering system can comprise up to N.sub.L loudspeakers 102, wherein N.sub.L is a natural number greater than or equal to two, N.sub.L≥2. Further, the rendering system can comprise up to N.sub.M microphones, wherein N.sub.M is a natural number greater than or equal to one, N.sub.M≥1. The number N.sub.S of virtual sources may be equal to or greater than one, N.sub.S≥1. In embodiments, the number N.sub.S of virtual sources is smaller than the number N.sub.L of loudspeakers, N.sub.S<N.sub.L.
(21) In embodiments, the signal processing unit 106 can be further configured to estimate at least some components of a source-specific transfer function matrix H.sub.S describing acoustic paths 112 between the number of virtual sources 108 and the at least one microphone 104, to obtain a source-specific transfer function matrix estimate Ĥ.sub.S. Thereby, the processing unit 106 can be configured to determine the loudspeaker-enclosure-microphone transfer function matrix estimate Ĥ using the source-specific transfer function matrix estimate Ĥ.sub.S.
(22) In the following, embodiments of the present invention will be described in further detail. Thereby, the idea of estimating the source-specific transfer function matrix H.sub.S and using the same for determining the loudspeaker-enclosure-microphone transfer function matrix estimate Ĥ will be referred to as source-specific system identification.
(23) In other words, subsequently embodiments of the source-specific system identification (SSSysId) and embodiments allowing either a minimization of the peak or the average computational complexity, based on embodiments of the source-specific system identification, will be described. While embodiments of the source-specific system identification allow a unique and efficient filter adaptation and provide the mathematical foundation for deriving a valid LEMS estimate from the identified filters, embodiments of average- and peak-load-optimized systems allow a flexible, application-specific use of processing resources.
(24) Consider an object-based rendering system, e.g. WFS [SRA08], which renders N.sub.S statistically independent virtual sound sources (e.g., point sources, plane-wave sources) employing an array of N.sub.L loudspeakers. To allow for a voice control of an entertainment system or an additional use of the reproduction system as hands-free front-end in a communication scenario, a set of N.sub.M microphones for sound acquisition and an AEC unit may be used. The acoustic paths between the N.sub.L loudspeakers and the N.sub.M microphones of interest can be described as linear systems with discrete-time Fourier transform (DTFT) domain transfer function matrices H(e.sup.jΩ)∈ℂ.sup.N.sup.M.sup.×N.sup.L, and the rendering filters by a transfer function matrix H.sub.D(e.sup.jΩ)∈ℂ.sup.N.sup.L.sup.×N.sup.S,
(25) H.sub.S=HH.sub.D∈ℂ.sup.N.sup.M.sup.×N.sup.S,(1)
where the cascade of the rendering filters with the LEMS will be referred to as the source-specific system H.sub.S.
(26) Both for recording near-end sources only (involving an AEC unit) and for room equalization, the LEMS H can be identified adaptively. This can be done by minimizing a quadratic cost function derived from the difference e.sub.Mic between the recorded microphone signals x.sub.Mic and the microphone signal estimates obtained with the LEMS estimate Ĥ, as depicted in the appended drawings.
(27) As mentioned before, multichannel acoustic system identification suffers from the strongly cross-correlated loudspeaker signals typically occurring when rendering acoustic scenes with more than one loudspeaker: for more loudspeakers than virtual sources (N.sub.L>N.sub.S), the acoustic paths of the LEMS H cannot be determined uniquely (non-uniqueness problem [BMS98]). This means that an infinitely large set of possible solutions for Ĥ exists, of which only one corresponds to the true LEMS H.
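The non-uniqueness problem can be made concrete with a small numerical sketch (not part of the patent; the dimensions, random matrices, and single-frequency-bin view are illustrative assumptions): any perturbation of the LEMS whose rows lie in the left null space of H.sub.D produces exactly the same microphone signals.

```python
import numpy as np

rng = np.random.default_rng(6)
N_L, N_M, N_S = 8, 2, 3  # loudspeakers, microphones, virtual sources

H = rng.standard_normal((N_M, N_L))    # "true" LEMS (one frequency bin)
H_D = rng.standard_normal((N_L, N_S))  # rendering filters

# Perturb H by a component whose action vanishes on the column space of H_D:
G = rng.standard_normal((N_M, N_L))
P = H_D @ np.linalg.pinv(H_D)          # projector onto the column space of H_D
H_alt = H + G @ (np.eye(N_L) - P)
assert not np.allclose(H_alt, H)       # a genuinely different LEMS ...

s = rng.standard_normal((N_S, 64))     # arbitrary virtual source signals
x_L = H_D @ s                          # loudspeaker signals
# ... yet both systems explain the microphone observations equally well:
assert np.allclose(H_alt @ x_L, H @ x_L)
```

Since the microphone data cannot distinguish H from H_alt, no adaptive algorithm driven only by these signals can recover the true LEMS.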
(28) As opposed to this, the paths from each virtual source to each microphone can be described as an N.sub.S×N.sub.M MIMO system H.sub.S (marked in the appended drawings).
(29) Although Ĥ is not determined uniquely by Ĥ.sub.S in general, the non-uniqueness of this mapping is exactly the same as the non-uniqueness problem for determining Ĥ directly, and finding one of the systems is easily possible by approximating an inverse rendering system H.sub.D.sup.+ and pre-filtering the source-specific system Ĥ.sub.S to obtain one particular
Ĥ=Ĥ.sub.SH.sub.D.sup.+.(2)
(30) Hence, a statistically optimal estimate Ĥ, which also could have been the result of adapting Ĥ directly, can be obtained by identifying H.sub.S by an Ĥ.sub.S with very low effort and without a non-uniqueness problem, and by transforming Ĥ.sub.S into an estimate of H in a systematic way. This can be seen as exploiting non-uniqueness rather than seeing it as a problem: if it is impossible to infer the true system anyway, the effort for finding one of the solutions should be minimized.
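A minimal numerical sketch of Eq. (2) (illustrative dimensions and random matrices, one frequency bin; not from the patent): the pseudoinverse mapping yields an LEMS estimate that generally differs from the true LEMS but predicts the microphone signals exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
N_L, N_M, N_S = 8, 2, 3

H = rng.standard_normal((N_M, N_L))    # true LEMS (one frequency bin)
H_D = rng.standard_normal((N_L, N_S))  # rendering filters
H_S = H @ H_D                          # source-specific system, Eq. (1)

# Eq. (2): one particular LEMS estimate via the Moore-Penrose pseudoinverse
H_hat = H_S @ np.linalg.pinv(H_D)

assert not np.allclose(H_hat, H)       # generally not the true LEMS ...
s = rng.standard_normal((N_S, 100))
# ... but it predicts the microphone signals exactly, because the loudspeaker
# signals H_D s always lie in the column space of H_D
assert np.allclose(H_hat @ (H_D @ s), H @ (H_D @ s))
```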
(31) Subsequently, determining an LEMS estimate from a source-specific system estimate will be described, i.e., a suitable mapping from a source-specific system to an LEMS corresponding to it. For given source-specific transfer function estimates Ĥ.sub.S, the concatenation of the driving filters with the LEMS estimate should fulfill ĤH.sub.D=Ĥ.sub.S, analogously to Eq. (1). For the typical case of fewer synthesized sources than loudspeakers (N.sub.S<N.sub.L), this linear system of equations does not allow a unique solution for Ĥ, as an inverse H.sub.D.sup.−1 does not exist. However, the minimum-norm solution can be obtained by the Moore-Penrose pseudoinverse [Str09]. Note that the rendering system's driving filters and their inverses are determined during the production of the audio material and can be calculated at the production stage already. Hence, the LEMS estimate Ĥ can then be computed from the source-specific transfer functions according to Eq. (2) by pre-filtering Ĥ.sub.S. For a driver matrix H.sub.D with pseudoinverse H.sub.D.sup.+,
P=H.sub.DH.sub.D.sup.+,
P.sup.⊥=(I−P)
are known as the projectors onto the column space of H.sub.D and onto the left null space of H.sub.D, respectively [Str09]. These two matrices decompose the N.sub.L-dimensional space into two orthogonal subspaces. With this, the LEMS H can be expressed as a sum of two orthogonal components
(32) H=HP+HP.sup.⊥=H.sup.∥+H.sup.⊥,(3)
where H.sup.∥=H.sub.SH.sub.D.sup.+ is a filtered version of the source-specific system H.sub.S, and H.sup.⊥ lies in the left null space of H.sub.D and is not excited by the latter. Therefore, H.sup.⊥ is not observable at the microphones and represents the ambiguity of the solutions for Ĥ (non-uniqueness problem). Whenever H.sub.D.sup.+ is employed to map a source-specific system back to an LEMS estimate, the estimate's rows will lie in the column space of H.sub.D, and all components in the left null space of H.sub.D, namely H.sup.⊥, are implied to be zero.
(33) Hence, only the LEMS components sensitive to the column space of H.sub.D can and should be estimated from a particular Ĥ.sub.S. This idea will be employed in the following to extend source-specific system identification to time-varying virtual acoustic scenes.
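The orthogonal decomposition of Eq. (3) can be verified numerically (an illustrative sketch with assumed dimensions, not from the patent): the projectors split the LEMS into a component recoverable from H.sub.S and a component that the rendering filters never excite.

```python
import numpy as np

rng = np.random.default_rng(1)
N_L, N_M, N_S = 8, 2, 3

H = rng.standard_normal((N_M, N_L))
H_D = rng.standard_normal((N_L, N_S))
H_D_pinv = np.linalg.pinv(H_D)

P = H_D @ H_D_pinv        # projector onto the column space of H_D
P_perp = np.eye(N_L) - P  # projector onto the left null space of H_D

H_par = H @ P             # observable component H∥
H_perp = H @ P_perp       # unobservable component H⊥

assert np.allclose(H_par + H_perp, H)            # decomposition, Eq. (3)
assert np.allclose(H_par, (H @ H_D) @ H_D_pinv)  # H∥ = H_S H_D^+
assert np.allclose(H_perp @ H_D, np.zeros((N_M, N_S)))  # H⊥ not excited by H_D
```

The last assertion is why H⊥ cannot be observed at the microphones: it maps every rendered loudspeaker signal to zero.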
(34) In practice, the number and the positions of virtual acoustic sources may change over time. Thus, the rendering task can be divided into a sequence of intervals with different, but internally constant, virtual source configurations. These intervals can be indexed by the interval index κ, where κ is an integer number. At the beginning of an interval κ, an initial source-specific system estimate
Ĥ.sub.S(κ|κ−1)=Ĥ(κ|κ−1)H.sub.D(κ)(4)
can be computed from the information available from observing the interval κ−1, namely the initial LEMS estimate Ĥ(κ|κ−1)=Ĥ(κ−1|κ−1) obtained from interval κ−1, and the current interval's rendering filters H.sub.D(κ). After adapting only the source-specific system Ĥ.sub.S during interval κ, a final source-specific system estimate Ĥ.sub.S(κ|κ) is available at the end of interval κ. Embodying the idea to update only H.sup.∥ and to keep Ĥ.sup.⊥(κ|κ−1)=Ĥ(κ|κ−1)(I−H.sub.D(κ)H.sub.D.sup.+(κ)) unaltered during a particular interval κ, this can be formulated as
Ĥ(κ|κ)=Ĥ.sup.⊥(κ|κ−1)+Ĥ.sub.S(κ|κ)H.sub.D.sup.+(κ).(5)
(35) This can be shown to correspond to a minimum-norm update, i.e., the smallest update of the LEMS estimate which leads to Ĥ.sub.S(κ|κ).
(36) As this procedure leaves H.sup.⊥ unaltered (Ĥ.sup.⊥(κ|κ)=Ĥ.sup.⊥(κ|κ−1)), information about the true LEMS can accumulate over all intervals, allowing a continuous refinement of Ĥ in case of time-varying acoustic scenes.
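The accumulation effect of Eqs. (4) and (5) can be sketched numerically (illustrative assumptions, not from the patent: a static LEMS, random rendering filters per interval, and adaptation that is simply assumed to converge within each interval).

```python
import numpy as np

rng = np.random.default_rng(2)
N_L, N_M, N_S = 8, 2, 3
H = rng.standard_normal((N_M, N_L))  # true LEMS, assumed static while scenes change

H_hat = np.zeros((N_M, N_L))         # initial LEMS estimate
for kappa in range(8):               # intervals with changing virtual scenes
    H_D = rng.standard_normal((N_L, N_S))  # this interval's rendering filters
    H_D_pinv = np.linalg.pinv(H_D)

    # Eq. (4): starting point of the adaptation within the interval
    H_S_init = H_hat @ H_D
    # here we jump straight to the converged result of the adaptation
    H_S_final = H @ H_D

    # Eq. (5): update only the observable component, keep H⊥ unaltered
    H_perp = H_hat @ (np.eye(N_L) - H_D @ H_D_pinv)
    H_hat = H_perp + H_S_final @ H_D_pinv

# the estimate is exact on the most recent excitation subspace ...
assert np.allclose(H_hat @ H_D, H @ H_D)
# ... and the error over the whole LEMS shrinks as intervals accumulate
assert np.linalg.norm(H - H_hat) / np.linalg.norm(H) < 0.5
```

Each interval pins the estimate down on the column space of its H.sub.D(κ) while leaving the previously learned orthogonal components untouched, which is exactly the refinement behavior described above.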
(37) First, interval 1. At the beginning of interval 1 (Start in the corresponding drawing), only a small source-specific system is adapted.
(38) Second, the transition between intervals 1 and 2. At the transition between intervals 1 and 2 (center part of the corresponding drawing), the LEMS estimate and the following interval's source-specific system estimate are updated as described above.
(39) Third, interval 2. Analogously to interval 1, only a small source-specific system is adapted within interval 2 (bottom part of the corresponding drawing). Yet, an LEMS estimate is available in the background (system components contributed by interval 1 are gray now). In case of another scene change (beyond the time line of the corresponding drawing), the procedure is repeated accordingly.
(40) In the following, embodiments which reduce (or even minimize) a peak computational load or an average computational load for system identification will be described.
(41) Thinking about computationally powerful devices with limited electrical power resources (e.g., multicore tablets or smartphones) or devices which have to perform other, less time-critical tasks in addition to the signal processing, a minimization of the average computational load for the adaptive filtering is desirable. On the other hand, for the identification of very large systems, in the case of computationally less powerful processing devices, or when sharing one processing device with other time-critical applications (e.g., head units of a car), the peak load produced by the signal processing application is to be reduced. Thus, the idea of a generic concept allowing either average-load or peak-load minimization is combined with the idea of source-specific system identification in the following.
(42) In order to reduce the average load, the update can directly be computed as described above with respect to the time-varying virtual acoustic scenes, which leads to an efficient update equation
Ĥ(κ|κ)=Ĥ(κ|κ−1)+(Ĥ.sub.S(κ|κ)−Ĥ.sub.S(κ|κ−1))H.sub.D.sup.+(κ),(6)
(43) for which the operations on an LEMS estimate are outlined in the appended drawings.
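Algebraically, Eq. (6) is the same update as Eq. (5), just written incrementally; a short numerical check (illustrative dimensions and random data, not from the patent) confirms the equivalence.

```python
import numpy as np

rng = np.random.default_rng(3)
N_L, N_M, N_S = 8, 2, 3

H_hat_prev = rng.standard_normal((N_M, N_L))  # Ĥ(κ|κ−1), carried over
H_D = rng.standard_normal((N_L, N_S))         # H_D(κ)
H_D_pinv = np.linalg.pinv(H_D)

H_S_init = H_hat_prev @ H_D                             # Ĥ_S(κ|κ−1), Eq. (4)
H_S_final = H_S_init + rng.standard_normal((N_M, N_S))  # Ĥ_S(κ|κ) after adaptation

# Eq. (5): explicit form with the unaltered orthogonal component
H_perp = H_hat_prev @ (np.eye(N_L) - H_D @ H_D_pinv)
H_hat_eq5 = H_perp + H_S_final @ H_D_pinv

# Eq. (6): incremental form; only the filtered difference is added
H_hat_eq6 = H_hat_prev + (H_S_final - H_S_init) @ H_D_pinv

assert np.allclose(H_hat_eq5, H_hat_eq6)
```

The incremental form is cheaper on average because it avoids explicitly forming the projector onto the left null space.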
(44) A peak-load optimization can be obtained by the idea of splitting the SSSysId update into a component directly originating from the most recent interval's source-specific system (to be computed at the scene change) and another component which solely depends on information available one scene change before (pre-computable).
(45) Doing so after inserting the above-described update (Eq. (6)) in Eq. (4) leads to
Ĥ.sub.S(κ+1|κ)=(Ĥ(κ−1|κ−2)+ΔĤ.sub.S.sup.(κ−1)H.sub.D.sup.+(κ−1))H.sub.D(κ+1)+ΔĤ.sub.S.sup.(κ)H.sub.T.sup.(κ,κ+1),(7)
with the update ΔĤ.sub.S.sup.(κ)=Ĥ.sub.S(κ|κ)−Ĥ.sub.S(κ|κ−1) and the transition transform matrix H.sub.T.sup.(κ,κ+1)=H.sub.D.sup.+(κ)H.sub.D(κ+1), which maps the update of a source-specific system of interval κ to an update for a source-specific system in interval κ+1. The benefit of this formulation becomes obvious from the adaptation scheme depicted in the appended drawings.
(46) Further details of the adaptation scheme are depicted in the appended drawings.
(47) Note that both the peak-load-optimized and the average-load-optimized SSSysId mathematically lead to identical LEMS estimates (up to machine precision). The total computational overhead of the peak-load-optimized scheme with respect to the average-load-optimized one is caused by the additional transform by H.sub.T.sup.(κ,κ+1), which is negligible for long time intervals with constant virtual source configuration.
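This equivalence can be checked with a small sketch of the distributed evaluation of Eq. (7) (illustrative random matrices and dimensions, not from the patent): the direct accumulation and the split via the transition transform give the same initialization for the next interval.

```python
import numpy as np

rng = np.random.default_rng(4)
N_L, N_M, N_S = 8, 2, 3

H_hat_km1 = rng.standard_normal((N_M, N_L))  # Ĥ(κ−1|κ−2)
H_D_km1 = rng.standard_normal((N_L, N_S))    # H_D(κ−1)
H_D_k = rng.standard_normal((N_L, N_S))      # H_D(κ)
H_D_kp1 = rng.standard_normal((N_L, N_S))    # H_D(κ+1)
dH_S_km1 = rng.standard_normal((N_M, N_S))   # ΔĤ_S^(κ−1)
dH_S_k = rng.standard_normal((N_M, N_S))     # ΔĤ_S^(κ)

pinv_km1 = np.linalg.pinv(H_D_km1)
pinv_k = np.linalg.pinv(H_D_k)

# direct evaluation: accumulate the LEMS estimate, then project onto H_D(κ+1)
H_hat_k = H_hat_km1 + dH_S_km1 @ pinv_km1    # Ĥ(κ|κ−1)
H_hat_kk = H_hat_k + dH_S_k @ pinv_k         # Ĥ(κ|κ)
H_S_direct = H_hat_kk @ H_D_kp1              # Ĥ_S(κ+1|κ), Eq. (4)

# distributed evaluation, Eq. (7): Ĥ(κ|κ−1) is pre-computable during interval κ,
# so at the scene change only the small term ΔĤ_S^(κ) H_T^(κ,κ+1) remains
H_T = pinv_k @ H_D_kp1                       # transition transform matrix
H_S_dist = H_hat_k @ H_D_kp1 + dH_S_k @ H_T

assert np.allclose(H_S_direct, H_S_dist)
```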
(48) The lack of side information (virtual source signals and rendering filters, or a rendering filter computation strategy based on other side information) when deploying audio material for a particular rendering system precludes the use of this approach. If the availability of the side information during system identification cannot be excluded, strong evidence for the use of this method can be obtained from the computational load of the system identification process in an AEC application: rendering a single virtual source for a very long time, the computational load caused by the adaptive filtering becomes very low and independent of the number of loudspeakers, which contradicts classical system identification approaches. If this holds, distinguishing between SSSysId and SDAF is needed. To this end, a static virtual scene with more than one virtual source with independently time-varying spectral content can be synthesized: while SSSysId produces a constant computational load, the computational load of SDAF will peak repeatedly due to the purely data-driven transforms for signals and systems. Another approach for distinguishing SSSysId from SDAF would be to alternate between signals with orthogonal loudspeaker-excitation patterns (e.g., virtual point sources at the positions of different physical loudspeakers): the Echo Return Loss Enhancement (ERLE) can be expected to break down similarly at every scene change for SDAF, while SSSysId exhibits a significantly lowered breakdown when performing a previously observed scene change again. However, these tests involve at least access to the load statistics of a processor running the aforementioned rendering tasks.
(49) In the following, a verification and evaluation of the basic properties of the SSSysId adaptation scheme are provided by simulating a WFS scenario with a linear sound bar of N.sub.L=48 loudspeakers in front of a single microphone under free-field conditions, as depicted in the appended drawings. (The use of just a single microphone is sufficient for a general analysis of the behavior of the adaptation concept, as filter adaptation is performed independently for each microphone anyway.)
(50) The WFS system synthesizes, at a sampling rate of 8 kHz, one or more simultaneously active virtual point sources radiating statistically independent white noise signals. Besides, high-quality microphones are assumed by introducing additive white Gaussian noise at a level of −60 dB to the microphones. The system identification is performed by a GFDAF algorithm. The rendering systems' inverses are approximated in the Discrete Fourier Transform (DFT) domain, and a causal time-domain inverse system is obtained by applying a linear phase shift, an inverse DFT, and subsequent windowing.
(51) For numerical stability, the pseudoinverse is approximated in the DFT domain by a Tikhonov-regularized inverse H.sub.D.sup.+Tik=(H.sub.D.sup.HH.sub.D+εI).sup.−1H.sub.D.sup.H with a regularization constant ε=0.005, thereby offering a trade-off between the accuracy of the inversion (small ε) and the filter coefficient norm for ill-conditioned H.sub.D. To evaluate the simulations, the normalized residual error signal
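For one DFT bin, the Tikhonov-regularized inverse can be sketched in NumPy as follows. The function name and array shapes are illustrative assumptions; H_D is taken as the N.sub.L×N.sub.S rendering filter matrix of a single bin.

```python
import numpy as np

def tikhonov_pseudoinverse(H_D, eps=0.005):
    """Tikhonov-regularized pseudoinverse of a single-bin rendering
    filter matrix: (H_D^H H_D + eps*I)^{-1} H_D^H.

    H_D : complex array of shape (N_L, N_S), mapping the N_S virtual
          source signals to the N_L loudspeaker signals in one DFT bin.
    eps : regularization constant trading inversion accuracy (small eps)
          against the coefficient norm for ill-conditioned H_D.
    """
    N_S = H_D.shape[1]
    gram = H_D.conj().T @ H_D                  # (N_S, N_S) Gram matrix
    return np.linalg.solve(gram + eps * np.eye(N_S), H_D.conj().T)
```

For a well-conditioned H_D and small eps, the product tikhonov_pseudoinverse(H_D) @ H_D is close to the N_S×N_S identity matrix, which is the sense in which the pseudoinverse is "approximate".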
(52) E(k)=10 log.sub.10(∥x.sub.Mic(k)−x̂.sub.Mic(k)∥.sup.2/∥x.sub.Mic(k)∥.sup.2) dB
(53) is used, where x.sub.Mic(k)∈.sup.N.sup.M is the vector of microphone signals, x̂.sub.Mic(k)∈.sup.N.sup.M is its estimate at time instant k, and N.sub.M is the number of microphones. To evaluate the identified system itself, the normalized system error norm
(54) E.sub.sys=10 log.sub.10(Σ.sub.μ∥Ĥ(μ)−H(μ)∥.sub.F.sup.2/Σ.sub.μ∥H(μ)∥.sub.F.sup.2) dB
(55) is used, where Ĥ(μ) and H(μ) are DFT-domain transfer function matrices of the estimated and the true LEMS, μ∈{0, . . . , L−1} is the DFT bin index, and L is the DFT order.
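The two evaluation measures can be computed as sketched below. This is a NumPy sketch; the function names and array layouts are illustrative assumptions consistent with the symbol definitions of the surrounding paragraphs.

```python
import numpy as np

def normalized_system_error_db(H_true, H_est):
    """Normalized system error norm in dB. H_true and H_est hold the
    DFT-domain transfer functions of the true and the estimated LEMS,
    e.g. with shape (L, N_M, N_L): one N_M x N_L matrix per DFT bin.
    Squared Frobenius norms are summed over all L bins."""
    num = np.sum(np.abs(H_est - H_true) ** 2)
    den = np.sum(np.abs(H_true) ** 2)
    return 10.0 * np.log10(num / den)

def normalized_residual_error_db(x_mic, x_mic_est):
    """Normalized residual error of the estimated microphone signal
    for one block, in dB."""
    num = np.sum(np.abs(x_mic - x_mic_est) ** 2)
    den = np.sum(np.abs(x_mic) ** 2)
    return 10.0 * np.log10(num / den)
```

For instance, an estimate that is uniformly 10% off in magnitude yields a normalized system error norm of −20 dB, while a trivial all-zero estimate yields 0 dB.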
(56) In the following, two different experiments will be described.
(57) According to a first experiment, 24 s of the microphone signal are synthesized, divided into three intervals of length 8 s with different, but internally constant, virtual source configurations. The three intervals' groups of virtual sources are depicted in
(59) Obviously, the normalized residual error depicted in
(60) According to a second experiment, a study of the long-term stability of the proposed adaptation scheme is performed. To this end, 100 different virtual source positions are drawn with coordinates {right arrow over (x)}.sub.S=[x,y,0].sup.T, x∈[0.5,4.5], y∈[−5.1,−1.1], and each source is exclusively active in its own interval of length 1 s. The resulting scene is depicted in
(61) The adaptation of source-specific systems and the direct adaptation of the LEMS will be compared in terms of the normalized system error norms. These are depicted in
(62) Obviously, the less complex source-specific updates (curve 160) lead to a completely stable adaptation and to a performance similar to updating the LEMS directly (curve 162), also in the case of repeatedly changing virtual source configurations and for excitation with just a single virtual source. Thereby, the computational complexity is reduced by an order of magnitude. However, the repeated transforms with the regularized rendering inverse filters and the truncation of the convolution results to the modeled filter lengths result in a slightly increased normalized system error norm.
(63) Embodiments provide a method for identifying a MIMO system by employing side information (statistically independent virtual source signals, rendering filters) from an object-based rendering system (e.g., WFS or hands-free communication using a multi-loudspeaker front-end). This method does not make any assumptions about loudspeaker and microphone positions and allows system identification optimized for minimum peak load or minimum average load. As opposed to state-of-the-art methods, this approach has a predictably low computational complexity, independent of the spectral or spatial characteristics of the N.sub.S virtual sources and of the positions of the transducers (N.sub.L loudspeakers and N.sub.M microphones). For long intervals of constant virtual source configuration, a reduction of the complexity by a factor of about N.sub.L/N.sub.S is possible. A prototype has been simulated in order to verify the concept exemplarily for the identification of an LEMS for WFS with a linear sound bar.
(66) Many applications entail the identification of a Loudspeaker-Enclosure-Microphone System (LEMS) with multiple inputs (loudspeakers) and multiple outputs (microphones). The involved computational complexity typically grows at least proportionally with the number of acoustic paths, which is the product of the number of loudspeakers and the number of microphones. Furthermore, typical loudspeaker signals are highly correlated and preclude an exact identification of the LEMS (non-uniqueness problem). A state-of-the-art method for multichannel system identification known as Wave-Domain Adaptive Filtering (WDAF) employs the inherent nature of acoustic sound fields for complexity reduction and alleviates the non-uniqueness problem for special transducer arrangements. Embodiments, in contrast, do not make any assumption about the actual transducer placement, but employ side information available in an object-based rendering system (e.g., Wave Field Synthesis (WFS)), for which the number of virtual sources is lower than the number of loudspeakers, to reduce the computational complexity. In embodiments, (only) a source-specific system from each virtual source to each microphone can be identified adaptively and uniquely. This estimate of the source-specific system can then be transformed into an LEMS estimate. This idea can be further extended to the identification of an LEMS for the case of different virtual source configurations in different time intervals. For this general case, a peak-load-optimized and an average-load-optimized structure are presented, where the peak-load-optimized structure is well suited for less powerful systems and the average-load-optimized structure for powerful but portable systems which have to minimize the average consumption of electrical power.
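The central transformation described above, converting the adaptively identified source-specific system into an LEMS estimate via an approximate inverse of the rendering filters, can be illustrated for a single DFT bin. This is a NumPy sketch under illustrative assumptions (noise-free cascade, randomly drawn matrices, example dimensions); it is not the adaptive implementation of the embodiment.

```python
import numpy as np

N_L, N_S, N_M = 48, 2, 1      # loudspeakers, virtual sources, microphones
rng = np.random.default_rng(0)

# Rendering filter matrix of one DFT bin (virtual sources -> loudspeakers)
# and the true LEMS of that bin (loudspeakers -> microphones).
H_D = rng.standard_normal((N_L, N_S)) + 1j * rng.standard_normal((N_L, N_S))
H_true = rng.standard_normal((N_M, N_L)) + 1j * rng.standard_normal((N_M, N_L))

# The identifiable source-specific system is the cascade of rendering
# filters and LEMS (virtual sources -> microphones).
H_S = H_true @ H_D                                   # (N_M, N_S)

# Tikhonov-regularized pseudoinverse of the rendering filters.
eps = 1e-8
H_D_pinv = np.linalg.solve(H_D.conj().T @ H_D + eps * np.eye(N_S),
                           H_D.conj().T)             # (N_S, N_L)

# LEMS estimate as in the claims: H_hat = H_S @ H_D^+.
H_hat = H_S @ H_D_pinv                               # (N_M, N_L)
```

Because N_S < N_L, H_hat agrees with the true LEMS only on the subspace excited by the current virtual source configuration (so that H_hat @ H_D ≈ H_S); identifying the full LEMS relies on observing different virtual source configurations over time, as discussed above.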
(67) Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a block or device corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding block or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.
(68) Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.
(69) Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.
(70) Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine readable carrier.
(71) Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine readable carrier.
(72) In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.
(73) A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.
(74) A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.
(75) A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.
(76) A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.
(77) A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.
(78) In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are advantageously performed by any hardware apparatus.
(79) The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
(80) The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.
(81) While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.