Estimation of reverberant energy component from active audio source
10393571 · 2019-08-27
Assignee
Inventors
CPC classification
International classification
Abstract
Example embodiments disclosed herein relate to estimation of reverberant energy components from audio sources. A method of estimating a reverberant energy component from an active audio source (100) is disclosed. The method comprises determining a correspondence between the active audio source and a plurality of sample sources by comparing one or more spatial features of the active audio source with one or more spatial features of the plurality of sample sources, each of the sample sources being associated with an adaptive filtering model (101); obtaining an adaptive filtering model for the active audio source based on the determined correspondence (102); and estimating the reverberant energy component from the active audio source over time based on the adaptive filtering model (103). A corresponding system (800) and a computer program product (900) are also disclosed.
Claims
1. A method of estimating a reverberant energy component from an active audio source, comprising: determining a plurality of spatial features of the active audio source based on captured sound from the active audio source, wherein the plurality of spatial features comprise diffusivity; determining a correspondence between the active audio source and a plurality of adaptive filtering models, each adaptive filtering model corresponding to a respective plurality of spatial features, by comparing the plurality of spatial features of the active audio source with the plurality of spatial features of the plurality of adaptive filtering models, wherein determining the correspondence comprises determining, for each of the plurality of adaptive filtering models, a respective distance between the plurality of spatial features of the active audio source and the respective plurality of spatial features of the each adaptive filtering model; obtaining a particular adaptive filtering model for the active audio source based on each of the determined distances; and estimating the reverberant energy component from the active audio source over time based on the particular adaptive filtering model.
2. The method according to claim 1, wherein obtaining the particular adaptive filtering model for the active audio source comprises: in response to determining that none of the determined distances satisfies a predefined threshold: creating the particular adaptive filtering model, including assigning the spatial features of the active audio source to the particular adaptive filtering model; or in response to determining that a determined distance that corresponds to one of the adaptive filtering models satisfies the predefined threshold: assigning the one adaptive filtering model of the plurality of adaptive filtering models to the active audio source; and designating the one adaptive filtering model as the particular adaptive filtering model.
3. The method according to claim 2, wherein creating the particular adaptive filtering model comprises: estimating the particular adaptive filtering model by feeding an energy of the captured sound of a previous time frame into a predefined adaptive filter; and lowering a difference between an output of the adaptive filter and an energy of the captured sound of a current time frame.
4. The method according to claim 3, wherein the sound from the active audio source is captured by at least one audio capturing device, the at least one audio capturing device comprising an omnidirectional microphone.
5. The method according to claim 1, wherein the active audio source comprises speakers of an audio conference located at different positions with regard to at least one audio capturing device.
6. The method according to claim 1, wherein obtaining the adaptive filtering model comprises: transforming the captured sound into an audio signal in a frequency domain; extracting a direct energy component and the reverberant energy component; and estimating the particular adaptive filtering model by: feeding the direct energy component and the reverberant energy component into a predefined adaptive filter, and lowering a difference between an output of the predefined adaptive filter and the reverberant energy component.
7. The method according to claim 6, wherein the sound from the active audio source is captured by at least one audio capturing device by performing operations comprising: extracting the direct energy component and the reverberant energy component based on an arrangement of the at least one audio capturing device and a linear relation of the audio signal between one or two audio capturing devices.
8. The method according to claim 1, wherein the sound from the active audio source is captured by at least one audio capturing device and wherein the at least one audio capturing device comprises at least one of: three microphones arranged in directional cardioid topology, or three omnidirectional microphones arranged in equilateral triangle topology.
9. The method according to claim 1, wherein the plurality of spatial features of the audio source comprise spatial information about the audio source, and wherein determining the correspondence between the active audio source and the plurality of adaptive filtering models comprises: selecting one of the plurality of adaptive filtering models the spatial features of which are closest to the active audio source; and determining that the active audio source corresponds to the selected adaptive filtering model in response to determining that a distance between spatial features of the selected adaptive filtering model and the spatial features of the active audio source is within a predefined threshold.
10. The method according to claim 1, wherein the plurality of spatial features comprises angle, distance, position or sound level.
11. The method according to claim 1, wherein a spatial feature of an audio source describes a property of the audio source in relation to an audio capturing device which is configured to capture sound from the audio source.
12. The method according to claim 1, wherein determining the correspondence between the active audio source and the plurality of adaptive filtering models comprises determining an adaptive filtering model of the plurality of adaptive filtering models whose spatial features are closest to the spatial features of the active audio source.
13. A computer program product for estimating a reverberant energy component from an active audio source, the computer program product being tangibly stored on a non-transient computer-readable medium and comprising machine executable instructions which, when executed, cause one or more processors to perform steps of the method according to claim 1.
14. The method according to claim 1, wherein a spatial feature of an audio source is indicative of at least one of: a position of the audio source relative to the audio capturing device, spatial information regarding the audio source relative to the audio capturing device, a distance of the audio source from the audio capturing device, an angle indicating an orientation of the audio source relative to the audio capturing device, a sound level at which sound coming from the audio source is captured at the audio capturing device and/or a diffusivity of sound being emitted by the audio source.
15. The method according to claim 1, wherein a spatial feature of the active audio source is determined based on data of the active audio source captured by one or more sensors including at least one of an audio capturing device, a visual capturing device and/or an infrared detection device.
16. A system for estimating a reverberant energy component from an active audio source, comprising: a determining unit configured to: determine a plurality of spatial features of the active audio source based on captured sound from the active audio source, wherein the plurality of spatial features comprise diffusivity; and determine a correspondence between the active audio source and a plurality of adaptive filtering models, each adaptive filtering model corresponding to a respective plurality of spatial features, by comparing the plurality of spatial features of the active audio source with the plurality of spatial features of the plurality of adaptive filtering models, wherein determining the correspondence comprises determining, for each of the plurality of adaptive filtering models, a respective distance between the plurality of spatial features of the active audio source and the respective plurality of spatial features of the each adaptive filtering model; an adaptive filtering model obtaining unit configured to obtain a particular adaptive filtering model for the active audio source based on each of the determined distances; and a reverberant energy component estimating unit configured to estimate the reverberant energy component from the active audio source over time based on the particular adaptive filtering model.
Description
DESCRIPTION OF DRAWINGS
(1) Through the following detailed descriptions with reference to the accompanying drawings, the above and other objectives, features and advantages of the example embodiments disclosed herein will become more comprehensible. In the drawings, several example embodiments disclosed herein will be illustrated in an example and in a non-limiting manner, wherein:
(11) Throughout the drawings, the same or corresponding reference symbols refer to the same or corresponding parts.
DESCRIPTION OF EXAMPLE EMBODIMENTS
(12) Principles of the example embodiments disclosed herein will now be described with reference to various example embodiments illustrated in the drawings. It should be appreciated that the depiction of these embodiments is only to enable those skilled in the art to better understand and further implement the example embodiments disclosed herein, not intended for limiting the scope in any manner.
(13) The example embodiments disclosed herein utilize at least one audio capturing endpoint such as microphone in order to obtain the direct energy component as well as the reverberant energy component. By modelling the reverberant energy component as the output of a linear filter and the direct energy component as filter input, a proper adaptive filtering model is used to approximate the corresponding filter coefficient, which is then used as an indicator of how reverberant the source is and can be further mapped to a diffusivity measure. In real applications, multiple sources are usually involved. Therefore, in order to be able to track multiple sources at different locations in an auditory scene in real time, each source is assigned with an adaptive filtering model for speeding up the estimating processes. Additionally, a mechanism is developed to quickly switch between sources by using other spatial features (for example, angle, sound level, etc.), such that once a source is active, its corresponding adaptive filtering model can be adapted in a short time.
(14) In order to be able to track multiple sources, each source has its own adaptive model that is adapted whenever the source is active. Keeping track of the adaptive model for each source helps accelerate the reverberant energy estimation and also provides more robustness and stability.
(15) In accordance with example embodiments disclosed herein, a method of estimating a reverberant energy component from an active audio source comprises the following steps. At step S101, a correspondence between the active audio source and a plurality of sample sources is determined by comparing one or more spatial features of the active audio source with one or more spatial features of the plurality of sample sources, each of the sample sources being associated with an adaptive filtering model. At step S102, an adaptive filtering model is obtained for the active audio source based on the determined correspondence. At step S103, the reverberant energy component is estimated from the active audio source over time based on the adaptive filtering model.
(16) The determining step S101 may be achieved in different ways. For example, some of the spatial features may be extracted from the sound of the active audio source captured by the audio capturing endpoint. The spatial features may include angle information, which indicates the orientation of the active audio source in relation to the audio capturing endpoint, as well as amplitude information, which indicates the loudness or sound level of the active audio source. Alternatively, the step S101 may also be achieved by a visual capturing endpoint such as a camera, which may obtain spatial information of a particular source by analyzing the captured image. Other means such as infrared detection may also be utilized to obtain the spatial features of the active audio source. The spatial features of the active audio source may then be compared with those of the sample sources in order to determine whether there is a sample source used for representing the active audio source. As indicated above, the spatial features (denoted as W herein) may include information regarding the position of the active audio source (such as angle information and/or distance information). As such, a spatial feature of an audio source may describe a property of the audio source in relation to an audio capturing device (e.g. a microphone) which is adapted to capture sound from the audio source. In particular, a spatial feature of an audio source may be indicative of or may correspond to at least one of: a position of the audio source relative to the audio capturing device, spatial information regarding the audio source relative to the audio capturing device, a distance of the audio source from the audio capturing device, an angle indicating an orientation of the audio source relative to the audio capturing device, a sound level at which sound coming from the audio source is captured at the audio capturing device and/or a diffusivity of sound being emitted by the audio source. A spatial feature of the active audio source may be determined based on data of the active audio source captured by one or more sensors, such as an audio capturing device, a visual capturing device and/or an infrared detection device.
(17) At step S102, an adaptive filtering model is obtained for the active audio source based on the determined correspondence. For example, the adaptive filtering model may be obtained in two ways. The first way relies on the determining step S101: if the active audio source corresponds to none of the sample sources, which also includes the situation where no sample source is provided yet, a sample source corresponding to the active audio source is created. The created sample source is assigned the spatial features of the captured active audio source, and will later be assigned an adaptive filtering model.
(18) Then, the adaptive filtering model associated with the created sample source may be estimated. This process may be carried out in different ways and will be explained in detail later in the description.
(19) On the other hand, the second way also relies on the determining step S101: if the active audio source corresponds to one of the sample sources, the adaptive filtering model associated with the corresponding sample source may be assigned to the active audio source.
(20) At step S103, a reverberant energy component is estimated from the active audio source over time based on the adaptive filtering model obtained at step S102. The estimation of the reverberant energy component over time is useful in updating the adaptive filtering model. As a result, diffusivity may be obtained from the adaptive filtering model, for example by a predetermined mapping.
(21) In some example embodiments, the correspondence between the active audio source and the plurality of sample sources at step S101 may be determined through the following steps.
(22) At step S201, the instantaneous spatial features may be obtained either directly from the audio capturing endpoint (angle, sound level) or as the ratio of the largest eigenvalue to the second largest eigenvalue of the covariance matrix of the input signal (diffusivity). The distance d_i between the spatial features W of the active audio source and the spatial features W_i of each of the sample source models may then be computed as:
d_i = |W − W_i| for i = 1, 2, . . . , N  (1)
where N represents the total number of sample source models.
(23) At step S203, the sample source model k whose spatial features are closest to W is then picked up. To make the selection more robust, at step S204, the minimum distance d_k is compared with a predefined threshold, and the active audio source is determined to correspond to the sample source model k only if d_k is within the threshold.
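The matching logic of steps S201 to S204 can be pictured with a short sketch. The following Python fragment is a minimal illustration and not the patented implementation: it assumes the spatial features are collected into a vector W of, for example, angle, sound level and diffusivity, and the class name SampleSource, the helper match_or_create and the threshold value are all hypothetical.

```python
import numpy as np

class SampleSource:
    """Illustrative container pairing spatial features with a per-source adaptive filter state."""
    def __init__(self, features, n_taps=8):
        self.features = np.asarray(features, dtype=float)  # W_i, e.g. [angle_deg, level_db, diffusivity]
        self.filter_taps = np.zeros(n_taps)                # the adaptive filtering model for this source

def match_or_create(library, features, threshold=5.0):
    """Steps S201-S204 in miniature: pick the sample source whose features are
    closest to W (Equation (1): d_i = |W - W_i|) and accept it only if the
    minimum distance is within the threshold; otherwise create a new source."""
    w = np.asarray(features, dtype=float)
    if library:
        distances = [np.linalg.norm(w - s.features) for s in library]
        k = int(np.argmin(distances))
        if distances[k] <= threshold:      # step S204: accept only if within the threshold
            return library[k]
    new_source = SampleSource(w)           # no correspondence: create a sample source
    library.append(new_source)
    return new_source

# Usage: two talkers at different angles end up with two distinct models.
library = []
model_a = match_or_create(library, [30.0, -20.0, 2.0])
model_b = match_or_create(library, [150.0, -25.0, 1.2])
assert match_or_create(library, [31.0, -21.0, 2.1]) is model_a and len(library) == 2
```

The mixed units in the feature vector and the Euclidean distance are simplifications of this sketch; any distance measure consistent with Equation (1) could be substituted.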
(24) The adaptive filtering estimation at step S103 will be discussed in the following. The sound is captured from the active audio source by at least one audio capturing device. In one embodiment, three cardioid microphones arranged in equilateral triangle topology may be provided. In another embodiment, three omnidirectional microphones arranged in equilateral triangle topology may be provided. It should be noted that fewer microphones, such as two, or more microphones, such as four or more, may be provided in any suitable arrangement, as long as the spatial features can be obtained from such an arrangement. Alternatively, in another embodiment, only one microphone may be provided in order to capture a sound signal without spatial features. In general, the adaptive filtering model estimation processes for multiple microphones and for a single microphone are different and will be discussed separately in the following.
(25) Extraction Process for Multiple Microphones
(26) In one embodiment, as described above, the audio capturing endpoint may include three cardioid microphones arranged in equilateral triangle topology, as shown in the accompanying drawings.
(27) The cardioid directional microphone has a directional amplitude response, as shown in the accompanying drawings.
(28) The sound captured from each of the three microphones is represented as L, R and S, respectively in accordance with their orientations. The three cardioid microphones are assumed to be identical except for their orientations.
(29) Time-domain versions of the L, R and S signals can be denoted as L(n), R(n) and S(n), respectively. Their corresponding frequency-domain counterparts can be denoted as L(Ω, k), R(Ω, k) and S(Ω, k), respectively, where Ω represents a normalized angular frequency in radians and k represents the frame index. A frame length l is chosen as the one that corresponds to 20 ms, depending on the sampling rate. In one embodiment, l is chosen as 960 for a sampling rate of 48 kHz, meaning that the 20 ms is sampled 960 times with an interval of 1/48000 second. In the following discussion, the frame index k is omitted in most cases for expository convenience.
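As an illustration of the framing just described, the following Python sketch splits a time-domain channel into 20 ms frames of l = 960 samples at 48 kHz and transforms each frame to the frequency domain. Non-overlapping rectangular frames are assumed for simplicity; the text does not specify a window or overlap, and the function name to_frames_fft is illustrative.

```python
import numpy as np

FS = 48_000        # sampling rate in Hz
FRAME_LEN = 960    # l = 960 samples, i.e. 20 ms at 48 kHz

def to_frames_fft(x):
    """Cut a time-domain channel x(n) into non-overlapping 20 ms frames and
    return its frequency-domain counterpart X(omega, k), one row per frame k."""
    n_frames = len(x) // FRAME_LEN
    frames = x[: n_frames * FRAME_LEN].reshape(n_frames, FRAME_LEN)
    return np.fft.rfft(frames, axis=1)     # shape (n_frames, FRAME_LEN // 2 + 1)

# Usage with the three channels L(n), R(n) and S(n):
t = np.arange(FS) / FS
L = np.sin(2 * np.pi * 440 * t)
R, S = 0.8 * L, 0.6 * L
L_f, R_f, S_f = (to_frames_fft(c) for c in (L, R, S))
```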
(30) The microphone array includes three cardioid microphones and has its amplitude response H(θ) as:
(31)
where θ represents the angle of the active source relative to the audio capturing endpoint, which has a predefined forward direction of 0. H_L(θ) represents the amplitude response for the channel L of the cardioid microphone array, H_R(θ) represents the amplitude response for the channel R of the cardioid microphone array, and H_S(θ) represents the amplitude response for the channel S of the cardioid microphone array.
(32) It may be assumed that the microphones are spaced with a small enough distance so that the phase difference between the microphone signals is negligible. Therefore, according to Equation (2), the input signal for a single source located at angle θ can be described as:
(33) X(Ω) = H(θ)·D(Ω) + r(Ω)  (3)
where X(Ω) represents the input signal in the frequency domain, D(Ω) represents the direct signal of the audio source in the frequency domain, and r(Ω) = [r_L(Ω) r_R(Ω) r_S(Ω)]^T is the term standing for the reverberation.
(34) In one embodiment, it may be assumed that the reverberant components in different microphones are uncorrelated and of zero mean, for example:
(35) E[C_r(Ω)] = E[r(Ω)·r^H(Ω)] = diag(σ_r,L²(Ω), σ_r,R²(Ω), σ_r,S²(Ω))  (4)
where C_r represents the reverberation covariance matrix of the signal energy, E represents the expectation operator, and σ_r,L²(Ω), σ_r,R²(Ω) and σ_r,S²(Ω) represent the reverberant energy in the channels L, R and S, respectively.
(36) In order to extract the direct and reverberant energy, the covariance matrix of the input signal may be first computed as:
C(Ω, k) = α·C(Ω, k−1) + (1 − α)·X(Ω, k)·X^H(Ω, k)  (5)
where C(Ω, k) represents the covariance matrix for frequency Ω and frame index k, and α represents a smoothing factor.
(37) In one embodiment, α may be set to a value ranging from 0.9 to 0.95, for example, 0.9. Because the audio signal includes both the direct energy component and the reverberant energy component, the expectation of the signal energy may be expressed as:
E[C(Ω)] = E[C_d(Ω)] + E[C_r(Ω)] = σ_d²(Ω)·H(θ)·H^H(θ) + E[C_r(Ω)]  (6)
where σ_d²(Ω) represents the expected power of the direct source energy, and C_d(Ω) represents the covariance of the direct source energy.
(38) Based on Equation (6), it can be shown that the sum A(Ω) of the diagonal entries of C(Ω) can be expressed as:
(39) A(Ω) = E[C_11(Ω)] + E[C_22(Ω)] + E[C_33(Ω)] = G_1·σ_d²(Ω) + 3·σ_r²(Ω)  (7)
where G_1 represents a constant, and σ_r²(Ω) represents the average reverberant energy in each microphone. E[C_11(Ω)] represents the expected covariance for the first column (channel L) and the first row (channel L) of the expected covariance matrix presented in Equation (4). Similarly, E[C_22(Ω)] represents the expected covariance for the second column (channel R) and the second row (channel R) of the expected covariance matrix, and E[C_33(Ω)] represents the expected covariance for the third column (channel S) and the third row (channel S) of the expected covariance matrix.
(40) In the particular arrangement of the cardioid microphones described above, the sum B(Ω) of the upper off-diagonal entries of C(Ω) can be expressed as:
(41) B(Ω) = E[C_12(Ω)] + E[C_13(Ω)] + E[C_23(Ω)] = G_2·σ_d²(Ω)  (8)
where G_2 represents a constant (e.g., 0.625). E[C_12(Ω)] represents the expected covariance for the first column (channel L) and the second row (channel R) of the expected covariance matrix presented in Equation (4). Similarly, E[C_13(Ω)] represents the expected covariance for the first column (channel L) and the third row (channel S) of the expected covariance matrix, and E[C_23(Ω)] represents the expected covariance for the second column (channel R) and the third row (channel S) of the expected covariance matrix. The calculation of G_1 and G_2 will be explained later in the description.
(42) Because it is assumed that the reverberant components in the microphones are uncorrelated, with the off-diagonal entries of C_r(Ω) being equal to 0, B(Ω) does not include reverberation entries like A(Ω) does. In the particular arrangement of the cardioid microphones described above, Equations (7) and (8) can be combined into a linear system:
(43) [A(Ω); B(Ω)] = [G_1, 3; G_2, 0] · [σ_d²(Ω); σ_r²(Ω)]  (9)
(44) It can be seen from Equation (9) that the direct energy component σ_d²(Ω) and the reverberant energy component σ_r²(Ω) can be written as:
(45) σ_d²(Ω) = B(Ω)/G_2,  σ_r²(Ω) = (A(Ω) − G_1·B(Ω)/G_2) / 3  (10)
(46) As derived from Equations (2) to (10), the direct energy component σ_d²(Ω) and the reverberant energy component σ_r²(Ω) can be extracted based on the arrangement of the microphones (which determines the values of G_1 and G_2) and a linear relation of the audio signal between one (C_11, C_22 and C_33) or two (C_12, C_13 and C_23) of the microphones. In this embodiment, the linear relation may be reflected by the covariance matrix of the audio signal, which may be calculated by Equation (6).
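A per-bin sketch of this extraction is given below, assuming the constants G_1 = 1.125 and G_2 = 0.625 stated later for the example cardioid arrangement. It implements the recursive covariance update of Equation (5) and the solution of Equation (10); using the real part of the off-diagonal sum and clamping small negative values are implementation choices of this sketch rather than part of the patent.

```python
import numpy as np

ALPHA = 0.9              # smoothing factor alpha in Equation (5)
G1, G2 = 1.125, 0.625    # constants for the example cardioid arrangement

def update_covariance(C_prev, x):
    """Equation (5) for one frequency bin: C(k) = alpha*C(k-1) + (1-alpha)*x x^H,
    where x = [L, R, S] holds the complex bin values of the three channels."""
    return ALPHA * C_prev + (1.0 - ALPHA) * np.outer(x, np.conj(x))

def extract_energies(C):
    """Equations (7), (8) and (10): sum the diagonal entries (A) and the upper
    off-diagonal entries (B), then solve for the direct and reverberant energies."""
    A = np.real(C[0, 0] + C[1, 1] + C[2, 2])        # A = G1*sigma_d^2 + 3*sigma_r^2
    B = np.real(C[0, 1] + C[0, 2] + C[1, 2])        # B = G2*sigma_d^2 (uncorrelated reverb)
    sigma_d2 = max(B / G2, 0.0)
    sigma_r2 = max((A - G1 * sigma_d2) / 3.0, 0.0)  # clamp small negatives from estimation noise
    return sigma_d2, sigma_r2

# Usage for one bin over a run of frames:
C = np.zeros((3, 3), dtype=complex)
rng = np.random.default_rng(0)
for x in rng.standard_normal((50, 3)) + 1j * rng.standard_normal((50, 3)):
    C = update_covariance(C, x)
sigma_d2, sigma_r2 = extract_energies(C)
```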
(47) In the embodiment described above, it is assumed that the reverberant components in different microphones are uncorrelated and of zero mean. However, the coherence of a reverberant sound field may be frequency dependent and non-zero in most cases. Based on the coherence of different sound fields, the complex sound field coherence may be generated as below:
(48) Γ_12(Ω) = E[C_r,12(Ω)] / √(E[C_r,11(Ω)]·E[C_r,22(Ω)])  (11)
Γ_13(Ω) = E[C_r,13(Ω)] / √(E[C_r,11(Ω)]·E[C_r,33(Ω)])  (12)
Γ_23(Ω) = E[C_r,23(Ω)] / √(E[C_r,22(Ω)]·E[C_r,33(Ω)])  (13)
(49) where Γ_12(Ω) represents the sound field coherence for the channels L and R, Γ_13(Ω) represents the sound field coherence for the channels L and S, and Γ_23(Ω) represents the sound field coherence for the channels R and S.
(50) As for the same sound field, Γ_12(Ω) = Γ_13(Ω) = Γ_23(Ω), and thus they are denoted as Γ(Ω) in the following. Based on Equation (6), the covariance matrix can be expressed as below:
(51) E[C_11(Ω)] = H_L(θ)·H_L^H(θ)·Φ_dd(Ω) + Φ_rr(Ω)  (14)
E[C_22(Ω)] = H_R(θ)·H_R^H(θ)·Φ_dd(Ω) + Φ_rr(Ω)  (15)
E[C_33(Ω)] = H_S(θ)·H_S^H(θ)·Φ_dd(Ω) + Φ_rr(Ω)  (16)
E[C_12(Ω)] = H_L(θ)·H_R^H(θ)·Φ_dd(Ω) + Γ(Ω)·Φ_rr(Ω)  (17)
E[C_13(Ω)] = H_L(θ)·H_S^H(θ)·Φ_dd(Ω) + Γ(Ω)·Φ_rr(Ω)  (18)
E[C_23(Ω)] = H_R(θ)·H_S^H(θ)·Φ_dd(Ω) + Γ(Ω)·Φ_rr(Ω)  (19)
where Φ_dd(Ω) represents the direct energy component, and Φ_rr(Ω) represents the reverberant energy component in each microphone.
(52) The simplified results of Equations (17), (18) and (19) are derived based on Equations (11), (12) and (13), respectively. Based on Equations (14) through (19), the sum A(Ω) of the diagonal entries of C(Ω) and the sum B(Ω) of the upper off-diagonal entries of C(Ω) can be expressed as:
(53) A(Ω) = G_1·Φ_dd(Ω) + 3·Φ_rr(Ω)  (20)
B(Ω) = G_2·Φ_dd(Ω) + 3·Γ(Ω)·Φ_rr(Ω)  (21)
(54) Similar to Equations (7) and (8), G_1 and G_2 represent two constants (given that the microphone array is fixed during the audio capturing process), which can be determined by the following equations:
G_1 = H_L(θ)·H_L^H(θ) + H_R(θ)·H_R^H(θ) + H_S(θ)·H_S^H(θ)  (22)
G_2 = H_L(θ)·H_R^H(θ) + H_L(θ)·H_S^H(θ) + H_R(θ)·H_S^H(θ)  (23)
(55) Therefore, based on Equation (2), G_1 and G_2 can be calculated for the example microphone arrangement as 1.125 and 0.625, respectively. By combining Equations (20) and (21), the direct energy component Φ_dd(Ω) and the reverberant energy component Φ_rr(Ω) can be expressed as:
(56) [A(Ω); B(Ω)] = [G_1, 3; G_2, 3·Γ(Ω)] · [Φ_dd(Ω); Φ_rr(Ω)]  (24)
(57) It can be seen from Equation (24) that the direct energy component Φ_dd(Ω) and the reverberant energy component Φ_rr(Ω) can be written as:
(58) Φ_dd(Ω) = (Γ(Ω)·A(Ω) − B(Ω)) / (Γ(Ω)·G_1 − G_2),
Φ_rr(Ω) = (G_1·B(Ω) − G_2·A(Ω)) / (3·(Γ(Ω)·G_1 − G_2))  (25)
(59) In this embodiment, the value of Γ(Ω) ranges from 0 to 1. The value of 0 may stand for a non-coherent sound field, in other words, a heavily reverberated room. When Γ(Ω) is equal to 0, the estimation of the reverberant energy component corresponds to the calculation based on Equations (2) through (10).
(60) The value of 1 may stand for a coherent sound field, in other words, a space in which the reverberation characteristics do not change with respect to frequency. The coherent sound field may be regarded as an ideal sound field that is only available in an anechoic chamber. In reality, the direct sound can be dominant when the reverberation time of the room is very low or when the distance between the source and the microphone is small (e.g., a close-talking scenario).
(61) In one embodiment, Γ(Ω) = sinc(2·f_s·d_mic/c), with the value of Γ(Ω) determined by the sinc function for situations between 0 and 1, where f_s represents the frequency, c represents the speed of sound, and d_mic represents the distance between two adjacent microphones.
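The sketch below combines this sinc-based coherence with the inversion of Equations (20) and (21) as expressed in Equation (25). The constants, the use of numpy's normalized sinc and the fallback for a near-singular denominator are assumptions of this illustration rather than prescriptions of the patent.

```python
import numpy as np

C_SOUND = 343.0   # speed of sound in m/s

def coherence(f_hz, d_mic):
    """Gamma = sinc(2*f*d_mic/c); numpy's sinc is the normalized sinc sin(pi x)/(pi x)."""
    return np.sinc(2.0 * f_hz * d_mic / C_SOUND)

def extract_energies_coherent(A, B, gamma, G1=1.125, G2=0.625):
    """Invert A = G1*phi_dd + 3*phi_rr and B = G2*phi_dd + 3*gamma*phi_rr
    (Equations (20) and (21)) for the direct and reverberant energy components."""
    denom = gamma * G1 - G2
    if abs(denom) < 1e-9:                  # near-singular: fall back to the gamma = 0 case
        phi_dd = B / G2
        phi_rr = (A - G1 * phi_dd) / 3.0
    else:
        phi_dd = (gamma * A - B) / denom
        phi_rr = (G1 * B - G2 * A) / (3.0 * denom)
    return max(phi_dd, 0.0), max(phi_rr, 0.0)

# Usage: at 1 kHz with 2 cm microphone spacing the field is still fairly coherent.
gamma = coherence(1000.0, 0.02)
phi_dd, phi_rr = extract_energies_coherent(A=1.0, B=0.7, gamma=gamma)
```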
(62) In one example embodiment disclosed herein, the audio capturing endpoint may include three omnidirectional microphones instead of three cardioid microphones. The arrangement of the three omnidirectional microphones can be identical to that of the three cardioid microphones described previously, as illustrated in the accompanying drawings.
(63) Different from Equation (2), the microphone array including three omnidirectional microphones has its amplitude response H(θ) given by:
(64) H(θ) = [H_L(θ) H_R(θ) H_S(θ)]^T = [1 1 1]^T
(65) From the above equation, the values of G_1 and G_2 are both 3.
(66) The extracted direct energy component σ_d²(Ω) may be fed as filter input into a predefined adaptive filter 501, and the filter coefficients may be adapted so as to lower the difference between the filter output and the extracted reverberant energy component σ_r²(Ω), for example as:
(67) σ̂_r²(Ω, k) = h^T(Ω, k)·Y(Ω, k)
e(Ω, k) = σ_r²(Ω, k) − σ̂_r²(Ω, k)
h(Ω, k+1) = h(Ω, k) + μ·e(Ω, k)·Y(Ω, k)
where μ represents the adaptation step size, set to 0.1; typically, the value of μ may range from 0.05 to 0.2. Y represents the filter input taps, i.e., Y(Ω, k) = [σ_d²(Ω, k) σ_d²(Ω, k−1) . . . σ_d²(Ω, k−l+1)]^T. σ̂_r²(Ω) represents the reverberant energy component estimated by the filter 501, and e(Ω) represents the error between σ_r²(Ω) and σ̂_r²(Ω).
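A minimal per-bin realization of the adaptive filter 501 is sketched below: the recent direct energies form the tap vector Y, the filter output is the estimated reverberant energy, and the coefficients are adapted to lower the error e. A normalized step is used here purely for numerical stability; the text does not state whether the update is normalized, and the class name and tap count are illustrative.

```python
import numpy as np

class ReverbEnergyFilter:
    """One adaptive filter h per frequency bin: it predicts the reverberant energy
    sigma_r^2(k) from the recent direct energies sigma_d^2(k), ..., sigma_d^2(k-l+1)."""
    def __init__(self, n_taps=8, mu=0.1):
        self.h = np.zeros(n_taps)     # filter coefficients (the adaptive filtering model)
        self.y = np.zeros(n_taps)     # filter input taps Y(omega, k)
        self.mu = mu                  # adaptation step size (0.05 to 0.2 per the text)

    def update(self, direct_energy, reverb_energy):
        """Push the newest direct energy, predict the reverberant energy, and
        adapt h to lower the difference between prediction and measurement."""
        self.y = np.roll(self.y, 1)
        self.y[0] = direct_energy
        estimate = float(self.h @ self.y)      # estimated reverberant energy
        error = reverb_energy - estimate       # e(omega, k)
        self.h += self.mu * error * self.y / (self.y @ self.y + 1e-12)
        return estimate

# Usage for one bin: feed the per-frame energies extracted earlier.
filt = ReverbEnergyFilter()
for sigma_d2, sigma_r2 in [(1.0, 0.30), (0.8, 0.35), (1.2, 0.40)]:
    estimate = filt.update(sigma_d2, sigma_r2)
```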
(68) Referring back to the framework described above, if the active audio source corresponds to none of the sample sources, a new sample source is created and its associated adaptive filter 501 is adapted from an initial state through the steps S102 and S103, which takes some time to converge.
(69) On the other hand, if there exists a sample source corresponding to the active audio source, the adaptive filtering model associated with that sample source can be assigned for initializing the filter 501. As a result, the assigned adaptive filter will rapidly finish the adaptation process or omit it altogether, and the reverberant energy component can be rapidly estimated over time compared with the scenario in which the filter is estimated from scratch through the steps S102 and S103.
(70) It should be noted that the estimation of the direct and reverberant energy components and of their corresponding models may be performed for all frequency bins independently and in parallel. The overall reverberation model can be denoted as:
R_model = [h(Ω_L) . . . h(Ω_U)]  (30)
where Ω_L and Ω_U represent the lower and upper bound frequencies of interest. In one embodiment, for speech sources, the bounds may be limited to 200 Hz and 8 kHz, respectively, in order to save computing resources. In another embodiment, for instrument sources, the bounds may be set to 20 Hz and 20 kHz in order to convey music data without compromising on details.
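As a small illustration of restricting the overall model to the band of interest, the sketch below converts the 200 Hz to 8 kHz (speech) and 20 Hz to 20 kHz (instrument) bounds into FFT bin indices, assuming the 960-sample frame at 48 kHz used earlier; the helper name band_bins is hypothetical.

```python
FS = 48_000
FRAME_LEN = 960              # 20 ms frame at 48 kHz
BIN_HZ = FS / FRAME_LEN      # 50 Hz of bandwidth per FFT bin

def band_bins(f_low, f_high):
    """Return the range of FFT bin indices covering [f_low, f_high] in Hz."""
    lo = int(round(f_low / BIN_HZ))
    hi = int(round(f_high / BIN_HZ))
    return range(lo, hi + 1)

speech_bins = band_bins(200.0, 8_000.0)      # speech sources: 200 Hz to 8 kHz
instrument_bins = band_bins(20.0, 20_000.0)  # instrument sources: 20 Hz to 20 kHz
```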
(71) An overall flow of the example embodiments above can be seen in the accompanying drawings.
(72) With the reverberation estimation processes described above, a library including a number of sample sources is used to save the corresponding adaptive filtering models, which are either predefined for all the audio sources in a room or learned by the algorithms (Equations (10) and (25), for example) and the framework described above.
(73) Extraction Process for a Single Microphone
(74) The previous embodiments make use of multiple microphones, for example two or more microphones of any geometry, with the assumption that the reverberant energy is more diffuse or higher in dimensionality than the direct energy. Additionally, the reverberant energy is greater than the general diffuse noise (acoustic or electrical noise) at the signal inputs for some period of time after the onset of energy.
(75) In another example embodiment disclosed herein, only one microphone, such as an omnidirectional microphone, is used to capture sound. The adaptive filtering model in this case is different from the one described above for multiple microphones.
(76) In this embodiment, it is preferred that the energy signal being estimated is strictly positive, and that the direct and reverberant signals are uncorrelated. Furthermore, it may be assumed that the clean voice power spectrum is largely impulsive, with rapid onset and a decay rate much greater than that of the reverberation. For example, natural voice characteristics decay by at least 20 or 30 dB within 100 ms, which is around half of a normal syllable duration. This would correspond to a room with a reverberation time of less than 200 ms. In that sense, it may be assumed that the impulse response and reverberation characteristics represent a strictly positive filter that represents a spread or slower decay of the signal energy than the underlying excitation (voice). Otherwise, the reverberation would be of low significance to any perception or signal processing.
(77) However, it is noted that in this case, the error signal e_t is not zero-mean Gaussian; rather, it is an impulsive signal, as shown in the accompanying drawings.
(78) As seen in the accompanying drawings, the band energy X_t of the current time frame is modelled using the energies of previous time frames weighted by a set of filter coefficients:
(79)
where h_i represents the filter coefficient for the i-th frame.
(80) The estimation of the reverberant energy component from the previous time frames can be obtained by Equation (32), and the error between the energy of the current time frame (when the active audio source stops making sound) and that estimation can be obtained by Equation (33) as below:
(81) X̂_t = Σ_i h_i·X_{t−i}  (32)
e_t = X_t − X̂_t  (33)
The filter coefficient can then be calculated by:
h_i = α·h_i + μ·e_t·X_{t−i}, if X_t < X_{t−1}  (34)
where α and μ are two constants defined in the following.
(82) α may be set such that a maximum reverberation time constant is allowed to be estimated effectively, and the impact of the clean audio energy on biasing the adaptation is reduced. An example is to set α for a maximum reverberation time of around 1 second, in which case, for a 20 ms update rate, the value would represent a decay in each frame of at least 1.2 dB, or, in the power domain, a scalar value of 0.75. A range of values for α at 20 ms would be from around 0.25 (200 ms) to 0.9 (3000 ms). For different block sizes, the value of α can be calculated appropriately. It should be noted that using a smaller value for α decreases the bias on the identified filter coefficients for smaller reverberation times.
(83) μ may be set by using normal considerations for adaptive filters. Whilst a normalized LMS approach could be considered, it is noted that generally a better estimation of the reverberation decay filter is obtained when the larger error values e_t dominate the adaptation, which occurs with less normalization. Approaches for managing the normalization and the transition from normalized to direct LMS are already known, and thus their descriptions are omitted.
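The single-microphone adaptation can be pictured with the following sketch. It derives the leak constant α from a target maximum reverberation time (about 1.2 dB per 20 ms frame, roughly 0.75 in the power domain, for 1 second) and applies the gated update of Equation (34) as reconstructed above, adapting only while the band energy is decaying. The step size value, tap count and function names are illustrative assumptions, not the patented parameters.

```python
import numpy as np

def leak_from_t60(t60_s, frame_s=0.02):
    """Power-domain per-frame decay for a 60 dB decay over t60_s seconds;
    t60 = 1 s with 20 ms frames gives about 1.2 dB per frame, i.e. ~0.75."""
    db_per_frame = 60.0 * frame_s / t60_s
    return 10.0 ** (-db_per_frame / 10.0)

ALPHA = leak_from_t60(1.0)   # ~0.76; the text suggests 0.25 (200 ms) to 0.9 (3000 ms)
MU = 0.01                    # adaptation step size, an illustrative value

def adapt_single_mic(X, n_taps=10):
    """Adapt the energy-domain filter h over per-frame band energies X[t],
    updating only while the energy is decaying (X_t < X_{t-1}), per Equation (34)."""
    h = np.zeros(n_taps)
    for t in range(n_taps, len(X)):
        past = X[t - n_taps: t][::-1]        # X_{t-1}, ..., X_{t-n_taps}
        e_t = X[t] - h @ past                # error between observed and predicted energy
        if X[t] < X[t - 1]:                  # gate: adapt during the decay only
            h = ALPHA * h + MU * e_t * past
    return h

# Usage: an impulsive excitation followed by an exponential energy decay.
energy = np.concatenate([np.zeros(5), [1.0], 0.75 ** np.arange(1, 60)])
h_hat = adapt_single_mic(energy)
```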
(84) The ability to estimate the reverberant energy according to the embodiments disclosed herein may be achieved without requiring a specific stage of separating the audio signal into components representing the reverberation and the direct source. For example, no explicit source separation, beam-forming or deconvolutive processing is required, in contrast to existing approaches.
(85) In accordance with example embodiments disclosed herein, a system 800 for estimating a reverberant energy component from an active audio source is also provided. The system 800 includes a determining unit 801 configured to determine a correspondence between the active audio source and a plurality of sample sources, each of the sample sources being associated with an adaptive filtering model; an adaptive filtering model obtaining unit 802 configured to obtain an adaptive filtering model for the active audio source based on the determined correspondence; and a reverberant energy component estimating unit configured to estimate the reverberant energy component from the active audio source over time based on the adaptive filtering model.
(86) In an example embodiment, the adaptive filtering model obtaining unit 802 may comprise a sample source creating unit and an adaptive filtering model estimating unit. In response to determining that the active audio source corresponds to none of the sample sources, the sample source creating unit may be configured to create a sample source corresponding to the active audio source; and the adaptive filtering model estimating unit may be configured to estimate the adaptive filtering model associated with the created sample source. The system also includes an adaptive filtering model assigning unit. In response to determining that the active audio source corresponds to one of the sample sources, the adaptive filtering model assigning unit is configured to assign the adaptive filtering model associated with the corresponding sample source to the active audio source.
(87) In some example embodiments, the system 800 may include a sound capturing unit configured to capture sound from the active audio source by using at least one microphone; and a spatial feature extracting unit configured to extract a spatial feature from the captured sound, wherein the determining unit is configured to determine the correspondence between the active audio source and the plurality of sample sources based on the extracted spatial feature.
(88) In another example embodiment, the adaptive filtering model estimating unit 803 may include a sound transforming unit configured to transform the captured sound into an audio signal in a frequency domain; and an energy component extracting unit configured to extract a direct energy component and the reverberant energy component, wherein the adaptive filtering model estimating unit is configured to estimate the adaptive filtering model by feeding the direct energy component and the reverberant energy component into a predefined adaptive filter and lowering a difference between an output of the adaptive filter and the reverberant energy component. In a further example embodiment, the energy component extracting unit may be configured to extract the direct energy component and the reverberant energy component based on an arrangement of the microphone and a linear relation of the audio signal between one or two of the microphones. In yet another example embodiment, the at least one microphone comprises three microphones, and the arrangement of the microphone comprises three directional cardioid microphones or three omnidirectional microphones in equilateral triangle topology.
(89) In some other example embodiments, the adaptive filtering model estimating unit 803 may be configured to estimate the adaptive filtering model by feeding an energy of the captured sound of a previous time frame into a predefined adaptive filter and lowering a difference between an output of the adaptive filter and an energy of the captured sound of a current time frame. In another example embodiment, the at least one microphone comprises one omnidirectional microphone for capturing sound from the active audio source.
(90) In yet another example embodiment, the determining unit 801 may include a selecting unit configured to select one of the sample sources spatially closest to the active audio source, wherein the determining unit is configured to determine that the active audio source corresponds to the selected sample source in response to a distance between the selected sample source and the active audio source being within a predefined threshold.
(91) In some other example embodiments, the spatial feature comprises at least one of angle, diffusivity or sound level.
(92) For the sake of clarity, some optional components of the system 800 are not shown in the figures.
(93)
(94) The following components are connected to the I/O interface 905: an input section 906 including a keyboard, a mouse, or the like; an output section 907 including a display, such as a cathode ray tube (CRT), a liquid crystal display (LCD), or the like, and a speaker or the like; the storage section 908 including a hard disk or the like; and a communication section 909 including a network interface card such as a LAN card, a modem, or the like. The communication section 909 performs a communication process via the network such as the internet. A drive 910 is also connected to the I/O interface 905 as required. A removable medium 911, such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like, is mounted on the drive 910 as required, so that a computer program read therefrom is installed into the storage section 908 as required.
(95) Specifically, in accordance with the example embodiments disclosed herein, the processes described above with reference to the flowcharts may be implemented as computer software programs.
(96) Generally speaking, various example embodiments disclosed herein may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of the example embodiments disclosed herein are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
(97) Additionally, various blocks shown in the flowcharts may be viewed as method steps, and/or as operations that result from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function(s). For example, example embodiments disclosed herein include a computer program product comprising a computer program tangibly embodied on a machine readable medium, the computer program containing program codes configured to carry out the methods as described above.
(98) In the context of the disclosure, a machine readable medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. A machine readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
(99) Computer program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server or distributed among one or more remote computers or servers.
(100) Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in a sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
(101) Various modifications and adaptations to the foregoing example embodiments of this invention may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. Any and all modifications will still fall within the scope of the non-limiting and example embodiments of this invention. Furthermore, other example embodiments set forth herein will come to mind to one skilled in the art to which these embodiments pertain, having the benefit of the teachings presented in the foregoing descriptions and the drawings.
(102) Various aspects of the present invention may be appreciated from the following enumerated example embodiments (EEEs).
(103) EEE 1. A method of estimating a reverberant energy component from an active audio source, comprising:
(104) determining a correspondence between the active audio source and a plurality of sample sources, each of the sample sources being associated with an adaptive filtering model;
(105) obtaining an adaptive filtering model for the active audio source based on the determined correspondence; and
(106) estimating the reverberant energy component from the active audio source over time based on the adaptive filtering model.
(107) EEE 2. The method according to EEE 1, wherein obtaining an adaptive filtering model for the active audio source comprises:
(108) in response to determining that the active audio source corresponds to none of the sample sources:
(109) creating a sample source corresponding to the active audio source; and
(110) estimating the adaptive filtering model associated with the created sample source;
(111) in response to determining that the active audio source corresponds to one of the sample sources:
(112) assigning the adaptive filtering model associated with the corresponding sample source to the active audio source.
(113) EEE 3. The method according to EEE 2, further comprising:
(114) capturing sound from the active audio source by using at least one microphone; and
(115) extracting a spatial feature from the captured sound,
(116) wherein determining the correspondence between the active audio source and the plurality of sample sources comprises determining the correspondence based on the extracted spatial feature.
(117) EEE 4. The method according to EEE 3, wherein estimating the adaptive filtering model comprises:
(118) transforming the captured sound into an audio signal in a frequency domain;
(119) extracting a direct energy component and the reverberant energy component; and
(120) estimating the adaptive filtering model by: feeding the direct energy component and the reverberant energy component into a predefined adaptive filter, and lowering a difference between an output of the adaptive filter and the reverberant energy component.
EEE 5. The method according to EEE 4, wherein the extracting comprises:
(121) extracting the direct energy component and the reverberant energy component based on an arrangement of the microphone and a linear relation of the audio signal between one or two of the microphones.
(122) EEE 6. The method according to EEE 5, wherein the at least one microphone comprises one of the following:
(123) three microphones arranged in directional cardioid topology, or
(124) three omnidirectional microphones arranged in equilateral triangle topology.
(125) EEE 7. The method according to EEE 2, wherein estimating the adaptive filtering model comprises:
(126) estimating the adaptive filtering model by feeding an energy of the captured sound of a previous time frame into a predefined adaptive filter; and
(127) lowering a difference between an output of the adaptive filter and an energy of the captured sound of a current time frame.
(128) EEE 8. The method according to EEE 7, wherein the at least one microphone comprises an omnidirectional microphone for capturing sound from the active audio source.
(129) EEE 9. The method according to any of EEEs 1 to 8, wherein determining the correspondence between the active audio source and the plurality of sample sources comprises:
(130) selecting one of the sample sources spatially closest to the active audio source; and
(131) determining that the active audio source corresponds to the selected sample source in response to a distance between the selected sample source and the active audio source being within a predefined threshold.
(132) EEE 10. The method according to any of EEEs 3 to 8, wherein the spatial feature comprises at least one of angle, diffusivity or sound level.
(133) EEE 11. A system for estimating a reverberant energy component from an active audio source, comprising:
(134) a determining unit configured to determine a correspondence between the active audio source and a plurality of sample sources, each of the sample sources being associated with an adaptive filtering model;
(135) an adaptive filtering model obtaining unit configured to obtain an adaptive filtering model for the active audio source based on the determined correspondence; and
(136) a reverberant energy component estimating unit configured to estimate the reverberant energy component from the active audio source over time based on the adaptive filtering model.
(137) EEE 12. The system according to EEE 11, wherein the adaptive filtering model obtaining unit comprises:
(138) a sample source creating unit and an adaptive filtering model estimating unit, wherein in response to determining that the active audio source corresponds to none of the sample sources:
(139) the sample source creating unit is configured to create a sample source corresponding to the active audio source; and
(140) the adaptive filtering model estimating unit is configured to estimate the adaptive filtering model associated with the created sample source; and
(141) an adaptive filtering model assigning unit, wherein in response to determining that the active audio source corresponds to one of the sample sources:
(142) the adaptive filtering model assigning unit is configured to assign the adaptive filtering model associated with the corresponding sample source to the active audio source.
(143) EEE 13. The system according to EEE 12, further comprising:
(144) a sound capturing unit configured to capture sound from the active audio source by using at least one microphone; and
(145) a spatial feature extracting unit configured to extract a spatial feature from the captured sound, wherein the determining unit is configured to determine the correspondence between the active audio source and the plurality of sample sources based on the extracted spatial feature.
(146) EEE 14. The system according to EEE 13, wherein the adaptive filtering model estimating unit comprises:
(147) a sound transforming unit configured to transform the captured sound into an audio signal in a frequency domain; and
(148) an energy component extracting unit configured to extract a direct energy component and the reverberant energy component,
(149) wherein the adaptive filtering model estimating unit is configured to estimate the adaptive filtering model by feeding the direct energy component and the reverberant energy component into a predefined adaptive filter and lowering a difference between an output of the adaptive filter and the reverberant energy component.
(150) EEE 15. The system according to EEE 14, wherein the energy component extracting unit is configured to extract the direct energy component and the reverberant energy component based on an arrangement of the microphone and a linear relation of the audio signal between one or two of the microphones.
EEE 16. The system according to EEE 15, wherein the at least one microphone comprises one of the following:
(151) three microphones arranged in directional cardioid topology, or
(152) three omnidirectional microphones arranged in equilateral triangle topology.
(153) EEE 17. The system according to EEE 12, wherein the adaptive filtering model estimating unit is configured to estimate the adaptive filtering model by feeding an energy of the captured sound of a previous time frame into a predefined adaptive filter and lowering a difference between an output of the adaptive filter and an energy of the captured sound of a current time frame.
EEE 18. The system according to EEE 17, wherein the at least one microphone comprises an omnidirectional microphone for capturing sound from the active audio source.
EEE 19. The system according to any of EEEs 11 to 18, wherein the determining unit comprises:
(154) a selecting unit configured to select one of the sample sources spatially closest to the active audio source,
(155) wherein the determining unit is configured to determine that the active audio source corresponds to the selected sample source in response to a distance between the selected sample source and the active audio source being within a predefined threshold.
(156) EEE 20. The system according to any of EEEs 13 to 18, wherein the spatial feature comprises at least one of angle, diffusivity or sound level.
(157) EEE 21. A computer program product for estimating a reverberant energy component from an active audio source, the computer program product being tangibly stored on a non-transient computer-readable medium and comprising machine executable instructions which, when executed, cause the machine to perform steps of the method according to any of EEEs 1 to 10.