HEARING DEVICE AND A HEARING SYSTEM CONFIGURED TO LOCALIZE A SOUND SOURCE
20170105074 · 2017-04-13
CPC classification: H04R2420/01; H04R25/407; H04R25/554 (ELECTRICITY)
Abstract
The problem of estimating the direction to one or more sound sources of interest relative to a user wearing a pair of hearing devices, e.g. hearing aids, is dealt with. A target signal is generated by a target signal source and transmitted through an acoustic channel to a microphone of a hearing system. Due to additive environmental noise, a noisy acoustic signal is received at the microphones of the hearing system. An essentially noise-free version of the target signal is transmitted to the hearing devices of the hearing system via a wireless connection. Each of the hearing devices comprises a signal processing unit comprising a sound propagation model of the acoustic propagation channel from the target sound source to the hearing device when worn by the user. The sound propagation model is configured to be used for estimating a direction-of-arrival of the target sound signal relative to the user.
Claims
1. A hearing device adapted to be worn at or on the head or to be fully or partially implanted in the head of a user, the hearing device comprising at least one input transducer for converting an input sound comprising a mixture of a) a target sound signal from a target sound source and b) a possible additive noise sound signal from the environment to a noisy electric input signal; at least one wireless receiver for receiving a wirelessly transmitted version of the target signal and providing an essentially noise-free target signal; a signal processing unit connected to said at least one input transducer and to said at least one wireless receiver, the signal processing unit comprising a sound propagation model of an acoustic propagation channel from the target sound source to the hearing device when worn by the user, the sound propagation model being configured to be used for estimating a direction-of-arrival of the target sound signal relative to the user.
2. A hearing device according to claim 1 wherein the sound propagation model is frequency independent.
3. A hearing device according to claim 1 wherein the sound propagation model comprises a far field model.
4. A hearing device according to claim 1 wherein the sound propagation model allows interaural time differences (ITD) and interaural level differences (ILD) to be estimated by
ILD=K.sub.1 sin(θ)[relative level]
ITD=K.sub.2+K.sub.3 sin(θ)[time], respectively, where K.sub.1, K.sub.2, and K.sub.3 are constants to be chosen, and θ is the angle of the direction-of-arrival of the target sound source relative to a reference direction.
5. A hearing device according to claim 4 wherein the at least one input transducer comprises two microphones, and wherein the constants (K.sub.1, K.sub.2, K.sub.3) are chosen to be equal to or substantially equal to (0, 0, a/c) or to (γ, a/(2c), a/(2c)), where a is the microphone distance, and c is the speed of sound, and where γ is a constant.
6. A hearing device according to claim 1 wherein the sound propagation model comprises a free field model.
7. A hearing device according to claim 1 wherein the sound propagation model comprises a spherical head model.
8. A hearing device according to claim 1 comprising a time to time-frequency conversion unit for converting an electric input signal in the time domain into a representation of the electric input signal in the time-frequency domain, providing the electric input signal at each time instance l in a number of frequency bins k, k=1, 2, . . . , N.
9. A hearing device according to claim 1 wherein the signal processing unit is configured to provide a maximum-likelihood estimate of the direction of arrival of the target sound signal.
10. A hearing device according to claim 9 wherein the signal processing unit is configured to use Inverse Discrete Fourier Transforms (IDFTs) to estimate the value of the direction of arrival θ that maximizes the likelihood function.
11. A hearing device according to claim 1 wherein the sound propagation model of an acoustic propagation channel from the target sound source to the hearing device when worn by the user comprises a signal model defined by
R(l, k)=S(l, k){tilde over (H)}(k, θ)+V(l, k), where R(l, k) is a time-frequency representation of the noisy target signal, S(l, k) is a time-frequency representation of the noise-free target signal, {tilde over (H)}(k, θ) is a frequency transfer function of the acoustic propagation channel from the target sound source to the respective input transducers of the hearing device, and V(l, k) is a time-frequency representation of the additive noise.
12. A hearing device according to claim 1 wherein the signal processing unit is configured to provide a maximum-likelihood estimate of the direction of arrival of the target sound signal by finding the value of θ for which the likelihood function is maximized.
13. A hearing device according to claim 1 wherein the at least one input transducer of the hearing devices comprises one or two input transducers, e.g. microphones.
14. A hearing device according to claim 1 configured to determine whether a signal arrives from a front or a rear half plane relative to a user.
15. A hearing device according to claim 1 comprising a hearing aid, a headset, an earphone, an ear protection device or a combination thereof.
16. A hearing device according to claim 1 configured to switch between local and informed estimation of direction-of-arrival depending on a control signal, e.g. a control signal from a voice activity detector.
17. A hearing device according to claim 16 configured to only determine a direction-of-arrival, when a voice is detected in an input signal, e.g. when a voice is detected in the wirelessly received (essentially) noise-free signal.
18. A hearing device according to claim 1 comprising a beamformer unit and wherein the signal processing unit is configured to use the estimate of the direction of arrival of the target sound signal relative to the user in the beamformer unit to provide a beamformed signal comprising the target signal.
19. A hearing system comprising first and second hearing devices according to claim 1 adapted to be located at or in first and second ears, or to be fully or partially implanted in the head at or in first and second ears, respectively, of the user.
20. A hearing system according to claim 19 configured to estimate a target source to input transducer propagation delay for the first and second hearing devices.
Description
BRIEF DESCRIPTION OF DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS
[0051] The problem addressed by the present disclosure is to estimate the location of the target sound source. To do so, we make some assumptions about the signals reaching the microphones of the hearing aid system and about their propagation from the emitting target source to the microphones. In the following, we outline these assumptions.
[0054] Signal Model:
[0055] Generally, we assume a signal model of the form (cf. e.g. [2], Eq. (1)) describing the noisy signal received by the m.sup.th input transducer (e.g. microphone m):
r.sub.m(n)=s(n)*h.sub.m(n, θ)+v.sub.m(n), (m={left, right} or {1, 2}). [2] (1)
where s, h.sub.m, and v.sub.m are the (essentially) noise-free target signal emitted at the target talker's position, the acoustic channel impulse response between the target talker and microphone m, and an additive noise component, respectively. θ is the angle of the direction-of-arrival of the target sound source relative to a reference direction defined by the user (and/or by the location of the first and second (left and right) hearing devices on the body (e.g. the head, e.g. at the ears) of the user), n is a discrete time index, and * is the convolution operator. In an embodiment, a reference direction is defined by a look direction of the user (e.g. defined by the direction that the user's nose points in (when seen as an arrow tip)). In an embodiment, we operate in the short-time Fourier transform (STFT) domain, which allows us to write all involved quantities as functions of a frequency index k, a time (frame) index l, and the direction-of-arrival (angle) θ. The relevant quantities are given by Eqs. (2-10) below (cf. also [2]).
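As a concrete illustration of the time-domain signal model of Eq. (1), the following sketch synthesizes a noisy microphone signal by convolving a stand-in clean target with a toy acoustic impulse response and adding noise. The sample rate, impulse response (a pure 30-sample delay with attenuation 0.8), and noise level are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

# Sketch of r_m(n) = s(n) * h_m(n, theta) + v_m(n), Eq. (1),
# under assumed illustrative values.
rng = np.random.default_rng(0)

fs = 16000                      # sample rate [Hz] (assumption)
s = rng.standard_normal(fs)     # stand-in for the clean target signal s(n)

# Toy acoustic impulse response h_m: a pure delay of 30 samples with
# attenuation 0.8 (a real channel would include reflections and head effects).
h_m = np.zeros(64)
h_m[30] = 0.8

# Additive noise component v_m(n), same length as the convolution output.
v_m = 0.1 * rng.standard_normal(len(s) + len(h_m) - 1)

# Noisy microphone signal: target convolved with channel, plus noise.
r_m = np.convolve(s, h_m) + v_m
```

Cross-correlating `r_m` with the (wirelessly available) clean `s` recovers the 30-sample channel delay, which is the basic idea exploited by the informed estimators below.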
[0056] Most state-of-the-art hearing aids operate in the short time Fourier transform (STFT) domain because it allows frequency-dependent processing, computational efficiency, and the ability to adapt to changing conditions. Therefore, let R.sub.m(l, k), S(l, k) and V.sub.m(l, k) denote the STFT of r.sub.m, s, and v.sub.m, respectively. In an embodiment, it is assumed that S also includes the source (e.g. mouth)-to-microphone transfer function and the microphone response. Specifically,
where m={left, right}, l and k are frame and frequency bin indexes, respectively, N is the frame length, A is a decimation factor, w(n) is the windowing function, and j=√(−1) is the imaginary unit. We define S(l, k) and V.sub.m(l, k) similarly. Moreover, let H.sub.m(k, θ) denote the Discrete Fourier Transform (DFT) of h.sub.m:
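The windowed-frame STFT just described can be sketched as follows; the frame length N=256, hop (decimation factor) A=128, and Hann window are illustrative assumptions.

```python
import numpy as np

# Sketch of the STFT of Eq. (2): R(l, k) = sum_n x(n + l*A) w(n) e^{-j2πkn/N},
# with assumed frame length N, decimation factor A, and window w(n).
def stft(x, N=256, A=128):
    w = np.hanning(N)                       # windowing function w(n)
    n_frames = 1 + (len(x) - N) // A
    # Each row is one windowed frame; the FFT over axis 1 gives bins k=0..N-1.
    frames = np.stack([x[l * A : l * A + N] * w for l in range(n_frames)])
    return np.fft.fft(frames, n=N, axis=1)  # shape (n_frames, N)

rng = np.random.default_rng(1)
x = rng.standard_normal(4000)
R = stft(x)
```

A pure tone with period 32 samples lands in bin k=8 of a 256-point frame, which is a quick sanity check of the index conventions.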
where m={left, right}, N is the DFT order, α.sub.m(k, θ) is a real number that denotes the frequency-dependent attenuation factor due to propagation effects, and D.sub.m(k, θ) is the frequency-dependent propagation time from the target sound source to microphone m. For simplicity, and to decrease computation overhead (e.g. by using the Fast Fourier Transform (FFT) algorithm to calculate the STFT), we may model the acoustic channel as a function that delays and attenuates its input signals uniformly across frequencies, i.e.
where {tilde over (D)}.sub.m(θ) and {tilde over (α)}.sub.m(θ) are constant across frequencies.
[0057] Now, we can approximate Eq. (1) in the STFT domain as:
R.sub.m(l, k)=S(l, k){tilde over (H)}.sub.m(k, θ)+V.sub.m(l, k). [2] (6)
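The frequency-flat channel model of Eqs. (4)-(6) says that, in the DFT domain, the channel is a uniform attenuation and delay, H(k) = α e^{-j2πkD/N}. The sketch below verifies that applying such a transfer function to one frame circularly delays and attenuates it; α=0.8 and D=12 samples are assumed illustrative values.

```python
import numpy as np

# Frequency-flat channel of Eqs. (4)-(6): H(k) = alpha * exp(-j 2 pi k D / N),
# a uniform delay D (in samples) and attenuation alpha across all N bins.
N = 256
alpha, D = 0.8, 12

k = np.arange(N)
H = alpha * np.exp(-2j * np.pi * k * D / N)

# Applying H in the DFT domain circularly delays and attenuates the frame.
rng = np.random.default_rng(2)
s_frame = rng.standard_normal(N)
r_frame = np.real(np.fft.ifft(np.fft.fft(s_frame) * H))
```

For an integer delay the result equals `alpha * np.roll(s_frame, D)` up to floating-point round-off, which is exactly the delay-and-attenuate behavior the model assumes.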
[0058] Collecting the microphone equations (Eq. (6)) in a column vector leads to the following signal model:
R(l, k)=S(l, k){tilde over (H)}(k, θ)+V(l, k), [2] (7)
where
R(l, k)=[R.sub.left(l, k), R.sub.right(l, k)].sup.T, [2] (8)
{tilde over (H)}(k, θ)=[{tilde over (H)}.sub.left(k, θ), {tilde over (H)}.sub.right(k, θ)].sup.T, [2] (9)
V(l, k)=[V.sub.left(l, k), V.sub.right(l, k)].sup.T, [2] (10)
and the superscript T denotes the transpose operator.
[0059] Maximum Likelihood Framework.
[0060] The general goal is to estimate the direction-of-arrival using a maximum likelihood framework. To this end, we assume that the (complex-valued) noise DFT coefficients follow a Gaussian distribution as illustrated in Eq. (11) below for the additive noise (cf. e.g. also [2]).
[0061] To define the likelihood function, we assume the additive noise V(l, k) as expressed in Eq. (10) above is distributed according to a zero-mean circularly-symmetric complex Gaussian distribution:
V(l, k)˜CN(0, C.sub.v(l, k)), [2] (11)
where C.sub.v(l, k)=E{V(l, k)V.sup.H(l, k)} is the inter-input transducer (e.g. inter-microphone) noise covariance matrix, and where E{.} and superscript H represent the expectation and Hermitian transpose operators, respectively. Since S(l, k) is available at the hearing assistance system, we can relatively easily determine the time-frequency regions in the noisy microphone signals where the target speech is essentially absent. Therefore, we adaptively estimate C.sub.v(l, k) (e.g. as C.sub.v(n+1)=λC.sub.v(n−1)+(1−λ)C.sub.v(n), where λ is a step size, and n is a time index) using exponential smoothing over the frames where the noise is dominant. Moreover, we assume the noisy observations are independent across frequencies. Therefore, the likelihood function for each frame is defined by:
where |.| denotes the matrix determinant, N is the number of frequency indexes, and Z(l, k)=R(l, k)−S(l, k){tilde over (H)}(k, θ).
[0062] The corresponding log-likelihood function L is given by:
L=−M N log π−Σ.sub.k=1.sup.N log|C.sub.v(l, k)|−Σ.sub.k=1.sup.N{(Z(l, k)).sup.H C.sub.v.sup.−1(l, k)(Z(l, k))}. [2] (13)
[0063] Assuming that noisy DFT coefficients are statistically independent across frequency k, the likelihood function for a given frame (with index l) is given by Eq. (12) (including the many equations after Eq. (12)).
[0064] Discarding terms in Eq. (12) that do not depend on , we arrive at Eq. (14).
[0065] Aiming at estimating only Maximum Likelihood Estimates (MLEs) in dependence of θ, contributions to L in Eq. (13) that do not depend on θ (e.g. the first two terms, −M N log π and −Σ.sub.k=1.sup.N log|C.sub.v(l, k)|) are ignored in the reduced log-likelihood function:
{tilde over (L)}=−Σ.sub.k=1.sup.N{(Z(l, k)).sup.H C.sub.v.sup.−1(l, k)(Z(l, k))}. [2] (14)
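The recursive noise-covariance update described in paragraph [0061] can be sketched as a rank-one exponential smoothing of outer products of noise snapshots, run only over frames judged noise-dominant. The smoothing factor 0.95 and the unit-variance complex noise are illustrative assumptions; the voice-activity decision that gates the update is omitted.

```python
import numpy as np

# Sketch of the adaptive estimate of C_v(l, k): exponential smoothing
# C_v <- lam * C_v + (1 - lam) * V V^H over noise-dominant frames.
# The smoothing factor lam is an assumed value, not from the disclosure.
def update_cov(C_v, V, lam=0.95):
    # V: length-M complex noise snapshot for one frame and frequency bin.
    return lam * C_v + (1.0 - lam) * np.outer(V, V.conj())

rng = np.random.default_rng(3)
C_v = np.eye(2, dtype=complex)          # initial estimate
for _ in range(2000):
    # unit-variance circularly-symmetric complex Gaussian noise snapshot
    V = (rng.standard_normal(2) + 1j * rng.standard_normal(2)) / np.sqrt(2)
    C_v = update_cov(C_v, V)
```

After many updates the estimate is Hermitian with diagonal entries near the true noise power (here 1), which is what the likelihood evaluation in Eqs. (12)-(14) requires of C.sub.v.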
[0066] Head Model
[0067] Generally, we consider microphones which are located on/at one or both ears of a hearing aid user. It is well-known that the presence of the head influences the sound before it reaches the microphones, depending on the direction of the sound. In the following, we outline methods (all based on the maximum likelihood framework above), which differ in the way the head presence is taken into account. In the proposed framework, the head presence may be taken into account using models of the inter-aural level differences (ILD's) and inter-aural time differences (ITD's) between microphones of first and second hearing devices located on opposite sides of a user's head (e.g. at a user's ears).
[0068] Although ILD's and ITD's are conventionally defined with respect to the acoustic signals reaching the ear drums of a human, we stretch the definition to mean the level- and time-differences between microphone signals (where the microphones are typically located at/on the pinnae of the user, cf. e.g.
[0069] ITDs and ILDs are functions of angle-of-arrival (in a horizontal plane, cf.
ILD=K.sub.1 sin(θ)[dB]
ITD=K.sub.2+K.sub.3 sin(θ)[time],
where K.sub.1, K.sub.2, and K.sub.3 are constants to be chosen.
[0070] In a first example (as further elucidated in [1]), the following parameter choices are made
(K.sub.1, K.sub.2, K.sub.3)=(0, 0, a/c),
where a is the microphone distance, and c is the speed of sound. With these choices, strictly speaking, we completely ignore the presence of the head of the hearing aid user (free-field assumption), and we assume that the target source is infinitely far away (far field assumption).
[0071] In a second example (as further elucidated in [2]), the following parameter choices are made
(K.sub.1, K.sub.2, K.sub.3)=(γ, a/(2c), a/(2c)),
where γ is a constant. This implements a crude solid-sphere head model. Here, the ILD is 0 dB for θ=0 (sound from the front), and has its maximum for sounds from ±90 degrees (the sides). The ITD reflects Woodworth's model (see [2] for details).
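The two parameter choices above can be evaluated directly from the ITD/ILD model ILD = K1 sin(θ), ITD = K2 + K3 sin(θ). In the sketch below, the microphone distance a = 0.16 m, speed of sound c = 343 m/s, and γ = 6 are assumed illustrative values (γ is said in paragraph [0108] to be tuned offline).

```python
import math

# ITD/ILD model: ILD = K1 sin(theta) [dB], ITD = K2 + K3 sin(theta) [s].
def itd_ild(theta, K1, K2, K3):
    s = math.sin(theta)
    return K2 + K3 * s, K1 * s   # (ITD in seconds, ILD in dB)

a = 0.16          # microphone distance [m] (assumed head width)
c = 343.0         # speed of sound [m/s]
gamma = 6.0       # sphere-model constant (assumed; cf. [2])

# Free-field / far-field choice (0, 0, a/c): no level difference at all.
itd_ff, ild_ff = itd_ild(math.pi / 2, 0.0, 0.0, a / c)

# Solid-sphere choice (gamma, a/(2c), a/(2c)): Woodworth-style ITD,
# sinusoidal ILD peaking at +/-90 degrees.
itd_sp, ild_sp = itd_ild(math.pi / 2, gamma, a / (2 * c), a / (2 * c))
```

At θ = 90 degrees both models predict an ITD of a/c (about 0.47 ms for these values), while only the sphere model predicts a nonzero ILD.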
[0072] Note that both head models are independent of frequency. While these assumptions are obviously not completely valid in practice, they allow for simple solutions (in terms of computational complexity) and surprisingly good performance.
[0073] Maximum Likelihood Estimation of θ
[0074] The general goal of all proposed algorithms is to find maximum likelihood estimates of the direction (θ) to the wireless microphone. Generally, this is achieved by finding the value of θ for which the likelihood function Eq. (14) (cf. [2]) is maximum. For M=2 microphones, the likelihood function is obviously a function of the two microphone signals. But since the head models discussed above impose certain (assumed) relations between microphone signals, the detailed expression for the likelihood function depends on the head model used. In the following, we outline algorithms based on the free field model (K.sub.1, K.sub.2, K.sub.3)=(0, 0, a/c), and on the solid-sphere model (K.sub.1, K.sub.2, K.sub.3)=(γ, a/(2c), a/(2c)), respectively.
[0075] Informed Direction-of-Arrival Estimation: Free-Field Model
[0076] In the following we use the free-field model described above (as described in detail in [1]). This leads to a rather simple expression for the likelihood function, which can be maximized for the angle-of-arrival θ. We consider solutions where the number of microphones equals M=2 and M=1, respectively, and where different assumptions are made with respect to the background noise.
[0078] M=2 Microphones, General Noise Covariance: Maximum Likelihood
[0079] D.sub.1 and D.sub.2 (or D.sub.left and D.sub.right) represent the frequency-dependent propagation time from the target sound source to microphone m (m=1, 2 or m=left, right, cf.
[0080] Taking the above relationship into account, and considering the received signals of M=2 microphones together, D.sub.m and θ can be jointly estimated. In an embodiment, the first and second microphones are symmetrically arranged around the reference direction (plane) used for θ (i.e. as defined by θ=0), see e.g.
[0081] In the following, we find the MLE of for two different cases of the inter-microphone noise covariance matrix C.sub.v(l, k). We first consider the general case of C.sub.v(l, k) without any constraints. Afterwards, we assume that the additive noise V.sub.1 and V.sub.2 at the first and second microphones are un-correlated, and we model C.sub.v(l, k) as a diagonal matrix to decrease the computation overhead.
[0082] 1) General C.sub.v(l, k): Let us denote C.sub.v.sup.−1(l, k) for M=2 as
[0083] Furthermore, in a far field and a free field situation, we have that the frequency-dependent attenuation factors due to propagation effects α.sub.1=α.sub.2=α. Using this assumption, we expand Eq. (13) above for M=2 and note that D.sub.2=D.sub.1−(a/c)sin(θ). The obtained expansion {tilde over (L)}(α, θ, D.sub.1) is a function of α, θ, and D.sub.1, and we aim to find the MLE of θ and D.sub.1. To eliminate the dependency on α, we substitute the MLE of α in {tilde over (L)}(α, θ, D.sub.1). It can be shown that the MLE of α is:
[0084] Inserting {circumflex over (α)} into {tilde over (L)}(α, θ, D.sub.1) provides
[0085] In this general case, the likelihood function is given by [1], Eq. (20) (and [1], Eqs. (18, 19)). We wish to find the value of θ that maximizes the likelihood function. As illustrated in the following, this can be done using Inverse Discrete Fourier Transforms (IDFTs), which are computationally relatively cheap (and thus attractive in a low-power application, e.g. a hearing aid).
[0086] An IDFT (efficiently obtained by an IFFT algorithm) is given by the following equation.
[0087] In our case, we have
[0088] This has an IDFT structure as
[0089] It is noted that the sum is shifted from k=1:N to k=0:(N−1). This is allowed as
[0090] In the above outline, it is assumed that D is estimated as an integer number. It is further anticipated that the delay D is smaller than N allowing the delay to be within the same frame of both the transmitted clean (essentially noise-free) target signal as well as the recorded target+noise (noisy target) signal.
[0091] It is further assumed that the attenuation factors α.sub.1, α.sub.2 are frequency-independent, which makes their ratio
frequency-independent, which again makes {circumflex over (α)} frequency-independent.
[0092] From the above outline (and [1], Eq. (18)), it can be seen that f(θ, D.sub.1) is an IDFT with respect to D.sub.1, which can be evaluated efficiently. Therefore, for a given θ, computing {circumflex over (L)}(θ, D.sub.1) results in a discrete-time sequence, where the MLE of D.sub.1 is the time index of the maximum of the sequence. Since θ is unknown, we consider a discrete set of different θs, and compute {circumflex over (L)}(θ, D.sub.1) for each θ. The MLEs of D.sub.1 and θ are then found from the global maximum:
[{circumflex over (θ)}, {circumflex over (D)}.sub.1]=arg max.sub.θ, D.sub.1 {circumflex over (L)}(θ, D.sub.1).
[0093] M=2 Microphones, Diagonal Noise Covariance: Maximum Likelihood
[0094] If we assume that the noise observed in the two microphone signals is independent (an assumption which is valid for e.g., microphone noise, but which is less valid for external acoustic noise sources, in particular at low frequencies), then the inter-microphone noise covariance matrix C.sub.v becomes diagonal ([1], Eq. (22)).
[0095] 2) Diagonal C.sub.v(l, k): To decrease the computation overhead and to simplify the solution, let us assume V.sub.1(l, k) and V.sub.2(l, k) are uncorrelated, so that the noise covariance matrix is diagonal:
[0096] Following a similar procedure as in the previous section leads to a reduced log-likelihood function
In this case, the expression for the likelihood function becomes simpler. Specifically, the likelihood function is given by [1], Eq. (23) (and [1], Eqs. (24, 25)). Again, finding the θ that maximizes the likelihood function can be done using IDFTs, but the computational load in doing so is smaller than above (and generally, performance is also slightly reduced, because the uncorrelated noise assumption is slightly less accurate, see [1]).
[0097] M=1 Microphone: Maximum Likelihood
[0098] It is possible to estimate the angle-of-arrival of a target speaker using M=1 microphone versions of the proposed framework (i.e. in case of a binaural hearing system, estimating the respective delays D.sub.1, D.sub.2 independently for each microphone (m=1, 2) of hearing aids HD.sub.1, HD.sub.2, respectively, and then estimating a DoA from the (individually) determined delays and the head model). Specifically, we can estimate the time it takes for the signal to travel from the target source to each microphone independently; this requires maximizing simple M=1 likelihood functions (cf. [1], Eq. (13)).
[0099] As above, maximizing this function in terms of the signal travelling time can be done using IDFTs with respect to D.sub.m (cf. [1], Eq. (14)), and the estimate of θ is found using [1], Eq. (15):
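The independent-delays step just described reduces to inverting the free-field ITD relation once D.sub.1 and D.sub.2 are known. The sketch below assumes the convention sin(θ) = c(D1 − D2)/a (the sign convention, a = 0.16 m, and the example delays are illustrative assumptions, not values from the disclosure).

```python
import math

# Sketch: DoA from two independently estimated source-to-mic delays,
# inverting the free-field model sin(theta) = c (D1 - D2) / a.
def doa_from_delays(D1, D2, a=0.16, c=343.0):
    x = c * (D1 - D2) / a
    x = max(-1.0, min(1.0, x))   # clamp against noisy delay estimates
    return math.asin(x)

# Example: a delay difference corresponding to theta = 30 degrees,
# on top of an assumed common 10 ms source-to-head propagation time.
itd = (0.16 / 343.0) * math.sin(math.radians(30.0))
theta = doa_from_delays(10e-3 + itd, 10e-3)
```

The clamp matters in practice: noise can push the estimated delay difference slightly beyond the physically possible ±a/c, where asin would otherwise fail.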
[0100] The expression in [1], Eq. (13) can be interpreted as a Generalized Cross Correlation (GCC) function with a weighting function
[0101] M=1 Microphone: Informed PHAT
[0102] In the following, the proposed methods are compared with the method proposed in [3], which belongs to the independent delays class of approaches and which is based on a conventional cross correlation to find D.sub.1 and D.sub.2. In general, any method based on the Generalized Cross Correlation (GCC) method [4] can be used to estimate D.sub.1 and D.sub.2 independently:
[0103] The method proposed in [3] uses Ψ(k)=1. PHAT is well-known for non-informed setups, but appears new in the informed setup. We propose an informed PHAT weighting function as
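A generic GCC delay estimate between the noisy microphone spectrum and the wirelessly received clean spectrum can be sketched as below. Ψ(k)=1 gives the plain cross correlation of [3]; the classical PHAT choice Ψ(k) = 1/|R(k)S*(k)| keeps only phase. Note this is the standard PHAT from [4], not the specific informed-PHAT weighting proposed in the disclosure (whose expression is not reproduced in this text).

```python
import numpy as np

# Generic GCC delay estimate: argmax_n IDFT{ psi(k) R(k) S*(k) }.
def gcc_delay(R, S, phat=True):
    cross = R * S.conj()
    if phat:
        # classical PHAT weighting: discard magnitude, keep phase only
        cross = cross / np.maximum(np.abs(cross), 1e-12)
    return int(np.argmax(np.real(np.fft.ifft(cross))))

rng = np.random.default_rng(5)
N = 512
S = np.fft.fft(rng.standard_normal(N))
k = np.arange(N)
R = S * np.exp(-2j * np.pi * k * 25 / N)   # channel: pure 25-sample delay
```

On this noiseless toy both weightings recover the 25-sample delay; the weighting choices differ mainly in their robustness to noise and reverberation.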
[0104] Informed Direction-of-Arrival Estimation: Spherical Head Model
[0105] With further reference to [2], we insert here the crude solid-sphere head model in the likelihood function, cf. [2], Eq. (14) above. Then we maximize the resulting expression with respect to θ to find maximum likelihood estimates. As for the free-field model described above, the simple form of the head model allows us to find maximum likelihood estimates using (computationally relatively cheap) IDFTs.
[0107] M=2 Microphones, General Noise Covariance: Maximum Likelihood
[0108] To use the solid-sphere model, one needs to decide on the value of the parameter γ>0. The parameter γ may e.g. be determined in offline simulation experiments. Some possible values of γ are, e.g., γ=2.2, γ=6, or γ=10.7. In general, γ depends on the noise and/or target signal spectra.
[0109] Using the solid-sphere head model, it can be shown that the likelihood function can be expressed as [2], Eq. (19) (using Eqs. (20), (21)). As described in [2], this can be maximized with respect to θ using IDFTs.
[0110] M=2 Microphones, Diagonal Noise Covariance: Maximum Likelihood
[0111] It is straightforward to reduce the expression above by inserting C.sub.12=C.sub.21=0 in [2], Eqs. (20, 21), providing equations (20) and (21) below.
[0112] M=1 Microphone
[0113] The solid-sphere head model describes (assumed) relationships between two microphone signals picked up on either side of the head. If only one microphone is available, no such relationship exists. In other words, the spherical head model approach is not applicable to the M=1 microphone situation.
[0114] For a person skilled in the art, it is relatively straightforward to generalize the expressions above to the situation where the positions of several wireless microphones must be estimated jointly.
EXAMPLE
[0115] An example of a situation where a hearing system according to the present disclosure can be useful is illustrated in
[0119] Alternatively, the inter-aural wireless link (IA-WL) may be based on near-field transmission technology (e.g. inductive), e.g. based on NFC or a proprietary protocol.
[0121] In the embodiment of a hearing device in
[0122] The hearing device (HD) further comprises an output unit (e.g. an output transducer or electrodes of a cochlear implant) providing an enhanced output signal as stimuli perceivable by the user as sound based on said enhanced audio signal or a signal derived therefrom.
[0123] In the embodiment of a hearing device in
[0124] The hearing device (HA) exemplified in
[0125] In an embodiment, the hearing device, e.g. a hearing aid (e.g. the signal processing unit), is adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user.
[0126] In summary,
where {circumflex over (θ)}.sub.j is the estimated DoA for the j.sup.th frame of the signal. As can be seen, the proposed Maximum Likelihood (ML)-based methods perform better than the cross-correlation-based method and the proposed informed PHAT method. Among the ML-based methods, the ones which consider dependent delays estimate θ more accurately, at a higher computation cost. However, using a non-diagonal C.sub.v does not provide considerable improvement compared with modeling C.sub.v as diagonal. The estimators perform worse for θs towards the sides of the head because the considered far-field and free-field assumption (i.e. α.sub.1=α.sub.2) is less valid for these θs.
[0127] All in all, three solutions to the estimation of a direction of arrival of a target source have been proposed in the present disclosure.
[0128] Solution a): The simplest solution is a one-microphone solution, which estimates the propagation time from the target sound source to two microphones (one on each side of the head) independently. That is, this is a one-microphone solution applied twice (wherein the propagation delays D.sub.1 and D.sub.2 (see
[0129] Solution b): The second solution takes into account that the propagation times from emitter (transmitter) to the microphones cannot be very different, given that the microphones are separated by a distance corresponding to the width of a human head (that is, if one propagation time is 10 ms (say), then the other cannot be 20 ms (say), because the maximum travelling time between microphones is around 0.5 ms). This approach assumes the background noise to be uncorrelated between microphones, an assumption which theoretically is invalid, especially at low frequencies. In
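The travelling-time bound invoked above follows directly from the microphone separation; with an assumed head-width microphone distance a = 0.16 m and c = 343 m/s:

```python
# Maximum inter-microphone delay difference for head-separated microphones.
# a = 0.16 m (assumed head width) and c = 343 m/s are illustrative values.
a, c = 0.16, 343.0
max_itd_ms = a / c * 1e3   # about 0.47 ms, i.e. "around 0.5 ms" as stated
```

Any pair of delay estimates differing by more than this bound is therefore physically inconsistent, which is exactly the constraint Solution b) exploits.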
[0130] Solution c): The third and most advanced (and computationally complex) solution is similar to the second solution with one difference: the background noise is no longer assumed to be uncorrelated between sensors (microphones). In
[0131] Solution a) is the easiest to implement in a hearing aid system and appears better than existing algorithms. Solution b) performs better than Solution a) but is more computationally complex and requires wireless binaural communication; this algorithm is relevant for near-future hearing aid systems. Solution c) is computationally more complex than Solution b) and offers slightly better performance [informed solution].
[0132] As used, the singular forms "a", "an", and "the" are intended to include the plural forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including", and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, but intervening elements may also be present, unless expressly stated otherwise. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
[0133] It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "an aspect" or features included as "may" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
[0134] The claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". Unless specifically stated otherwise, the term "some" refers to one or more.
[0135] Accordingly, the scope should be judged in terms of the claims that follow.
REFERENCES
[0136] [1]: Informed TDoA-based Direction of Arrival Estimation for Hearing Aid Applications, M. Farmani, M. S. Pedersen, Z.-H. Tan, and J. Jensen, 2015 IEEE Global Conference on Signal and Information Processing (GlobalSIP), 14-16 Dec. 2015, DOI: 10.1109/GlobalSIP.2015.7418338, INSPEC Accession Number: 15807779, 25 Feb. 2016.
[0138] [2]: Informed Direction of Arrival Estimation Using a Spherical-Head Model for Hearing Aid Applications, M. Farmani, M. S. Pedersen, Z.-H. Tan, and J. Jensen, Published in: 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 20-25 Mar. 2016, DOI: 10.1109/ICASSP.2016.7471697, INSPEC Accession Number: 16021462, 19 May 2016.
[0139] [3]: Courtois et al., Implementation of a binaural localization algorithm in hearing aids: Specifications and achievable solutions, in Audio Engineering Society Convention 136, April 2014, p. 9034.
[0140] [4]: C. Knapp and G. C. Carter, The generalized correlation method for estimation of time delay, IEEE Trans. Acoustics, Speech and Signal Process, vol. 24, no. 4, pp. 320-327, 1976.