HEARING ASSISTANCE SYSTEM COMPRISING AN EEG-RECORDING AND ANALYSIS SYSTEM
20180014130 · 2018-01-11
Inventors
- Thomas LUNNER (Smørum, DK)
- Fredrik GUSTAFSSON (Linköping, SE)
- Carina GRAVERSEN (Smørum, DK)
- Emina ALICKOVIC (Linköping, SE)
CPC classification
- H04R2225/67
- H04R2225/61
- H04R1/1041
- H04R25/407
- H04R25/554
Abstract
A hearing assistance system comprises an input unit for providing electric input sound signals u.sub.i, each representing sound signals U.sub.i from a multitude n.sub.u of sound sources S.sub.i, an electroencephalography (EEG) system for recording activity of the auditory system of the user's brain and providing a multitude n.sub.y of EEG signals y.sub.j, and a source selection processing unit receiving said electric input sound signals u.sub.i and said EEG signals y.sub.j, and in dependence thereof configured to provide a source selection signal Ŝ.sub.x indicative of the sound source S.sub.x that the user currently pays attention to using a selective algorithm that determines a sparse model to select the most relevant EEG electrodes and time intervals based on minimizing a cost function measuring the correlation between the individual sound sources and the EEG signals, and to determine the source selection signal Ŝ.sub.x based on the cost functions obtained for said multitude of sound sources.
Claims
1. A hearing assistance system comprising an input unit for providing electric input sound signals each representing sound signals U.sub.i from a multitude n.sub.u of sound sources S.sub.i (i=1, . . . , n.sub.u), an electroencephalography (EEG) system for recording activity of the auditory system of the user's brain and providing a multitude n.sub.y of EEG signals y.sub.j (j=1, . . . , n.sub.y), and a source selection processing unit coupled to said input unit and to said EEG-system and receiving said electric input sound signals u.sub.i and said EEG signals y.sub.j, and in dependence thereof configured to provide a source selection signal Ŝ.sub.x indicative of the sound source S.sub.x that the user currently pays attention to, wherein the source selection processing unit is configured to analyze said electric input sound signals u.sub.i, i=1, . . . , n.sub.u, and said multitude of EEG signals y.sub.j, j=1, . . . , n.sub.y, to determine a dynamic finite impulse response (FIR) filter from each sound source to each EEG channel, and to determine the source selection signal Ŝ.sub.x indicative of the sound source S.sub.x that the user currently pays attention to based on a cost function obtained for said multitude of sound sources.
2. A hearing assistance system according to claim 1 wherein the source selection processing unit is configured to use a stimuli reconstruction (SR) method for estimating the FIR inverse model from EEG signal to sound source.
3. A hearing assistance system according to claim 2 wherein the source selection processing unit is configured to use a sparse model for modeling the finite impulse response (FIR) filter from each sound source to each EEG channel.
4. A hearing assistance system according to claim 2 wherein the source selection processing unit is configured to use the alternating direction method of multipliers (ADMM) methodology to reformulate the optimization problem into another one with different B vectors in the cost function.
5. A hearing assistance system according to claim 1 wherein the source selection processing unit is configured to analyze said electric input sound signals u.sub.i, i=1, . . . , n.sub.u, and said multitude of EEG signals y.sub.j, j=1, . . . , n.sub.y, based on a selective algorithm that determines a sparse model to select the most relevant EEG electrodes and time intervals based on minimizing a cost function measuring the correlation between the (individual) sound source and the EEG signals, a full FIR single input multiple output (SIMO) model for each electric input sound signal based on said electric input sound signals u.sub.i and said EEG signals y.sub.j, an alternating direction method of multipliers (ADMM) to provide sparse models from said full FIR single input multiple output (SIMO) models for use in identifying the model that best describes the corresponding electric input sound signal and EEG signal data, wherein the sound source S.sub.x that the user currently pays attention to is determined by comparing cost functions of each model.
6. A hearing assistance system according to claim 1 wherein said input unit comprises a sound source separation unit for providing said electric input sound signals u.sub.i from one or more electric input sound signals representative of a mixture of said sound signals U.sub.i.
7. A hearing assistance system according to claim 1 configured to provide an estimate û.sub.x of the sound signal U.sub.X that the user currently pays attention to.
8. A hearing assistance system according to claim 1 wherein said EEG system comprises a multitude of EEG sensors, each comprising an EEG electrode, for providing said multitude of EEG signals y.sub.j (j=1, . . . , n.sub.y).
9. A hearing assistance system according to claim 1 comprising one or two hearing devices, each hearing device being adapted for being located at or in an ear or for being fully or partially implanted in the head of a user, the or each hearing device comprising an output unit for providing output stimuli perceivable by the user as sound, based on said estimate û.sub.x of the sound signal U.sub.x that the user currently pays attention to.
10. A hearing assistance system according to claim 9, wherein said EEG system comprises a multitude of EEG sensors each comprising an EEG electrode, for providing said multitude of EEG signals, and the hearing device(s) comprise(s) at least a part of said EEG system, such as at least some of said EEG-electrodes.
11. A hearing assistance system according to claim 9 wherein the hearing device or devices comprises a hearing aid, a headset, an earphone, an ear protection device, a speakerphone or a combination thereof.
12. A hearing assistance system according to claim 7 comprising first and second hearing devices, wherein the hearing assistance system is configured to allow the exchange of information between the first and second hearing devices or between the first and second hearing devices and an auxiliary device.
13. A hearing assistance system according to claim 12 configured to include electric input sound signals u.sub.i,1 and u.sub.i,2 provided by respective input units, and/or EEG signals y.sub.j1,1 and y.sub.j2,2 provided by respective EEG-systems of the first and second hearing devices in the determination of the sound source S.sub.x that the user currently pays attention to.
14. A hearing assistance system according to claim 9 comprising an auxiliary device configured to exchange information with the hearing device or with the first and second hearing devices.
15. A hearing assistance system according to claim 12 configured to maintain or apply appropriate directional cues for the electric sound signal u.sub.x representing the sound source S.sub.x that the user currently pays attention to.
16. A method of automatically selecting an audio source intended to be listened to by a wearer of a hearing device in a multi-audio source environment, the method comprising providing electric input sound signals each representing sound signals U.sub.i from a multitude n.sub.u of sound sources S.sub.i (i=1, . . . , n.sub.u), recording activity of the auditory system of the user's brain and providing a multitude n.sub.y of EEG signals y.sub.j (j=1, . . . , n.sub.y), and providing a source selection signal Ŝ.sub.x indicative of the sound source S.sub.x that the user currently pays attention to in dependence of said electric input sound signals u.sub.i and said EEG signals y.sub.j, including analyzing said electric input sound signals u.sub.i i=1, . . . , n.sub.u, and said multitude of EEG signals y.sub.j, j=1, . . . , n.sub.y, to determine a dynamic finite impulse response (FIR) filter from each sound source to each EEG channel, and to determine the source selection signal Ŝ.sub.x indicative of the sound source S.sub.x that the user currently pays attention to based on cost functions obtained for said multitude of sound sources.
17. A method of automatically selecting an audio source intended to be listened to by a wearer of a hearing device in a multi-audio source environment, the method comprising providing electric input sound signals u.sub.i, each representing sound signals U.sub.i from a multitude n.sub.u of sound sources S.sub.i (i=1, . . . , n.sub.u), recording activity of the auditory system of the user's brain and providing a multitude n.sub.y of EEG signals y.sub.j (j=1, . . . , n.sub.y), and providing a source selection signal Ŝ.sub.x indicative of the sound source S.sub.x that the user currently pays attention to in dependence of said electric input sound signals u.sub.i and said EEG signals y.sub.j, analyzing said electric input sound signals u.sub.i, i=1, . . . , n.sub.u, and said multitude of EEG signals y.sub.j, j=1, . . . , n.sub.y, using a selective algorithm that determines a sparse model to select the most relevant EEG electrodes and time intervals based on minimizing a cost function measuring the correlation between the sound source and the EEG signals, and determining the source selection signal Ŝ.sub.x indicative of the sound source S.sub.x that the user currently pays attention to based on the cost function obtained for said multitude of sound sources.
18. A method according to claim 17 comprising analyzing said electric input sound signals u.sub.i, i=1, . . . , n.sub.u, and said multitude of EEG signals y.sub.j, j=1, . . . , n.sub.y, wherein said selective algorithm is based on providing a full FIR single input multiple output (SIMO) model for each electric input sound signal u.sub.i, based on said electric input sound signals u.sub.i and said EEG signals y.sub.j, and using an ADMM to provide sparse models from said full FIR single input multiple output (SIMO) models for use in identifying the model that best describes the corresponding electric input sound signal and EEG signal data, and determining the sound source S.sub.x that the user currently pays attention to by comparing cost functions of each model.
19. A method according to claim 17 further comprising the following steps aimed at understanding how the human auditory system reacts when exposed to different sound sources and attending to one of these sources, providing a standard causal multiple input multiple output (MIMO) finite impulse response (FIR) model of order k from sound to EEG for each electric input sound signal u.sub.i to each EEG signal y.sub.j, and using an alternating direction of multipliers method (ADMM) to provide a sparse model that automatically selects the EEG channels and parameters of the FIR model, including time delay and model order k, of the highest relevance.
20. A method according to claim 18 wherein said order k of the FIR MIMO model is selected with a view to the time span wherein a speech signal has an effect on simultaneously recorded EEG signals.
21. A non-transitory computer readable medium storing a program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 16.
22. A data processing system comprising a processor and program code means for causing the processor to perform the method of claim 16.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0127] The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity; they show only details needed to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the figures described hereinafter.
[0148] Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.
DETAILED DESCRIPTION OF EMBODIMENTS
[0149] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practised without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
[0150] The electronic hardware may include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
[0151] The present application relates to the field of hearing assistance systems, including devices such as hearing aids. The disclosure deals in particular with the problem of speech representation in the human brain and the so-called cocktail-party problem, that is, the separation of a single sound source of the listener's interest from a multitude of sound sources in a noisy and crowded background. The routine with which the human brain solves the cocktail-party problem hides the intrinsic complexity of the problem: (1) different competing sound sources emit different sound signals concurrently, (2) these signals are mixed, (3) the sum of these signals enters the ear(s), (4) the mixture is subsequently decomposed, (5) a sound signal of interest is selected and (6) the other sound signals are tuned out within the auditory system. Although the cocktail party problem has been around for decades, solving it remains an underdeveloped field: we still have very little knowledge about how the brain handles it, and many questions remain unanswered.
[0152] A number of patent applications and patents by the present inventors deal with measuring brain wave signals (e.g. EEG signals) using one or more electrodes located on or in connection with a hearing aid, cf. e.g. [16], [17], [18], [23].
[0153] Previous studies have described several conceptually different approaches to understanding how the brain solves the cocktail party problem. The bottom line of all these approaches is the realization that different sound sources excite different neural responses and that the brain activity follows the sound amplitude envelope. Most of the studies adhere to the stimulus reconstruction (SR) approach, an inverse model from the neural response, that is, brain signals y(t), to speech u(t). The literature on stimulus reconstruction is almost as considerable as that on selective attention.
[0154] The decision on how the SR is to be performed is quite subjective and is usually the result of a compromise between different aspects, including flexibility, parsimony, intended usage, recording modality, computational cost, etc. In general, SR boils down to performing linear regression (LR). Recently, a more sophisticated method based on deep neural networks (DNNs) was proposed in lieu of LR. DNNs proved more helpful in understanding the influence of speech on brain activity, its representation and reconstruction, but the pay-off is higher complexity and thus a higher computational burden.
[0155] It should be noted that appealing results were obtained for SR applied to electrocorticographic (ECoG) data and magnetoencephalographic (MEG) data, but the particular problem with ECoG and MEG data, which makes them less attractive, is the invasiveness of ECoG with respect to brain tissue and the lack of portability of MEG instruments. EEG instruments, in contrast, are noninvasive, portable and readily available, which makes them more suitable and attractive (e.g. for hearing devices, such as hearing aids). Moreover, it has been shown that attention can be decoded from EEG data with SR [21]. Whenever we need to compare the results for our model with models found in the literature, we shall adhere to the article of O'Sullivan et al. [21], since it is representative of the multitude of studies on SR, selective attention, and solving the cocktail-party problem in general.
[0156] To conclude, there are three key reasons why SR is attractive: [0157] It can be used to find time scales with stimuli information in neural responses. [0158] It can be used in neural signal processing. [0159] It can be used in solving the cocktail party problem to some extent, that is, in gaining deeper understanding of the speech representation and reconstruction and selecting the attended speech stream in a multi-talker background.
[0160] On the negative side, the SR model is an anti-causal model; the downsides are a lack of insight into dynamical effects and the difficulty of real-time implementation, so in practice there might be no benefit from SR. It must be stressed that in off-line applications, stimulus reconstruction (SR) can still be used for data analysis and for understanding the auditory subcomponent of the connectome (a 'connectome' being a network map illustrating interactions in the nervous system).
[0161] A particular interest lies in obtaining a reliable model for a deeper understanding of attention in hearing, in particular how sound is represented in the brain and how to correctly identify the speech stream that the listener currently attends to. This identification should preferably be performed in real time. In other words, the identification of the sound source, referred to as S_i (or just i), and the corresponding sound source signal u_i, which is attended to by the listener at time t, should not be deferred to a later time t_1, where t_1>t. This is, on the other hand, the case when using SR methods.
[0162] The present disclosure proposes to overcome the above pitfalls by formulating a causal, multivariate finite impulse response (FIR) model from speech to EEG, and to subsequently use an alternating direction of multipliers method (ADMM) to get a sparse model that automatically selects the EEG channels and FIR parameters (including time delay and model order) of the highest relevance. Besides a sparse model, it also gives physical insights. If the model is well-conditioned, it is likely that it will also indicate the attended sound source. An advantage of the approach (in addition to its real time realization) is that a software implementation can be made relatively simple and efficient.
[0163] Since the present disclosure is focused on hearing devices, e.g. hearing aids, requiring on-line (real time) applications, we consider only brain signal data recorded with EEG instruments, for the reasons mentioned above.
[0164] The present disclosure provides a hearing assistance system and a method for identifying a specific sound source in a multi-sound source (e.g. multi-talker) background in real time using un-averaged single-trial EEG.
[0165] The model suggested in the present disclosure for the identification of the speech stream currently attended to by a wearer of the hearing assistance system will be referred to as the CLassification of Attended speech STream In Cocktail-party problem (CLASTIC) model.
[0166] The cocktail party problem arises when a number of different (constant or time-variant) competing sources S.sub.i, i=1, 2, . . . , n.sub.u, emit sound signals U.sub.i (represented by electric input sound signals u.sub.i) simultaneously and a listener receives the sum (u.sub.total) of these signals, i.e.,
u_total(t) = Σ_{i=1}^{n_u} u_i(t)   (1)
[0167] Under the assumption that the listener is attempting to focus on only one sound source (e.g. a speech stream) at a time, the technical challenge is to identify which of the speech signals u.sub.i(t) is the subject of focus by the listener (user of the hearing assistance system). This is proposed to be done based on external sensor measurements, here, EEG signals, y.sub.j(t), with j=1, 2, . . . , n.sub.y.
[0169] A number of methods are available in the art to provide (real-time) separation of sound sources from one or more signals comprising a mixture of the sound sources. These methods include blind source separation, cf. e.g. [Bell and Sejnowski, 1995], [Jourjine et al., 2000], [Roweis, 2001], [Pedersen et al., 2008], microphone array techniques, cf. e.g. chapter 7 in [Schaub, 2008], or combinations hereof, cf. e.g. [Pedersen et al., 2006], [Boldt et al., 2008]. Other methods include Nonnegative Matrix Factorization (NMF) and Probabilistic Latent Component Analysis (PLCA). A real-time separation method based on modelling the contents of a buffer comprising a time segment of the mixed signal as an additive sum of components stored in pre-computed dictionaries is e.g. disclosed in US2016099008A1.
[0170]-[0178] First and second embodiments of the hearing assistance system (HAS), and the multi-talker environment in which they operate, are described with reference to the accompanying figures; the figure-dependent details of these embodiments are not reproduced here.
[0179] Signal Modeling and Estimation:
[0180] A. A Forward Model:
[0181] We have n_u sound sources u_i(t), i=1, 2, . . . , n_u, and n_y EEG channels y_j(t), j=1, 2, . . . , n_y. Physically, the sound should causally affect the listening attention in the brain. We constrain the dynamics to be a linear finite impulse response (FIR) filter b_ij(t), so this causal relation is modeled as the convolution (moving average, weighted average)

y_j(t) = Σ_{k=1}^{n_b} b_ij(k) u_i(t−k) + e_j(t)   (2)

where n_b is the model order (design parameter) of the FIR filter.
[0182] Having N samples of u.sub.i(t) and y.sub.j(t), the relation can be written in vector form as
Y_j = H(U_i) B_ij + E_j   (3)

where Y_j = (y_j(1), . . . , y_j(N))^T and similarly for U and E, while B_ij = (b_ij(1), . . . , b_ij(n_b))^T and H(U_i) is a Hankel matrix with elements H(U_i)_{mn} = u_i(m−n).
[0183] The least squares (LS) method estimates the FIR parameters as the minimizing argument (arg min) of the two-norm of the estimation error:

B̂_ij = arg min_B V_ij(B)   (4)

B̂_ij = arg min_B ∥Y_j − H(U_i)B∥_2^2   (5)

B̂_ij = H(U_i)^† Y_j   (6)

where H(U_i)^† = (H(U_i)^T H(U_i))^{−1} H(U_i)^T denotes the pseudo-inverse.
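A minimal numerical sketch of the forward-model LS estimate in Eqs. (3)-(6) may e.g. look as follows (Python/NumPy; the function names and the single-channel array layout are illustrative assumptions and not part of the disclosed system):

import numpy as np

def hankel_regressor(u, n_b):
    # Build the N x n_b regression matrix H(U_i) with elements
    # H[m, n] = u(m - n), using u(t) = 0 for t < 0 (causality).
    u = np.asarray(u, dtype=float)
    N = len(u)
    H = np.zeros((N, n_b))
    for n in range(n_b):
        H[n:, n] = u[:N - n]
    return H

def fir_ls_estimate(u, y, n_b):
    # LS estimate of the FIR filter B_ij from sound source u_i to EEG
    # channel y_j, Eqs. (4)-(6): B = H(U_i)^dagger Y_j.
    H = hankel_regressor(u, n_b)
    B, *_ = np.linalg.lstsq(H, np.asarray(y, dtype=float), rcond=None)
    V = float(np.sum((y - H @ B) ** 2))  # LS cost V_ij at the estimate, Eq. (5)
    return B, V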
[0184] B. SR as a Reverse Model:
[0185] The SR method aims at estimating the FIR inverse model from EEG signal to sound source,

u_i(t) = Σ_{k=1}^{n_a} a_ij(k) y_j(t+k) + e_i(t)

where n_a is the order of the reverse model.
[0186] The idea is of course not that the brain affects the sound, that is why time is reversed in the EEG signal so y.sub.j(t+k) is used rather than y.sub.j(t−k) in the convolution.
[0187] The notation here is dual to the forward model we propose. Though both methods look equivalent at first glance, there are several important differences: [0188] The forward model can be used to predict future values of the EEG signal, and is thus useful for classification of the attended source in a real-time application. The reverse model must be applied on batches, and is thus not as suitable for real-time classification. [0189] Even a short FIR filter in the forward model may require a long FIR filter in the reverse model, so normally the forward model should have fewer parameters than the reverse model, n_a>n_b, for physical reasons. [0190] In the least squares method, the left hand side should be the variable that is observed with noise, while the other one should be noise-free. It is natural to assume that the brain has other tasks to solve than to process the sound, so there is clearly a large error in the EEG signal. The perceived sound also includes disturbances, which we model as the n_u separate sound sources. These are arguments that favour the forward model. If there is noise on the regression vector, that is, on the Hankel matrices H(U_i) and H(Y_j), the LS estimate becomes biased, which again favours the forward model.
[0192] C. Classification:
[0193] The loss function V_ij(B̂_ij) gives a measure of model fit: the smaller the value compared to the signal energy V_ij(0) = Σ_t y_j^2(t), the better. Note that V_ij(0) corresponds to no model at all. We should already state here that the model fit is very poor for this kind of application compared to many other physical systems: the brain is processing a lot of other information, and the correlation to the sound sources is orders of magnitude smaller than its normal operations. However, for classification the model fit is secondary; the primary purpose is to find the attended source. This can now be classified using arg min_i V_ij(B̂_ij), that is, as the sound source that best explains the EEG signal.
[0194] If multiple EEG signals are used, which should be the case, then the total LS loss function is simply the sum over all channels, and the attended source is classified as

î = arg min_i Σ_{j=1}^{n_y} V_ij(B̂_ij)   (9)
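Building on the sketch above, the classification rule of Eq. (9) may e.g. be illustrated as follows (an illustrative sketch; it reuses hankel_regressor from the previous example):

def classify_attended(u_sources, Y, n_b):
    # Fit one forward FIR model per candidate sound source i against all
    # EEG channels Y (an N x n_y array) and return arg min_i of the LS
    # loss summed over channels j, cf. Eq. (9).
    costs = []
    for u in u_sources:
        H = hankel_regressor(u, n_b)
        B, *_ = np.linalg.lstsq(H, Y, rcond=None)  # one column per channel
        costs.append(float(np.sum((Y - H @ B) ** 2)))
    return int(np.argmin(costs))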
[0195] D. Sparse Modeling:
[0196] It is, for physical reasons, plausible that not all EEG channels and not all time delays in the model are useful for modeling the sound sources. For that purpose, we propose to add l_1 regularization to the l_2 LS cost function, and use
V_i(B) = Σ_{j=1}^{n_y} ∥Y_j − H(U_i) B_j∥_2^2 + λ∥B∥_1   (10)

where B is the total multiple-output FIR filter B = (B_1, B_2, . . . , B_{n_y}) for input i. The l_1 term is an approximation of the l_0 norm, which simply counts the number of non-zero elements in the FIR filter B. That is, we seek a compromise between a good model fit and a sparse FIR filter with few parameters. The l_1 norm is used to get a convex problem, for which efficient numerical solvers can be used. The parameter λ trades off sparseness against model fit.
[0197] ADMM reformulates the optimization problem into another one with two different B vectors in the cost function, and an equality constraint:

V_i(B, B̄) = Σ_{j=1}^{n_y} ∥Y_j − H(U_i) B_j∥_2^2 + λ∥B̄∥_1   (11)

subject to B = B̄   (12)
[0198] The subtle point is that this reformulation enables a very efficient method. Basically, ADMM iterates between computing B, B̄ and a third quantity, each step requiring only simple calculations, and with, in practice, very fast convergence within a few iterations.
[0199] There is also a group formulation of ADMM, where the penalty term is a sum of l_1 norms [24]. For this application, it can be used to select either the most relevant EEG channels, or the time instants where the combined EEG response is the most informative. To exploit such structured sparseness, the following norms may preferably be used: [0200] Use the row sum Σ_{t=1}^{N} ∥B_{t,:}∥_1 to get a sparse time response. [0201] Use the column sum Σ_{j=1}^{n_y} ∥B_{:,j}∥_1 to get a sparse selection of EEG channels (electrodes).
[0202] The ADMM is described in more detail in the article by [Alickovic et al; to be published], which is attached to the present application and to which the above used equation numbers (1)-(12) refer.
EXAMPLE
[0203] An embodiment of a proposed solution to the ‘sound source of interest identification in a multi-sound source environment’ problem comprises five components, two of which (X1, X2) may be seen as preparatory steps aimed at understanding how the human auditory system reacts when exposed to different sound sources and attending to one of these sources:
[0204] X1. To identify dynamical effects of the speech on the brain and the relevance of each EEG channel, a FIR multiple input multiple output (MIMO) model from speech to EEG is formulated (to provide a physical insight into the model).
[0205] X2. To provide a sparse model that automatically selects the EEG channels and parameters of the FIR model, including time delay and model order k, of the highest relevance, an alternating direction of multipliers method (ADDM) is used.
[0206] A. To use this knowledge and attain a reliable model to precisely detect the single sound source of interest (under the requirement of real-time identification), a causal model from speech to EEG with a reasonably long memory is necessary. A full FIR single input multiple output (SIMO) model is formulated for each speech stream (cf. Eq. (5)′ below).
[0207] B. To get sparse models and to look for the model that best describes the data, the ADMM framework is used.
[0208] C. To determine the sound sources of the listener's interest cost functions of each model are compared.
[0209] These five components are further described in the following.
[0210] FIR MIMO Model Formulation (Component X1, X2)
[0211] A standard causal multivariate FIR model (FIR(k)) can be formulated as the following difference equation:
y_j(t) = b_{i,j,0} u_i(t) + b_{i,j,1} u_i(t−1) + . . . + b_{i,j,k} u_i(t−k) + e_j(t)   (2)′

for j=1, . . . , n_y, i=1, . . . , n_u and t=1, . . . , N, where e_j(t) is the disturbance and k is the order of the model. In general, e_j(t) is considered to be white noise, e_j(t) ∼ N(0, σ_j^2). It is also assumed that u_i(t)=0 and y_j(t)=0 for t<0, which is a convenient way to ensure causality.
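By way of illustration, synthetic data following the FIR(k) model (2)′ may e.g. be generated as follows (an illustrative Python/NumPy sketch under the stated white-noise and causality assumptions; the function name is ours):

import numpy as np

def simulate_eeg_channel(u, b, sigma, rng=None):
    # Simulate one EEG channel from Eq. (2)':
    #   y_j(t) = b_0 u(t) + b_1 u(t-1) + ... + b_k u(t-k) + e_j(t),
    # with white disturbance e_j(t) ~ N(0, sigma^2) and u(t) = 0 for t < 0.
    rng = np.random.default_rng() if rng is None else rng
    u = np.asarray(u, dtype=float)
    N, k = len(u), len(b) - 1
    up = np.concatenate([np.zeros(k), u])  # zero-pad k samples of pre-history
    y = np.array([np.dot(b, up[t:t + k + 1][::-1]) for t in range(N)])
    return y + sigma * rng.standard_normal(N)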
[0212] The formulated models will serve different purposes, and the amount of data used (N) corresponds to data from a sliding window (the intended on-line application), one trial (model selection for one batch of one minute), all trials for one listener (subject-specific or individual model selection for all 30 minutes of data) or all trials for all subjects (grand-subject or global model selection). The data set Ω = {u_1(1), u_2(1), . . . , y_1(1), . . . , y_128(N)} is the source of information we have at hand about the underlying actual system, which needs to be fitted to model structure (2)′.
[0213] The first objective is to formulate a FIR(k) MIMO model from each sound source i to each EEG channel j. Thus, the problem boils down to estimating k·n_u model parameters (k for each sound source) from N·n_y measurements. The model order k should be carefully chosen to include all time lags where the speech signals may have a substantial influence on the EEG signals. Here, the main goal is to decide on the EEG channels and time delays of the highest relevance. A simple approach to this problem is to add an l_1 regularization term to the least squares (LS) criterion, leading to ADMM, to get a more parsimonious model.
[0214] Eq. (2)′ can be formulated in a linear regression framework as

y_j(t) = U_{i,j}^T(t) B_{i,j} + e_j(t)   (3)′

where U_{i,j}(t) is a regression vector with elements U_{i,j}^{(m)} = u_i(t−m), T denotes transposition, and B_{i,j} is a function mapping the stimulus U_{i,j}(t) to the neural response y_j(t), with elements B_{i,j}^{(m)} = b_{i,j,m} for m=1, . . . , k. For simplicity of notation, the model (3)′ can be generalized to one that explains all n_u sound sources and n_y EEG channels from a batch of N data as
Y = UB + E   (4)′

where U is a Hankel matrix built from the input samples u_i(t); Y and E are N×n_y matrices and U is an N×(k·n_u) matrix.
[0215] Once the most informative EEG channels and time lags have been selected and understood more deeply, the next objective is to obtain the sparsest solution. As we aim for a sparse B matrix, the non-vanishing (non-zero) terms in B tell us which electrodes were active at each particular time lag, i.e., zero elements in B refer to inactive electrodes whereas non-zero elements refer to active electrodes. Thus B selects the most important EEG channels and time lags.
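As an illustrative sketch (the function name is an assumption for illustration only), reading off the active electrodes at each time lag from a sparse B may e.g. be done as:

def active_channels_per_lag(B, tol=1e-8):
    # B: sparse FIR weight matrix with one row per time lag m and one
    # column per EEG electrode j (entries b_{i,j,m} for a fixed source i).
    # Returns, for each lag, the indices of the electrodes whose weight
    # does not vanish, i.e. the "active" electrodes at that lag.
    return [np.flatnonzero(np.abs(row) > tol) for row in np.asarray(B)]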
[0216] FIR SIMO Model Formulation (Component A, B)
[0217] The next objective is to estimate the full FIR SIMO model for each input source separately and to select the model that explains the data best. This can be solved as an l_1 regularized least squares (LS) problem. Similar to the full FIR MIMO model in Eq. (4)′, the full FIR SIMO model for each input sound source can be generalized to a model that explains all n_y EEG signals from a batch of N data as

Y = U_i B_i + E   (5)′

where B_i = [B_{i,1}, . . . , B_{i,n_y}] and U_i is an N×k Hankel matrix.
[0218] Cost Minimization (Component C)
[0219] The l_1 regularized LS problem can be defined as:

minimize { (½)∥Y − UB∥^2 + λ∥B∥_1 }   (6)′

[0220] The Frobenius norm ∥W∥^2 = Trace(W^T W) is used for the matrix residual of the first term (model fit), Trace(·) being the sum of the diagonal elements of the argument matrix, and

∥B∥_1 = Σ_{i,j,m} |b_{i,j,m}|.

[0221] The parameter λ>0 is a regularization parameter set as the trade-off between model fit to the measurements u and y and model sparsity. The model defined in Eq. (6)′ is also known as the least absolute shrinkage and selection operator (lasso) [26].
[0222] It should be noted that, in general, we do not attempt to estimate the covariance matrix of the EEG signals, but assume that they are independent signals that can be described by a stationary process with the same noise level σ^2. If, however, such a covariance matrix exists, the norm above is easily modified. The independence assumption allows us to solve the least squares problem for each column of B separately, which saves a lot of computation.
[0223] It should further be noted that the term 'stimuli' refers to an input speech stream; the two terms are used interchangeably when referring to the input signals u_i. Likewise, 'response' refers to the output EEG signals, and the two terms are used interchangeably when referring to the output signals y_j.
[0224] Model Estimation:
[0225] In the following, mathematical tools that may be used to find a reliable model for attention selection are discussed. The l_1 regularized LS problem can be cast as a convex optimization problem and solved using standard methods for convex optimization, such as CVX [9], [8] or YALMIP [15]. For large-scale l_1 regularized LS problems, special-purpose methods such as PDCO [25] and l1_ls [10] have been developed. We solve the lasso problem (6)′ with ADMM, a fast first-order method, as alluded to previously.
[0226] A. Selection of Regularization Parameter λ.
[0227] The estimated parameter sequence b_{i,j,m} as a function of λ is called the 'regularization path' for problem (6)′. In general, as λ decreases, the model fit improves, but the pay-off is many non-zero elements, and vice versa. A fundamental result of convex analysis states that the l_1-regularized LS estimate converges to zero-valued estimates b_{i,j,m} for some positive value of λ if and only if λ ≥ λ^max, i.e., λ^max can be seen as a threshold above which b_{i,j,m}=0, ∀(i, j, m). Hence, a fraction of λ^max is a sound starting point for determining the 'best' value of λ. λ^max can be expressed as:

∞ > λ^max = ∥U^T Y∥_∞   (7)′

where ∥κ∥_∞ = max_{i,j} |κ_{ij}| denotes the max norm (l_∞) of κ.
[0228] To verify Eq. (7)′, results from convex analysis are used (cf. e.g. [11]). Let

V_N = ½∥Y − UB∥^2 + λ∥B∥_1   (8)′

[0229] The objective function in Eq. (6)′ is convex but not differentiable; therefore, taking the sub-differential of (6)′ with respect to B, we have

∂V_N = [U^T(UB − Y)]_{i,j,m} + λ sign(b_{i,j,m})   (9)′

where sign(·) is defined component-wise as sign(x) = 1 for x > 0, sign(x) = −1 for x < 0, and sign(x) ∈ [−1, 1] for x = 0.

[0230] Next, we note that the sub-differential in Eq. (9)′ is a set. It follows readily from the optimality condition for convex programs, i.e., B is the optimal solution if and only if 0 ∈ ∂V_N, that (U^T Y)_{i,j,m} ∈ [−λ, λ], which yields λ^max = ∥U^T Y∥_∞. A sound choice for λ is a fraction of λ^max, i.e., (0.01-1)·λ^max.
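The threshold λ^max of Eq. (7)′ is straightforward to compute; an illustrative sketch:

def lambda_max(U, Y):
    # Eq. (7)': the smallest lambda for which the l1-regularized LS
    # estimate vanishes identically, lambda_max = ||U^T Y||_inf.
    return float(np.max(np.abs(U.T @ Y)))

# Per the text, a sound choice is a fraction of lambda_max, e.g.:
# lam = 0.05 * lambda_max(U, Y)   # somewhere in (0.01-1) * lambda_max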
[0231] B. Lasso ADMM.
[0232] In ADMM form, the lasso problem given in (6)′ is:

minimize ½∥Y − UB∥^2 + λ∥B̄∥_1 subject to B = B̄   (10)′

[0233] The augmented Lagrangian (AL) blends linear and quadratic terms as:

L_ρ(B, B̄, Δ) = ½∥Y − UB∥^2 + λ∥B̄∥_1 + (ρ/2)∥B − B̄ + Δ∥^2   (11)′
where ρ>0 is a penalty parameter and Δ is a scaled dual variable linked to the constraint B = B̄. In every iteration step it, ADMM minimizes the AL over B and B̄ separately with a single Gauss-Seidel pass. At iteration it, the following steps are carried out:

B^{it+1} = (U^T U + ρI)^{−1} (U^T Y + ρ(B̄^{it} − Δ^{it}))   (12a)′

B̄^{it+1} = S_{λ/ρ}(B^{it+1} + Δ^{it})   (12b)′

Δ^{it+1} = Δ^{it} + B^{it+1} − B̄^{it+1}   (12c)′

where the soft thresholding operator S is defined as S_{λ/ρ}(α) = (α − λ/ρ)_+ − (−α − λ/ρ)_+, the subscript + denoting the positive part, i.e., (x)_+ = max(x, 0). The number of iterations needed for the algorithm to converge is greatly influenced by the selection of the parameter ρ. With properly selected ρ, ADMM can converge to reasonably accurate model estimates within relatively few iteration steps, and ρ can often simply be set to 1, i.e., ρ=1.
[0234] We carry out iterations it=1, 2, . . . in (12a)′-(12c)′ until convergence or until the termination criteria are met. Let

ε^it_prim = B^{it} − B̄^{it}   (13a)′

ε^it_dual = −ρ(B̄^{it} − B̄^{it−1})   (13b)′

be the primal and dual residuals at the it-th iteration. The algorithm is e.g. terminated when these two residuals satisfy the stopping criteria ∥ε^it_prim∥ ≤ ε^prim and ∥ε^it_dual∥ ≤ ε^dual, where ε^prim>0 and ε^dual>0 are feasibility tolerances set as

ε^prim = √(k·n_u) ε^abs + ε^rel max{∥B^{it}∥, ∥B̄^{it}∥}   (14a)′

ε^dual = √(k·n_u) ε^abs + ε^rel ρ∥Δ^{it}∥   (14b)′

and ε^abs and ε^rel are the absolute and relative tolerances. ADMM is discussed in more detail in [4].
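The iterations (12a)′-(12c)′ with the stopping rule (13)′-(14)′ may e.g. be sketched as follows (an illustrative Python/NumPy implementation; the factor-once Cholesky step is a standard efficiency choice and not prescribed by the disclosure):

def lasso_admm(U, Y, lam, rho=1.0, max_it=200, eps_abs=1e-4, eps_rel=1e-3):
    # Lasso ADMM iterations (12a)'-(12c)' with stopping rule (13)'-(14)'.
    # U: (N x p) regressor (Hankel) matrix, Y: (N x n_y) batch of EEG data.
    p, n_y = U.shape[1], Y.shape[1]
    B = np.zeros((p, n_y))     # primal variable
    Bbar = np.zeros((p, n_y))  # split copy of B (the "second B vector")
    D = np.zeros((p, n_y))     # scaled dual variable Delta
    L = np.linalg.cholesky(U.T @ U + rho * np.eye(p))  # factor once
    UtY = U.T @ Y
    soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)  # S_t
    for _ in range(max_it):
        # (12a)': B-update via the cached Cholesky factor
        B = np.linalg.solve(L.T, np.linalg.solve(L, UtY + rho * (Bbar - D)))
        Bbar_old = Bbar
        Bbar = soft(B + D, lam / rho)  # (12b)': soft thresholding
        D = D + B - Bbar               # (12c)': dual update
        r = np.linalg.norm(B - Bbar)               # primal residual (13a)'
        s = rho * np.linalg.norm(Bbar - Bbar_old)  # dual residual (13b)'
        # tolerances (14)'; sqrt(#parameters) plays the role of sqrt(k*n_u)
        eps_pri = np.sqrt(B.size) * eps_abs + eps_rel * max(
            np.linalg.norm(B), np.linalg.norm(Bbar))
        eps_dua = np.sqrt(B.size) * eps_abs + eps_rel * rho * np.linalg.norm(D)
        if r <= eps_pri and s <= eps_dua:
            break
    return Bbar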
[0235] C. Group Lasso ADMM
[0236] The problem we have considered until now is given in (6)′. If the regularizer ∥B∥_1 is replaced with

Σ_{t=1}^{k·n_u} ∥B_t∥_2

the problem takes the form

minimize ½∥Y − UB∥^2 + λ Σ_{t=1}^{k·n_u} ∥B_t∥_2   (15)′

where B = [B_1, B_2, . . . , B_{k·n_u}]^T and B_t denotes the t-th row of B. Problem (15)′ is known as group lasso. It is easy to reformulate (15)′ as:

minimize ½∥Y − UB∥^2 + λ Σ_{t=1}^{k·n_u} ∥E_t B∥_2   (16)′

where E_t is the t-th row of the identity matrix of size k·n_u.
[0237] ADMM for (15)′ and (16)′ is the same as for (10)′, with the soft thresholding step for B̄^{it} replaced by a block soft threshold,

B̄_t^{it+1} = S_{λ/ρ}(B_t^{it+1} + Δ_t^{it}), t=1, 2, . . . , k·n_u   (17)′

where the block soft threshold operator S is defined as S_{λ/ρ}(α) = (1 − λ/(ρ∥α∥_2))_+ α, and the subscript + refers to the positive part of the expression, i.e., (x)_+ = max(x, 0).
[0238] It can be noticed that (16)′ forces entire rows of B to zero, which means that the resulting B is not necessarily sparse within its non-zero rows. Roughly, with a single value λ, it is not always easy to control which rows are actually forced to zero. It may hence be advantageous to use prior knowledge (if such knowledge is available) of probable zero rows, which may be (heuristically) enforced with the following reformulation of (16)′:

minimize ½∥Y − UB∥^2 + Σ_{t=1}^{k·n_u} λ_t ∥E_t B∥_2   (18)′

where significantly larger values of λ_t are given to the probable zero rows.
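The block soft threshold of Eq. (17)′ (and the weighted variant of Eq. (18)′) may e.g. be sketched as follows (illustrative only; the function names are ours):

def block_soft(v, t):
    # Block soft threshold of Eq. (17)': S_t(v) = (1 - t/||v||_2)_+ v,
    # shrinking the whole row v towards zero (or exactly to zero).
    nv = np.linalg.norm(v)
    return np.maximum(1.0 - t / nv, 0.0) * v if nv > 0.0 else v

def group_threshold_step(BD, lam, rho, lam_t=None):
    # Group-lasso variant of step (12b)': apply the block soft threshold
    # row by row to BD = B^{it+1} + Delta^{it}; per-row weights lam_t
    # implement the weighted form of Eq. (18)'.
    BD = np.asarray(BD, dtype=float)
    lams = np.full(BD.shape[0], lam) if lam_t is None else np.asarray(lam_t)
    return np.vstack([block_soft(BD[t], lams[t] / rho) for t in range(len(BD))])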
[0239] EEG Channel and Model Order Selection with MIMO FIR Model
[0240] In the following, the dynamical effects of the brain in relation to the present multi-source acoustic stimulation scenario are discussed. To gain physical insight, the ADMM is applied with the aim of identifying the EEG channels and time delays of the highest relevance. What remains is to decide on a suitable regularization parameter λ so that the model fit is satisfactory while the number of zero elements is kept fairly high; low sparsity is computationally forbidding for larger k values. Once a suitable λ has been selected, the model dynamics can be analysed.
[0241] From a more pragmatic point of view, it is now relatively easy to observe which electrodes are active at which time lags and duration of sound effect(s) on the brain. This knowledge can typically be incorporated into separating out the sound source of interest from the other sources and solving the cocktail party problem.
[0242] An interesting outcome of the sparse data is connectivity. The electrodes are only picking up what is happening on the surface of the brain, but neurons may actually be connected at deeper levels. So sparse events close in time but physically separated may be connected. Thus, with sparsity, we may get insight into deeper levels and how different connections and layers communicate to each other.
[0243] We will use the amount of data N corresponding to all trials for each listener, that is N˜30 trials×60 seconds×64 Hz, as a vehicle for selecting relevant EEG electrodes and time delays for understanding speech representation in the brain.
[0244] CLASTIC: CLassification of Attended speech STream In Cocktail-party Problem.
[0245]-[0246] To (possibly) solve the cocktail party problem is now to perform LS estimation for each FIR SIMO model in (10)′ and see which input signal gives the smallest cost. Put simply, to estimate B_i for the n-th batch, one only needs to compute the cost function at the minimum
V̂_N^i(n) = ½∥Y − U_i B_i∥^2 + λ∥B_i∥_1   (19)′
[0247] Then, the (possible) approach to determine the sound source attended to by the user can be
î = arg min_i V̂_N^i(n)   (20)′
[0248] A related approach to identify the sound source î (denoted Ŝ_i or Ŝ_x in connection with the drawings) attended to by the user may be to use a moving average (MA(p)) of the loss functions. Then

î(n) = arg min_i (1/p) Σ_{m=n−p+1}^{n} V̂_N^i(m)   (21)′

[0249] where the index k is the model order, i.e., the number of time lags considered in the model. What is meant by (21)′ is that we can have a first decision on the attended sound source after p batches. As an example, consider batches of one minute each and let p be 10. In this case we have a first decision after 10 minutes, and decisions are updated every minute afterwards.
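The decision rules (20)′-(21)′ may e.g. be sketched as follows (illustrative only; batch_costs is assumed to collect the per-batch costs of Eq. (19)′):

def clastic_decision(batch_costs, p):
    # batch_costs: (n_batches x n_u) array of regularized costs
    # V-hat_N^i(n) from Eq. (19)'. Decision per (20)'-(21)': arg min of
    # the moving average over the last p batches; no decision until p
    # batches (e.g. 10 one-minute batches) have been observed.
    C = np.asarray(batch_costs, dtype=float)
    if C.shape[0] < p:
        return None
    return int(np.argmin(C[-p:].mean(axis=0)))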
[0250] Model Order k.
[0251] In the following, some guidelines are provided for selecting the model order k (cf. FIR(k)), that is, the number of parameters or time lags, when formulating the model in (2)′. A penalty function can be added to the simple LS criterion to find the true model order k* and avoid over-learning. The intuitive advice is to use the regularized criterion, since the regularization parameter λ can be thought of as a "knob" used to curb the effective number of parameters in the model, without being forced to decide which parameters should vanish, but letting the criterion (8)′ use the time lags that influence the model fit the most. Thus, when the number of parameters k is unknown, λ can be used as the trade-off between model fit and model order. Since l_1 regularization introduces sparsity and adequate freedom is needed to describe the true system so that we can understand how speech is represented within the brain, we evaluate a model of higher order, that is, we set k = 5×F_S = 320 (5 seconds prior to time t, t=1, 2, . . . , N), where F_S is the sampling frequency (F_S = 64 Hz).
[0252] The filters B, B_1 and B_2 are rightly considered spatial filters, mapping the stimulus signal (speech stream) to the response signals (EEG signals) at each time lag.
[0253] Individual Model Selection (N=30 min)
[0254] We first introduce experimental results for individual model selection for all 30 minutes of data for each subject. Regularization parameter λ is selected so that the cost (see Eq. (20)′) for the attended speech stream is smaller when compared to the cost for the unattended speech stream.
[0255] The (estimated) sparse filters B for the FIR MIMO model in Eq. (4)′ give us the filter weights B_{i,j}^{(m)} across the scalp for each individual time lag m. This formulation is suitable for jointly tracking two competing speech streams in the left and right ear. The sparsity introduced with ADMM gives us the "most active" electrodes at each individual time lag, where the "most active" electrodes possibly indicate neurons connected at deeper levels across the higher-order auditory cortex. This gives us deeper insight into the dynamic properties of the human auditory system, and the ability to track the sound within the brain and to see which brain parts are highly excited by the stimuli at each particular point in time. With such an approach, we can better understand how the auditory system extracts intelligible speech features in acoustically complex backgrounds.
[0256] The filter B can (possibly) explain the neural encoding mechanism jointly with the feedbacks, and also the brain's mechanism for solving the cocktail-party problem. When B_{i,j}^{(m)} is analysed separately for each time lag and for each subject, the dynamics can be identified. It can be seen that the electrodes were most active, or most sensitive to the speech streams, up to a FIR filter order of 60, which corresponds to the response fading after approximately 1 second.
[0257] To confirm this impression, we averaged the filters B.sup.s over all subjects, s=1, 2, . . . , 10, so that all non-vanishing (nonzero) filter weights for all subjects can be inspected together. From the “average” filter it can be verified that most of the neural processes occur for both speeches during the first 60 time lags.
[0258] Attentional modulation was also further investigated and quantified for the attended and unattended speech streams separately. A related question is whether the data set Ω allows us to distinguish between the different models given by the filters B_1 and B_2, see Eq. (5)′. Two SIMO FIR models were formulated from the data set Ω for the two different speech streams, and the filters B_1 and B_2 were computed for each subject. We call a data set Ω informative if we can distinguish between the highly structured patterns of the filters B_1 and B_2 [14]. This finding allows us to visualize and gain a deeper understanding of how the different competing speech streams are encoded in the brain. The primary difference between B_1 and B_2 is evident when they are plotted separately for each time lag, which confirms that the data set Ω is informative, i.e. contains relevant information about the dynamics of the speech streams and their differences in highly structured patterns.
[0259] To investigate what properties the sequences of filters {B_1}_t^s and {B_2}_t^s may have, we further examined the average across the subjects for B_1 and B_2 separately, for all time lags t=1, 2, . . . , N and all subjects s=1, 2, . . . , 10. Averaging the filters B_1 and B_2 over time and subjects gives a fair picture of which EEG channels and time lags have been captured in the underlying system.
[0260] The insight of the present disclosure points towards the EEG electrodes being most active in the first second following the stimulus. This may represent the processes in the higher-order auditory cortex. It also confirms that the data set Ω is informative enough to distinguish between the filters B_1 and B_2 and thus to identify the attended speech stream.
[0261] In an embodiment, a filter B is obtained by applying ADMM to a linear support vector machine (SVM) algorithm (Y = U_i·B_i, where U_i are the sound signals and Y the EEG signals).
[0262]-[0271] Further embodiments are illustrated in the accompanying figures, including a time line (bold arrow denoted 'Time') separating successive stages of operation, and a source selection processing unit (SSPU) according to the present disclosure coupled to an input unit (IU); the figure-dependent details of these embodiments are not reproduced here.
[0272] The processed EEG and sound signals (y′, u′) are fed to the alternating direction of multipliers method unit (ADMM), where the data are processed to generate the matrices of relevance for the ADMM procedure (cf. e.g. Eq. (1)-(12) above), which is executed in the unit ADMM Execution. The ADMM Execution unit receives further inputs regarding the order k of the FIR filters, the number of samples N of the input signals, and the sparsity parameter λ (cf. e.g. Eq. (10)-(12) above). The sound source S_x that the user currently pays attention to is determined by applying chosen selection criteria (cf. unit Selection criteria), e.g. cost function(s), to the output of the ADMM Execution unit. The further inputs (k, N, λ) are e.g. derived from learning data (e.g. in a learning mode of operation) or otherwise selected in advance and stored in a memory of the hearing assistance system. The sound source selection unit is (e.g.) configured to provide an estimate of the sound signal U_x (from sound source S_x) that the user currently pays attention to, cf. the (electric) output signal u_x.
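By way of illustration only, the overall source selection may e.g. be sketched by combining the earlier example functions (hankel_regressor, lasso_admm); the wiring and names are assumptions for illustration and do not limit the disclosure:

def select_attended_source(u_list, Y, k, lam):
    # Fit one sparse FIR SIMO model per candidate sound source with lasso
    # ADMM and pick the source with the smallest regularized cost,
    # Eqs. (19)'-(20)'.
    costs = []
    for u in u_list:
        U = hankel_regressor(u, k)   # N x k Hankel regressor for source i
        B = lasso_admm(U, Y, lam)    # sparse k x n_y spatio-temporal filter
        costs.append(0.5 * float(np.sum((Y - U @ B) ** 2))
                     + lam * float(np.abs(B).sum()))
    x_hat = int(np.argmin(costs))    # index of the attended source S_x
    return x_hat, costs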
[0273] It is intended that the structural features of the systems and devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process. Further details of the ideas presented in the present disclosure are given in the article by [Alickovic et al; to be published], which is attached to the present application and intended to constitute an appendix to be consulted for further details, if necessary.
[0274] As used, the singular forms "a," "an," and "the" are intended to include the plural forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise. It will be further understood that the terms "includes," "comprises," "including," and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
[0275] It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
[0276] The claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more.
[0277] Accordingly, the scope should be judged in terms of the claims that follow.
REFERENCES
[0278] [4] Stephen Boyd, Neal Parikh, Eric Chu, Borja Peleato, and Jonathan Eckstein. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn., 3(1):1-122, January 2011.
[0279] [8] Michael Grant and Stephen Boyd. Graph implementations for nonsmooth convex programs. In V. Blondel, S. Boyd, and H. Kimura, editors, Recent Advances in Learning and Control, Lecture Notes in Control and Information Sciences, pages 95-110. Springer-Verlag Limited, 2008.
[0280] [9] Michael Grant and Stephen Boyd. Cvx: Matlab software for disciplined convex programming, version 2.1, March 2014.
[0281] [10] Seung-Jean Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large-scale l1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing, 1(4):606-617, December 2007.
[0282] [11] Seung-Jean Kim, K. Koh, M. Lustig, S. Boyd, and D. Gorinevsky. An interior-point method for large-scale l1-regularized least squares. IEEE Journal of Selected Topics in Signal Processing, 1(4):606-617, December 2007.
[0283] [14] Lennart Ljung. System Identification: Theory for the User. Prentice Hall PTR, Upper Saddle River, N.J. 07458, 2nd edition, 1999.
[0284] [15] J. Löfberg. Yalmip: A toolbox for modeling and optimization in MATLAB. In Proceedings of the CACSD Conference, Taipei, Taiwan, 2004.
[0286] [16] T. Lunner and F. Gustafsson. Hearing device with brainwave dependent audio processing, Apr. 10, 2014. U.S. patent application Ser. No. 14/048,883, published as US20140098981A1.
[0287] [17] T. Lunner and N. H. Pontoppidan. Configurable hearing instrument, Jun. 19, 2014. U.S. patent application Ser. No. 14/103,399, published as US20140169596A1.
[0288] [18] Thomas Lunner. Hearing device with external electrode, Mar. 3, 2015. U.S. Pat. No. 8,971,558.
[0289] [21] James A. O'Sullivan, Alan J. Power, Nima Mesgarani, Siddharth Rajaram, John J. Foxe, Barbara G. Shinn-Cunningham, Malcolm Slaney, Shihab A. Shamma, and Edmund C. Lalor. Attentional selection in a cocktail party environment can be decoded from single-trial eeg. Cerebral Cortex, 25(7):1697-1706, 2015.
[0290] [23] N. H. Pontoppidan, T. Lunner, M.S. Pedersen, L. I. Hauschultz, P. Koch, G. Naylor, and E. B. Petersen. Hearing assistance device with brain computer interface, Dec. 18, 2014. U.S. patent application Ser. No. 14/303,844, published as US20140369537A1.
[0291] [24] Alan J. Power, John J. Foxe, Emma-Jane Forde, Richard B. Reilly, and Edmund C. Lalor. At what time is the cocktail party? a late locus of selective attention to natural speech. European Journal of Neuroscience, 35(9):1497-1503, 2012.
[0292] [25] Michael Saunders. Pdco: Primal-dual interior method for convex objectives, October 2002.
[0293] [26] Robert Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267-288, 1996.
[0294] [Alickovic et al.; to be published] Emina Alickovic, Carina Graversen, Thomas Lunner, Fredrik Gustafsson, A sparse estimation approach to modeling listening attention from EEG signals. To be published.
[0295] [Bell and Sejnowski, 1995] Bell, A. J. and Sejnowski, T. J. An information maximisation approach to blind separation and blind deconvolution. Neural Computation 7(6):1129-1159. 1995.
[0296] [Boldt et al., 2008] Boldt, J. B., Kjems, U., Pedersen, M. S., Lunner, T., and Wang, D. Estimation of the ideal binary mask using directional systems. IWAENC 2008. 2008.
[0297] [Jourjine et al., 2000] Jourjine, A., Rickard, S., and Yilmaz, O. Blind separation of disjoint orthogonal signals: demixing N sources from 2 mixtures. IEEE International Conference on Acoustics, Speech, and Signal Processing. 2000.
[0298] [Roweis, 2001] Roweis, S. T. One Microphone Source Separation. Neural Information Processing Systems (NIPS) 2000, pages 793-799 Edited by Leen, T. K., Dietterich, T. G., and Tresp, V. 2001. Denver, Colo., US, MIT Press.
[0299] [Schaub, 2008] Schaub, A. Digital Hearing Aids. Thieme Medical Publishers, 2008.
[0300] [Pedersen et al., 2008] Pedersen, M. S., Larsen, J., Kjems, U., and Parra, L. C. A survey of convolutive blind source separation methods, Benesty J, Sondhi M M, Huang Y (eds): Springer Handbook of Speech Processing, pp 1065-1094 Springer, 2008.
[0301] [Pedersen et al., 2006] Pedersen, M. S., Wang, D., Larsen, J., and Kjems, U. Separating Underdetermined Convolutive Speech Mixtures. ICA 2006. 2006.