Audio processing device, system, use and method in which one of a plurality of coding schemes for distributing pulses to an electrode array is selected based on characteristics of incoming sound
11264964 · 2022-03-01
Inventors
- Michael Syskind Pedersen (Smørum, DK)
- Gary Jones (Smørum, DK)
- Søren Kamaric Riis (Smørum, DK)
- Karsten Bo Rasmussen (Smørum, DK)
- Julian Skovgaard (Smørum, DK)
Cpc classification (Section H: Electricity)
- H04R2460/03
- H04R2227/007
- H04R25/407
- H04R25/554
- H03G9/025
- H04R2430/03
- H04R2225/43
- H04R2227/009
Abstract
The invention relates to a hearing aid, in particular a cochlear implant, comprising a) at least one input transducer for capturing incoming sound and for generating electric audio signals which represent frequency bands of the incoming sound, b) a sound processor which is configured to analyze and to process the electric audio signals, c) a transmitter that sends the processed electric audio signals, d) a receiver/stimulator, which receives the processed electric audio signals from the transmitter and converts the processed electric audio signals into electric pulses, e) an electrode array embedded in the cochlea comprising a number of electrodes for stimulating the cochlear nerve with said electric pulses, and f) a control unit configured to control the distribution of said electric pulses to the number of said electrodes. The control unit is configured to distribute said electric pulses to the number of said electrodes by applying one out of a plurality of different coding schemes, and the applied coding scheme is selected according to characteristics of the incoming sound.
Claims
1. A cochlear implant comprising at least one input transducer for capturing incoming sound and for generating electric audio signals which represent frequency bands of the incoming sound, a sound processor which is configured to analyze and to process the electric audio signals, a transmitter that sends the processed electric audio signals, a receiver/stimulator, which receives the processed electric audio signals from the transmitter and converts the processed electric audio signals into electric pulses, an electrode array embedded in the cochlea comprising a number of electrodes for stimulating the cochlear nerve with said electric pulses, and a control unit configured to control the distribution of said electric pulses to the number of said electrodes, wherein the control unit is configured to distribute said electric pulses to the number of said electrodes by applying one out of a plurality of different coding schemes, and wherein the applied coding scheme is selected according to characteristics of the incoming sound, wherein the cochlear implant is configured to apply a stimuli-specific coding scheme for listening to music, and wherein the coding scheme for listening to music is configured such that high frequency channels convey rhythm and low frequency channels resolve tonal information.
2. A cochlear implant according to claim 1, wherein the sound processor is configured to analyze the characteristics of the incoming sound.
3. A cochlear implant according to claim 1, wherein the distribution of said electric pulses to the number of said electrodes is performed according to a specific hearing situation.
4. A cochlear implant according to claim 1, configured to increase a stimulation rate in case that not all frequencies need to be stimulated.
5. A cochlear implant according to claim 1 wherein the control unit is configured to distribute the electric pulses to the number of electrodes according to a coding scheme for a telephone conversation and/or according to a coding scheme for listening to music and/or according to further coding schemes.
6. A cochlear implant according to claim 1 wherein the sound processor in the cochlear implant is configured to analyze the electric audio signals which represent frequency bands of the incoming sound with respect to an information content and to process only frequency bands that contain meaningful information such that a smaller number of electrodes than the total number of electrodes available is used for stimulating the cochlear nerve.
7. A cochlear implant according to claim 1 configured to activate a power saving mode in which the incoming sound is analyzed by the sound processor and only frequency bands of the incoming sound that contain meaningful information are transmitted to the electrodes.
8. A cochlear implant according to claim 1 wherein some channels of the cochlear implant can be turned off depending on an input channel.
9. A cochlear implant according to claim 1 wherein a special power saving mode can be activated, in which the acoustic input signal is analysed and only frequency bands that contain a certain information content are delivered to the electrodes.
10. A cochlear implant according to claim 1 wherein the entering of the cochlear implant into a power saving mode is dependent on a user's interaction or reaction to an incoming sound to the one or more microphones, such as head movement or a reply captured by the transducer(s).
11. A cochlear implant according to claim 1 wherein the control unit is configured to control the distribution of electric pulses to the number of electrodes such that electric pulses are delivered to alternating electrodes in order to reduce frequency channel interactions.
12. A cochlear implant according to claim 1 wherein at least one wall channel is provided to reduce channel interactions, wherein the wall channel is a channel in which no signal is presented and which is adjacent to the edge of a channel in which a signal is presented.
13. A cochlear implant according to claim 1 comprising an external part and an implanted part.
14. A cochlear implant comprising at least one input transducer for capturing incoming sound and for generating electric audio signals which represent frequency bands of the incoming sound, a sound processor which is configured to analyze and to process the electric audio signals, a transmitter that sends the processed electric audio signals, a receiver/stimulator, which receives the processed electric audio signals from the transmitter and converts the processed electric audio signals into electric pulses, an electrode array embedded in the cochlea comprising a number of electrodes for stimulating the cochlear nerve with said electric pulses, and a control unit configured to control the distribution of said electric pulses to the number of said electrodes, wherein the control unit is configured to distribute said electric pulses to the number of said electrodes by applying one out of a plurality of different coding schemes, and wherein the applied coding scheme is selected according to characteristics of the incoming sound, wherein some channels of the cochlear implant can be turned off depending on an input channel, wherein a power saving mode is configured to use only 1 or 2 broad frequency bands which in case that the incoming sound is above a predefined amplitude threshold are transmitted to 1 or 2 electrodes to convey a modulation for sound awareness.
15. A cochlear implant according to claim 14, configured to apply a stimuli-specific coding scheme for listening to music.
16. A cochlear implant comprising at least one input transducer for capturing incoming sound and for generating electric audio signals which represent frequency bands of the incoming sound, a sound processor which is configured to analyze and to process the electric audio signals, a transmitter that sends the processed electric audio signals, a receiver/stimulator, which receives the processed electric audio signals from the transmitter and converts the processed electric audio signals into electric pulses, an electrode array embedded in the cochlea comprising a number of electrodes for stimulating the cochlear nerve with said electric pulses, and a control unit configured to control the distribution of said electric pulses to the number of said electrodes, wherein the control unit is configured to distribute said electric pulses to the number of said electrodes by applying one out of a plurality of different coding schemes, and wherein the applied coding scheme is selected according to characteristics of the incoming sound, wherein at least one wall channel is provided to reduce channel interactions, wherein the wall channel is a channel in which no signal is presented and which is adjacent to the edge of a channel in which a signal is presented, wherein a wall channel stimulus within the wall channel is a low-level pulse, preferably a sub-threshold pulse or a supra-threshold pulse.
17. A cochlear implant system comprising two or more cochlear implants according to claim 1, wherein the cochlear implants are adapted for exchanging information about the applied coding scheme.
18. A cochlear implant system according to claim 17 wherein the exchange of information is provided via a wireless communication link.
19. A cochlear implant system according to claim 17 configured to provide that the same coding scheme is applied in both cochlear implants of a binaural system by exchanging synchronizing control signals between the two cochlear implants.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) The objects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they show only the details necessary for an understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each object may each be combined with any or all features of the other objects. These and other objects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter.
DETAILED DESCRIPTION
(14) The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several objects of the hearing device system and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as "elements"). Depending upon the particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer programs, or any combination thereof.
(15) A hearing device may include a hearing aid that is adapted to improve or augment the hearing capability of a user by receiving an acoustic signal from a user's surroundings, generating a corresponding audio signal, possibly modifying the audio signal and providing the possibly modified audio signal as an audible signal to at least one of the user's ears. The "hearing device" may further refer to a device such as an earphone or a headset adapted to receive an audio signal electronically, possibly modifying the audio signal and providing the possibly modified audio signal as an audible signal to at least one of the user's ears. Such audible signals may be provided in the form of an acoustic signal radiated into the user's outer ear, an acoustic signal transferred as mechanical vibrations to the user's inner ears through the bone structure of the user's head and/or through parts of the middle ear of the user, or electric signals transferred directly or indirectly to the cochlear nerve and/or to the auditory cortex of the user.
(16) The hearing device is adapted to be worn in any known way. This may include i) arranging a unit of the hearing device behind the ear with a tube leading air-borne acoustic signals or with a receiver/loudspeaker arranged close to or in the ear canal, such as in a Behind-the-Ear type hearing aid or a Receiver-in-the-Ear type hearing aid, and/or ii) arranging the hearing device entirely or partly in the pinna and/or in the ear canal of the user, such as in an In-the-Ear type hearing aid or an In-the-Canal/Completely-in-Canal type hearing aid, or iii) arranging a unit of the hearing device attached to a fixture implanted into the skull bone, such as in a Bone Anchored Hearing Aid or a Cochlear Implant, or iv) arranging a unit of the hearing device as an entirely or partly implanted unit, such as in a Bone Anchored Hearing Aid or a Cochlear Implant.
(17) A hearing device may be part of a "hearing system", which refers to a system comprising one or two hearing devices as disclosed in the present description, and a "binaural hearing system" refers to a system comprising two hearing devices where the devices are adapted to cooperatively provide audible signals to both of the user's ears. The hearing system or binaural hearing system may further include auxiliary device(s) that communicate with at least one hearing device, the auxiliary device affecting the operation of the hearing devices and/or benefitting from the functioning of the hearing devices. A wired or wireless communication link between the at least one hearing device and the auxiliary device is established that allows for exchanging information (e.g. control and status signals, possibly audio signals) between the at least one hearing device and the auxiliary device. Such auxiliary devices may include at least one of remote controls, remote microphones, audio gateway devices, mobile phones, public-address systems, car audio systems or music players, or a combination thereof. The audio gateway is adapted to receive a multitude of audio signals, such as from an entertainment device like a TV or a music player, a telephone apparatus like a mobile telephone, or a computer such as a PC. The audio gateway is further adapted to select and/or combine an appropriate one of the received audio signals (or a combination of signals) for transmission to the at least one hearing device. The remote control is adapted to control functionality and operation of the at least one hearing device. The function of the remote control may be implemented in a SmartPhone or other electronic device, the SmartPhone/electronic device possibly running an application that controls functionality of the at least one hearing device.
(18) In general, a hearing device includes i) an input section such as a microphone for receiving an acoustic signal from a user's surroundings and providing a corresponding input audio signal, and/or ii) a receiving unit for electronically receiving an input audio signal. The hearing device further includes a signal processing unit for processing the input audio signal and an output unit for providing an audible signal to the user in dependence on the processed audio signal.
(19) The input section may include multiple input microphones, e.g. for providing direction-dependent audio signal processing. Such a directional microphone system is adapted to enhance a target acoustic source among a multitude of acoustic sources in the user's environment. In one object, the directional system is adapted to detect (such as adaptively detect) from which direction a particular part of the microphone signal originates. This may be achieved by using conventionally known methods. The signal processing unit may include an amplifier that is adapted to apply a frequency dependent gain to the input audio signal. The signal processing unit may further be adapted to provide other relevant functionality such as compression, noise reduction, etc. The output unit may include an output transducer such as a loudspeaker/receiver for providing an air-borne acoustic signal, or a vibrator for providing a structure-borne or liquid-borne acoustic signal transcutaneously or percutaneously to the skull bone. In some hearing devices, the output unit may include one or more output electrodes for providing electric signals, such as in a Cochlear Implant.
(20) It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an object” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various objects described herein. Various modifications to these objects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other objects.
(21) The claims are not intended to be limited to the objects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more." Unless specifically stated otherwise, the term "some" refers to one or more.
(22) Accordingly, the scope should be judged in terms of the claims that follow.
(24) The hearing aid device further comprises a first analog-to-digital converter 20 for converting the first electrical audio input signal 11 into a first time-domain input signal 21 and a second analog-to-digital converter 120 for converting the second electrical audio input signal 111 into a second time-domain input signal 121. The first 21 and second 121 time-domain signals are subsequently delivered to a digital signal processing unit 90A. The digital signal processing unit 90A comprises a first input unit 30 and a second input unit 130. The first input unit 30 is configured to convert the first time-domain input signal 21 to a number N.sub.I,1 of first input frequency bands 31. Thereby, the number N.sub.I,1 of first input frequency bands 31 is determined by a first analysis filter bank that is comprised in the first input unit 30. The second input unit 130 is configured to convert the second time-domain input signal 121 to a number N.sub.I,2 of second input frequency bands 131. Thereby, the number N.sub.I,2 of second input frequency bands 131 is determined by a second analysis filter bank that is comprised in the second input unit 130.
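As a rough illustration of the analysis filter banks in the first and second input units (30, 130), the following Python sketch splits a time-domain frame into uniform frequency bands using plain DFT bins. The function name, the frame layout and the one-bin-per-band simplification are assumptions for illustration, not the filter bank of the disclosure.

```python
import cmath

def analysis_filter_bank(frame, n_bands):
    """Split a time-domain frame into n_bands uniform frequency bands.

    Simplification: each "input frequency band" is a single complex DFT
    bin of the frame (a real analysis filter bank would use windowed,
    overlapping filters).
    """
    n = len(frame)
    bands = []
    for k in range(n_bands):
        # k-th DFT bin acts as the k-th input frequency band
        coeff = sum(frame[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n))
        bands.append(coeff)
    return bands

# A 16-sample frame with a 4-sample period concentrates energy in band 4.
frame = [1.0, 0.0, -1.0, 0.0] * 4
bands = analysis_filter_bank(frame, 8)
```

A band's magnitude then indicates how much signal energy falls into that frequency region, which is what the later bundling and processing steps operate on.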
(25) The hearing aid device further comprises first and second frequency band bundling and allocation units 40, 140. The first frequency band bundling and allocation unit 40 is configured to bundle adjacent first input frequency bands 31 and to allocate first frequency bands to be processed 41 to a number N.sub.P,1 of first processing channels 51. The second frequency band bundling and allocation unit 140 is configured to bundle adjacent second input frequency bands 131 and to allocate second frequency bands to be processed 141 to a number N.sub.P,2 of second processing channels 151.
(26) The bundling of first input frequency bands 31 and second input frequency bands 131 can be based on a first bundling scheme and a second bundling scheme that are created based on data stored in the memory 200. The data indicate which of the first N.sub.I,1 input frequency bands 31 and which of the second N.sub.I,2 input frequency bands 131 are subject to a likelihood of feedback that is above a predefined threshold.
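A minimal sketch of such a bundling scheme, assuming the memory supplies a per-band feedback likelihood: bands above the threshold keep their own processing channel, while runs of adjacent low-likelihood bands are bundled into a shared channel. The function name and threshold semantics are illustrative assumptions.

```python
def make_bundling_scheme(feedback_likelihood, threshold):
    """Group adjacent input frequency bands into processing channels.

    Bands whose feedback likelihood exceeds `threshold` are kept in their
    own channel (fine frequency resolution); adjacent low-likelihood bands
    are bundled into one channel (coarse resolution, less computation).
    Returns a list of channels, each a list of input band indices.
    """
    channels = []
    bundle = []
    for band, likelihood in enumerate(feedback_likelihood):
        if likelihood > threshold:
            if bundle:                  # close any open low-likelihood bundle
                channels.append(bundle)
                bundle = []
            channels.append([band])     # high-risk band: dedicated channel
        else:
            bundle.append(band)
    if bundle:
        channels.append(bundle)
    return channels

# Six input bands collapse into four processing channels.
scheme = make_bundling_scheme([0.1, 0.2, 0.9, 0.8, 0.1, 0.1], threshold=0.5)
```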
(27) The first frequency bands to be processed 41 and the second frequency bands to be processed 141 are delivered to a signal processing unit 50. The signal processing unit 50 is configured to process the first frequency bands to be processed 41 in the number N.sub.P,1 of first processing channels 51 and to process the second frequency bands to be processed 141 in the number N.sub.P,2 of second processing channels 151. Here it is preferred that the number N.sub.P,1 of first processing channels 51 is smaller than the number N.sub.I,1 of first input frequency bands 31, and that the number N.sub.P,2 of second processing channels 151 is smaller than the number N.sub.I,2 of second input frequency bands 131. The number of first and second processing channels, N.sub.P,1, N.sub.P,2, may be equal or different. Processing the input frequency bands in a smaller number of processing channels has the advantage that the computational effort can be reduced. A reduced computational effort in turn reduces the power consumption of the hearing aid device, or allows the limited number of N.sub.P processing channels to be used in the most efficient way.
(28) The hearing aid device 100 further comprises a first frequency band redistribution unit 60 and a second frequency band redistribution unit 160. The first frequency band redistribution unit 60 is configured to redistribute the N.sub.P,1 processing channels 51 to a number N.sub.O,1 of first output frequency bands 61 and the second frequency band redistribution unit 160 is configured to redistribute the N.sub.P,2 processing channels 151 to a number N.sub.O,2 of second output frequency bands 161. Thereby, the number N.sub.O,1 of first output frequency bands 61 can be larger than the number N.sub.P,1 of first processing channels 51 and the number N.sub.O,2 of second output frequency bands 161 can be larger than the number N.sub.P,2 of second processing channels. The number of first and second output frequency bands, N.sub.O,1, N.sub.O,2, may be equal or different.
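The redistribution step can be sketched as follows, under the assumption that each processing channel is described by the list of input band indices it bundles: the gain computed for a channel is copied to every output band the channel covers, so the output regains the finer resolution. Names and data layout are illustrative.

```python
def redistribute(channel_gains, channels, n_output_bands):
    """Map per-channel gains back onto individual output frequency bands.

    `channels` lists, per processing channel, the band indices it bundles;
    every band in a channel inherits that channel's gain, so the output
    regains the resolution that the bundling step gave up.
    """
    band_gains = [0.0] * n_output_bands
    for gain, bands in zip(channel_gains, channels):
        for band in bands:
            band_gains[band] = gain
    return band_gains

# Two processing channels expand to three output bands.
gains = redistribute([2.0, 0.5], [[0, 1], [2]], 3)
```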
(29) The first output frequency bands 61 and the second output frequency bands 161 are delivered to a signal combination unit 90B, where the first and second frequency bands are combined on a frequency band level (e.g. by forming a (possibly weighted) sum of the first and second output frequency bands), converted (e.g. by a synthesis filter bank) to a digital audio output signal 91 in the time domain, and delivered to a digital-to-analog converter 70.
(30) In an embodiment, the signal combination unit 90B comprises a beamformer filtering unit, and/or a synthesis filter bank providing a resulting spatially filtered signal by applying (possibly) complex (frequency dependent) beamformer weights to the respective first and second electric audio signals. The beamformer filtering unit may e.g. be configured to provide a beamformer that is minimally sensitive in a direction towards the origin of feedback (the speaker) in frequency regions where feedback is likely to occur (using a higher frequency resolution in this frequency region according to the present disclosure) and to (e.g. adaptively) minimize (other) noise in other frequency regions. Alternatively, all frequency bands may be directed to feedback cancellation (e.g. always, or in situations where feedback is estimated to be present, e.g. severe). In an embodiment, the beamformer filtering unit may be configured to cancel feedback (echo) in a low frequency region, e.g. below 1 kHz (e.g. in a specific echo cancelling mode, e.g. in a telephone mode, where sound is picked up by the hearing device and transmitted to a far end listener and where sound from the far end listener is received by the hearing device).
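The per-band beamforming can be sketched as a weighted combination of the two microphones' band signals; a weight pair like (1, -1) cancels a component that arrives identically at both microphones, mimicking a null steered toward a feedback source. The weight values below are assumptions for illustration, not computed beamformer weights.

```python
def beamform(mic1_bands, mic2_bands, weights):
    """Apply per-band complex beamformer weights to two microphone signals.

    weights[k] = (w1, w2) combines band k of both microphones; choosing
    the pair per band trades noise reduction against feedback suppression.
    """
    return [w1 * a + w2 * b
            for (w1, w2), a, b in zip(weights, mic1_bands, mic2_bands)]

# Band 0: (1, -1) nulls a component common to both mics (e.g. feedback).
# Band 1: (0.5, 0.5) simply averages the two microphones.
out = beamform([1 + 0j, 2 + 0j], [1 + 0j, 0 + 0j], [(1, -1), (0.5, 0.5)])
```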
(31) Using a digital-to-analog converter 70, the digital audio output signal 91 is converted into an (analog) electrical audio output signal 71 that is delivered to a speaker 80. The speaker 80 is configured to transmit an acoustic output signal 81 that is based on the electrical audio output signal 71 into an ear of a user of the hearing aid device 100.
(32) In a preferred embodiment of figure 1 that is not shown, units of the same kind, such as a first and a second input unit, are comprised in a single unit having the same functionality as the two separate units. In a preferred embodiment of figure 1 that is not shown, a number of units with different functionality, such as e.g. an input unit and an analog-to-digital converter, can be comprised in the same unit that performs the functionality of the comprised individual units. In an alternative embodiment of figure 1 that is not shown, only one microphone is comprised, such that only either the upper branch or the lower branch shown in figure 1 is present.
(33) As stated above, the memory unit is configured to store data indicating which of the first N.sub.I,1 input frequency bands and second N.sub.I,2 input frequency bands are subject to a likelihood of feedback that is above a predefined threshold. Moreover, the likelihood of feedback is stored in a first and a second bundling scheme, which can be a two-dimensional matrix indicating whether a first and/or a second input frequency band shall be bundled or not. This makes it possible to implement a bundling scheme in which the frequency resolution in frequency regions comprising frequency bands with a high likelihood of feedback is higher than in frequency regions that comprise frequency bands with a smaller likelihood of feedback. If the frequency resolution in such regions is high, the feedback in the respective frequency bands can be reduced or counteracted very efficiently, because the respective frequency bands can be selected and processed individually and a filter can be applied exclusively to these frequency bands. Moreover, frequency bands with a small likelihood of feedback can be bundled, such that the computational effort and thus the power consumption of the hearing aid can be reduced.
(34) The likelihood of feedback to occur in at least one of the first and/or second frequency bands can be determined by a feedback detection unit 250. The feedback detection unit 250 detects the likelihood of feedback by e.g. dynamically tracking changes in the feedback path 251.
(37) The signal processing as described above can also be implemented in a hearing aid implant such as a cochlear implant. In this case, the processed signal would not be converted to an acoustic output signal that is emitted by a speaker, but a processed electric audio signal could be converted into electric pulses. Then, an electrode array comprising a number of electrodes which are embedded in the cochlea of a user could be used for stimulating the cochlear nerve with said electric pulses.
(40) In the embodiment shown, the number N.sub.I of input frequency bands and the number N.sub.O of output frequency bands is identical, as indicated by the arrow 35. Consequently, the initial frequency resolution is restored after processing the signal in a smaller number of processing channels N.sub.P. The acoustic output signal 81 provided by the speaker 80 comprises a 'summation' of the resulting frequency sub-band signals determined from the contents of the N.sub.P processing channels (filter coefficients 53 (Wp)) subject to a frequency band redistribution unit (cf. unit 60 (or 160)).
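The 'summation' of the resulting sub-band signals can be sketched as follows, assuming each processing channel delivers a time-domain frame and a filter coefficient Wp: frames are scaled by their coefficients and summed sample by sample. The data layout and names are illustrative assumptions.

```python
def apply_and_sum(band_frames, coefficients):
    """Scale each band's time-domain frame by its coefficient Wp and sum.

    Mimics forming the acoustic output as a weighted summation of the
    sub-band signals determined from the processing channels.
    """
    n_samples = len(band_frames[0])
    out = [0.0] * n_samples
    for frame, w in zip(band_frames, coefficients):
        for t in range(n_samples):
            out[t] += w * frame[t]
    return out

# Two bands of two samples each, with coefficients 1.0 and 0.5.
output = apply_and_sum([[1.0, 1.0], [2.0, 2.0]], [1.0, 0.5])
```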
(41) The signal processing as described above can also be implemented in a hearing aid implant such as a cochlear implant. In this case, the processed signal would not be converted to an acoustic output signal that is emitted by a speaker, but a processed electric audio signal could be converted into electric pulses. Then, an electrode array comprising a number of electrodes which are embedded in the cochlea of a user could be used for stimulating the cochlear nerve with said electric pulses. In this case, the individual band signals (e.g. N.sub.P channel signals Wp1*No1, Wp2*No2, Wp3*No3, . . . , or No redistributed output band signals) could each be presented to a different one of the electrodes of the electrode array.
(43) The number N.sub.P,1 of first processing channels 51 and the number N.sub.P,2 of second processing channels 151 are processed in the signal processing unit 50. Processing in the signal processing unit 50 can include the determination of a set of first filter coefficients (W1p) 54 for each of the N.sub.I,1 first input frequency bands and the determination of a set of second filter coefficients (W2p) 55 for each of the N.sub.I,2 second input frequency bands based on e.g. a likelihood of feedback in at least one of the first and second input frequency bands. After signal processing, the N.sub.P,1 first processing channels and the N.sub.P,2 second processing channels are redistributed to a number N.sub.O,1 of first output frequency bands and to a number N.sub.O,2 of second output frequency bands, respectively (cf. units N.sub.P,1→N.sub.O,1 and N.sub.P,2→N.sub.O,2). Each of the number N.sub.O,1 of first output frequency bands and the number N.sub.O,2 of second output frequency bands can be multiplied by an individual (possibly complex) filter coefficient that is determined by the signal processing unit 50. This allows suppressing feedback in frequency bands comprising a high likelihood of feedback (beamforming, cf. unit WS).
(44) The first filter coefficients of the first set of filter coefficients (W1p) and the second filter coefficients of the second set of filter coefficients (W2p) may comprise a real part and an imaginary part. The real and imaginary parts of the first and second filter coefficients can be determined such that the likelihood of feedback to occur is minimised and such that the impact on the part of the acoustic output signal which does not comprise feedback is minimal (e.g. using beamforming techniques). Moreover, the acoustic output signal 81 comprises a (possibly weighted) summation of the respective first filter coefficients, each multiplied by the respective one of the first N.sub.O,1 output frequency bands, and the second filter coefficients, each multiplied by the respective one of the second N.sub.O,2 output frequency bands. The output frequency bands may be received (35A and 35B) from the first input frequency bands 31 and the second input frequency bands 131, respectively. The resulting output frequency bands 61 may be translated to the time domain (signal 71) by a synthesis filter bank FBS (and possibly converted to an analog signal by a DA converter) before presentation to the speaker 80.
(45) The filter coefficients could have different purposes depending on the amount of feedback in the feedback path 251: in frequency bands with a high risk of feedback, the coefficients are adapted towards minimizing feedback. In bands where the risk of feedback is small (e.g. depending on a feedback path measurement, e.g. at low frequencies), the coefficients could be adapted towards minimizing external noise. In certain application scenarios involving large delays from output to input, echo can appear at relatively low frequencies. In such cases, the coefficients may be used to minimize echo in low frequency bands, e.g. below 1.5 kHz or below 1 kHz.
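The band-dependent choice of adaptation target described above can be sketched as a simple decision rule. The 0.5 risk threshold and the 1000 Hz echo corner frequency are illustrative assumptions (the text mentions e.g. 1 kHz or 1.5 kHz as possible corners).

```python
def adaptation_target(band_center_hz, feedback_risk, echo_mode=False):
    """Decide what the filter coefficient of a band should minimise.

    High feedback risk -> minimise feedback; low-frequency bands in an
    echo scenario (large output-to-input delay, e.g. telephone mode) ->
    minimise echo; otherwise -> minimise external noise.
    """
    if feedback_risk > 0.5:
        return "feedback"
    if echo_mode and band_center_hz < 1000:
        return "echo"
    return "noise"
```

For example, a 4 kHz band with high measured feedback risk would adapt towards feedback suppression, while a 500 Hz band in telephone mode would adapt towards echo minimisation.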
(46) The signal processing as described above can also be implemented in a hearing aid implant such as a cochlear implant. In this case, the processed signal would not be converted to an acoustic output signal that is emitted by a speaker, but a processed electric audio signal could be converted into electric pulses. Then, an electrode array comprising a number of electrodes which are embedded in the cochlea of a user could be used for stimulating the cochlear nerve with said electric pulses (each e.g. representing contents of a different output channel or band).
(48) At least one of the microphones (10, 110) may be used as a reference microphone for estimating feedback.
(51) The signal processing as described above can also be implemented in a hearing aid implant such as a cochlear implant. Then the bundling of frequency bands could be applied to the distribution of electric pulses to a number of electrodes. The distribution of electric pulses could e.g. be performed by applying one out of a plurality of different coding schemes, and the applied coding scheme could be selected according to characteristics of an incoming sound.
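The selection among coding schemes can be sketched as a classifier-driven switch, in line with the claimed schemes (a music scheme, a telephone scheme, a power-saving scheme for low-level input, and a default). The classification inputs and the -40 dB threshold are assumptions for illustration.

```python
def select_coding_scheme(is_music, is_telephone, input_level_db):
    """Pick an electrode coding scheme from characteristics of the sound.

    power_saving: only 1-2 broad bands are stimulated for sound awareness;
    music: low frequency channels resolve tonal information while high
    frequency channels convey rhythm; telephone: speech-oriented
    distribution for a narrowband source.
    """
    if input_level_db < -40:
        return "power_saving"
    if is_music:
        return "music"
    if is_telephone:
        return "telephone"
    return "default"

scheme = select_coding_scheme(is_music=True, is_telephone=False,
                              input_level_db=-10)
```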
(53) After processing (of each of the first and second microphone signals) in a smaller number of processing channels, the processed frequency channels are redistributed to a number of output frequency bands, which may equal the initial number of input frequency bands. During processing, filter coefficients (e.g. respective channel-specific values) are determined and subsequently applied to each of the input frequency bands of the first and second microphone signals (cf. dashed arrow from input bands to multiplication units of each redistributed band). In the respective multiplication units, the determined filter coefficients for each frequency band of the first and second microphone signals are multiplied with the contents of the corresponding input frequency bands of the respective first and second microphone signals to provide first and second output frequency bands. The unit denoted ‘+’ represents a combination of the first and second output frequency bands. The unit ‘+’ may e.g. implement a weighted sum of the first and second output bands, e.g. to implement frequency-band-specific beam patterns. Subsequently, the resulting frequency sub-bands are processed via a synthesis filter bank in order to obtain a modified time-domain signal.
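The bundle-process-redistribute flow for one microphone signal can be sketched as follows (grouping, averaging as the bundling rule, and all names are illustrative assumptions):

```python
import numpy as np

def bundle(bands, groups):
    """Bundle fine input frequency bands into fewer processing
    channels; here each channel is the mean of its member bands."""
    return np.array([bands[idx].mean() for idx in groups])

def redistribute(channel_gains, groups, n_bands):
    """Redistribute per-channel filter coefficients back onto the
    original band grid: every input band gets its channel's gain."""
    gains = np.zeros(n_bands, dtype=complex)
    for gain, idx in zip(channel_gains, groups):
        gains[idx] = gain
    return gains

# 8 input frequency bands bundled into 3 processing channels.
groups = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5, 6, 7])]
bands = np.arange(8, dtype=complex)
channels = bundle(bands, groups)                       # coarse channels
gains = redistribute(np.array([1.0, 0.5, 0.0]), groups, 8)
output_bands = gains * bands                           # per-band multiplication
```

In the two-microphone case, this would be done per microphone and the two sets of output bands combined in the ‘+’ unit before the synthesis filter bank.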
(56) In general, the directional processing in different frequency bands could be prioritized either towards cancelling noise and improving speech intelligibility, or towards cancelling feedback in frequency regions where only a small speech intelligibility improvement is achieved. Such a prioritization could be based on the measured feedback path and the speech intelligibility band importance index. In order to minimize power consumption, the bundling of frequency bands can be optimized to use as few processing channels as necessary to maintain a sufficient frequency resolution for conveying the information contained in the signal in an adequate manner.
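A per-band prioritization combining a measured feedback path with a band importance index could look like the sketch below. The decision rule and both thresholds are illustrative assumptions:

```python
def prioritize_bands(feedback_gain_db, importance):
    """For each band, decide whether directional processing should
    target noise reduction (when the band matters for speech
    intelligibility and feedback risk is low) or feedback
    cancellation (when the band contributes little intelligibility
    or carries high feedback risk). Thresholds are illustrative.
    """
    priorities = []
    for fb_db, imp in zip(feedback_gain_db, importance):
        if imp > 0.05 and fb_db < -10:
            priorities.append("noise_reduction")
        else:
            priorities.append("feedback_cancellation")
    return priorities

# Low/mid bands carry most intelligibility; the high band risks feedback.
priorities = prioritize_bands(feedback_gain_db=[-30, -25, -5],
                              importance=[0.4, 0.3, 0.01])
```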
(57) In the low and medium frequencies (indicated by curly bracket) 400, directional processing used for noise reduction improves speech intelligibility significantly. Moreover, in the low frequency regions, typically below 1000 Hz, feedback is not likely to occur. In the higher frequency region (indicated by curly bracket) 440, which contributes only little to the overall speech intelligibility, it can be reasonable to prioritize the directional processing in those frequency regions towards cancelling the feedback path.
(58) In a binaural hearing aid system, the bundling scheme may be the same for both the left and the right hearing aid. As a consequence, the bundling scheme then depends on the feedback paths measured at both hearing aids. In another example, the bundling scheme may be different in the left and the right hearing aid. In yet another example, the bundling scheme is partly the same at the left and the right hearing aid, e.g. the same within one frequency range and different within another frequency range.
(59) The prioritization scheme as described above can also be implemented in a hearing aid implant such as a cochlear implant. The prioritization of frequency bands could then be applied to the distribution of electric pulses to a number of said electrodes.