Binaural hearing system comprising bilateral compression
11653153 · 2023-05-16
Assignee
Inventors
- Tobias Piechowiak (Hedehusene, DK)
- Antonie Johannes Hendrikse (Ballerup, DK)
- Changxue Ma (Barrington, IL)
CPC classification
H04S2420/01
ELECTRICITY
H04R2225/0216
H04R25/43
International classification
Abstract
The present disclosure relates to a method of performing bilateral dynamic range compression of first and second microphone signals generated by first and second hearing devices, respectively, of a binaural hearing system. The method comprises picking up sound pressure inside an ear canal of the user's left or right ear by a first microphone to generate a first microphone signal in response to incoming sound, and picking up sound pressure inside an ear canal of the user's opposite ear by a second microphone to generate a second microphone signal in response to the incoming sound.
Claims
1. A method performed by a binaural hearing system having a first hearing device and a second hearing device, the method comprising: generating a first microphone signal by a first microphone of the first hearing device based on sound pressure associated with a first ear of a user; generating a second microphone signal by a second microphone of the second hearing device based on sound pressure associated with a second ear of the user; transmitting contralateral audio data representative of the second microphone signal to the first hearing device via a wireless communication link; estimating, by a first processing unit at the first hearing device, a first interaural level difference based on the first microphone signal and the contralateral audio data; determining a first gain for the first microphone signal based on a level of the first microphone signal in accordance with a first level-versus-gain characteristic; determining a second gain for the second microphone signal based on a level of the second microphone signal in accordance with a second level-versus-gain characteristic; adjusting the first gain based on the first interaural level difference; and applying the adjusted first gain to the first microphone signal to generate a first output signal.
2. The method according to claim 1, further comprising: transmitting another contralateral audio data representative of the first microphone signal to the second hearing device via the wireless communication link; estimating, by a second processing unit, a second interaural level difference based on the second microphone signal and the other contralateral audio data; adjusting the second gain based on the second interaural level difference; and applying the adjusted second gain to the second microphone signal.
3. The method according to claim 2, wherein the adjusted second gain is applied to the second microphone signal to preserve the second interaural level difference.
4. The method according to claim 2, further comprising: comparing, by the first processing unit or by the second processing unit, the first microphone signal and the second microphone signal to determine which of the first and second hearing devices is subjected to a lower level of incoming sound; and reducing a gain of one of the first hearing device and the second hearing device that is subjected to the lower level of incoming sound, to preserve the first or second interaural level difference.
5. The method according to claim 4, further comprising: increasing a gain of the other one of the first hearing device and the second hearing device that is subjected to a higher level of the incoming sound, to preserve the first or second interaural level difference.
6. The method according to claim 1, wherein the contralateral audio data representing the second microphone signal comprises a power level, an energy level, a native digital audio representation, or any combination of the foregoing, associated with the second microphone signal.
7. The method according to claim 1, wherein the contralateral audio data comprises a perceptually encoded signal selected from the group consisting of MP3, FLAC, AAC, Vorbis, MA4, Opus, and G722.
8. The method according to claim 1, further comprising: splitting the first microphone signal into a first plurality of first sub-signals in different frequency bands; and splitting the second microphone signal into a second plurality of second sub-signals in different frequency bands; wherein the act of estimating comprises estimating a first plurality of interaural level differences associated with the first plurality of first sub-signals and the second plurality of second sub-signals; wherein the act of determining the first gain comprises determining a first plurality of gain values for the first plurality of first sub-signals, respectively; wherein the act of determining the second gain comprises determining a second plurality of gain values for the second plurality of second sub-signals, respectively; wherein the act of adjusting the first gain comprises adjusting the first plurality of gain values based on respective ones of the first plurality of interaural level differences; and wherein the act of applying the adjusted first gain comprises applying the first plurality of adjusted gain values to respective ones of the first plurality of first sub-signals.
9. The method according to claim 1, further comprising: generating a first additional microphone signal by a first additional microphone of the first hearing device arranged at, or behind, the first ear; generating a second additional microphone signal by a second additional microphone of the second hearing device arranged at, or behind, the second ear; mixing the first additional microphone signal and the first microphone signal to obtain a first hybrid microphone signal; and mixing the second additional microphone signal and the second microphone signal to obtain a second hybrid microphone signal.
10. The method according to claim 1, further comprising: determining a transfer function of a first feedback path from the first output signal to the first microphone signal by the first processing unit; and compensating the first feedback path by a fixed or adaptive feedback cancellation filter to increase a maximum stable gain of the first hearing device.
11. The method according to claim 1, further comprising: generating a first additional microphone signal by an additional microphone of the first hearing device arranged at, or behind, the first ear; and mixing the first additional microphone signal and the first microphone signal to obtain a hybrid microphone signal, wherein the mixing is performed such that the first additional microphone signal dominates in the hybrid microphone signal in a frequency range where the first gain exceeds a maximum stable gain of the first hearing device.
12. The method according to claim 1, wherein the first gain and/or the second gain is adjusted for interaural level differences exceeding a threshold value.
13. The method according to claim 12, wherein the threshold value is 1 dB or higher.
14. The method according to claim 12, wherein the first gain and/or the second gain is not adjusted for interaural level differences below the threshold value; or wherein an adjustment of the first gain and/or an adjustment of the second gain for interaural level differences below the threshold value is discarded.
15. The method according to claim 1, further comprising detecting speech segment(s) and non-speech segment(s) in the first microphone signal and the second microphone signal; wherein the first gain and/or the second gain is adjusted for the speech segment(s).
16. The method according to claim 15, wherein the first gain and/or the second gain is not adjusted for the non-speech segment(s); or wherein an adjustment of the first gain and/or an adjustment of the second gain for the non-speech segment(s) is discarded.
17. The method according to claim 1, further comprising applying the second gain to the second microphone signal to generate a second output signal.
18. The method according to claim 1, wherein the first interaural level difference is between the first and second microphone signals, and wherein the first gain is adjusted based on the first interaural level difference to preserve the first interaural level difference between first and second microphone signals.
19. A binaural hearing system comprising: a first hearing device configured for placement at, or in, a first ear of a user, the first hearing device comprising a first microphone arrangement and a first processing unit, wherein the first microphone arrangement comprises a first in-ear microphone configured to pick-up sound pressure associated with a first ear; and a second hearing device configured for placement at, or in, a second ear of the user, the second hearing device comprising a second microphone arrangement and a second processing unit, wherein the second microphone arrangement comprises a second in-ear microphone arranged to pick-up sound pressure associated with a second ear; wherein the first hearing device and the second hearing device are connectable through a wireless communication link; wherein the first processing unit is configured to: receive a first microphone signal generated by the first in-ear microphone, receive contralateral audio data representative of a second microphone signal transmitted from the second hearing device to the first hearing device via the wireless communication link, estimate a first interaural level difference based on the first microphone signal and the contralateral audio data, determine a first gain for the first microphone signal based on a level of the first microphone signal in accordance with a first level-versus-gain characteristic, adjust the first gain, and apply the adjusted first gain to the first microphone signal to generate a first output signal.
20. The binaural hearing system according to claim 19, wherein the first output signal is configured to preserve the first interaural level difference.
21. The binaural hearing system according to claim 19, wherein the second processing unit is configured to: determine a second gain for the second microphone signal based on a level of the second microphone signal in accordance with a second level-versus-gain characteristic, and apply the second gain to the second microphone signal to generate a second output signal.
22. The binaural hearing system according to claim 19, wherein the first hearing device is configured to transmit another contralateral audio data representative of the first microphone signal to the second hearing device via the wireless communication link; and wherein the second processing unit is further configured to: estimate a second interaural level difference based on the second microphone signal and the other contralateral audio data, adjust the second gain based on the second interaural level difference, and apply the adjusted second gain to the second microphone signal.
23. The binaural hearing system according to claim 19, wherein the first hearing device comprises a first BTE housing configured for placement behind the first ear, and a first ear plug configured for placement at least partly inside a first ear canal, wherein the first ear plug comprises the first in-ear microphone; and wherein the second hearing device comprises a second BTE housing configured for placement behind the second ear, and a second ear plug configured for placement at least partly inside a second ear canal, wherein the second ear plug comprises the second in-ear microphone.
24. The binaural hearing system according to claim 23, wherein the first ear plug comprises an outwardly oriented surface comprising a first sound inlet for the first in-ear microphone, and a first inwardly oriented surface or portion comprising a first sound outlet for a first miniature speaker or receiver configured to provide the first output signal; and wherein the second ear plug comprises an outwardly oriented surface comprising a second sound inlet for the second in-ear microphone, and a second inwardly oriented surface or portion comprising a second sound outlet for a second miniature speaker or receiver configured to provide a second output signal.
25. The binaural hearing system according to claim 23, wherein the first hearing device comprises a first additional microphone in the first BTE housing configured to generate a first additional microphone signal; and wherein the second hearing device comprises a second additional microphone in the second BTE housing configured to generate a second additional microphone signal.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) In the following, exemplary embodiments are described in more detail with reference to the appended drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
(6) Various exemplary embodiments and details are described hereinafter, with reference to the figures when relevant. It should be noted that the figures may or may not be drawn to scale and that elements of similar structures or functions are represented by like reference numerals throughout the figures. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the invention or as a limitation on the scope of the invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiments even if not so illustrated, or if not so explicitly described.
(7) In the following various exemplary embodiments of the present binaural hearing aid system are described with reference to the appended drawings. The skilled person will understand that the accompanying drawings are schematic and simplified for clarity, and that they show details to facilitate understanding of the embodiments. Like reference numerals refer to like elements throughout. Like elements will, thus, not necessarily be described in detail with respect to each figure.
(9) The left hearing aid 10L and the right hearing aid 10R may be substantially identical in some embodiments of the present hearing aid system except for the above-described unique ID and possibly for the value of certain signal processing parameters as discussed in additional detail below. Therefore, the following description of the physical structures, features, components and signal processing functions of the left hearing aid 10L also applies to the right hearing aid 10R unless otherwise indicated. The left hearing aid 10L may comprise and be energized by a ZnO₂ battery (not shown) or a rechargeable battery that is connected for supplying power to a first hearing aid circuitry 25L. The first hearing aid circuitry 25L may at least comprise the first digital processor 24L and the first wireless data communication interface 34L. Each of the left and right hearing aids 10L, 10R may be embodied in various housing styles or form factors, for example so-called Behind-the-Ear (BTE), In-the-Canal (ITC), Completely-in-Canal (CIC), Receiver-in-the-Ear (RIE), Receiver-in-the-Canal (RIC) or Microphone-and-Receiver-in-Ear (MaRIE) designs. The exemplary embodiment of the left hearing aid 10L is provided as a so-called MaRIE design and comprises a first BTE housing 210L configured for placement behind the user's left ear and a first ear plug 30L configured for placement at least partly inside the user's left ear canal as illustrated schematically on
(10) Returning to
(11) The first pair of omnidirectional microphones 17L may generate a first additional microphone signal, such as a directional microphone signal, in response to the incoming or impinging sound. Respective sound inlets or ports (not shown) of the first pair of omnidirectional microphones 17L are preferably arranged with a certain spacing in the left or first BTE housing. The spacing between the sound inlets or ports depends on the dimensions and type of the housing but may lie between 5 and 30 mm. This port spacing range enables the formation of certain monaural beamforming signals. The first in-ear microphone 16L arranged in the left ear or first ear plug 30L is arranged to pick-up or receive sound pressure at an entrance to, or inside, the user's left ear canal via a first sound inlet 18L and generate a corresponding first microphone signal 60L. This may be achieved by arranging the first sound inlet 18L in an outwardly oriented surface of the housing of the first ear plug 30L where outwardly means projecting towards a concha/outer ear of the user's left ear, as opposed to inwardly towards an ear drum of the user's left ear canal. The first ear plug 30L additionally comprises the first miniature speaker or receiver 32L configured to generate a first or left ear output signal as a first or left ear output sound pressure via a first sound outlet 33L. The first sound outlet 33L may be arranged in an inwardly oriented surface or portion of the housing of the left ear plug 30L such that the left ear output sound pressure propagates to the user's left ear drum. The skilled person will appreciate that the housing of the left ear plug 30L preferably fits relatively tightly to the user's left ear canal, to acoustically isolate the first sound inlet 18L from the first sound outlet 33L of the first receiver 32L and suppress acoustic feedback there between to the extent possible as discussed in additional detail below.
(12) The left hearing aid 10L may comprise one or more analogue-to-digital converters (not shown) which convert one or several analogue microphone signals generated by the hybrid microphone arrangement 16L, 17L into corresponding digital microphone signals with a certain resolution and sampling frequency such as between 8 kHz and 64 kHz for use by the first digital processor 24L.
(13) The first BTE housing 210L and first ear plug 30L are preferably mechanically and electrically interconnected via a first bidirectional wired interface 26L as schematically illustrated on
(14) The skilled person will understand that each of the digital processors 24L, 24R may comprise a software programmable microprocessor such as a Digital Signal Processor or comprise hardwired digital logic circuitry. The operation of each of the left and right ear hearing aids 10L, 10R may be controlled by a suitable operating system executed on the software programmable microprocessor 24L, 24R. The operating system may be configured to manage hearing aid hardware and software resources, e.g. including execution of hearing loss compensation algorithms, control of the first wireless data communication interface 34L, estimation of first and second interaural level differences of the incoming sound, controlling the first and second dynamic range compressors and the first and second gain adjustments, managing certain memory resources etc. The operating system may schedule tasks for efficient use of the hearing aid resources and may further include accounting software for cost allocation, including power consumption, processor time, memory allocation, wireless transmissions, and other resources. The operating system may control operation of the wireless communication link 12, 34L, 44L, 34R, 44R. The right ear hearing aid 10R may have corresponding hardware components and software components that function in a corresponding manner as mentioned above.
(15) The first digital processor 24L is configured to perform single-channel or multichannel gain processing of the first microphone signal 60L as mentioned above. This gain processing is preferably carried out by the first dynamic range compressor in accordance with a first level-versus-gain characteristic thereof. The skilled person will understand that the first level-versus-gain characteristic of the first digital processor 24L may be set or defined at initial fitting of the left ear hearing aid 10L to the user or patient based on a certain fitting rule. This fitting rule, such as NAL-1, defines a level dependent, i.e. non-linear, amplification of the first microphone signal 60L to compensate for the measured hearing loss of the patient. This fitting rule may be applied over a plurality of frequency bands or channels of the first dynamic range compressor to adapt the latter to compensate for frequency dependence of the patient's hearing loss. This fitting may be carried out by a dispenser using a fitting software platform coupled to the left and right ear hearing aids 10L, 10R via a suitable programming interface to program these with suitable fitting parameters, in particular first compressor parameters that define the gain processing of the first dynamic range compressor and/or second compressor parameters that define the gain processing of the second dynamic range compressor. These first and second compressor parameters may be written to, and stored in, respective non-volatile memories (not shown) of the left ear hearing aid 10L and right ear hearing aid 10R. The first digital processor 24L may be configured to read out the compressor parameters from the non-volatile memory at boot-up of the first digital processor 24L and utilize these in the processing of the first microphone signal 60L by the first dynamic range compressor, and the second digital processor 24R may perform corresponding actions at boot-up of the second digital processor 24R.
(16) The first digital processor 24L is configured to generate and transmit first contralateral audio data 61L representative of the first microphone signal 60L via the bidirectional wireless communication link 12, 34L, 44L, 34R, 44R and to receive second contralateral audio data 61R (on
(17)
(18) The skilled person will understand that the sound pressure pick-up or receipt position of the first in-ear microphone 16L inside the user's left ear canal means that the first microphone signal 60L comprises sound contributions from the user's outer ear and concha and therefore provides an accurate representation of the user's individual left-ear head related transfer function, i.e. proper spatial cues. The same is true for the user's individual right-ear head related transfer function as measured by the corresponding second in-ear microphone 16R inside the user's right ear canal such that individual ILDs, as well as other spatial cues, between the user's left and right ears can be accurately determined.
(19)
(20) Respective signal levels, for example represented by respective energy or power levels, of the first plurality of frequency bands of the first microphone signal 60L are determined by the first digital processor in step 320. The respective signal levels of the first plurality of frequency bands are smoothed by the first signal processor in step 360 using individual attack times and individual release times for the frequency bands. Both the attack times and release times may lie between 0.5 ms and 100 ms, where the shortest attack/release time constants are utilized in the higher frequency bands, e.g. above 3 kHz, and the longest release time constants are utilized in the lower frequency bands, e.g. below 200 Hz. The respective smoothed signal level estimates of the first plurality of frequency bands of the first microphone signal 60L are applied by the first digital processor (not shown) to the previously discussed bidirectional wireless communication link, schematically represented by antenna symbol MI_L, and transmitted therethrough to the right ear hearing aid. The respective smoothed signal level estimates of the first plurality of frequency bands are received at the right ear hearing aid where they may be seen as first contralateral audio data 61L representative of the first microphone signal 60L. The first contralateral audio data are preferably updated at regular time intervals and thereafter transmitted through the wireless communication link, for example using a packet-oriented communication protocol. An update frequency of the first contralateral audio data 61L may lie between 10 Hz and 350 Hz for data rates between 2.6 kbps and 266 kbps dependent on the nature of the wireless communication link and its communication protocol.
The update frequency must be chosen with care to avoid overly long delay times, so that the first contralateral audio data 61L, when received at the second hearing aid, are truly representative of the first microphone signal 60L at the current time instant and not too “old”. This challenge may be addressed by computing the first ILD and/or second ILD on two different time scales. The first time scale may be fixed by the communication or transmission protocol on the wireless communication link. The second time scale may be fixed by the above-mentioned sampling frequency of the first microphone signal. Thus, the first and second gains can be applied closer to the most recently computed ILDs.
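As an illustration only, and not part of the claimed subject matter, the per-band level smoothing with separate attack and release time constants described above may be sketched as follows. All function and parameter names are hypothetical, and the one-pole (exponential) smoother is one common choice among several:

```python
import math

def smooth_level(prev_db: float, new_db: float, attack_ms: float,
                 release_ms: float, frame_ms: float) -> float:
    """One-pole smoothing of a per-band level estimate in dB.

    A rising level is tracked with the (shorter) attack time constant,
    a falling level with the (longer) release time constant, as in the
    multiband smoothing described for steps 330/360.
    """
    tau_ms = attack_ms if new_db > prev_db else release_ms
    alpha = math.exp(-frame_ms / tau_ms)  # closer to 1 => slower tracking
    return alpha * prev_db + (1.0 - alpha) * new_db
```

With, say, a 5 ms attack and 50 ms release at a 1 ms frame rate, an upward level step is tracked roughly ten times faster than a downward step, which matches the intent of fast protection against sudden loud sounds and slow recovery afterwards.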
(21) The second digital processor of the right ear hearing aid (shown in
(22) In step 330 the respective signal levels of the first plurality of frequency bands are preferably smoothed by integration with respective time constants such as individual attack times and release times. The attack times may lie between 12 ms and 50 ms while the release times may lie between 125 ms and 6000 ms where shorter attack times are utilized in the higher frequency bands, e.g. above 3 kHz, and longer release time constants are utilized in the lower frequency bands e.g. below 200 Hz.
(23) In step 340 the first signal processor is configured to determine an instantaneous gain value of the dynamic range compressor or compression algorithm of that frequency band by reference to its level-versus-gain characteristic and to the smoothed signal level of that frequency band as outputted by the multiband smoothing operation 330. The skilled person will understand that the respective level-versus-gain characteristics of the first plurality of frequency bands may be defined by one or more look-up tables mapping sound pressure levels of the incoming sound to corresponding gains of the first dynamic range compressor within the first plurality of frequency bands. The level-versus-gain characteristic of a particular frequency band may comprise a lower compression knee point, e.g. an in-band signal level corresponding to between 40 and 55 dB SPL, and/or an upper compression knee point, e.g. an in-band signal level corresponding to between 90 and 100 dB SPL. Below the lower knee point, the level-versus-gain characteristic of the dynamic range compressor may define essentially linear amplification, and above the upper knee point, the level-versus-gain characteristic may define an essentially limiting behaviour with a very high compression ratio, such as above 10:1. For signal levels in-between the lower and upper knee points, which correspond to the majority of normal sound levels in everyday communication, the level-versus-gain characteristic of a particular frequency band may define a constant or level-variable compression ratio between 1.2 and 3.0. The latter compression ratio interval is well-suited to compensate for loudness recruitment associated with the user's hearing loss and restore normal loudness perception of desired sounds like speech. 
By using individually computed level-versus-gain characteristics of the dynamic range compressor of each frequency band, an accurate loudness compensation of the user's hearing loss can be provided even if the user's hearing loss exhibits a pronounced frequency dependency. The skilled person will appreciate that the respective level-versus-gain characteristics of the first plurality of dynamic range compressors, and of the corresponding second plurality of dynamic range compressors of the right ear hearing aid, may be determined at the initial fitting of the binaural hearing aid system at the dispenser's office. The respective level-versus-gain characteristics of the first plurality of frequency bands are therefore used by the first digital processor to determine respective ones of the initial gain values in step 340.
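A minimal sketch of the static level-versus-gain characteristic of a single frequency band, with a lower knee point, a compression region, and an upper (limiting) knee point as described above, could look as follows. The function name and the default knee points, target gain, and ratio are illustrative choices within the ranges stated in the text, not values prescribed by the present disclosure:

```python
def compressor_gain_db(level_db: float, target_gain_db: float = 20.0,
                       lower_knee_db: float = 45.0, upper_knee_db: float = 95.0,
                       ratio: float = 2.0) -> float:
    """Static level-versus-gain curve for one band (all values in dB).

    Below the lower knee: linear amplification (constant gain).
    Between the knees: compression; gain drops by (1 - 1/ratio) dB
    per dB of input level increase.
    Above the upper knee: limiting; output level is held constant.
    """
    if level_db <= lower_knee_db:
        return target_gain_db
    if level_db <= upper_knee_db:
        return target_gain_db - (level_db - lower_knee_db) * (1.0 - 1.0 / ratio)
    # limiting region: reduce gain dB-for-dB above the upper knee
    gain_at_upper = target_gain_db - (upper_knee_db - lower_knee_db) * (1.0 - 1.0 / ratio)
    return gain_at_upper - (level_db - upper_knee_db)
```

In a practical hearing aid this curve would typically be realized as the look-up table mentioned above, with one table per frequency band populated at fitting time.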
(24) The plurality of initial first gain values are applied to the adjustment processing 345 to determine a corresponding plurality of adjusted first gain values to be used by step 350. The plurality of adjusted first gain values are applied to the plurality of frequency bands of the first microphone signal 60L in step 350 and the amplified first microphone signal is applied to the first receiver 32L for example through a suitable synthesis filter (not shown) and suitable output/power amplifier. The power amplifier may comprise a class-D amplifier to drive the first miniature loudspeaker with high efficiency and sufficient power and deliver a corresponding acoustic output signal or sound pressure 355. The first gain value within each frequency band may be increased or decreased or left unchanged by the gain adjustment processing 345 depending on the ILD at that frequency band as determined by step 380 and also depending on the corresponding, or second, ILD for the same frequency band which may be determined in parallel by the second digital processor of the second hearing aid. However, in all instances the goal of the gain adjustment(s) of each frequency band is to preserve in that frequency band, between first acoustic output signal 355 and corresponding second acoustic output signal, the first interaural level difference (ILD) for that band as determined by ILD step 380. In other words, the ILD between the first and second microphone signals per frequency band is preserved by the output sound signals supplied to the user's left and right ears. As a simple example suppose that the ILD between the first and second microphone signals in a particular frequency band or bands at a particular time instant is measured or determined in step 380 to be 20 dB. If the respective compression ratios of the first and second dynamic range compressors of the left ear and right ear hearing aids are set to e.g. 
2:1 at hearing aid fitting, this means that the ILD between the first and second acoustic output signals is reduced to about 10 dB in that frequency band due to dynamic range compressor actions. Hence, the aim of the combined gain adjustment in steps 345 of the first and second hearing aids is to re-establish the ILD of 20 dB.
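The arithmetic of the 20 dB example above can be stated compactly. With equal compression ratios at both ears, each dB of interaural level difference at the microphones shrinks to 1/ratio dB at the outputs, because the quieter ear receives more gain; the gain adjustment must supply the difference. The function names below are illustrative:

```python
def output_ild_db(input_ild_db: float, ratio: float) -> float:
    """ILD remaining after independent compression with the same
    ratio at both ears: each dB of input ILD shrinks to 1/ratio dB."""
    return input_ild_db / ratio

def ild_correction_db(input_ild_db: float, ratio: float) -> float:
    """Total combined gain adjustment (dB) needed in step 345 to
    re-establish the input ILD between the two output signals."""
    return input_ild_db - output_ild_db(input_ild_db, ratio)
```

For the example in the text, a 20 dB input ILD through 2:1 compressors leaves a 10 dB output ILD, so the combined adjustment in steps 345 of the two hearing aids must contribute the missing 10 dB.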
(25) This re-establishment of the ILD may be carried out in several ways. According to one embodiment of the present methodology, the first or second digital processor applies a one-sided gain reduction in step 345 to the hearing aid subjected to the lowest incoming sound level. This action may involve comparing the level of the first microphone signal 60L to the level of the second microphone signal (not shown), either broad-band, for example over frequencies between 100 Hz and 10 kHz, or in any particular frequency band of those discussed above, to identify which of the first and second hearing aids is subjected to the lowest level of the incoming sound in that frequency band or broad-band. This comparison may be carried out by the first digital processor by a simple inspection of a sign of the ILD for that frequency band computed in step 380. If, for example, the left ear hearing aid has the lowest level of incoming sound, the first digital processor adjusts, typically by reducing, in step 345 exclusively the gain of the left ear hearing aid so as to preserve or re-establish the interaural level difference, as determined by step 380, between the first and second acoustic output signals. This may be convenient because the hearing aid subjected to the lowest incoming sound level typically exhibits a higher gain than the opposite hearing aid due to the level dependent gain imparted to the first and second microphone signals 60L, 60R by the respective dynamic range compressors. Reducing the first gain of the left ear hearing aid to re-establish the appropriate interaural level difference in that situation reduces possible feedback stability problems. The one-sided gain reduction serves at the same time to maintain the initially determined second gain value of the dynamic range compressor of the second hearing aid, thereby avoiding the introduction of new gain-induced feedback problems. 
The skilled person will understand that the hearing aid subjected to the lowest incoming sound level may change dynamically, depending on the positions and movements of environmental sound sources, like a speaker, around the hearing aid user and on the hearing aid user's orientation in space. Therefore, according to certain embodiments of the present methodology of performing bilateral dynamic range compression and the corresponding binaural hearing aid system, the one-sided gain reduction in step 345 may, over time, be applied alternately to the first gain of the first dynamic range compressor of the first hearing aid and to the second gain of the second dynamic range compressor of the second hearing aid, for example as determined by the sign of the ILD. This embodiment preferably comprises a bidirectional wireless communication link 12 (shown in
(26) The skilled person will appreciate that alternative embodiments of the present methodology and binaural hearing aid system may prevent the undesired reduction of the ILD in other ways, for example by utilizing a two-sided gain adjustment. According to one such embodiment, the first or second digital processor 24L, 24R reduces the gain in step 345 of the hearing aid subjected to the lowest incoming sound level, while the digital processor of the opposite hearing aid increases the gain of the hearing aid subjected to the highest sound level in step 345, such that the combined gain adjustments preserve the interaural level difference, as determined by step 380, between the first and second output signals. This action may involve comparing the level of the first microphone signal 60L to the level of the second microphone signal 60R, either broad-band or in any particular frequency band or bands of those discussed above, to identify which of the first and second hearing aids is subjected to the lowest level of the incoming sound in that frequency band or broad-banded.
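The two-sided variant may be sketched analogously, with half of the compressor-induced gain excess removed from the low-level side and half added to the high-level side so that the combined adjustment preserves the input ILD (a hypothetical sketch under the same left-minus-right ILD convention, not the claimed implementation):

```python
def two_sided_ild_adjustment(gain_left_db, gain_right_db, ild_db):
    """Two-sided gain adjustment per step 345: the gain excess of
    the low-level side is split, reducing that side's gain and
    increasing the opposite side's gain by equal amounts, so their
    combined effect preserves the ILD from step 380. Sketch only."""
    if ild_db < 0:  # left ear receives the lower level
        excess = gain_left_db - gain_right_db
        return gain_left_db - excess / 2.0, gain_right_db + excess / 2.0
    excess = gain_right_db - gain_left_db
    return gain_left_db + excess / 2.0, gain_right_db - excess / 2.0

# Same scenario as before; the 10 dB excess is now shared:
print(two_sided_ild_adjustment(-5.0, -15.0, -20.0))  # (-10.0, -10.0)
```

Compared with the one-sided variant, this halves the gain reduction on the quiet side at the cost of raising the gain, and hence the feedback risk, on the loud side.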
(27) Certain embodiments of the present methodology and binaural hearing aid system may comprise a first voice activity detector 370 as illustrated on
(28) The VAD 370 may be of entirely conventional construction or design and therefore within common general knowledge. One exemplary embodiment of the VAD 370 utilizes the following signal processing strategy: for each time-frequency bin or frequency band, if the current signal power P(f,t) exceeds the estimated ambient noise level Amb(f,t) by a threshold, a voice-activity indicator (VAI) is set to 1, i.e. VAI(f,t)=1; otherwise VAI(f,t)=0. These actions require a separation between the speech and noise components of the signal. This is achieved by two low-pass filters with different time constants; what is actually separated is the signal-plus-noise stream from the noise stream alone. Since the envelopes of speech signals vary more quickly, this allows the desired speech segments to be filtered from the background noise.
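The two-filter strategy of paragraph (28) may be sketched for a single frequency band as follows; the time constants, the 6 dB margin and the frame rate are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def vad_indicator(power, frame_rate, fast_tau=0.01, slow_tau=1.0,
                  margin_db=6.0):
    """Per-band voice-activity indicator: a fast one-pole low-pass
    filter tracks the speech-plus-noise envelope, a slow one tracks
    the ambient noise floor Amb(f,t); VAI=1 wherever the fast
    estimate exceeds the slow one by the margin. Sketch only."""
    a_fast = np.exp(-1.0 / (fast_tau * frame_rate))
    a_slow = np.exp(-1.0 / (slow_tau * frame_rate))
    margin = 10.0 ** (margin_db / 10.0)
    fast = slow = power[0]
    vai = np.zeros(len(power), dtype=int)
    for t, p in enumerate(power):
        fast = a_fast * fast + (1.0 - a_fast) * p  # quick envelope
        slow = a_slow * slow + (1.0 - a_slow) * p  # slow noise floor
        vai[t] = 1 if fast > margin * slow else 0
    return vai
```

Because the slow filter cannot follow a sudden speech onset, the fast estimate overshoots it during speech segments, which is exactly the separation of the signal-plus-noise stream from the noise stream described above.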
(29) As briefly discussed above, the inherent proximity between the first sound inlet 18L and the sound outlet 33L in the housing of the left ear plug 30L (shown in
(30) The first digital processor 24L is configured or programmed to determine a transfer function of a first feedback path from the first acoustic output signal 355 to the first in-ear microphone 16L. The first digital processor 24L is further configured to compensate the first feedback path by a fixed or adaptive feedback cancellation filter to increase the maximum stable gain of the first hearing aid.
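The disclosure does not specify the adaptation algorithm; as one possibility, the adaptive feedback cancellation filter of paragraph (30) could be realized as a normalized-LMS (NLMS) FIR filter modelling the feedback path from the receiver output to the in-ear microphone 16L. The filter length and step size below are illustrative assumptions:

```python
import numpy as np

def nlms_feedback_canceller(mic, out, n_taps=32, mu=0.1, eps=1e-8):
    """Sketch of an adaptive feedback canceller: FIR weights w model
    the feedback path from receiver output `out` to microphone `mic`;
    the estimated fed-back signal is subtracted before further
    processing. NLMS update; parameter values are hypothetical."""
    w = np.zeros(n_taps)
    err = np.zeros(len(mic))
    buf = np.zeros(n_taps)  # most recent receiver samples, newest first
    for n in range(len(mic)):
        buf = np.roll(buf, 1)
        buf[0] = out[n]
        y_hat = w @ buf           # estimated feedback component
        err[n] = mic[n] - y_hat   # feedback-compensated microphone signal
        w += mu * err[n] * buf / (buf @ buf + eps)  # NLMS step
    return err, w
```

With the feedback contribution removed from the microphone signal, the loop gain at which oscillation sets in rises, i.e. the maximum stable gain of the hearing aid increases as stated above.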
(31) An alternative method, or even complementary method, of increasing the maximum stable gain of the left ear hearing aid 10L involves exploiting an additional microphone signal picked-up or received at a different physical location of the housing structure of the hearing aid 10L than the in-ear microphone 16L. This additional microphone may be one, or both, of the first pair of omnidirectional microphones 17L that are arranged in the left ear BTE housing 210L as schematically illustrated on
(32)
(33) As expected, the latter maximum stable gain is significantly higher than the maximum stable gain of the first in-ear microphone 16L, inter alia due to a larger physical separation between the first pair of omnidirectional microphones 17L and the first receiver 32L. The first digital processor 24L may apply attenuation to the first microphone signal in the frequency range indicated by the black square 415, because the maximum stable gain is lower than the acoustic insertion gain in that frequency range, indicating that acoustical feedback oscillation is likely. The attenuation of the first microphone signal may be carried out by mixing or blending in the microphone signal generated by the first pair of omnidirectional microphones 17L such that the latter dominates the first hybrid microphone signal. The mixing is preferably carried out such that a level of the first hybrid microphone signal largely corresponds to the level of the first microphone signal.
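A per-band blend with the level-matching constraint of paragraph (33) may be sketched as follows; the function name and the blend parameter are hypothetical, and the blend weight would in practice be driven by the maximum-stable-gain comparison described above:

```python
import numpy as np

def hybrid_band_signal(in_ear_band, bte_band, blend):
    """Blend the in-ear microphone band signal with the BTE
    omnidirectional microphone band signal. `blend` in [0, 1]
    approaches 1 in bands where feedback oscillation is likely, so
    the BTE signal dominates; the BTE contribution is rescaled so
    the hybrid level matches the in-ear level. Illustrative sketch."""
    eps = 1e-12  # guard against division by zero in silent bands
    in_rms = np.sqrt(np.mean(in_ear_band ** 2)) + eps
    bte_rms = np.sqrt(np.mean(bte_band ** 2)) + eps
    bte_matched = bte_band * (in_rms / bte_rms)  # level-match BTE signal
    return (1.0 - blend) * in_ear_band + blend * bte_matched
```

The rescaling step implements the stated preference that the level of the first hybrid microphone signal largely corresponds to the level of the first microphone signal, so the downstream compressor sees an unchanged input level.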
(34) Although the above embodiments have mainly been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art without departing from the spirit and scope of the invention as outlined in the claims appended hereto. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The claimed invention is intended to cover all alternatives, modifications, and equivalents.
LIST OF REFERENCES
(35)
10L first hearing aid
10R second hearing aid
12 wireless communication link
16L first in-ear microphone
16R second in-ear microphone
17L first additional microphone
17R second additional microphone
18L first sound inlet
18R second sound inlet
24L first digital processor
24R second digital processor
25L first hearing aid circuitry
25R second hearing aid circuitry
26L first bidirectional wired interface
26R second bidirectional wired interface
30L first ear plug
30R second ear plug
32L first receiver
32R second receiver
33L first sound outlet
33R second sound outlet
34L first wireless data communication interface
34R second wireless data communication interface
44L first antenna
44R second antenna
50 binaural hearing aid system
60L first microphone signal
60R second microphone signal
61L first contralateral audio data
61R second contralateral audio data
210L first BTE housing
210R second BTE housing
310 analysis filter bank configured to split or divide the first microphone signal into a first plurality of frequency bands
320 determine first signal levels of the first plurality of frequency bands
330 smooth the first signal levels
340 determine a first initial gain value of the compressor or compressor algorithm
345 apply plurality of initial first gain values to the adjustment processing to determine a corresponding plurality of adjusted first gain values
350 apply the plurality of adjusted first gain values to the plurality of frequency bands
355 provide a corresponding acoustic output signal
360 smooth the first signal levels
370 voice activity detector configured to detect speech segments and non-speech segments in the first microphone signal
380 determine a first plurality of interaural level differences (ILDs) between the first and second microphone signals per frequency band
390 subtract the smoothed signal level estimates of the second plurality of frequency bands from the correspondingly smoothed signal level estimates of the first plurality of frequency bands