HEARING AID DEVICE FOR HANDS FREE COMMUNICATION
20230269549 · 2023-08-24
CPC classification: H04R25/30 · H04R2499/11 · H04R2225/39 · H04R25/407 · H04R25/554 · H04R2225/41 · H04R25/43
Abstract
The present invention regards a hearing aid device comprising at least one environment sound input, a wireless sound input, an output transducer, electric circuitry, a transmitter unit, and a dedicated beamformer-noise-reduction-system. The hearing aid device is configured to be worn in or at an ear of a user. The at least one environment sound input is configured to receive sound and to generate electrical sound signals representing sound. The wireless sound input is configured to receive wireless sound signals. The output transducer is configured to stimulate hearing of the hearing aid device user. The transmitter unit is configured to transmit signals representing sound and/or voice. The dedicated beamformer-noise-reduction-system is configured to retrieve a user voice signal representing the voice of a user from the electrical sound signals. The wireless sound input is configured to be wirelessly connected to a communication device and to receive wireless sound signals from the communication device. The transmitter unit is configured to be wirelessly connected to the communication device and to transmit the user voice signal to the communication device.
Claims
1. A hearing aid comprising: a microphone configured to obtain sounds in an environment around the hearing aid and convert the sounds into an electrical signal representative of the sounds; a voice-activity detection unit configured to receive the electrical signal and to determine whether a voice-activity is present in the electrical signal; an auditory output configured to output the electrical signal to the user as an auditory signal; and interface circuitry configured to wirelessly communicate with a mobile telephone; and wherein the hearing aid is configured to, based on a determination of whether the voice-activity is present, operate in at least one of: a hearing-aid mode wherein the hearing aid is disconnected from the mobile telephone, the hearing-aid mode having a first processing scheme; and a speaker mode wherein the hearing aid is configured to wirelessly communicate with the mobile telephone and receive sound data from the mobile telephone and output the sound data via the auditory output, the speaker mode having a second processing scheme, the second processing scheme being different than the first processing scheme.
2. The hearing aid of claim 1, wherein the hearing aid is configured to estimate a noise power spectral density of the sounds in the environment, and wherein the hearing aid is configured to, based on the noise power spectral density and the determination of whether the voice-activity is present, attenuate further sounds obtained by the microphone.
3. The hearing aid of claim 1, wherein the second processing scheme is configured to attenuate disturbing sounds while maintaining selected sounds implying danger.
4. The hearing aid of claim 1, further comprising a beamformer-noise-reduction-system comprising a beamformer configured to suppress predetermined spatial directions of the sounds.
5. The hearing aid of claim 4, wherein the beamformer is configured to attenuate predetermined spatial directions of the sounds based on the determination of whether the voice-activity is present.
6. The hearing aid of claim 1, wherein the interface circuitry is configured to wirelessly communicate with the mobile telephone via an intermediate device.
7. The hearing aid of claim 6, wherein the intermediate device is a remote control, and wherein the hearing aid is configured to receive control signals from the remote control.
8. The hearing aid of claim 1, wherein the hearing aid is configured to receive user input and the hearing aid is configured to operate in the hearing-aid mode or the speaker mode based on the user input.
9. The hearing aid of claim 1, wherein the hearing aid is configured to receive user input indicative of a selection of the hearing-aid mode or the speaker mode, and wherein the hearing aid is configured to operate based on the selection.
10. The hearing aid of claim 1, wherein the hearing-aid mode comprises a plurality of secondary hearing-aid modes, each of the plurality of secondary hearing-aid modes having different operation parameters for the hearing aid.
11. The hearing aid of claim 1, wherein, in the speaker mode, the hearing aid is configured to attenuate the sounds in the environment.
12. The hearing aid of claim 1, wherein, in the speaker mode, the hearing aid is configured to obtain audio from both the environment and the mobile telephone.
13. The hearing aid of claim 12, wherein in the speaker mode the hearing aid is configured to combine the audio from both the environment and the mobile telephone.
14. The hearing aid of claim 1, wherein the hearing aid comprises a switch configured to establish wireless communication with the mobile telephone.
15. A system comprising a hearing aid according to claim 1, and a communication unit configured as a remote control to control functionality of the hearing aid.
16. The system of claim 15, wherein the communication unit is the mobile telephone, and wherein the remote-control function is implemented as an application in the mobile telephone.
17. The hearing aid of claim 1, wherein the auditory output comprises one or more of a loudspeaker for outputting an airborne acoustic signal, an implanted vibrator, and an implanted electrical stimulator.
18. The hearing aid of claim 1, wherein the voice-activity detection unit is configured to determine a probability of whether a voice-activity is present in the electrical signal.
19. A hearing aid comprising: a microphone configured to obtain sounds in an environment around the hearing aid and convert the sounds into an electrical signal representative of the sounds; a voice-activity detection unit configured to receive the electrical signal and to determine whether a voice-activity is present in the electrical signal; an auditory output configured to output the electrical signal to the user as an auditory signal; and interface circuitry configured to wirelessly communicate with a telephone network; and wherein the hearing aid is configured to, based on a determination of whether the voice-activity is present, operate in at least one of: a hearing-aid mode wherein the hearing aid is disconnected from the telephone network, the hearing-aid mode having a first processing scheme; and a speaker mode wherein the hearing aid is configured to wirelessly communicate with the telephone network and receive sound data from the telephone network and output the sound data via the auditory output, the speaker mode having a second processing scheme, the second processing scheme being different than the first processing scheme.
20. The hearing aid according to claim 19, wherein the interface circuitry is configured to wirelessly communicate with the telephone network via a mobile telephone.
Description
[0050] The present invention will be more fully understood from the following detailed description of embodiments thereof, taken together with the drawings.
[0058] Incoming sound 34 is received by the microphones 14 and 14′ of the hearing aid device 10. The microphones 14 and 14′ generate electrical sound signals 35 representing the incoming sound 34. The electrical sound signals 35 can be divided into frequency bands by a spectral filterbank (not shown), in which case the subsequent analysis and/or processing of the band-split signal is performed for each (or selected) frequency subband; a VAD decision, for example, could then be a local per-frequency-band decision. The electrical sound signals 35 are provided to the electric circuitry 16. The electric circuitry 16 comprises a dedicated beamformer-noise-reduction-system 36, which comprises a beamformer (Beamformer) 38 and a single channel noise reduction unit (Single-Channel Noise Reduction) 40, and which is connected to a voice activity detection unit 42. The electrical sound signals 35 are processed in the electric circuitry 16, which generates a user voice signal 44 if a voice of the user 46 is present in the incoming sound 34.
[0059] The processing of the electrical sound signals 35 in the electric circuitry 16 is performed as follows. The electrical sound signals 35 are first analysed in the voice activity detection unit 42, which is further connected to the wireless sound input 18. If a wireless sound signal 19 is received by the wireless sound input 18, the communication mode is activated. In the communication mode the voice activity detection unit 42 is configured to detect an absence of a voice signal in the electrical sound signal 35. It is assumed in this embodiment of the communication mode that receiving a wireless sound signal 19 corresponds to the user 46 listening during communication. The voice activity detection unit 42 can also be configured to detect an absence of a voice signal in the electrical sound signal 35 with a higher probability if the wireless sound input 18 receives a wireless sound signal 19. Receiving a wireless sound signal 19 here means that a wireless sound signal 19 is received which has a signal-to-noise ratio and/or sound level above a predetermined threshold. If no wireless sound signal 19 is received by the wireless sound input 18, the voice activity detection unit 42 detects whether a voice signal is present in the electrical sound signals 35. If the voice activity detection unit 42 detects a voice signal of the user 46, the user speaking mode is activated.
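A minimal sketch of this mode decision, in Python; the patent does not specify a particular VAD algorithm, so a simple energy detector stands in for the voice activity detection unit 42, and all function names and thresholds are illustrative:

```python
import numpy as np

def simple_energy_vad(mic_frames, threshold=0.02):
    """Crude energy-based stand-in for the voice activity detection unit 42."""
    return np.sqrt(np.mean(mic_frames ** 2)) > threshold

def select_mode(wireless_frame, mic_frames, level_threshold=0.01,
                snr_threshold_db=6.0, noise_floor=1e-9):
    """Mode decision sketched in [0059]: a received wireless sound signal
    above a level/SNR threshold implies the user 46 is listening, so the
    own voice is assumed absent (communication mode)."""
    wireless_power = np.mean(wireless_frame ** 2) + 1e-12
    wireless_snr_db = 10.0 * np.log10(wireless_power / noise_floor)
    if wireless_power > level_threshold ** 2 and wireless_snr_db > snr_threshold_db:
        return "communication_listening"   # bias the VAD towards 'no own voice'
    if simple_energy_vad(mic_frames):
        return "user_speaking"             # own voice detected
    return "no_voice"
```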
[0060] The voice activity detection unit (VAD) 42 can further be configured to detect a voice signal only when the signal-to-noise ratio and/or the sound level of a detected voice is above a predetermined threshold. The voice activity detection unit 42 operating in the communication mode can also be configured to continuously detect whether a voice signal is present in the electrical sound signal 35, independently of whether the wireless sound input 18 receives a wireless sound signal 19.
[0061] The voice activity detection unit (VAD) 42 indicates to the beamformer 38 whether a voice signal is present in at least one of the electrical sound signals 35, i.e., whether the hearing aid device is in the user speaking mode (dashed arrow from VAD 42 to the Beamformer 38). If so, the beamformer 38 generates a spatial sound signal 39 from the electrical sound signals 35.
[0062] The spatial sound signal 39 is provided to the single channel noise reduction unit (Single-Channel Noise Reduction) 40. The single channel noise reduction unit 40 uses a predetermined noise signal to reduce the noise in the spatial sound signal 39, e.g., by subtracting the predetermined noise signal from the spatial sound signal 39. The predetermined noise signal is, for example, an electrical sound signal 35, a spatial sound signal 39, or a processed combination thereof from a previous time period in which a voice signal is absent in the respective sound signal or signals. The single channel noise reduction unit 40 generates a user voice signal 44, which is then provided to the transmitter unit 20.
[0063] In other modes the hearing aid device 10 can, for example, be used as an ordinary hearing aid, e.g., in a normal listening mode in which the listening quality is optimized.
[0064] The hearing aid device 10 further comprises a switch 50 to, e.g., select and control the modes of operation, and a memory 52 to store data such as the modes of operation, algorithms and other parameters, e.g., spatial direction parameters.
[0065] The algorithm as described estimates the clean voice signal of the user (wearer) of the hearing aid device as picked up by one (or more) chosen microphone(s). However, for the far-end listener the speech signal would sound more natural if it were picked up in front of the mouth of the speaker (here the user of the hearing device). This is, of course, not completely possible, since no microphone is positioned there, but the output of the algorithm can be compensated to simulate how it would sound if it were picked up in front of the mouth. This may be done simply by passing the output of the algorithm through a time-invariant linear filter simulating the transfer function from microphone to mouth. This linear filter could be determined with the dummy head in a manner completely analogous to the look vector measurement described below. Hence, in an embodiment, the hearing aid device comprises an (optional) post-processing block (M2Mc, microphone-to-mouth compensation) between the output of the current algorithm (Beamformer, Single-Channel Noise Reduction unit (38, 40)) and the transmitter unit (20), cf. the dashed unit M2Mc.
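As an illustration, the M2Mc block reduces to filtering the enhanced own-voice signal with a fixed FIR approximation of the microphone-to-mouth transfer function. A minimal sketch, assuming an impulse response `h_m2m` measured offline on a dummy head; the name, tap values and signal length below are hypothetical:

```python
import numpy as np
from scipy.signal import lfilter

def apply_m2m_compensation(enhanced_voice, h_m2m):
    """Pass the beamformer / noise-reduction output through a fixed
    linear time-invariant (FIR) filter emulating the microphone-to-mouth
    transfer function, cf. the M2Mc block of [0065]."""
    return lfilter(h_m2m, [1.0], enhanced_voice)

# Illustrative use with a made-up 3-tap impulse response; in practice
# h_m2m would be measured on a dummy head, analogously to the look vector.
enhanced_voice = np.random.randn(16000)          # stand-in for user voice signal 44
h_m2m = np.array([0.9, 0.05, 0.02])
compensated = apply_m2m_compensation(enhanced_voice, h_m2m)
```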
[0067] In the following, an exemplary communication scenario is presented. A phone call reaches the user 46. The phone call is accepted by the user 46, e.g., by activating the switch 50 at the hearing aid device 10 (or via another user interface, e.g., a remote control, e.g., implemented in the user's mobile phone). The hearing aid device 10 activates the communication mode and connects wirelessly to the mobile phone 12. A wireless sound signal 19 is wirelessly transmitted from the mobile phone 12 to the hearing aid device 10 using the transmitter unit 28 of the mobile phone 12 and the wireless sound input 18 of the hearing aid device 10. The wireless sound signal 19 is provided to the speaker 24 of the hearing aid device 10, which generates an output sound 48.
[0068] The voice activity detection (VAD) algorithm or voice activity detection (VAD) unit 42 allows for adapting the user voice, i.e., own voice, retrieval system. The VAD 42 task in this particular situation is rather simple, as a user voice signal 44 is likely absent when a wireless sound signal 19 (having a certain signal content) is received by the wireless sound input 18. When the VAD 42 detects no user voice in the electrical sound signals 35 while the wireless sound input 18 receives a wireless sound signal 19, a noise power spectral density (PSD) used in the single channel noise reduction unit 40 for reducing noise in the electrical sound signal 35 is updated (because it is assumed that the user is silent while listening to a remote talker, and hence ambient sounds picked up by the microphone(s) of the hearing aid device can be considered as noise in the present situation). The look vector in the beamforming algorithm or beamformer unit 38 can be updated as well. When the VAD 42 detects a user voice, the beamformer's spatial direction, i.e., the look vector, is (or may be) updated. This allows the beamformer 38 to compensate for the variation (deviation) of the hearing aid user's head characteristics from a standard dummy head 56.
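The noise PSD update described here can be sketched as a first-order recursive average that only runs while the far-end talker is active and the own voice is absent; the smoothing factor `alpha` and the function name are illustrative, not taken from the patent:

```python
import numpy as np

def update_noise_psd(noise_psd, mic_spectrum, own_voice_active,
                     wireless_active, alpha=0.9):
    """Recursive noise PSD update per [0068]: while a wireless sound
    signal 19 is received and no own voice is detected, the ambient
    microphone pickup is treated as noise and averaged into the PSD."""
    if wireless_active and not own_voice_active:
        frame_psd = np.abs(mic_spectrum) ** 2        # per-bin periodogram
        noise_psd = alpha * noise_psd + (1.0 - alpha) * frame_psd
    return noise_psd
```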
[0070] The microphones 14 and 14′ receive incoming sound 34 and generate electrical sound signals 35. The hearing aid device 10′ has more than one signal transmission path to process the electrical sound signals 35 received by the microphones 14 and 14′. A first transmission path provides the electrical sound signals 35 as received by the microphones 14 and 14′ to the voice activity detection unit 42, corresponding to the mode of operation presented above.
[0071] A second transmission path provides the electrical sound signals 35 as received by the microphones 14 and 14′ to the beamformer 38. The beamformer 38 suppresses spatial directions in the electrical sound signals 35 using the predetermined spatial direction parameters, i.e., the look vector, to generate a spatial sound signal 39. The spatial sound signal 39 is provided to the voice activity detection unit 42 and the single channel noise reduction unit 40. The voice activity detection unit 42 determines whether a voice signal is present in the spatial sound signal 39. If a voice signal is present in the spatial sound signal 39, the voice activity detection unit 42 transmits a voice-detected signal to the single channel noise reduction unit 40; if no voice signal is present, it transmits a no-voice-detected signal (cf. the dashed arrow from VAD 42 to Single-Channel Noise Reduction 40).
[0072] In a normal listening mode, the environment sound picked up by the microphones 14 and 14′ may be processed by a beamformer and noise reduction system (but with other parameters, e.g., another look vector not aiming at the user's mouth, e.g., an adaptively determined look vector depending on the current sound field around the user/hearing aid device) and further processed in a signal processing unit (electric circuitry 16) before being presented to the user via an output transducer (e.g., speaker 24).
[0073] In the following, the dedicated beamformer-noise-reduction-system 36 comprising the beamformer 38 and the single channel noise reduction unit 40 is described in more detail. The beamformer 38, the single channel noise reduction unit 40, and the voice activity detection unit 42 are in the following considered to be algorithms stored in the memory 52 and executed on the electric circuitry 16.
[0074] The beamformer 38 can, for example, be a generalized sidelobe canceller (GSC), a minimum variance distortionless response (MVDR) beamformer, a fixed look vector beamformer, a dynamic look vector beamformer, or any other beamformer type known to a person skilled in the art.
[0075] A so-called minimum variance distortionless response (MVDR) beamformer 38, see, e.g., [Kjems & Jensen; 2012] or [Haykin; 1996] (S. Haykin, "Adaptive Filter Theory," Third Edition, Prentice Hall International Inc., 1996), can generally be described by the MVDR beamformer weight vector $w_{H}(k)$ as follows:

$$w_{H}(k) = \frac{\hat{R}_{VV}^{-1}(k)\,\hat{d}(k)}{\hat{d}^{H}(k)\,\hat{R}_{VV}^{-1}(k)\,\hat{d}(k)}\;\hat{d}^{*}_{i_{ref}}(k),$$

[0076] where $\hat{R}_{VV}(k)$ is (an estimate of) the inter-microphone noise covariance matrix for the current acoustic environment, $\hat{d}(k)$ is the estimated look vector (representing the inter-microphone transfer function for a target sound source at a given location), $k$ is a frequency index and $i_{ref}$ is the index of a reference microphone ($*$ denotes complex conjugation and $^{H}$ Hermitian transposition). It can be shown that this beamformer 38 minimizes the noise power in its output, i.e., the spatial sound signal 39, under the constraint that a target sound component, i.e., the voice of the user 46, is unchanged, see, e.g., [Haykin; 1996]. The look vector $d$ represents the ratio of transfer functions corresponding to the direct part, i.e., the first 20 ms, of the room impulse responses from the target sound source 58, e.g., the mouth of the user 46, to the microphones 14 and 14′.
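For illustration, the MVDR weight computation for a single frequency bin might look as follows in Python; the diagonal loading term is an added numerical safeguard, not part of the formula above:

```python
import numpy as np

def mvdr_weights(R_vv, d, i_ref=0, diag_load=1e-6):
    """MVDR weights for one frequency bin k:
        w = (R_vv^{-1} d) / (d^H R_vv^{-1} d) * conj(d[i_ref]).
    R_vv: (M, M) noise covariance estimate; d: (M,) look vector."""
    M = R_vv.shape[0]
    R_inv = np.linalg.inv(R_vv + diag_load * np.eye(M))  # regularized inverse
    num = R_inv @ d
    w = num / (d.conj() @ num)     # denominator d^H R^{-1} d is real and positive
    return w * np.conj(d[i_ref])   # distortionless w.r.t. the reference microphone
```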
[0077] A second embodiment of the beamformer 38 is a fixed look vector beamformer 38. A fixed look vector beamformer 38 from a user's mouth, i.e., the target sound source 58, to the microphones 14 and 14′ of the hearing aid device 10 can, e.g., be implemented by determining a fixed look vector $d = d_{0}$, e.g., using an artificial dummy head 56, and estimating the target covariance matrix over $N$ time frames as

$$\hat{R}_{ss}(k) = \frac{1}{N}\sum_{n=1}^{N} s(n,k)\,s^{H}(n,k),$$
[0078] where $s(n,k) = [s(n,k,1)\; s(n,k,2)]^{T}$ and $s(n,k,m)$ is the output of an analysis filter bank for microphone $m$ at time frame $n$ and frequency index $k$. For a true point sound source, the signal impinging on the microphones 14 and 14′ (or on a microphone array) would be of the form $s(n,k) = s(n,k)\,d(k)$, such that (assuming the signal $s(n,k)$ is stationary) the theoretical target covariance matrix $R_{ss}(k) = E[s(n,k)\,s^{H}(n,k)]$ would be of the form

$$R_{ss}(k) = \phi_{ss}(k)\,d(k)\,d^{H}(k),$$
[0079] where $\phi_{ss}(k)$ is the power spectral density of the target sound signal, i.e., the voice of the user 46 coming from the target sound source 58 (the user voice signal 44), observed at the reference microphone 14. Therefore, the eigenvector of $R_{ss}(k)$ corresponding to the non-zero eigenvalue is proportional to $d(k)$. Hence, the look vector estimate $\hat{d}(k)$, e.g., the relative target-sound-source-to-microphone (mouth-to-ear) transfer function $\hat{d}_{0}(k)$, is defined as the eigenvector corresponding to the largest eigenvalue of the estimated target covariance matrix $\hat{R}_{ss}(k)$. In an embodiment, the look vector is normalized to unit length, that is,

$$\hat{d}(k) \leftarrow \frac{\hat{d}(k)}{\|\hat{d}(k)\|},$$

[0080] such that $\|d\|^{2} = 1$. The look vector estimate $\hat{d}(k)$ thus encodes the physical direction and distance of the target sound source 58; it is therefore also called the look direction. The fixed, predetermined look vector estimate $\hat{d}_{0}(k)$ can now be combined with an estimate of the inter-microphone noise covariance matrix $\hat{R}_{VV}(k)$ to find the MVDR beamformer weights (see above).
[0081] In a third embodiment, the look vector can be dynamically determined and updated by a dynamic look vector beamformer 38. This is desirable in order to take into account physical characteristics of the user 46 which differ from those of the dummy head 56, e.g., head shape, head symmetry, or other physical characteristics. Instead of using a fixed look vector $d_{0}$ as determined with the artificial dummy head 56, e.g., a HATS, the look vector can be estimated and updated while the hearing aid device 10 is in use, e.g., whenever the voice activity detection unit 42 detects that the user's own voice is dominant.
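A sketch of the look vector estimation of paragraphs [0078] to [0081]: the dominant eigenvector of a (here recursively estimated) target covariance matrix, normalized to unit length. The function names and the smoothing constant are illustrative:

```python
import numpy as np

def update_target_covariance(R_ss, s_frame, alpha=0.95):
    """Recursive estimate of R_ss(k) from own-voice-dominant STFT frames
    s_frame of shape (M,), cf. the dynamic look vector of [0081]."""
    return alpha * R_ss + (1.0 - alpha) * np.outer(s_frame, s_frame.conj())

def estimate_look_vector(R_ss):
    """Look vector per [0079]-[0080]: the eigenvector of the target
    covariance estimate belonging to the largest eigenvalue,
    normalized so that ||d|| = 1."""
    eigvals, eigvecs = np.linalg.eigh(R_ss)    # Hermitian eigendecomposition
    d_hat = eigvecs[:, np.argmax(eigvals)]     # dominant eigenvector
    return d_hat / np.linalg.norm(d_hat)
```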
[0082] The beamformer 38 provides an enhanced target sound signal (here focusing on the user's own voice) comprising the clean target sound signal, i.e., the user voice signal 44 (e.g., because of the distortionless property of the MVDR beamformer 38), and additive residual noise which the beamformer 38 was unable to completely suppress. This residual noise can be further suppressed in a single-channel post-filtering step using the single channel noise reduction unit 40 or a single channel noise reduction algorithm executed on the electric circuitry 16. Most single channel noise reduction algorithms suppress time-frequency regions where the target-signal-to-residual-noise ratio (SNR) is low, while leaving high-SNR regions unchanged; hence an estimate of this SNR is needed. The power spectral density (PSD) $\sigma_{w}^{2}(k,m)$ of the noise entering the single-channel noise reduction unit 40 can be expressed as
$$\sigma_{w}^{2}(k,m) = w^{H}(k,m)\,\hat{R}_{VV}\,w(k,m).$$
[0083] Given this noise PSD estimate, the PSD of the target sound signal, i.e., user voice signal 44, can be estimated as
$$\hat{\sigma}_{s}^{2}(k,m) = \sigma_{x}^{2}(k,m) - \hat{\sigma}_{w}^{2}(k,m),$$

where $\sigma_{x}^{2}(k,m)$ is the PSD of the noisy spatial sound signal 39 entering the single channel noise reduction unit 40.
[0084] The ratio of $\hat{\sigma}_{s}^{2}(k,m)$ to $\hat{\sigma}_{w}^{2}(k,m)$ forms an estimate of the SNR at a particular time-frequency point. This SNR estimate can be used to find the gain of the single channel noise reduction unit 40, e.g., a Wiener filter, an MMSE-STSA optimal gain, or the like; see, e.g., P. C. Loizou, "Speech Enhancement: Theory and Practice," Second Edition, CRC Press, 2013, and the references therein.
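Putting the post-filter equations together, a per-bin gain computation might be sketched as follows; the Wiener rule and the gain floor `g_min` are one of several choices the text allows (an MMSE-STSA gain would be another), and the names are illustrative:

```python
import numpy as np

def postfilter_gain(sigma_x2, w, R_vv, g_min=0.1):
    """Per-bin single-channel post-filter gain per [0082]-[0084]:
    sigma_w^2 = w^H R_vv w (residual noise PSD at the beamformer output),
    sigma_s^2 = sigma_x^2 - sigma_w^2, gain = SNR / (1 + SNR) (Wiener rule),
    floored at g_min to limit musical noise."""
    sigma_w2 = float(np.real(w.conj() @ R_vv @ w))
    sigma_s2 = max(sigma_x2 - sigma_w2, 0.0)   # clamp negative PSD estimates
    snr = sigma_s2 / max(sigma_w2, 1e-12)
    return max(snr / (1.0 + snr), g_min)
```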
[0085] The described own-voice beamformer estimates the clean own-voice signal as observed by one of the microphones. To a far-end listener, however, the voice signal as measured at the mouth of the hearing aid user may be of greater interest. Obviously, there is no microphone located at the mouth, but since the acoustical transfer function from mouth to microphone is roughly stationary, it is possible to apply a compensation (passing the current output signal through a linear time-invariant filter) which emulates the transfer function from microphone to mouth.
[0091] The first processing scheme 130 comprises steps 140 and 150:
[0092] 140: using the electrical sound signals 35 to update a noise signal representing noise, used for noise reduction;
[0093] 150: using the noise signal to update values of the predetermined spatial direction parameters.
[0094] (In an embodiment, steps 140 and 150 are combined to update an inter-microphone noise-only covariance matrix.)
[0095] The second processing scheme 160 comprises step 170:
[0096] 170: determining whether the electrical sound signals 35 comprise a voice signal representing voice, activating the first processing scheme 130 if a voice signal is absent in the electrical sound signals 35, and activating a noise reduction scheme 180 if the electrical sound signals 35 comprise a voice signal.
[0097] The noise reduction scheme 180 comprises steps 190 and 200:
[0098] 190: using the electrical sound signals 35 to update the values of the predetermined spatial direction parameters (if near-end speech is dominant, update the estimate of the own-voice inter-microphone covariance matrix and then find, e.g., the dominant eigenvector, i.e., the (relative) transfer function from source to microphone(s));
[0099] 200: retrieving a user voice signal 44 representing the user voice from the electrical sound signals 35. Preferably, a spatial sound signal 39 representing spatial sound is generated from the electrical sound signals 35 using the predetermined spatial direction parameters, and a user voice signal 44 is generated from the spatial sound signal 39 using, e.g., the noise signal to reduce noise in the spatial sound signal 39.
[0100] Optionally, the user voice signal can be transmitted to, e.g., a communication device such as a mobile phone 12 wirelessly connected to the hearing aid device 10. The method can be performed continuously by starting again at step 100 after step 150 or step 200; a minimal sketch of the resulting frame loop follows below.
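A compact frame loop combining the two processing schemes, reusing the `mvdr_weights`, `update_target_covariance`, `estimate_look_vector` and `postfilter_gain` helpers sketched above; the class structure and smoothing constant are illustrative, not prescribed by the patent:

```python
import numpy as np

class OwnVoiceRetriever:
    """Frame loop for the method of [0091]-[0100]; one instance
    per frequency bin k, with M microphone channels."""

    def __init__(self, M, alpha=0.95):
        self.R_vv = np.eye(M, dtype=complex)  # noise covariance, steps 140/150
        self.R_ss = np.eye(M, dtype=complex)  # own-voice covariance, step 190
        self.alpha = alpha

    def process(self, x, voice_present):
        """x: (M,) STFT frame; returns the user voice estimate or None."""
        if voice_present:                                  # noise reduction scheme 180
            self.R_ss = update_target_covariance(self.R_ss, x, self.alpha)
            d = estimate_look_vector(self.R_ss)            # step 190
            w = mvdr_weights(self.R_vv, d)
            spatial = w.conj() @ x                         # spatial sound signal 39
            g = postfilter_gain(np.abs(spatial) ** 2, w, self.R_vv)
            return g * spatial                             # user voice signal 44, step 200
        # first processing scheme 130: update noise statistics (steps 140/150)
        self.R_vv = self.alpha * self.R_vv + \
            (1.0 - self.alpha) * np.outer(x, x.conj())
        return None
```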
[0108] Additionally, the beamformer 38 can be an adaptive beamformer 38. In this case the method is used for training the hearing aid device 10 as an own-voice detector, and the method further comprises the following step:
[0109] 270: if a voice signal is present in the spatial sound signal 39, determine an estimate of the user-voice inter-environment-sound-input covariance matrix and the eigenvector corresponding to the dominant eigenvalue of that covariance matrix. This eigenvector is the look vector. The look vector is then applied to the adaptive beamformer 38 to improve the spatial direction of the adaptive beamformer 38. The adaptive beamformer 38 is used to determine a new spatial sound signal 39. In this embodiment the sound 34 is obtained continuously. The electrical sound signal 35 can be sampled or supplied as a continuous electrical sound signal 35 to the beamformer 38.
[0110] The beamformer 38 can be an algorithm performed on the electric circuitry 16 or a unit in the hearing aid device 10. The method can also be performed independently of the hearing aid device, on any other suitable device. The method can be performed iteratively, e.g., by starting again at step 210 after performing step 270.
[0111] In the above examples, the hearing aid device(s) communicate(s) directly with a mobile phone. Other embodiments, in which the hearing aid device(s) communicate(s) with the mobile phone via an intermediate device, are also intended to be within the scope of the accompanying claims. The user advantage is that, whereas today the mobile phone or the intermediate device must be held in a hand or worn on a string around the neck so that its microphone is just below the mouth, with the proposed invention the mobile phone and/or the intermediate device may be covered by clothes or carried in a pocket. This is convenient and has the benefit that the user does not need to reveal that he or she wears a hearing aid device.
[0112] In the above examples, the processing (electric circuitry 16) of the input sound signals (from the microphone(s) and the wireless receiver) is generally assumed to be located in the hearing aid device. In case of sufficient available bandwidth for transmitting audio signals back and forth, such processing (e.g., including beamforming and noise reduction) may be located in an external device, e.g., an intermediate device or a mobile telephone. Thereby power and space can be saved in the hearing aid device, both of which are typically limited in a state-of-the-art hearing aid device.
REFERENCE SIGNS
[0113] 10 hearing aid device
[0114] 12 mobile phone
[0115] 14 microphone
[0116] 16 electric circuitry
[0117] 18 wireless sound input
[0118] 19 wireless sound signal
[0119] 20 transmitter unit
[0120] 22 antenna
[0121] 24 speaker
[0122] 26 antenna
[0123] 28 transmitter unit
[0124] 30 receiver unit
[0125] 32 interface to public telephone network
[0126] 34 incoming sound
[0127] 35 electrical sound signal representing sound
[0128] 36 dedicated beamformer-noise-reduction-system
[0129] 38 beamformer
[0130] 39 spatial sound signal
[0131] 40 single channel noise reduction unit
[0132] 42 voice activity detection unit
[0133] 44 user voice signal
[0134] 46 user
[0135] 48 output sound
[0136] 50 switch
[0137] 52 memory
[0138] 54 dummy head model system
[0139] 56 dummy head
[0140] 58 target sound source
[0141] 60 training voice signal