Hearing system containing a hearing instrument and a method for operating the hearing instrument

11510018 · 2022-11-22

Abstract

A hearing system contains a hearing instrument and the hearing instrument is configured to support the hearing of a hearing-impaired user. The hearing instrument is operated via an operating method. The method includes capturing a sound signal from an environment of the hearing instrument, processing the captured sound signal to at least partially compensate the hearing-impairment of the user and outputting the processed sound signal to the user. The captured sound signal is analyzed to recognize speech intervals, in which the captured sound signal contains speech. During recognized speech intervals, at least one time derivative of an amplitude and/or a pitch of the captured sound signal is determined. The amplitude of the processed sound signal is temporarily increased, if the at least one derivative fulfills a predefined criterion.

Claims

1. A method for operating a hearing instrument configured to support hearing of a hearing-impaired user, which comprises the steps of: capturing a sound signal from an environment of the hearing instrument; processing a captured sound signal to at least partially compensate a hearing-impairment of the hearing-impaired user; analyzing the captured sound signal to recognize speech intervals, in which the captured sound signal contains speech; determining, during recognized speech intervals, at least one time derivative of an amplitude and/or a pitch of the captured sound signal; temporarily increasing the amplitude of a processed sound signal, if the at least one derivative fulfills a predefined criterion; and outputting the processed sound signal to the hearing-impaired user.

2. The method according to claim 1, which further comprises increasing the amplitude of the processed sound signal for a predefined time interval, if the at least one derivative fulfills the predefined criterion.

3. The method according to claim 2, wherein within the predefined time interval, the amplitude of the processed sound signal is continuously increased and/or continuously decreased.

4. The method according to claim 1, wherein according to the predefined criterion, the amplitude of the processed sound signal is temporarily increased if the at least one derivative exceeds a predefined threshold or is within a predefined range.

5. The method according to claim 1, wherein the at least one derivative is a time-averaged derivative of the amplitude and/or the pitch of the captured sound signal.

6. The method according to claim 1, wherein the at least one derivative has a first derivative.

7. The method according to claim 6, wherein the at least one derivative further has at least one higher order derivative.

8. The method according to claim 7, wherein: according to the predefined criterion, the amplitude of the processed sound signal is temporarily increased if the at least one first derivative exceeds a predefined threshold or is within a predefined range; and the predefined threshold or the predefined range is varied in dependence of the higher order derivative.

9. The method according to claim 1, wherein the amplitude of the processed sound signal is temporarily increased by an amount that is varied in dependence of the at least one derivative.

10. The method according to claim 1, wherein: recognized speech intervals are differentiated into own-voice intervals, in which the hearing-impaired user speaks, and foreign-voice intervals, in which at least one different speaker speaks; and the step of temporarily increasing the amplitude of the processed sound signal is only performed during the foreign-voice intervals.

11. A hearing instrument of a hearing system configured to support a hearing of a hearing-impaired user, the hearing instrument comprising: an input transducer disposed to capture a sound signal from an environment of the hearing instrument; a signal processor disposed to process a captured sound signal to at least partially compensate a hearing-impairment of the hearing-impaired user; an output transducer disposed to emit a processed sound signal to the user; a voice recognition unit configured to analyze the captured sound signal to recognize speech intervals, in which the captured sound signal contains speech; a derivation unit configured to determine, during recognized speech intervals, at least one time derivative of an amplitude and/or a pitch of the captured sound signal; and a speech enhancement unit configured to temporarily increase the amplitude of the processed sound signal, if the at least one derivative fulfills a predefined criterion to enhance speech accents.

12. The hearing system according to claim 11, wherein said speech enhancement unit is configured to increase the amplitude of the processed sound signal for a predefined time interval if the at least one derivative fulfills the predefined criterion.

13. The hearing system according to claim 12, wherein said speech enhancement unit is configured to continuously increase and/or continuously decrease the amplitude of the processed sound signal within the predefined time interval.

14. The hearing system according to claim 11, wherein said speech enhancement unit is configured to temporarily increase the amplitude of the processed sound signal, according to the predefined criterion, if the at least one derivative exceeds a predefined threshold or is within a predefined range.

15. The hearing system according to claim 11, wherein the at least one derivative is a time-averaged derivative of the amplitude and/or the pitch.

16. The hearing system according to claim 11, wherein the at least one derivative contains a first derivative.

17. The hearing system according to claim 16, wherein the at least one derivative has at least one higher order derivative.

18. The hearing system according to claim 17, wherein said speech enhancement unit is configured to: temporarily increase the amplitude of the processed sound signal, according to the predefined criterion, if the first derivative exceeds a predefined threshold or is within a predefined range; and vary the predefined threshold or the predefined range in dependence on the higher order derivative.

19. The hearing system according to claim 11, wherein said speech enhancement unit is configured to temporarily increase the amplitude of the processed sound signal by an amount that is varied in dependence on the at least one derivative.

20. The hearing system according to claim 11, wherein: said voice recognition unit is configured to differentiate recognized speech intervals into own-voice intervals, in which the hearing-impaired user speaks, and foreign-voice intervals, in which at least one different speaker speaks; and said speech enhancement unit temporarily increases the amplitude of the processed sound signal during the foreign-voice intervals only.

Description

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

(1) FIG. 1 is a schematic representation of a hearing system containing a hearing aid (i.e. a hearing instrument to be worn in or at the ear of a user), the hearing aid containing an input transducer arranged to capture a sound signal from an environment of the hearing aid, a signal processor arranged to process the captured sound signal, and an output transducer arranged to emit the processed sound signal to the user;

(2) FIG. 2 is a flow chart of a method for operating the hearing aid of FIG. 1, the method containing, in a speech enhancement step, temporarily applying a gain and, thus, temporarily increasing the amplitude of the processed sound signal to enhance speech accents of a foreign-voice speech in the captured sound signal;

(3) FIG. 3 is a flow chart of a first embodiment of a method step for recognizing speech accents, which method step is a part of the speech enhancement step of the method according to FIG. 2;

(4) FIG. 4 is a flow chart of a second embodiment of the method step for recognizing speech accents;

(5) FIGS. 5 to 7 are graphs showing an amplitude of the processed sound signal over time in three different variants of temporarily increasing the amplitude of the processed sound signal; and

(6) FIG. 8 is a schematic representation of a hearing system containing a hearing aid according to FIG. 1 and a software application for controlling and programming the hearing aid, the software application being installed on a mobile phone.

DETAILED DESCRIPTION OF THE INVENTION

(7) Like reference numerals indicate like parts, structures and elements unless otherwise indicated.

(8) Referring now to the figures of the drawings in detail and first, particularly to FIG. 1 thereof, there is shown a hearing system 2 containing a hearing aid 4, i.e. a hearing instrument configured to support the hearing of a hearing-impaired user and to be worn in or at one of the ears of the user. As shown in FIG. 1, by way of example, the hearing aid 4 may be configured as a Behind-The-Ear (BTE) hearing aid. Optionally, the system 2 contains a second hearing aid (not shown) to be worn in or at the other ear of the user to provide binaural support to the user.

(9) The hearing aid 4 contains, inside a housing 5, two microphones 6 as input transducers and a receiver 8 as output transducer. The hearing aid 4 further contains a battery 10 and a signal processor 12. Preferably, the signal processor 12 contains both a programmable sub-unit (such as a microprocessor) and a non-programmable sub-unit (such as an ASIC). The signal processor 12 includes a voice recognition unit 14, that contains a voice activity detection (VAD) module 16 and an own voice detection (OVD) module 18. By preference, both modules 16 and 18 are configured as software components being installed in the signal processor 12.

(10) The signal processor 12 is powered by the battery 10, i.e. the battery 10 provides an electrical supply voltage U to the signal processor 12.

(11) During normal operation of the hearing aid 4, the microphones 6 capture a sound signal from an environment of the hearing aid 4. The microphones 6 convert the sound into an input audio signal I containing information on the captured sound. The input audio signal I is fed to the signal processor 12. The signal processor 12 processes the input audio signal I, e.g. to provide directed sound information (beam-forming), to perform noise reduction and dynamic compression, and to individually amplify different spectral portions of the input audio signal I based on audiogram data of the user to compensate for the user-specific hearing loss. The signal processor 12 emits an output audio signal O containing information on the processed sound to the receiver 8. The receiver 8 converts the output audio signal O into processed air-borne sound that is emitted into the ear canal of the user, via a sound channel 20 connecting the receiver 8 to a tip 22 of the housing 5 and a flexible sound tube (not shown) connecting the tip 22 to an ear piece inserted in the ear canal of the user.

(12) The VAD module 16 generally detects the presence of voice (independent of a specific speaker) in the input audio signal I, whereas the OVD module 18 specifically detects the presence of the user's own voice. By preference, the modules 16 and 18 apply VAD and OVD technologies that are known as such in the art, e.g. from U.S. patent publication 2013/0148829 A1 or international patent disclosure WO 2016/078786 A1. By analyzing the input audio signal I (and, thus, the captured sound signal), the VAD module 16 and the OVD module 18 recognize speech intervals, in which the input audio signal I contains speech, which speech intervals are distinguished (subdivided) into own-voice intervals, in which the user speaks, and foreign-voice intervals, in which at least one different speaker speaks.

(13) Furthermore, the hearing system 2 contains a derivation unit 24 and a speech enhancement unit 26. The derivation unit 24 is configured to derive a pitch P (i.e. the fundamental frequency) of the captured sound signal from the input audio signal I as a time-dependent variable. The derivation unit 24 is further configured to apply a moving average to the measured values of the pitch P, e.g. applying a time constant (i.e. size of the time window used for averaging) of 15 msec, and to derive the first (time) derivative D1 and the second (time) derivative D2 of the time-averaged values of the pitch P.

(14) For example, in a simple yet effective implementation, a periodic time series of time-averaged values of the pitch P is given by . . . , AP[n−2], AP[n−1], AP[n], . . . , where AP[n] is a current value, and AP[n−2] and AP[n−1] are previously determined values. Then, a current value D1[n] and a previous value D1[n−1] of the first derivative D1 may be determined as
D1[n] = AP[n] − AP[n−1] = D1,  a)
D1[n−1] = AP[n−1] − AP[n−2],  b)
and a current value D2[n] of the second derivative D2 may be determined as
D2[n] = D1[n] − D1[n−1] = D2.  c)
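
For illustration, the recursion of equations a) to c) can be sketched in Python; the function names, the list-based buffering and the handling of the averaging window are assumptions made for the sketch and are not part of the disclosure:

```python
from collections import deque

def moving_average(values, window):
    """Simple moving average over the last `window` samples
    (the disclosure specifies only the window size, e.g. 15 msec)."""
    buf = deque(maxlen=window)
    averaged = []
    for v in values:
        buf.append(v)
        averaged.append(sum(buf) / len(buf))
    return averaged

def derivatives(ap):
    """First and second discrete derivatives of the time-averaged
    pitch AP, per equations a) to c):
    D1[n] = AP[n] - AP[n-1], D2[n] = D1[n] - D1[n-1]."""
    d1 = [ap[n] - ap[n - 1] for n in range(1, len(ap))]
    d2 = [d1[n] - d1[n - 1] for n in range(1, len(d1))]
    return d1, d2
```

For example, the time-averaged pitch series 100, 102, 106, 106 yields D1 = 2, 4, 0 and D2 = 2, −4.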

(15) The speech enhancement unit 26 is configured to analyze the derivatives D1 and D2 with respect to a criterion subsequently described in more detail in order to recognize speech accents in the input audio signal I (and, thus, the captured sound signal). Furthermore, the speech enhancement unit 26 is configured to temporarily apply an additional gain G and, thus, increase the amplitude of the processed sound signal O, if the derivatives D1 and D2 fulfill the criterion (being indicative of a speech accent).

(16) By preference, both the derivation unit 24 and the speech enhancement unit 26 are configured as software components being installed in the signal processor 12.

(17) During normal operation of the hearing aid 4, the voice recognition unit 14, i.e. the VAD module 16 and the OVD module 18, the derivation unit 24 and the speech enhancement unit 26 interact to execute a method illustrated in FIG. 2.

(18) In a first step 30 of the method, the voice recognition unit 14 analyzes the input audio signal I for foreign voice intervals, i.e. it checks whether the VAD module 16 returns a positive result (indicative of the detection of speech in the input audio signal I), while the OVD module 18 returns a negative result (indicative of the absence of the own voice of the user in the input audio signal I).

(19) If a foreign voice interval is recognized (Y), the voice recognition unit 14 triggers the derivation unit 24 to execute a next step 32. Otherwise (N), step 30 is repeated.

(20) In step 32, the derivation unit 24 derives the pitch P of the captured sound from the input audio signal I and applies time averaging to the pitch P as described above. In a subsequent step 34, the derivation unit 24 derives the first derivative D1 and the second derivative D2 of the time-averaged values of the pitch P. Thereafter, the derivation unit 24 triggers the speech enhancement unit 26 to perform a speech enhancement step 36 which, in the example shown in FIG. 2, is subdivided into two steps 38 and 40.

(21) In the step 38, the speech enhancement unit 26 analyzes the derivatives D1 and D2 as mentioned above to recognize speech accents. If a speech accent is recognized (Y) the speech enhancement unit 26 proceeds to step 40. Otherwise (N), i.e. if no speech accent is recognized, the speech enhancement unit 26 triggers the voice recognition unit 14 to execute step 30 again.

(22) In step 40, the speech enhancement unit 26 temporarily applies the additional gain G to the processed sound signal. Thus, for a predefined time interval (called enhancement interval TE), the amplitude of the processed sound signal O is increased, thus enhancing the recognized speech accent. After expiration of enhancement interval TE, the gain G is reduced to 1 (0 dB). Subsequently, the speech enhancement unit 26 triggers the voice recognition unit 14 to execute step 30 and, thus, the method of FIG. 2 again.

(23) FIGS. 3 and 4 show in more detail two alternative embodiments of the accent recognition step 38 of the method of FIG. 2. For both embodiments, the before-mentioned criterion for recognizing speech accents involves a comparison of the first derivative D1 of the time-averaged pitch P with a (first) threshold T1 which comparison is further influenced by the second derivative D2.

(24) In the first embodiment, according to FIG. 3, the threshold T1 is offset (varied) in dependence of the second derivative D2. To this end, in a step 42, the speech enhancement unit 26 compares the second derivative D2 with a (second) threshold T2. If the second derivative D2 exceeds the threshold T2 (Y), the speech enhancement unit 26 sets the threshold T1 to a lower one of two pre-defined values (step 44). Otherwise (N), i.e. if the second derivative D2 does not exceed the threshold T2, the speech enhancement unit 26 sets the threshold T1 to the higher one of said two pre-defined values (step 46).

(25) In a subsequent step 48, the speech enhancement unit 26 checks whether the first derivative D1 exceeds the threshold T1 (D1>T1?). If so (Y), the speech enhancement unit 26 proceeds to step 40, as previously described with respect to FIG. 2. Otherwise (N), as also described with respect to FIG. 2, the speech enhancement unit 26 triggers the voice recognition unit 14 to execute step 30 again.
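
The accent recognition of steps 42 to 48 can be sketched as follows, assuming scalar inputs per analysis frame; the function and parameter names are illustrative and not part of the disclosure:

```python
def recognize_accent_v1(d1, d2, t2, t1_low, t1_high):
    """First embodiment (FIG. 3): the threshold T1 applied to the first
    derivative D1 is set to the lower of two pre-defined values if the
    second derivative D2 exceeds T2 (steps 42/44/46); otherwise the
    higher value is used. Step 48 then tests D1 > T1."""
    t1 = t1_low if d2 > t2 else t1_high  # steps 42, 44, 46
    return d1 > t1                        # step 48
```

A rapidly accelerating pitch rise (large D2) thus lowers the hurdle for the first derivative D1, making accent detection more sensitive.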

(26) In the second embodiment, according to FIG. 4, the first derivative D1 is weighted with a variable weight factor W which is determined in dependence of the second derivative D2. To this end, in a step 50, the speech enhancement unit 26 determines the weight factor W as a function of the second derivative D2. For example, W is set to a positive value W0 (W=W0 with W0>1) if D2 exceeds the threshold T2 whereas, otherwise, W is set to 1 (W=1).

(27) In a step 52, the speech enhancement unit 26 multiplies the first derivative D1 with the weight factor W (D1→W·D1).

(28) Subsequently, in a step 54, the speech enhancement unit 26 checks whether the weighted first derivative D1, i.e. the product W·D1, exceeds the threshold T1 (W·D1>T1?). If so (Y), the speech enhancement unit 26 proceeds to step 40, as previously described with respect to FIG. 2. Otherwise (N), as also described with respect to FIG. 2, the speech enhancement unit 26 triggers the voice recognition unit 14 to execute step 30 again.
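
Steps 50 to 54 of this second embodiment can likewise be sketched; scalar inputs and the parameter names are illustrative assumptions:

```python
def recognize_accent_v2(d1, d2, t1, t2, w0):
    """Second embodiment (FIG. 4): D1 is weighted with W, where
    W = W0 (with W0 > 1) if D2 exceeds T2 and W = 1 otherwise
    (step 50), and the product W*D1 is compared against a fixed
    threshold T1 (steps 52 and 54)."""
    w = w0 if d2 > t2 else 1.0  # step 50
    return w * d1 > t1          # steps 52 and 54
```

In contrast to the first embodiment, the threshold T1 stays fixed here, and the second derivative D2 instead scales the quantity being tested.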

(29) FIGS. 5 to 7 show three diagrams of the gain G over time t. Each diagram shows a different example of how to temporarily apply the gain G in step 40 and, thus, to increase the amplitude of the output audio signal O for the enhancement interval TE.

(30) In a first example according to FIG. 5, the speech enhancement unit 26 increases the gain G step-wise (i.e. as a binary function of time t). If, in step 38, a speech accent is recognized, the gain G is set to a positive value G0 exceeding 1 (G=G0 with G0>1). This value G0 is maintained for the whole enhancement interval TE. After expiration of the enhancement interval TE, the gain G is reset to a constant value of 1 (G=1). The value G0 may be predefined as a constant. Alternatively, the value G0 may be varied in dependence of the first derivative D1 or the second derivative D2. For example, the value G0 may be proportional to the first derivative D1 (and, thus, increase/decrease with increasing/decreasing value of the derivative D1).

(31) In a second example according to FIG. 6, if a speech accent is recognized, the gain G is step-wise (abruptly) set to the positive value G0. Thereafter, it is continuously decreased (having a linear or non-linear dependence of time) to reach G=1 at the end of the enhancement interval TE.

(32) In a third example according to FIG. 7, if a speech accent is recognized, the gain G is continuously increased and, thereafter, continuously decreased to reach G=1 at the end of the enhancement interval TE.
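
The three gain curves of FIGS. 5 to 7 can, for an enhancement interval TE discretized into a number of samples, be sketched as follows; the profile names, the sample count and the linear interpolation are illustrative assumptions:

```python
def gain_envelope(profile, g0, num_samples):
    """Gain G over the enhancement interval TE.
    "step":  constant G0, reset to 1 afterwards (FIG. 5);
    "decay": jump to G0, then continuous decrease to 1 (FIG. 6);
    "ramp":  continuous increase to G0, then decrease to 1 (FIG. 7)."""
    if profile == "step":
        return [g0] * num_samples
    if profile == "decay":
        step = (g0 - 1.0) / (num_samples - 1)
        return [g0 - n * step for n in range(num_samples)]
    if profile == "ramp":
        half = num_samples // 2
        rise = [1.0 + (g0 - 1.0) * n / half for n in range(half + 1)]
        return rise + rise[-2::-1]  # mirror the rise for the decrease
    raise ValueError(profile)
```

In each case the envelope ends at G = 1 (0 dB), matching the reset of the gain after expiration of the enhancement interval TE.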

(33) FIG. 8 shows a further embodiment of the hearing system 2 in which the latter comprises the hearing aid 4 as described before and a software application (subsequently denoted “hearing app” 72), that is installed on a mobile phone 74 of the user. Here, the mobile phone 74 is not a part of the system 2. Instead, it is only used by the system 2 as a resource providing computing power and memory.

(34) The hearing aid 4 and the hearing application 72 exchange data via a wireless link 76, e.g. based on the Bluetooth standard. To this end, the hearing application 72 accesses a wireless transceiver (not shown) of the mobile phone 74, in particular a Bluetooth transceiver, to send data to the hearing aid 4 and to receive data from the hearing aid 4.

(35) In the embodiment according to FIG. 8, some of the elements or functionality of the before-mentioned hearing system 2 are implemented in the hearing application 72. E.g., a functional part of the speech enhancement unit 26 configured to perform step 38 is implemented in the hearing application 72.

(36) It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific examples without departing from the spirit and scope of the invention as broadly described in the claims. The present examples are, therefore, to be considered in all aspects as illustrative and not restrictive.

LIST OF REFERENCES

(37) 2 (hearing) system 4 hearing aid 5 housing 6 microphones 8 receiver 10 battery 12 signal processor 14 voice recognition unit 16 voice activity detection module (VAD module) 18 own voice detection module (OVD module) 20 sound channel 22 tip 24 derivation unit 26 speech enhancement unit 30 step 32 step 34 step 36 step 38 step 40 step 42 step 44 step 46 step 48 step 50 step 52 step 54 step 72 hearing app 74 mobile phone 76 wireless link t time D1 first derivative D2 second derivative G gain G0 value I input audio signal O output audio signal P pitch T1 threshold T2 threshold TE enhancement interval U supply voltage W weight factor W0 value