Method for operating a hearing instrument and a hearing system containing a hearing instrument

11206501 · 2021-12-21

Abstract

A method operates a hearing instrument that is worn in or at the ear of a user. The method includes capturing a sound signal from an environment of the hearing instrument; analyzing the captured sound signal to recognize own-voice intervals, in which the user speaks, and foreign-voice intervals, in which at least one different speaker speaks; and determining, from the recognized own-voice intervals and foreign-voice intervals, at least one turn-taking feature. From the at least one turn-taking feature, a measure of the sound perception by the user is derived. A predefined action for improving the sound perception is taken if the measure or the at least one turn-taking feature fulfills a predefined criterion.

Claims

1. A method for operating a hearing instrument being worn in or at an ear of a user, which comprises the following steps of: capturing a sound signal from an environment of the hearing instrument; analyzing the sound signal captured to recognize own-voice intervals, in which the user speaks, and foreign-voice intervals, in which at least one different speaker speaks; determining, from the own-voice intervals and the foreign-voice intervals, at least one turn-taking feature, the at least one turn-taking feature taking into account a temporal length or a temporal occurrence of overlaps, wherein an overlap is an interval in which both the user and the different speaker speak and which exceeds a predefined threshold; analyzing, during recognized own-voice intervals, the sound signal for at least one of the following acoustic features of an own voice of the user: a voice level; formant frequencies; a pitch frequency; a frequency distribution of the own voice; and a speed of speech; analyzing the sound signal for at least one of the following environmental acoustic features: a sound level of the sound signal; and a signal-to-noise ratio; deriving a measure of sound perception by the user from a combination of: the at least one turn-taking feature; at least one of the acoustic features of the own voice of the user; and at least one of the environmental acoustic features; testing the measure of the sound perception with respect to a predefined criterion indicative of a poor sound perception; and taking a predefined action for improving the sound perception if the predefined criterion is fulfilled by automatically altering at least one parameter of a signal processing of the hearing instrument such that a noise reduction and/or a directionality are increased.

2. The method according to claim 1, wherein the measure of the sound perception is determined based on at least one of the following: predetermined reference values of turn-taking features taken in quiet; audiogram values representing a hearing ability of the user; at least one uncomfortable level of the user; and information concerning an environmental noise sensitivity and/or distractibility of the user.

3. The method according to claim 1, wherein the at least one turn-taking feature further takes into consideration at least one of: a temporal length or a temporal occurrence of turns of the user and/or a temporal length or a temporal occurrence of turns of the different speaker, wherein a turn is a temporal interval in which the user or the different speaker speaks without a pause, while a respective interlocutor is silent; a temporal length or a temporal occurrence of pauses of the user and/or a temporal length or a temporal occurrence of pauses of the different speaker, wherein a pause is an interval without speech separating two consecutive turns of the user or two consecutive turns of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal length or a temporal occurrence of lapses, wherein a lapse is an interval without speech separating a turn of the different speaker and a consecutive turn of the user or a turn of the user and a consecutive turn of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal occurrence of switches, wherein a switch is a transition from a turn of the different speaker to a consecutive turn of the user or from a turn of the user to a consecutive turn of the different speaker within a predefined time interval; and a combination of a plurality of the above-mentioned features.

4. The method according to claim 1, wherein the predefined action for improving the sound perception further comprises automatically creating and outputting a feedback to the user by means of the hearing instrument and/or an electronic communication device linked with the hearing instrument for data exchange, the feedback indicating the poor sound perception and/or suggesting that the user visit an audio care professional.

5. The method according to claim 1, wherein the environmental acoustic features further include: a reverberation time; a number of different speakers; and a direction of at least one of the different speakers.

6. A method for operating a hearing instrument that is worn in or at an ear of a user, which comprises the following steps of: capturing a sound signal from an environment of the hearing instrument; analyzing the sound signal captured to recognize own-voice intervals, in which the user speaks, and foreign-voice intervals, in which a different speaker speaks; determining, from the own-voice intervals and the foreign-voice intervals, at least one turn-taking feature, the at least one turn-taking feature taking into account a temporal length or a temporal occurrence of overlaps, wherein an overlap is an interval in which both the user and the different speaker speak and which exceeds a predefined threshold; analyzing, during recognized own-voice intervals, the sound signal for at least one of the following acoustic features of an own voice of the user: a voice level; formant frequencies; a pitch frequency; a frequency distribution of the voice; and a speed of speech; analyzing the sound signal for at least one of the following environmental acoustic features: a sound level of the sound signal captured; and a signal-to-noise ratio; testing the at least one turn-taking feature with respect to a predefined criterion indicative of a poor sound perception, wherein the predefined criterion is based on a combination of: the at least one turn-taking feature; at least one of the acoustic features of the own voice of the user; and at least one of the environmental acoustic features; and taking a predefined action for improving the poor sound perception if the predefined criterion is fulfilled, wherein the predefined action for improving the poor sound perception includes automatically altering at least one parameter of a signal processing of the hearing instrument such that noise reduction and/or a directionality are increased.

7. The method according to claim 6, wherein the predefined criterion further depends on at least one of the following: predetermined reference values of turn-taking features taken in quiet; audiogram values representing a hearing ability of the user; at least one uncomfortable level of the user; and information concerning an environmental noise sensitivity and/or distractibility of the user.

8. The method according to claim 6, wherein the at least one turn-taking feature further takes into consideration at least one of: a temporal length or a temporal occurrence of turns of the user and/or a temporal length or a temporal occurrence of turns of the different speaker, wherein a turn is a temporal interval in which the user or the different speaker speaks without a pause, while a respective interlocutor is silent; a temporal length or a temporal occurrence of pauses of the user and/or a temporal length or a temporal occurrence of pauses of the different speaker, wherein a pause is an interval without speech separating two consecutive turns of the user or two consecutive turns of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal length or a temporal occurrence of lapses, wherein a lapse is an interval without speech separating a turn of the different speaker and a consecutive turn of the user or a turn of the user and a consecutive turn of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal occurrence of switches, wherein a switch is a transition from a turn of the different speaker to a consecutive turn of the user or from a turn of the user to a consecutive turn of the different speaker within a predefined time interval; and a combination of a plurality of the above-mentioned features.

9. The method according to claim 6, wherein the predefined action for improving the poor sound perception further comprises automatically creating and outputting a feedback to the user by means of the hearing instrument and/or an electronic communication device linked with the hearing instrument for data exchange, the feedback indicating the poor sound perception and/or suggesting that the user visit an audio care professional.

10. A hearing system, comprising: a hearing instrument to be worn in or at an ear of a user, said hearing instrument containing: an input transducer disposed to capture a sound signal from an environment of said hearing instrument; a signal processor disposed to process the sound signal; an output transducer disposed to emit a processed sound signal into the ear of the user; a voice recognition unit configured to analyze the sound signal to recognize own-voice intervals, in which the user speaks, and foreign-voice intervals, in which a different speaker speaks; and a controller configured to determine, from the own-voice intervals and the foreign-voice intervals, at least one turn-taking feature, the at least one turn-taking feature taking into account a temporal length or a temporal occurrence of overlaps, wherein an overlap is an interval in which both the user and the different speaker speak and which exceeds a predefined threshold; said signal processor being configured to: analyze the sound signal captured, during the own-voice intervals, for at least one of the following acoustic features of an own voice of the user: a voice level; formant frequencies; a pitch frequency; a frequency distribution; and a speed of speech; and analyze the sound signal captured for at least one of the following environmental acoustic features: a sound level of the sound signal; and a signal-to-noise ratio; said controller being further configured to: derive a measure of a sound perception by the user from a combination of: the at least one turn-taking feature; at least one of the acoustic features of the own voice of the user; and at least one of the environmental acoustic features; test the measure of the sound perception with respect to a predefined criterion indicative of a poor sound perception; and take a predefined action for improving the sound perception if the predefined criterion is fulfilled by automatically altering at least one parameter of a signal processing of said hearing instrument such that noise reduction and/or a directionality are increased.

11. The hearing system according to claim 10, wherein said controller is configured to determine the measure of the sound perception based on at least one of the following: predetermined reference values of turn-taking features taken in quiet; audiogram values representing a hearing ability of the user; at least one uncomfortable level of the user; and information concerning an environmental noise sensitivity and/or distractibility of the user.

12. The hearing system according to claim 10, wherein the at least one turn-taking feature takes into consideration at least one of: a temporal length or a temporal occurrence of turns of the user and/or a temporal length or a temporal occurrence of turns of the different speaker, wherein a turn is a temporal interval in which the user or the different speaker speaks without a pause, while a respective interlocutor is silent; a temporal length or a temporal occurrence of pauses of the user and/or a temporal length or a temporal occurrence of pauses of the different speaker, wherein a pause is an interval without speech separating two consecutive turns of the user or two consecutive turns of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal length or a temporal occurrence of lapses, wherein a lapse is an interval without speech separating a turn of the different speaker and a consecutive turn of the user or a turn of the user and a consecutive turn of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal occurrence of switches, wherein a switch is a transition from a turn of the different speaker to a consecutive turn of the user or from a turn of the user to a consecutive turn of the different speaker within a predefined time interval; and a combination of a plurality of the above-mentioned features.

13. The hearing system according to claim 10, wherein the predefined action for improving the sound perception further comprises automatically creating and outputting a feedback to the user by means of said hearing instrument and/or an electronic communication device linked with said hearing instrument for data exchange, the feedback indicating a poor sound perception and/or suggesting that the user visit an audio care professional.

14. A hearing system, comprising: a hearing instrument worn in or at an ear of a user, said hearing instrument containing: an input transducer disposed to capture a sound signal from an environment of said hearing instrument; a signal processor disposed to process the sound signal captured; and an output transducer disposed to emit a processed sound signal into the ear of the user; a voice recognition unit configured to analyze the sound signal to recognize own-voice intervals, in which the user speaks, and foreign-voice intervals, in which a different speaker speaks; and a controller configured to determine, from the own-voice intervals and the foreign-voice intervals, at least one turn-taking feature, the at least one turn-taking feature taking into account a temporal length or a temporal occurrence of overlaps, wherein an overlap is an interval in which both the user and the different speaker speak and which exceeds a predefined threshold; said signal processor being configured to: analyze, during recognized own-voice intervals, the sound signal for at least one of the following acoustic features of an own voice of the user: a voice level; formant frequencies; a pitch frequency; a frequency distribution; and a speed of speech; and analyze the sound signal for at least one of the following environmental acoustic features: a sound level of the sound signal; and a signal-to-noise ratio; wherein said controller is configured to: test the at least one turn-taking feature with respect to a predefined criterion indicative of a poor sound perception, wherein the predefined criterion is based on a combination of: the at least one turn-taking feature; at least one of the acoustic features of the own voice of the user; and at least one of the environmental acoustic features; and take a predefined action for improving the poor sound perception if the predefined criterion is fulfilled by automatically altering at least one parameter of a signal processing of said hearing instrument such that noise reduction and/or a directionality are increased.

15. The hearing system according to claim 14, wherein the predefined criterion further depends on at least one of the following: predetermined reference values of turn-taking features taken in quiet; audiogram values representing a hearing ability of the user; at least one uncomfortable level of the user; and information concerning an environmental noise sensitivity and/or distractibility of the user.

16. The hearing system according to claim 14, wherein the at least one turn-taking feature further takes into account at least one of: a temporal length or a temporal occurrence of turns of the user and/or a temporal length or a temporal occurrence of turns of the different speaker, wherein a turn is a temporal interval in which the user or the different speaker speaks without a pause, while a respective interlocutor is silent; a temporal length or a temporal occurrence of pauses of the user and/or a temporal length or a temporal occurrence of pauses of the different speaker, wherein a pause is an interval without speech separating two consecutive turns of the user or two consecutive turns of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal length or a temporal occurrence of lapses, wherein a lapse is an interval without speech separating a turn of the different speaker and a consecutive turn of the user or a turn of the user and a consecutive turn of the different speaker, the temporal length of which exceeds a predefined threshold; a temporal occurrence of switches, wherein a switch is a transition from a turn of the different speaker to a consecutive turn of the user or from a turn of the user to a consecutive turn of the different speaker within a predefined time interval; and a combination of a plurality of the above-mentioned features.

17. The hearing system according to claim 14, wherein the predefined action for improving the sound perception further comprises automatically creating and outputting a feedback to the user by means of said hearing instrument and/or an electronic communication device linked with said hearing instrument for data exchange, the feedback indicating the poor sound perception and/or suggesting that the user visit an audio care professional.

Description

BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

(1) FIG. 1 is a schematic representation of a hearing system having a hearing aid to be worn in or at an ear of a user and a software application for controlling and programming the hearing aid, the software application being installed on a smartphone;

(2) FIG. 2 is a flow chart showing a method for operating the hearing instrument of FIG. 1 according to the invention; and

(3) FIG. 3 is a flow chart of an alternative embodiment of the method for operating the hearing instrument.

DETAILED DESCRIPTION OF THE INVENTION

(4) In the figures, like reference numerals indicate like parts, structures and elements unless otherwise indicated.

(5) Referring now to the figures of the drawings in detail and first, particularly to FIG. 1 thereof, there is shown a hearing system 1 having a hearing aid 2, i.e. a hearing instrument configured to support the hearing of a hearing-impaired user, and a software application (subsequently denoted "hearing app" 3) that is installed on a smartphone 4 of the user. Here, the smartphone 4 is not a part of the system 1. Instead, it is only used by the system 1 as a resource providing computing power and memory. Generally, the hearing aid 2 is configured to be worn in or at one of the ears of the user. As shown in FIG. 1, the hearing aid 2 may be configured as a behind-the-ear (BTE) hearing aid. Optionally, the system 1 contains a second hearing aid (not shown) to be worn in or at the other ear of the user to provide binaural support to the user.

(6) The hearing aid 2 contains two microphones 5 as input transducers and a receiver 7 as an output transducer. The hearing aid 2 further contains a battery 9 and a signal processor 11. Preferably, the signal processor 11 contains both a programmable sub-unit (such as a microprocessor) and a non-programmable sub-unit (such as an ASIC). The signal processor 11 includes a voice recognition unit 12, which contains a voice detection (VD) module 13 and an own voice detection (OVD) module 15. By preference, both modules 13 and 15 are configured as software components installed in the signal processor 11.

(7) During operation of the hearing aid 2, the microphones 5 capture a sound signal from an environment of the hearing aid 2. Each one of the microphones 5 converts the captured sound signal into a respective input audio signal that is fed to the signal processor 11. The signal processor 11 processes the input audio signals of the microphones 5, i.a., to provide a directed sound information (beam-forming), to perform noise reduction and to individually amplify different spectral portions of the audio signal based on audiogram data of the user to compensate for the user-specific hearing loss. The signal processor 11 emits an output audio signal to the receiver 7. The receiver 7 converts the output audio signal into a processed sound signal that is emitted into the ear canal of the user.

(8) The VD module 13 generally detects the presence of voice (independent of a specific speaker) in the captured audio signal, whereas the OVD module 15 specifically detects the presence of the user's own voice. By preference, the modules 13 and 15 apply technologies of VD (also called voice activity detection, VAD) and OVD that are known as such in the art, e.g. from U.S. patent publication No. 2013/0148829 A1 or international patent disclosure WO 2016/078786 A1.

(9) The hearing aid 2 and the hearing app 3 exchange data via a wireless link 16, e.g. based on the Bluetooth standard. To this end, the hearing app 3 accesses a wireless transceiver (not shown) of the smartphone 4, in particular a Bluetooth transceiver, to send data to the hearing aid 2 and to receive data from the hearing aid 2. In particular, during operation of the hearing aid 2, the VD module 13 sends signals indicating the detection or non-detection of general voice activity to the hearing app 3. In a preferred embodiment, the VD module 13 provides spatial information concerning detected voice activity, i.e. information on the direction or directions in which voice activity is detected. In order to derive such spatial information, the VD module 13 separately analyzes the signals of different beam formers. On the other hand, the OVD module 15 sends signals indicating the detection or non-detection of own-voice activity to the hearing app 3.

(10) Own-voice intervals, in which the user speaks, and foreign-voice intervals, in which at least one different speaker speaks, are derived from the signals of the VD module 13 and the signals of the OVD module 15. As, in the preferred embodiment, the signal of the VD module 13 contains spatial information, different speakers can be distinguished from each other. Using this spatial information, the hearing aid 2 or the hearing app 3 derives information on the number of speakers speaking in the same own-voice interval or foreign-voice interval. Moreover, using the spatial information provided by the VD module 13 and the signal of the OVD module 15, the hearing aid 2 or the hearing app 3 recognizes overlaps in which the user and the at least one different speaker speak simultaneously.
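The frame-wise classification described above can be sketched as follows. This is a minimal illustration, assuming own-voice and foreign-voice activity have already been separated upstream (e.g. from the OVD signal together with the VD module's spatial information); the function name and the boolean-flag representation are not taken from the patent.

```python
def label_frames(own, foreign):
    """Label each analysis frame of a conversation.

    own, foreign: per-frame booleans indicating detected own-voice and
    foreign-voice activity (assumed already separated upstream).
    """
    labels = []
    for o, f in zip(own, foreign):
        if o and f:
            labels.append("overlap")   # user and different speaker speak simultaneously
        elif o:
            labels.append("own")       # own-voice frame
        elif f:
            labels.append("foreign")   # foreign-voice frame
        else:
            labels.append("silence")
    return labels
```

Consecutive frames with the same label would then be merged into the own-voice, foreign-voice and overlap intervals used by the rest of the method.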

(11) The hearing app 3 includes a control unit 17 that is configured to derive at least one of the turn-taking features specified above, from the own-voice intervals and foreign-voice intervals. In a preferred example, the control unit 17 derives from the own-voice intervals, foreign-voice intervals and overlaps: a) the relation T.sub.TU/T.sub.TS of the average temporal length T.sub.TU of turns of the user and the average temporal length T.sub.TS of turns of the different speaker; b) the relation h.sub.LU/h.sub.TU of the average temporal occurrence h.sub.LU of lapses (i.e. the average number of lapses per minute) between a turn of the different speaker and a consecutive turn of the user and the average temporal occurrence h.sub.TU of turns of the user; and c) the relation h.sub.OU/h.sub.TU of the average temporal occurrence h.sub.OU of overlaps (i.e. the average number of overlaps per minute) between a turn of the different speaker and a consecutive turn of the user and the average temporal occurrence h.sub.TU of turns of the user.

(12) The control unit 17 combines the above mentioned turn-taking features in a variable which, subsequently, is denoted the turn-taking behavior TT. The turn-taking behaviour TT may be represented by a vector (TT={T.sub.TU/T.sub.TS; h.sub.LU/h.sub.TU; h.sub.OU/h.sub.TU}).
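The TT vector built from features a) to c) could be computed as in the sketch below. The interval representation (label, start, end) and the segmentation into turns, lapses and overlaps are illustrative assumptions; the patent does not prescribe a data structure.

```python
def turn_taking_vector(intervals, duration_min):
    """Compute TT = (T_TU/T_TS, h_LU/h_TU, h_OU/h_TU).

    intervals: list of (label, start_s, end_s) tuples with labels
    'turn_user', 'turn_speaker', 'lapse', 'overlap' (segmentation
    assumed done upstream); duration_min: tracked time in minutes.
    """
    def lengths(label):
        return [end - start for lab, start, end in intervals if lab == label]

    user_turns = lengths("turn_user")
    speaker_turns = lengths("turn_speaker")

    t_tu = sum(user_turns) / len(user_turns)        # avg. user turn length
    t_ts = sum(speaker_turns) / len(speaker_turns)  # avg. speaker turn length
    h_tu = len(user_turns) / duration_min           # user turns per minute
    h_lu = len(lengths("lapse")) / duration_min     # lapses per minute
    h_ou = len(lengths("overlap")) / duration_min   # overlaps per minute

    return (t_tu / t_ts, h_lu / h_tu, h_ou / h_tu)
```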

(13) Moreover, the control unit 17 may receive from the signal processor 11 of the hearing aid 2 at least one of the acoustic features of the own voice of the user specified above. In the preferred example, the control unit 17 receives values of the pitch frequency F of the user's own voice, measured by the signal processor 11 during own-voice intervals.

(14) Finally, the control unit 17 may receive from the signal processor 11 of the hearing aid 2 at least one of the environmental acoustic features specified above. In the preferred example, the control unit 17 receives measured values of the general sound level L (i.e. volume) of the captured sound signal.

(15) Taking into account the information specified above, in particular the turn-taking behavior TT, pitch frequency F and sound level L, the control unit 17 decides whether or not to automatically take at least one predefined action to improve the sound perception by the user.

(16) As will be explained in the following, this decision is based on: a) a predetermined reference value TT.sub.ref of the turn-taking behavior TT; b) a predetermined reference value F.sub.ref of the pitch frequency F of the user's own voice; and c) a predefined threshold L.sub.T of the sound level L of the captured audio signal.

(17) The reference values TT.sub.ref and F.sub.ref are determined by analyzing the turn-taking behavior TT and pitch frequency F of the user's own voice when speaking to a different speaker in a quiet environment, during a training period preceding the real life use of the hearing system 1. Preferably, the threshold value L.sub.T is pre-set by the manufacturer of the system 1.

(18) In detail, the system 1 automatically performs the method as described hereafter.

(19) In a first step 20, preceding the real life use of the hearing aid 2, the control unit 17 starts a training period of, e.g. ca. 5 min, during which the control unit 17 determines the reference values TT.sub.ref (TT.sub.ref={[T.sub.TU/T.sub.TS].sub.ref; [h.sub.LU/h.sub.TU].sub.ref; [h.sub.OU/h.sub.TU].sub.ref}) and F.sub.ref. The reference values TT.sub.ref and F.sub.ref are determined by averaging over values of the turn-taking behavior TT and the pitch frequency F that have been recorded by the signal processor 11 and the control unit 17 during the training period.
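The averaging in step 20 can be sketched as follows, assuming the training period yields a list of TT vectors and a list of pitch-frequency samples (the function name is illustrative):

```python
def reference_values(tt_samples, f_samples):
    """Average the TT vectors and pitch-frequency values recorded during
    the training conversation in quiet to obtain TT_ref and F_ref."""
    n = len(tt_samples)
    # component-wise mean of the recorded TT vectors
    tt_ref = tuple(sum(tt[i] for tt in tt_samples) / n
                   for i in range(len(tt_samples[0])))
    # mean pitch frequency of the user's own voice
    f_ref = sum(f_samples) / len(f_samples)
    return tt_ref, f_ref
```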

(20) The step 20 is started on request of the user. Upon start of the training period, the control unit 17 informs the user, e.g. by a text message output via a display of the smartphone 4, that the training period is to be performed during a conversation in quiet. After having determined the reference values TT.sub.ref and F.sub.ref, the control unit 17 persistently stores the reference values TT.sub.ref and F.sub.ref in the memory of the smartphone 4.

(21) In the real life use of the hearing aid 2, in a step 22 during a conversation of the user with a different speaker (i.e. a person different from the user), the control unit 17 triggers the signal processor 11 to track the own-voice intervals, foreign-voice intervals, the pitch frequency F of the user's own voice and the sound level L of the captured audio signal for a given time interval (e.g. 3 minutes). The control unit 17 temporarily stores the tracked data in the memory of the smartphone 4. The control unit 17 may be configured to automatically recognize a communication by a frequent alternation between own-voice intervals and foreign-voice intervals in the captured sound signal.

(22) In a subsequent step 24, the control unit 17 derives the turn-taking behavior TT, i.e. the relations T.sub.TU/T.sub.TS, h.sub.LU/h.sub.TU and h.sub.OU/h.sub.TU, from an analysis of the tracked own-voice intervals and foreign-voice intervals.

(23) In order to decide whether or not to take an action for improving the sound perception by the user, the control unit 17 uses a criterion that is defined as a three-step decision chain.

(24) In a step 26, the control unit 17 tests whether the deviation |TT−TT.sub.ref| of the turn-taking behavior TT, as determined in step 24, from the reference value TT.sub.ref exceeds a predetermined threshold Δ.sub.TT (|TT−TT.sub.ref|>Δ.sub.TT). E.g., the deviation |TT−TT.sub.ref| may be expressed in terms of the vector distance (Euclidean distance) between TT and TT.sub.ref:

(25) √[(T.sub.TU/T.sub.TS − [T.sub.TU/T.sub.TS].sub.ref)² + (h.sub.LU/h.sub.TU − [h.sub.LU/h.sub.TU].sub.ref)² + (h.sub.OU/h.sub.TU − [h.sub.OU/h.sub.TU].sub.ref)²] > Δ.sub.TT   (eq. 1)
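The test of step 26 amounts to a Euclidean-distance threshold check and can be sketched as (function name illustrative):

```python
import math

def tt_deviation_exceeds(tt, tt_ref, delta_tt):
    """Eq. 1: Euclidean distance between the current turn-taking vector
    TT and the reference TT_ref, compared against the threshold Δ_TT."""
    distance = math.sqrt(sum((x - r) ** 2 for x, r in zip(tt, tt_ref)))
    return distance > delta_tt
```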

(26) If the above condition is found to be fulfilled (Y), i.e. if the turn-taking behavior TT is found to strongly deviate from the normal turn-taking behavior in quiet (which may be indicative of a poor sound perception by the user), then the control unit 17 proceeds to a step 28.

(27) Else (N), i.e. when the deviation |TT−TT.sub.ref| is found to be within the threshold Δ.sub.TT, the negative result of the test is considered an indication that the user's turn-taking behavior and, hence, his sound perception are sufficiently good. Accordingly, the control unit 17 decides not to take any action and terminates the method in a step 30.

(28) In order to verify the positive result of step 26, the control unit 17 tests in step 28 whether the deviation F−F.sub.ref of the pitch frequency F of the user's voice, as measured in step 22, from the reference value F.sub.ref exceeds a predetermined threshold Δ.sub.F (F−F.sub.ref>Δ.sub.F).

(29) If the above condition is found to be fulfilled (Y), i.e. if the pitch frequency F of the user is found to strongly deviate from the normal pitch frequency in quiet (being indicative of a negative emotional state of the user), then the control unit 17 proceeds to a step 32.

(30) Else (N), i.e. when the deviation F−F.sub.ref is found to be within the threshold Δ.sub.F, the negative result of the test is considered an indication that the unusual turn-taking behavior, determined in step 26, is not correlated with a negative emotional state of the user. In this case, the unusual turn-taking behavior is probably caused by circumstances other than a poor sound perception by the user (for example, an apparent unusual turn-taking behavior that is not related to a poor sound perception may have been caused by the user talking to himself while watching TV). Therefore, in case of a negative result of the test performed in step 28, the control unit 17 decides not to take any action and terminates the method (step 30).

(31) In order to further verify the positive results of steps 26 and 28, the control unit 17 tests in step 32 whether the sound level L of the captured sound signal, as measured in step 22, exceeds the predetermined threshold L.sub.T (L>L.sub.T).

(32) If the above condition is found to be fulfilled (Y), i.e. if the sound level L is found to exceed the threshold L.sub.T (being indicative of a difficult hearing situation), then the control unit 17 proceeds to a step 34.

(33) Else (N), i.e. when the sound level L is found not to exceed the threshold L.sub.T, the negative result of the test is considered an indication that the unusual turn-taking behavior, determined in step 26, and the negative emotional state of the user, as detected in step 28, are not correlated with a difficult hearing situation. In this case, the unusual turn-taking behavior and the negative emotional state of the user are probably caused by circumstances other than a poor sound perception by the user. For example, the user may be in a dispute the content of which causes the negative emotional state and, hence, the unusual turn-taking. Therefore, in case of a negative result of the test performed in step 32, the control unit 17 decides not to take any action and terminates the method (step 30).

(34) If all steps 26, 28 and 32 yield a positive result, i.e. if the tested criterion is fulfilled, then the control unit 17 decides to take predefined actions to improve the sound perception by the user.
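The chained threshold tests of steps 26, 28 and 32 can be expressed as a short decision routine. The following Python sketch is illustrative only; the function and parameter names (tt, f, level and the thresholds) are hypothetical, as the patent does not prescribe an implementation:

```python
def should_take_action(tt, tt_ref, delta_tt, f, f_ref, delta_f, level, level_t):
    """Illustrative sketch of the sequential tests of steps 26, 28 and 32.

    Returns True if all three tests yield a positive result, i.e. if the
    control unit would proceed to step 34; all names are hypothetical.
    """
    # Step 26: is the turn-taking behavior unusual?
    if abs(tt - tt_ref) <= delta_tt:
        return False  # terminate the method (step 30)
    # Step 28: does a raised pitch indicate a negative emotional state?
    if f - f_ref <= delta_f:
        return False  # terminate the method (step 30)
    # Step 32: does a high sound level indicate a difficult hearing situation?
    if level <= level_t:
        return False  # terminate the method (step 30)
    return True  # proceed to step 34
```

Only when every test is passed is the predefined action taken; a single negative result terminates the method without any change to the signal processing.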

(35) To this end, in step 34, the control unit 17 informs the user, e.g. by a text message output via a display of the smartphone 4, that his sound perception is found to have dropped below its usual level, and suggests an automatic change of signal processing parameters of the hearing aid 2.

(36) If the user confirms the suggestion, e.g. by touching an “OK” button created by the control unit 17 on the display of the smartphone 4, then, in a step 36, the control unit 17 induces a predefined change of at least one signal processing parameter of the hearing aid 2 and terminates the method. For example, the control unit 17 may: a) enhance the directionality of the processed sound signal, and/or b) enhance the noise reduction during signal processing.

(37) Preferably, the method according to steps 22 to 36 is repeated at regular time intervals or every time a new conversation is recognized.

(38) In another example, the control unit 17 is configured to conduct a method according to FIG. 3. Steps 20 to 24 and 30 to 36 of this method resemble the same steps of the method shown in FIG. 2.

(39) The method of FIG. 3 deviates from the method of FIG. 2 in that, in a step 40 (following step 24), the control unit 17 calculates a measure M of the sound perception by the user.

(40) The measure M is configured as a variable that may assume one of three values: “1” (indicating a good sound perception), “0” (indicating a neutral sound perception) and “−1” (indicating a poor sound perception).

(41) The value “1” (good sound perception) is assigned to the measure M, if: a) the deviation |TT−TT.sub.ref| of the turn-taking behavior TT, as determined in step 24, from the reference value TT.sub.ref does not exceed a first threshold Δ.sub.TT1 (|TT−TT.sub.ref|≤Δ.sub.TT1); and b) the deviation F−F.sub.ref of the pitch frequency F of the user's voice, as measured in step 22, from the reference value F.sub.ref does not exceed the threshold Δ.sub.F (F−F.sub.ref≤Δ.sub.F); and c) the sound level L of the captured sound signal, as measured in step 22, exceeds the threshold L.sub.T (L>L.sub.T).

(42) The value “−1” (poor sound perception) is assigned to the measure M, if: a) the deviation |TT−TT.sub.ref| exceeds a second threshold Δ.sub.TT2 (|TT−TT.sub.ref|>Δ.sub.TT2); and b) the deviation F−F.sub.ref exceeds the threshold Δ.sub.F (F−F.sub.ref>Δ.sub.F); and c) the sound level L of the captured sound signal, as measured in step 22, exceeds the threshold L.sub.T (L>L.sub.T).

(43) The value “0” (neutral sound perception) is assigned to the measure M in all other cases.

(44) The thresholds Δ.sub.TT1 and Δ.sub.TT2 are selected so that the threshold Δ.sub.TT2 exceeds the threshold Δ.sub.TT1 (Δ.sub.TT2>Δ.sub.TT1).
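The assignment rules of paragraphs (41) to (45) can be sketched as follows in Python; the function and parameter names are hypothetical, and the sketch assumes Δ.sub.TT2 > Δ.sub.TT1 as stated above:

```python
def sound_perception_measure(tt, tt_ref, d_tt1, d_tt2, f, f_ref, d_f, level, level_t):
    """Illustrative sketch of the three-valued measure M.

    Returns 1 (good), -1 (poor) or 0 (neutral sound perception);
    assumes d_tt2 > d_tt1. All names are hypothetical.
    """
    tt_dev = abs(tt - tt_ref)   # deviation of turn-taking behavior
    f_dev = f - f_ref           # deviation of the pitch frequency
    loud = level > level_t      # difficult hearing situation
    if tt_dev <= d_tt1 and f_dev <= d_f and loud:
        return 1    # good sound perception
    if tt_dev > d_tt2 and f_dev > d_f and loud:
        return -1   # poor sound perception
    return 0        # neutral sound perception in all other cases
```

Because the two turn-taking thresholds differ, deviations falling between Δ.sub.TT1 and Δ.sub.TT2 always map to the neutral value “0”, which avoids oscillating decisions near a single threshold.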

(45) The control unit 17 persistently stores the values of the measure M in the memory of the smartphone 4 as part of a data logging function. The values of the measure M are stored for a later evaluation by an audio care professional.
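Such a data-logging function could, for instance, append each value of M together with a timestamp to a log file. The sketch below is hypothetical; the patent specifies neither a file format nor a storage path:

```python
import json
import time

def log_measure(m, path="measure_log.jsonl"):
    """Append the current value of the measure M, with a timestamp, to a
    JSON-lines log file for later evaluation (illustrative sketch only)."""
    entry = {"timestamp": time.time(), "M": m}
    with open(path, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
```

An append-only, line-oriented format keeps each logging operation cheap on the smartphone while remaining easy to parse during a later evaluation.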

(46) In a subsequent step 42, the control unit 17 tests whether the current value of the measure M corresponds to “−1” (M=−1).

(47) If the above condition is found to be fulfilled (Y), being indicative of a poor sound perception, then the control unit 17 proceeds to step 34. Else (N), i.e. if the measure M has a value of “0” or “1”, then the control unit 17 decides not to take any action and terminates the method in step 30.

(48) It will be appreciated by persons skilled in the art that numerous variations and/or modifications may be made to the invention as shown in the specific examples without departing from the spirit and scope of the invention as broadly described in the claims. The present examples are, therefore, to be considered in all aspects as illustrative and not restrictive.

LIST OF REFERENCES

(49)
1 (hearing) system
2 hearing aid
3 hearing app
4 smartphone
5 microphones
7 receiver
9 battery
11 signal processor
12 voice recognition unit
13 voice detection module (VD module)
15 own voice detection module (OVD module)
16 wireless link
17 control unit
20 step
22 step
24 step
26 step
28 step
30 step
32 step
34 step
36 step
38 step
40 step
42 step
T.sub.TU/T.sub.TS relation
h.sub.LU/h.sub.TU relation
h.sub.OU/h.sub.TU relation
[T.sub.TU/T.sub.TS].sub.ref reference value
[h.sub.LU/h.sub.TU].sub.ref reference value
[h.sub.OU/h.sub.TU].sub.ref reference value
TT turn-taking behavior
TT.sub.ref reference value
F pitch frequency
L sound level
F.sub.ref reference value
L.sub.T threshold
|TT−TT.sub.ref| deviation
Δ.sub.TT threshold
F−F.sub.ref deviation
Δ.sub.F threshold
M measure
Δ.sub.TT1 threshold
Δ.sub.TT2 threshold