HEARING DEVICE COMPRISING A STRESS EVALUATOR

20220272465 · 2022-08-25

Abstract

A hearing device configured to be worn by a user comprises one or more microphones, a processing unit, a speaker, a wireless communication unit, and a stress evaluator. The user wearing the hearing device is in an environment defined by an acoustic scene. The microphones receive audio signals from audio sources in the environment and provide them to the processing unit, which applies processing parameters to thereby process the audio signals. The speaker provides the processed audio signals to the user. The stress evaluator generates an indication of stress of the user, the stress of the user being related to the acoustic scene. The processing unit then decides whether to perform an action on the basis of the received audio signals and the indication of stress of the user.

Claims

1. A hearing device configured to be worn by a user in an environment, the environment being defined by an acoustic scene, the hearing device comprising: a processing unit configured to apply a processing parameter to process audio signals; a speaker configured to provide sound to the user based on the processed audio signals; a wireless communication unit; and a stress evaluator configured to measure a stress parameter relating to a stress of the user, the stress of the user being related to the acoustic scene; wherein the processing unit is configured to decide, based on the audio signals and the measured stress parameter, whether to perform an action.

2. The hearing device according to claim 1, wherein the stress evaluator comprises a temperature sensor, a heart rate sensor, a skin resistance sensor, one or more microphones, or any combination of the foregoing.

3. The hearing device according to claim 1, wherein the stress evaluator comprises one or more microphones.

4. The hearing device according to claim 3, wherein the one or more microphones are configured to receive sound in the environment, and provide the audio signals.

5. The hearing device according to claim 1, wherein the stress evaluator or the processing unit is configured to generate an indication of stress based on the measured stress parameter.

6. The hearing device according to claim 1, wherein the stress evaluator or the processing unit is configured to generate an indication of stress based on at least a speech of the user.

7. The hearing device according to claim 1, wherein the action comprises adjusting the processing parameter.

8. The hearing device according to claim 1, wherein the processing unit is configured to change the processing parameter based on a change in the acoustic scene.

9. The hearing device according to claim 1, wherein the action comprises providing a request to the user to adjust the processing parameter.

10. The hearing device according to claim 1, wherein the hearing device is configured to forward speech data indicating a speech of the user to an external device.

11. The hearing device according to claim 1, wherein the hearing device is communicatively coupled to a database comprising historical data, the historical data relating to the perceptual hearing of the user and/or a general perceptual hearing, wherein the hearing device is configured to determine an indication of stress based on the historical data.

12. The hearing device according to claim 1, wherein the processing unit is configured to detect a hearing deficit of the user, an uncompensated hearing loss of the user, a change in a hearing capability of the user, or any combination of the foregoing.

13. The hearing device according to claim 12, wherein the processing unit is configured to detect the hearing deficit, the uncompensated hearing loss of the user, the change in the hearing capability of the user, or any combination of the foregoing, based on the acoustic scene and an indication of stress.

14. The hearing device according to claim 1, wherein the hearing device is a hearing aid configured to compensate for a hearing loss of the user.

15. The hearing device according to claim 1, wherein the hearing device is configured to receive a data signal comprising acoustic scene information, the processing unit being configured to process the data signal, and based on the data signal, determine whether to perform the action.

16. The hearing device according to claim 1, wherein the processing unit is configured to determine an indication of stress based on the measured stress parameter; and wherein the processing unit is configured to decide, based indirectly on the measured stress parameter, whether to perform an action, by making a decision based on the indication of stress.

17. A method performed by a hearing device, the hearing device being configured to be worn by a user in an environment, the environment defined by an acoustic scene, the hearing device comprising a processing unit, a speaker, a wireless communication unit, and a stress evaluator, the method comprising: receiving, by the processing unit, audio signals; applying, by the processing unit, a processing parameter to process the audio signals; measuring, by the stress evaluator, a stress parameter related to stress of the user; providing the measured stress parameter to the processing unit; generating an indication of stress of the user by the processing unit, the stress of the user being related to the acoustic scene; deciding, by the processing unit, whether to perform an action, wherein the act of deciding is performed based on the audio signals and the indication of stress of the user.

18. The method according to claim 17, wherein the act of generating the indication of stress of the user comprises analysing a speech of the user.

19. The method according to claim 17, further comprising determining a complexity of the acoustic scene, wherein the act of deciding is performed based on the complexity of the acoustic scene.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0049] The above and other features and advantages will become readily apparent to those skilled in the art by the following detailed description of exemplary embodiments thereof with reference to the attached drawings, in which:

[0050] FIG. 1 schematically illustrates an exemplary hearing device,

[0051] FIG. 2 schematically illustrates an exemplary environment with a user wearing a hearing device,

[0052] FIG. 3 schematically illustrates an exemplary hearing device communicatively coupled to an external device,

[0053] FIG. 4 schematically illustrates an exemplary method executed by the hearing device of FIG. 1,

[0054] FIG. 5 schematically illustrates a hearing device which uses the user's voice for stress evaluation,

[0055] FIG. 6 illustrates dependency of a stress level and a complexity of an acoustic scene, and

[0056] FIGS. 7a and 7b illustrate a detection of an uncompensated hearing loss.

DETAILED DESCRIPTION

[0057] Various embodiments are described hereinafter with reference to the figures. Like reference numerals refer to like elements throughout. Like elements will, thus, not be described in detail with respect to the description of each figure. It should also be noted that the figures are only intended to facilitate the description of the embodiments. They are not intended as an exhaustive description of the claimed invention or as a limitation on the scope of the claimed invention. In addition, an illustrated embodiment need not have all the aspects or advantages shown. An aspect or an advantage described in conjunction with a particular embodiment is not necessarily limited to that embodiment and can be practiced in any other embodiment even if not so illustrated, or if not so explicitly described. Throughout, the same reference numerals are used for identical or corresponding parts.

[0058] FIG. 1 schematically illustrates an exemplary hearing device 2. The hearing device 2 comprises a microphone 4, a processing unit 6, a speaker 8, a wireless communication unit 10, and a stress evaluator 12. The hearing device 2 may comprise more than one microphone 4. The microphone 4 is configured to receive audio signals 14 from audio sources in the environment and provide the audio signals 14 to the processing unit 6. The processing unit 6 is configured to apply processing parameters to thereby process the audio signals 14. The speaker 8 may be directly connected to the processing unit 6 and the processing unit 6 may provide the processed audio signal to the speaker 8. The speaker 8 may then convert the processed audio signal into a sound for the user, i.e. the speaker 8 is configured to provide the processed audio signals 16 to the user. The stress evaluator 12 is configured to generate an indication of stress 18 of the user. Stress of the user is related to the acoustic scene. The processing unit 6 is configured to decide whether to perform an action, the decision being based on the received audio signals 14 and the indication of stress 18 of the user. The stress evaluator 12 may comprise a temperature sensor, a skin resistance sensor, or similar. In some embodiments, the microphone 4 may serve the purpose of the stress evaluator 12.

[0059] FIG. 2 schematically illustrates an exemplary environment 20 with a user 22 wearing a hearing device 2. The environment 20 is defined by an acoustic scene 24. The acoustic scene 24 comprises a plurality of audio sources 26, such as a person talking 26a, a music source 26b, a noise source 26c, and a loudspeaker 26d. Each of the audio sources 26a, 26b, 26c, and 26d generates a corresponding audio signal 28a, 28b, 28c, and 28d. The environment 20 may also comprise a plurality of visual sources which contribute to the user's cognitive load, attention, and therefore stress. Some of the audio sources, e.g. the person talking 26a and the loudspeaker 26d, at the same time represent visual sources, as the user 22 may take notice of them while being in the environment. The arrangement of the audio sources 26a, 26b, 26c, and 26d may also affect the user's stress. For instance, if the noise source 26c is in close proximity to the user 22, the user's stress level may be increased compared to the situation in which the noise source 26c is far away. The hearing device 2 receives the audio signals 28a, 28b, 28c, and 28d via the one or more microphones (not shown). The audio signals 28a, 28b, 28c, and 28d are then processed by the processing unit of the hearing device 2. The processing unit may reconstruct the acoustic scene 24 and determine its complexity based on the received audio signals 28a, 28b, 28c, and 28d. From the received audio signals 28a, 28b, 28c, and 28d, the processing unit may predict whether the user 22 is expected to be stressed or not. Alternatively, the processing unit of the hearing device 2 may estimate a stress level of the user 22 given the acoustic scene 24. This estimate may then be compared with the stress evaluator output, i.e. the indication of stress, to finally decide whether to perform an action.
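The scene-complexity estimate described above can be sketched as follows. The disclosure does not specify a concrete measure, so the features (number of concurrent sources, noise level relative to the strongest source) and all weights below are illustrative assumptions, not part of the claimed invention.

```python
import math

def scene_complexity(source_levels_db, noise_level_db):
    """Estimate a 0-to-1 complexity score for an acoustic scene from the
    levels of its detected audio sources and the background noise level.
    Feature choice and weights are illustrative placeholders."""
    if not source_levels_db:
        return 0.0
    # More concurrent sources make the scene more complex (saturating term).
    source_term = 1.0 - math.exp(-0.5 * len(source_levels_db))
    # Louder noise relative to the strongest source also raises complexity.
    snr_db = max(source_levels_db) - noise_level_db
    noise_term = 1.0 / (1.0 + math.exp(0.3 * (snr_db - 5.0)))
    return 0.5 * source_term + 0.5 * noise_term
```

A quiet single-talker scene then scores lower than a noisy multi-source scene, which is the ordering the comparison with the stress-evaluator output relies on.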

[0060] FIG. 3 schematically illustrates an exemplary hearing device 2 communicatively coupled to an external device 32. The communication link 30 may be a wireless link or a wired connection. The external device 32 may be the user's smart phone, the user's computer, a server forming part of a cloud, etc. The hearing device 2 may simultaneously be connected to more than one external device 32. The hearing device 2 may send data to the external device 32 through a first communication channel 34. The data sent from the hearing device 2 may include packages 38 comprising received audio signals and corresponding measurements from the stress evaluator. The packages 38 may therefore relate to users' perceptual hearing for a given environment. These data may be used for building up a database with historical data in the external device 32. The external device 32 may communicate with other hearing devices used by other users (not shown), which can then further contribute to the database and the historical data. The external device 32 may then send these historical data to the hearing device 2 through a second communication channel 36. The processing unit may then generate the decision based on the historical data. In one embodiment, the hearing device 2 may detect/identify the user's speech and forward it to the external device 32 for processing and/or analysis. The external device 32 may then send back the analysis of the speech, which may be used by the processing unit in determining the indication of stress.
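The packages 38 pair received-audio data with the corresponding stress measurement. A minimal sketch of such a package and its serialisation for the first communication channel might look as follows; the field names and JSON encoding are assumptions for illustration only, as the disclosure does not prescribe a format.

```python
import json
import time

def make_package(audio_features, stress_measurement, device_id):
    """Bundle a summary of the received audio signals with the matching
    stress-evaluator measurement into one package (38) for the external
    device. All field names are illustrative, not from the disclosure."""
    return {
        "device": device_id,
        "timestamp": time.time(),
        "audio": audio_features,        # e.g. level and spectral summary
        "stress": stress_measurement,   # e.g. heart rate, skin resistance
    }

def serialize(package):
    """Encode a package for transmission over the communication channel."""
    return json.dumps(package).encode("utf-8")
```

On the external device, packages collected from many users could be aggregated into the historical database returned over the second communication channel.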

[0061] FIG. 4 schematically illustrates an exemplary method 100 executed by the hearing device shown in FIG. 1. The method comprises receiving 101 audio signals at the one or more microphones of the hearing device. The audio signals originate from the audio sources arranged in the environment. The method further comprises providing the received audio signals to the processing unit (not shown). The processing unit then applies processing parameters to thereby process 102 the audio signals and provides them to the speaker. The stress evaluator then measures 103 stress parameters related to stress of the user, which are then provided to the processing unit. The method further comprises generating 104 an indication of stress of the user by the processing unit, the stress of the user being related to the acoustic scene. The processing unit then decides whether to perform an action. The decision on whether to perform the action is based on the received audio signals and the indication of stress of the user. The decision about performing an action may be based on a predetermined criterion, i.e. the indication of stress and the received audio signals may be compared 105 with the predetermined criterion, and the result of the comparison may be that the user is not stressed. The processing unit may then check again later 107 whether the user is stressed, by performing the same steps again. If the outcome of the comparison is positive, i.e. the user is stressed, the processing unit may change 106 the processing parameters in order to reduce the stress. After the processing parameters are changed, the method 100 may be performed again in order to check whether the change in the processing parameters resulted in a reduction of stress. If stress is reduced but the user is still stressed, the processing parameters may be changed further. If stress of the user has increased, the processing parameters may need to be reset to their previous values.
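One pass of this decision loop (compare 105, change 106, re-check 107, reset on worsening) can be sketched as below. The 'noise_reduction' parameter, the stress threshold, and the step size are hypothetical placeholders; the disclosure does not name specific processing parameters or values.

```python
def stress_control_step(indication, previous_indication, params,
                        threshold=0.6, step=0.1):
    """One pass of the decision loop of FIG. 4, returning updated
    processing parameters. 'noise_reduction', 'baseline', the threshold,
    and the step size are illustrative placeholders."""
    if indication < threshold:
        return params  # not stressed (105): simply check again later (107)
    new_params = dict(params)
    if previous_indication is not None and indication > previous_indication:
        # The previous change increased stress: reset to the earlier value.
        new_params["noise_reduction"] = new_params.get("baseline", 0.3)
    else:
        # The user is (still) stressed: adjust the parameter further (106).
        new_params["noise_reduction"] = min(1.0, params["noise_reduction"] + step)
    return new_params
```

Running the step repeatedly with fresh stress indications reproduces the iterate-and-verify behaviour the paragraph describes.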

[0062] FIG. 5 schematically illustrates a hearing device 2 using the user's voice for stress evaluation. The one or more microphones 4 of the hearing device 2 are configured to detect various acoustic signals from the environment and send these to the processing unit 6. The processing unit applies processing parameters to the received audio signals and outputs the processed audio signals to the speaker 8. In this embodiment, the microphones 4 also detect the user's voice, thereby serving as a part of the stress evaluator 12. The microphones 4 also send the audio signals to a signal processor 12a forming part of the stress evaluator 12. The signal processor 12a is configured to detect and extract the user's voice from the received audio signals 14. The extracted user's voice is sent to a speech analyser 12b, also forming part of the stress evaluator 12. The speech analyser 12b is configured to determine the indication of stress and send it to the processing unit 6. The processing unit 6 then, based on the indication of stress obtained from the stress evaluator 12 and the received audio signals 14, decides whether to perform an action, such as changing the processing parameters.
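The final stage of this pipeline, the speech analyser 12b mapping voice features to an indication of stress, could be sketched as follows. The disclosure does not specify which voice features are analysed; the use of pitch (f0) and speech rate, the baselines, and the weights are all assumptions for illustration.

```python
def voice_stress_indication(f0_hz, speech_rate_sps,
                            baseline_f0=120.0, baseline_rate=4.0):
    """Map two simple voice features to a 0-to-1 indication of stress,
    assuming pitch and speech rate rise above their baselines under
    stress. Features, baselines, and weights are illustrative only."""
    pitch_dev = max(0.0, (f0_hz - baseline_f0) / baseline_f0)
    rate_dev = max(0.0, (speech_rate_sps - baseline_rate) / baseline_rate)
    return min(1.0, 0.6 * pitch_dev + 0.4 * rate_dev)
```

The returned value would be what the speech analyser 12b passes to the processing unit 6 for the decision step.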

[0063] FIG. 6 illustrates the dependency of a stress level (y-axis) on the complexity of an acoustic scene (x-axis). From the graph it can be seen that the more complex the acoustic scene, the higher the stress level. Such a dependency may form part of the historical data. The historical data may define an expected stress parameter. The expected stress parameter may depend on the complexity of the acoustic scene.

[0064] FIGS. 7a and 7b illustrate a detection of an uncompensated hearing loss. The graphs in FIGS. 7a and 7b show the dependency of the stress level (y-axis) on the complexity of an acoustic scene (x-axis). The regular (brighter) curve shows historical data generated over time based on the user's previous experience or based on other users with a similar profile. The irregular (darker) curve shows the actual stress level measured by the stress evaluator. The irregular (darker) curve shows that the user starts to exhibit higher stress relative to the historical average for acoustic scenes of similar complexity. Such behaviour may be a sign of an uncompensated hearing loss.
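The comparison of FIGS. 6, 7a, and 7b, measured stress repeatedly exceeding the historical expectation for scenes of similar complexity, can be sketched as below. The binning of complexity, the margin, and the hit count are illustrative assumptions; the disclosure only describes the qualitative deviation.

```python
def flag_uncompensated_loss(samples, history, margin=0.15, min_hits=5):
    """Flag a possible uncompensated hearing loss when measured stress
    repeatedly exceeds the historical (expected) stress for scenes of
    similar complexity. Margin and hit count are illustrative.

    samples: list of (complexity, measured_stress) pairs
    history: dict mapping a complexity bin (rounded to one decimal)
             to the expected stress level for that bin
    """
    hits = 0
    for complexity, measured in samples:
        expected = history.get(round(complexity, 1))
        if expected is not None and measured > expected + margin:
            hits += 1
    return hits >= min_hits
```

Requiring several exceedances rather than a single one reflects that the darker curve in FIGS. 7a and 7b deviates persistently, not momentarily, from the historical curve.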

[0065] Although particular features have been shown and described, it will be understood that they are not intended to limit the claimed invention, and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the scope of the claimed invention. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The claimed invention is intended to cover all alternatives, modifications, and equivalents.

LIST OF REFERENCES

[0066] 2 hearing device
4 microphone
6 processing unit
8 speaker
10 wireless communication unit
12 stress evaluator
14 audio signals
16 processed audio signals
18 indication of stress
20 environment
22 user
24 acoustic scene
26 audio sources
28 audio signals
30 communication link
32 external device
34 first communication channel
36 second communication channel
38 package
100 method executed by the hearing device
101 method step of receiving audio signals
102 method step of processing audio signals
103 method step of measuring stress parameters
104 method step of generating an indication of stress
105 method step of determining whether the user is stressed
106 method step of changing processing parameters
107 method step of checking stress later