Selectively collecting and storing sensor data of a hearing system
11457320 · 2022-09-27
Inventors
- Andreas Breitenmoser (Zurich, CH)
- Peter Derleth (Hinwil, CH)
- Harald Krueger (Affoltern, CH)
- Aliaksei Tsitovich (Stäfa, CH)
CPC classification
A61B5/165
HUMAN NECESSITIES
A61B5/7264
HUMAN NECESSITIES
H04R25/50
ELECTRICITY
H04R25/554
ELECTRICITY
A61B5/0816
HUMAN NECESSITIES
H04R2225/43
ELECTRICITY
H04R2225/55
ELECTRICITY
A61B5/02438
HUMAN NECESSITIES
H04R25/30
ELECTRICITY
A61B5/0205
HUMAN NECESSITIES
H04R2225/41
ELECTRICITY
Abstract
A method for collecting and storing sensor data (56, 64) of a hearing system (10) comprises: receiving the sensor data (56, 64) of at least one sensor (20, 32, 34) of a hearing device (12) of the hearing system (10), wherein the hearing device (12) is worn by a user; detecting a situation (72) of interest by classifying at least a part of the sensor data (56, 64) with a classifier (61) implemented in the hearing system (10); collecting the sensor data (56, 64), when the hearing system (10) is in a situation (72) of interest; and sending the collected sensor data (76) to a storage system (54) in data communication with the hearing system (10).
Claims
1. A method for collecting and storing sensor data of a hearing system, the method comprising: receiving the sensor data of a sensor of a hearing device of the hearing system, wherein the hearing device is worn by a user; detecting a situation of interest by classifying at least a part of the sensor data with a classifier implemented in the hearing system; collecting the sensor data, when the hearing system is in a situation of interest; and sending the collected sensor data to a storage system in data communication with the hearing system, wherein: the sensor data comprises at least one of motion data acquired with a motion sensor, position data acquired with a position sensor, or medical data acquired with a medical data sensor; and the classifier comprises an additional data classifier into which at least one of the motion data, position data, or the medical data are input.
2. The method of claim 1, wherein the classifier further comprises a sound classifier for classifying audio data, which is received by the hearing device and which is output by the hearing device to the user; and wherein classification values of the sound classifier are used for selecting hearing programs of the hearing device.
3. The method of claim 1, wherein user input into the hearing system is input into the classifier.
4. The method of claim 1, wherein classification values generated by the classifier are compared with threshold values for detecting a situation of interest.
5. The method of claim 1, further comprising: detecting a situation of disinterest by classifying at least a part of the sensor data with the classifier; wherein the sensor data is discarded, when the hearing system is in a situation of disinterest.
6. The method of claim 5, wherein the sensor data is collected, when the hearing system is not in a situation of disinterest.
7. The method of claim 1, further comprising: sending configuration parameters for the classifier for a specific situation of interest from the storage system to a plurality of hearing systems.
8. The method of claim 1, wherein first sensor data from a first sensor is collected; wherein the classifier generates a classification of second sensor data from a second sensor; wherein the first sensor data is labelled with the classification of the second sensor.
9. The method of claim 1, wherein the storage system comprises a further classifier for classifying whether the collected sensor data is sensor data of interest; and wherein the storage system solely stores collected sensor data of interest.
10. A method for collecting and storing sensor data of a hearing system, the method comprising: receiving the sensor data of a sensor of a hearing device of the hearing system, wherein the hearing device is worn by a user; detecting a situation of interest by classifying at least a part of the sensor data with a classifier implemented in the hearing system; collecting the sensor data, when the hearing system is in a situation of interest; and sending the collected sensor data to a storage system in data communication with the hearing system, wherein the hearing system comprises a mobile device carried by the user; wherein the mobile device is in wireless data communication with the hearing device; wherein at least a part of the classifier is implemented in the mobile device.
11. A method for collecting and storing sensor data of a hearing system, the method comprising: receiving the sensor data of a sensor of a hearing device of the hearing system, wherein the hearing device is worn by a user; detecting a situation of interest by classifying at least a part of the sensor data with a classifier implemented in the hearing system; collecting the sensor data, when the hearing system is in a situation of interest; and sending the collected sensor data to a storage system in data communication with the hearing system, wherein the storage system is connected via the Internet with the hearing system; wherein the collected sensor data is transmitted to the storage system in selected time windows.
12. A non-transitory computer-readable medium storing instructions, which when executed by a processor cause a device to perform a method, the method comprising: receiving sensor data of a sensor of a hearing device of a hearing system, wherein the hearing device is worn by a user; detecting a situation of interest by classifying at least a part of the sensor data with a classifier implemented in the hearing system; collecting the sensor data, when the hearing system is in a situation of interest; and sending the collected sensor data to a storage system in data communication with the hearing system, wherein: the sensor data comprises at least one of motion data acquired with a motion sensor, position data acquired with a position sensor, or medical data acquired with a medical data sensor; and the classifier comprises an additional data classifier into which at least one of the motion data, position data, or the medical data are input.
13. The non-transitory computer-readable medium of claim 12, wherein the classifier further comprises a sound classifier for classifying audio data, which is received by the hearing device and which is output by the hearing device to the user; and wherein classification values of the sound classifier are used for selecting hearing programs of the hearing device.
14. The non-transitory computer-readable medium of claim 12, wherein user input into the hearing system is input into the classifier.
15. The non-transitory computer-readable medium of claim 12, wherein classification values generated by the classifier are compared with threshold values for detecting a situation of interest.
16. The non-transitory computer-readable medium of claim 12, the method further comprising: detecting a situation of disinterest by classifying at least a part of the sensor data with the classifier, wherein the sensor data is discarded, when the hearing system is in a situation of disinterest.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Below, embodiments of the present technology are described in more detail with reference to the attached drawings.
(4) The reference symbols used in the drawings, and their meanings, are listed in summary form in the list of reference symbols. In principle, identical parts are provided with the same reference symbols in the figures.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
(6) The hearing device 12 comprises a part 15 behind the ear and a part 16 to be put in the ear canal of a user. The part 15 and the part 16 are connected by a tube 18. In the part 15, a microphone 20, a sound processor 22 and a sound output device 24, such as a loudspeaker, are provided. The microphone 20 may acquire environmental sound of the user and may generate a sound signal, the sound processor 22 may amplify the sound signal, and the sound output device 24 may generate sound that is guided through the tube 18 and the in-the-ear part 16 into the ear canal of the user.
(7) The hearing device 12 may comprise a processor 26, which is adapted for adjusting parameters of the sound processor 22, such that an output volume of the sound signal is adjusted based on an input volume. These parameters may be determined by a computer program run in the processor 26. For example, with a knob 28 of the hearing device 12, a user may select a modifier (such as bass, treble, noise suppression, dynamic volume, etc.), and levels and/or values of these modifiers may be selected, which influence the frequency dependent gain and the dynamic volume of the sound processor 22. All these functions may be implemented as computer programs stored in a memory 30 of the hearing device 12, which computer programs may be executed by the processor 26.
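The modifier and gain mechanism described in paragraph (7) can be sketched as follows. This is a minimal illustration, not the patent's implementation: the three-band layout, the function names, and the dB bookkeeping are all assumptions made here for clarity.

```python
# Hypothetical sketch of frequency-dependent gain control: a modifier
# (e.g. "bass") adjusts the gain of one frequency band, and the gains
# are then applied to per-band input levels. All names are illustrative.

# Per-band gains in dB, as a modifier selected via the knob 28 might set them.
DEFAULT_GAINS_DB = {"bass": 0.0, "mid": 0.0, "treble": 0.0}

def apply_modifier(gains_db, modifier, level_db):
    """Return a new gain table with one modifier band adjusted by level_db."""
    updated = dict(gains_db)
    if modifier not in updated:
        raise ValueError(f"unknown modifier: {modifier}")
    updated[modifier] += level_db
    return updated

def amplify(band_levels_db, gains_db):
    """Apply the per-band gain (a dB addition) to per-band input levels."""
    return {band: band_levels_db[band] + gains_db.get(band, 0.0)
            for band in band_levels_db}
```

A user turning the bass modifier up by 6 dB would, in this sketch, raise only the bass band of the processed signal while leaving the other bands unchanged.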
(8) The hearing device 12 furthermore may comprise an acceleration sensor 32 and/or a medical sensor 34. With the acceleration sensor 32, a movement of the head of the user of the hearing device 12 may be determined. With the medical sensor 34, which may be a photoplethysmography (PPG) sensor, a heart pulse and/or further medical conditions of the user, such as a breathing speed, can be determined.
(9) The hearing device 12 also may comprise a sender/receiver 36 for (for example wireless) data communication with a sender/receiver 38 of the mobile device 14, which may be a smartphone or tablet computer. Like the hearing device 12, the mobile device 14 comprises a processor 40 and a memory 42, in which programs are stored that may be executed by the processor 40. The mobile device 14 may comprise a microphone 44, which may acquire environmental sound of the user. The mobile device 14 may comprise a loudspeaker 46, which may be used for outputting sound, such as during a telephone call. Furthermore, the mobile device 14 may comprise a position sensor 48, such as a GPS sensor.
(10) With a user interface 50, the user of the hearing system 10 may interact with programs of the hearing system 10, and for example may adjust modifiers of the hearing device 12, which influence the frequency dependent gain and the dynamic volume of the sound processor.
(11) The mobile device 14 may be adapted for data communication via the Internet 52. For example, as described above and below, collected sensor data may be sent via the Internet to a storage system 54, where the collected sensor data can be used for configuration, training and fitting of further hearing devices.
(14) The hearing programs 57, which may be run by the sound processor 22 and/or the processor 26 of the hearing device 12, process the audio data 56 to adapt it to the hearing needs of the user. For example, the hearing programs 57 may attenuate specific frequencies of the audio data 56, may compress and shift frequencies, etc.
(15) Which hearing program 57 is selected and/or how processing parameters of the hearing programs are tuned is determined by a sound classifier 59. The sound classifier 59, which may be a program module of the hearing device 12, receives the audio data 56 and generates sound classification values 60, which, for example, encode whether the audio data 56 contains speech, noise, speech in noise, music, wind noise, etc. The sound classification values 60 are then used to select a suitable hearing program 57 and/or to tune it.
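The selection step in paragraph (15) can be illustrated with a small sketch. The class labels, the score format, and the argmax rule below are assumptions; the patent only states that the sound classification values 60 are used to select a hearing program 57.

```python
# Hypothetical sketch: sound classification values 60 (scores per acoustic
# class) select a hearing program 57. Program names are illustrative.

HEARING_PROGRAMS = {
    "speech": "speech_program",
    "speech_in_noise": "noise_suppression_program",
    "music": "music_program",
    "noise": "comfort_program",
}

def select_hearing_program(classification_values):
    """Pick the hearing program for the acoustic class with the highest score."""
    best_class = max(classification_values, key=classification_values.get)
    return HEARING_PROGRAMS.get(best_class, "default_program")
```

For instance, values dominated by the "speech" score would select the speech program, while an unrecognized class would fall back to a default.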
(16) The sound classifier 59 may be part of a situation classifier 61, which, in general, is a classifier that receives sensor data 56, 64 of at least one sensor 20, 32, 34 of the hearing device 12 and/or the mobile device 14. The situation classifier 61 may comprise the sound classifier 59 as a subcomponent for classifying audio data 56, which is received by the hearing device 12 and which is output by the hearing device 12 to the user.
(17) Besides the audio data 56, which may itself be seen as sensor data, additional sensor data 64, such as acceleration data 64a acquired with the acceleration sensor 32, position data 64b acquired with the position sensor 48 and medical data 64c acquired with the medical sensor 34, may be received and processed by the situation classifier 61.
(18) As shown in the figures, the situation classifier 61 may comprise an additional data classifier 62, into which the additional sensor data 64 is input and which generates additional data classification values 66.
(19) It also may be that user input 65, which is input into the hearing system 10 by the user, for example via the knob 28 and/or the user interface 50, is further input data of the situation classifier 61. The user input 65 may be input into the additional data classifier 62, which also may classify its input data based on the user input 65. For example, it may be evaluated how often a user has selected a specific program or whether modifiers have been adjusted manually.
(20) The user input 65 may also provide ground truth or suggest labels for the data 56, 64. The user may attribute, confirm, and/or enter data labels as well as properties of the specific situation of interest via the knob 28 and/or the user interface 50.
(21) The sound classification values 60 and the additional data classification values 66 are input into a situation identifier 68, which may be seen as a further subcomponent of the situation classifier 61. For example, the situation identifier 68 may be implemented as a program module in the hearing device 12 or the mobile device 14.
(22) The situation identifier 68 may classify the sound classification values 60 and the additional data classification values 66 into situations 72, in particular into situations 72 of interest and situations 72 of disinterest. This classification may be performed by comparing the classification values 60, 66 with threshold values for detecting the situation 72. However, more complicated ways of classifying the situation 72, for example via parameterizable decision trees, are also possible.
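The threshold comparison described in paragraph (22) can be sketched as below. The value names, the "all thresholds must be met" rule, and the dict-based configuration are assumptions made here; the patent leaves the exact comparison logic open.

```python
# Hypothetical sketch of the situation identifier 68: classification values
# 60, 66 are compared against configurable thresholds to decide whether the
# current situation 72 is of interest. Threshold names are illustrative.

def identify_situation(classification_values, thresholds):
    """Return True (situation of interest) if every configured value
    meets or exceeds its threshold; missing values count as 0.0."""
    return all(classification_values.get(name, 0.0) >= limit
               for name, limit in thresholds.items())
```

More elaborate identifiers, such as the parameterizable decision trees the paragraph mentions, would replace this single conjunction of comparisons with a tree of such tests.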
(23) The situation identifier 68, or more generally the situation classifier 61, comprises configuration parameters 70, which are used for defining and/or encoding situations 72 and/or whether these situations 72 are of interest or disinterest. For example, the configuration parameters 70 may comprise the above-mentioned threshold values.
(24) The configuration parameters 70 may be changed and/or adapted during the operation of the hearing system 10 in the field. The configuration parameters 70 for the classifier 61 for a specific situation 72 of interest or disinterest may be sent from the storage system 54 to a plurality of hearing systems 10. In such a way, situations 72 of interest or disinterest may be defined at the site of the storage system 54, for example by a hearing aid manufacturer, which then may collect interesting sensor data, as will be described below.
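The distribution step in paragraph (24) can be sketched as a simple push of configuration parameters 70 to a fleet of hearing systems 10. The dict-based representation of a hearing system and the field name `classifier_config` are assumptions; the patent does not specify a message or storage format.

```python
# Hypothetical sketch: the storage system 54 pushes new classifier
# configuration parameters 70 to several hearing systems 10 at once.

def push_configuration(hearing_systems, config_params):
    """Overwrite each hearing system's classifier configuration with a
    copy of the new parameters and return the updated systems."""
    for system in hearing_systems:
        system["classifier_config"] = dict(config_params)
    return hearing_systems
```

In this sketch, redefining a situation of interest (for example, lowering a threshold) at the manufacturer's side takes effect on every connected hearing system after the next push.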
(25) In summary, with the situation classifier 61, a situation 72 of interest or disinterest may be detected by classifying at least a part of the sensor data 56, 64. It has to be noted that not all generated sensor data 56, 64 needs to be used for classification. It may be that solely the audio data 56 is classified into situations 72 and that the additional sensor data 64 is also collected (see below).
(26) The situation classifier 61 also may be designed differently from the arrangement described above.
(27) The detected situation 72, which may be a simple yes/no value (situation of interest: yes/no), is input into a sensor data collector 74. The sensor data collector 74 may collect the sensor data 56, 64, when the hearing system 10 is in a situation 72 of interest, and may discard the sensor data 56, 64, when the hearing system 10 is in a situation 72 of disinterest. Collecting in this context may mean that the sensor data 56, 64 is buffered in a memory, such as the memory 30 or 42. Discarding may mean that the sensor data 56, 64 is not buffered.
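The collect-or-discard behaviour of paragraph (27) can be sketched as below. The class name and the plain list standing in for the memory 30/42 are assumptions for illustration only.

```python
# Hypothetical sketch of the sensor data collector 74: buffer samples while
# the detected situation 72 is of interest, drop them otherwise.

class SensorDataCollector:
    def __init__(self):
        self.buffer = []  # stands in for the memory 30 or 42

    def handle(self, sensor_sample, situation_of_interest):
        """Collect (buffer) the sample in a situation of interest;
        discard it (do not buffer) otherwise."""
        if situation_of_interest:
            self.buffer.append(sensor_sample)
```

Note that discarding is simply the absence of buffering; no explicit deletion step is needed in this sketch.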
(28) The classifier 61 also may be adapted for labelling the sensor data 56, 64. For example, first sensor data from a first sensor 20 may be collected, such as audio data 56 from the microphone 20 of the hearing device 12. The classifier 61 may generate a classification of second sensor data from a second sensor 44, such as audio data 56 from the microphone 44 of the mobile device 14. The first sensor data may then be labelled with the classification of the second sensor 44. The labelling may be collected together with the sensor data 56, 64. In selected time windows, for example when the hearing system 10 is in a mode of reduced operation, such as at night, the collected sensor data 76 is sent to the storage system 54. There, the collected and optionally labelled sensor data 76 is stored in a memory 80 of the storage system 54, where it may be further used for fitting, configuring and/or training of hearing devices 12.
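The cross-sensor labelling of paragraph (28) can be sketched as follows. The record format (a dict with `sample` and `label` keys) is an assumption; the patent only states that first sensor data is labelled with the classification derived from the second sensor.

```python
# Hypothetical sketch: samples collected from a first sensor (e.g. the
# hearing device microphone 20) are labelled with the classification that
# the classifier 61 produced from a second sensor (e.g. the mobile device
# microphone 44).

def label_sensor_data(first_sensor_samples, second_sensor_classification):
    """Attach the second sensor's classification as a label to each sample."""
    return [{"sample": sample, "label": second_sensor_classification}
            for sample in first_sensor_samples]
```

The labelled records would then travel to the storage system 54 together, so that the label is available for later fitting or training.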
(29) It may be that the storage system 54 comprises a storage system classifier 78 for classifying whether the collected sensor data 76 is sensor data of interest, and that the storage system 54 solely stores collected sensor data of interest. The storage system classifier 78 may classify the collected sensor data 76 in different ways than the hearing system 10 and/or with more computationally demanding classification algorithms. It also may be that automatic labelling of the collected sensor data 76 is performed by the storage system classifier 78.
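The server-side filtering of paragraph (29) can be sketched as a simple filter over the uploaded records. The predicate parameter stands in for the (possibly much more computationally demanding) storage system classifier 78; its signature is an assumption made for this sketch.

```python
# Hypothetical sketch: the storage system 54 keeps only the collected
# sensor data 76 that its own classifier 78 deems to be of interest.

def store_data_of_interest(collected_data, is_of_interest):
    """Return the subset of collected records the storage system stores;
    everything else is dropped."""
    return [record for record in collected_data if is_of_interest(record)]
```

Because this second classification happens off-device, it can afford heavier models than the hearing system's own situation classifier 61.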
(30) While the present technology has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the present technology is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practising the claimed present technology, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. A single processor or controller or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.
LIST OF REFERENCE SYMBOLS
(31)
- 10 hearing system
- 12 hearing device
- 14 mobile device
- 15 part behind the ear
- 16 part in the ear
- 18 tube
- 20 microphone
- 22 sound processor
- 24 sound output device
- 26 processor
- 28 knob
- 30 memory
- 32 acceleration sensor
- 34 medical sensor
- 36 sender/receiver
- 38 sender/receiver
- 40 processor
- 42 memory
- 44 microphone
- 46 loudspeaker
- 48 position sensor
- 50 user interface
- 52 Internet
- 54 storage system
- 56 audio data
- 57 hearing programs
- 58 output audio signal
- 59 sound classifier
- 60 sound classification values
- 61 situation classifier
- 62 sensor data classifier
- 64 additional sensor data
- 64a acceleration data
- 64b position data
- 64c medical data
- 65 user input
- 66 sensor data classification values
- 68 situation identifier
- 70 configuration parameters
- 72 situation
- 74 sensor data collector
- 76 collected sensor data
- 78 storage system classifier
- 80 memory