Personalization of algorithm parameters of a hearing device
11671769 · 2023-06-06
Inventors
- Thomas Lunner (Redmond, WA, US)
- Gary Jones (Smørum, DK)
- Lars BRAMSLØW (Smørum, DK)
- Michael Syskind Pedersen (Smørum, DK)
- Pauli Minnaar (Frederiksberg, DK)
- Jesper Jensen (Smørum, DK)
- Michael Kai Petersen (Hørsholm, DK)
- Peter Sommer (Smørum, DK)
- Hongyan Sun (Smørum, DK)
- Jacob Schack Larsen (Smørum, DK)
CPC classification
- H04R25/30 (ELECTRICITY)
- H04R25/70 (ELECTRICITY)
- H04R2225/39 (ELECTRICITY)
- H04R2225/49 (ELECTRICITY)
Abstract
A method of personalizing one or more parameters of a processing algorithm for use in a hearing aid of a specific user comprises performing a predictive test for estimating a hearing ability of the user when listening to signals having different characteristics; analyzing results of said predictive test for said user and providing a hearing ability measure for said user; selecting a specific processing algorithm of said hearing aid; selecting a cost-benefit function related to said user's hearing ability in dependence of said different characteristics for said algorithm; and determining, for said user, one or more personalized parameters of said processing algorithm in dependence of said hearing ability measure and said cost-benefit function.
Claims
1. A method of personalizing one or more parameters of a processing algorithm for use in a processor of a hearing aid for a specific user, the method comprising performing a predictive test for estimating a hearing ability of the user when listening to test signals having different characteristics; analyzing results of said predictive test for said user and providing a hearing ability measure for said user; selecting a specific processing algorithm comprising a directionality algorithm of said hearing aid; selecting a cost-benefit function for said specific processing algorithm related to said user's hearing ability in dependence of said characteristics of said test signals, wherein said cost-benefit function provides a tradeoff between the benefits of directionality and the costs of directionality, wherein said directionality algorithm tends to provide a benefit to said specific user when a target signal is at a location that is relatively enhanced by beamforming and to incur costs to said specific user when attending to locations that are strongly attenuated by beamforming; and determining, for said user, one or more personalized parameters of said specific processing algorithm in dependence of said hearing ability measure and said cost-benefit function.
2. A method according to claim 1 wherein said hearing ability measure comprises a speech intelligibility measure, a frequency discrimination measure, an amplitude discrimination measure, a frequency selectivity measure, or a temporal selectivity measure.
3. A method according to claim 1 wherein said different characteristics of the test signals are represented by one or more of different signal-to-noise ratios (SNR), different modulation depths or modulation indices, or different detection thresholds of tones in broadband, bandlimited or band-stop noise, describing frequency selectivity, different detection thresholds for temporal gaps in broadband or bandlimited noise, describing temporal selectivity, different depths or indices of amplitude modulation as a function of modulation frequency, different frequency or depth of spectral modulation, sensitivity to frequency modulation at varying center frequencies and bandwidths, and direction of frequency modulation including discrimination of positive from negative phase of Schroeder-phase stimuli.
4. A method according to claim 1 comprising selecting the predictive test for estimating a degree of hearing ability of the user.
5. A method according to claim 1 wherein said predictive test is selected from the group comprising Spectro-temporal modulation test, Triple Digit Test, Gap detection, Notched noise test, TEN test, and Cochlear compression.
6. A method according to claim 1 wherein said processing algorithm further comprises one or more of a noise reduction algorithm, a feedback control algorithm, a speaker separation algorithm, and a speech enhancement algorithm.
7. A method according to claim 1 forming part of a fitting session wherein the hearing aid is adapted to the needs of the user.
8. A method according to claim 1 wherein the step of performing the predictive test comprises initiating a test mode of an auxiliary device; and executing said predictive test via said auxiliary device.
9. A method according to claim 8 wherein said step of performing the predictive test is initiated by said user.
10. A method according to claim 1, wherein the cost-benefit function is configured to quantify the user's costs and benefits of helping systems.
11. A method according to claim 1, wherein the cost-benefit function relates to the benefit of speech intelligibility, sound quality and listening effort.
12. A method according to claim 1, wherein the cost-benefit function is estimated as the improvement due to directionality for targets from the front minus the decrement due to directionality for off-axis targets.
13. A method according to claim 1, further comprising providing an assessment of where the user's cost-benefit function crosses over from net benefit to net cost.
14. A method according to claim 1, further comprising providing an assessment of at which signal-to-noise ratio the user's cost-benefit function crosses over from net benefit to net cost.
15. A method according to claim 1, wherein said cost-benefit function is expressed as a function of signal-to-noise ratio (SNR) for a directionality algorithm (MVDR) exhibiting off-axis costs and on-axis benefits.
16. A method according to claim 15, wherein said cost-benefit function is estimated as the improvement due to directionality for targets from the front minus the decrement due to directionality for off-axis targets.
17. A method according to claim 1, wherein said cost-benefit function relates to aspects of hearing aid outcome including one or more of speech intelligibility, sound quality, and listening effort.
18. A hearing aid configured to be worn at or in an ear of a user and/or for being at least partially implanted in the head of a user, the hearing aid comprising: a forward path for processing an electric input signal representing sound provided by an input unit, and for presenting a processed signal perceivable as sound to the user via an output unit, the forward path comprising a processor for performing said processing by executing one or more configurable processing algorithms, wherein parameters of said one or more configurable processing algorithms are personalized to the specific needs of the user by performing a predictive test for estimating a hearing ability of the user when listening to test signals having different characteristics; analyzing results of said predictive test for said user and providing a hearing ability measure for said user; selecting a specific processing algorithm comprising a directionality algorithm of said hearing aid; selecting a cost-benefit function for said specific processing algorithm related to said user's hearing ability in dependence of said characteristics of said test signals, wherein said cost-benefit function provides a tradeoff between the benefits of directionality and the costs of directionality, wherein said directionality algorithm tends to provide a benefit to said specific user when a target signal is at a location that is relatively enhanced by beamforming and to incur costs to said specific user when attending to locations that are strongly attenuated by beamforming; and determining, for said user, one or more personalized parameters of said specific processing algorithm in dependence of said hearing ability measure and said cost-benefit function.
19. A hearing aid according to claim 18 being constituted by or comprising an air-conduction type hearing aid, a bone-conduction type hearing aid, a cochlear implant type hearing aid, or a combination thereof.
20. A hearing system comprising: a hearing aid according to claim 18; and an auxiliary device, the hearing system being adapted to establish a communication link between the hearing aid and the auxiliary device to provide that data can be exchanged or forwarded from one to the other, wherein the auxiliary device is configured to execute an application implementing a user interface for the hearing aid and allowing a predictive test for estimating a hearing ability of a user to be initiated by the user and executed by the auxiliary device, including a) playing sound elements of said predictive test via a loudspeaker of the auxiliary device, or b) transmitting sound elements of said predictive test via said communication link to said hearing aid for being presented to the user via an output unit of the hearing aid, and wherein the user interface is configured to receive responses of the user to the predictive test, and wherein the auxiliary device is configured to store said responses of the user to the predictive test.
21. A hearing system according to claim 20 wherein the auxiliary device comprises a remote control, a smartphone, or other portable or wearable electronic device.
22. A hearing system according to claim 20 wherein the auxiliary device comprises or forms part of a fitting system for adapting the hearing aid to a particular user's needs.
23. A hearing system according to claim 20 wherein the auxiliary device is configured to estimate a speech reception threshold of the user from the responses of the user to the predictive test.
24. A hearing system according to claim 20 wherein the auxiliary device is configured to execute the predictive test as a triple digit test where sound elements of said predictive test comprise digits played a) at different signal-to-noise ratios, or b) at a fixed signal-to-noise ratio, but with different hearing aid parameters.
25. A non-transitory application, termed an APP, comprising executable instructions configured to be executed on an auxiliary device to implement a user interface for a hearing system comprising a hearing aid, wherein the APP is configured to allow a user to perform the following steps: select and initiate a predictive test for estimating a hearing ability of the user when listening to test signals having different characteristics; initiate an analysis of results of said predictive test for said user and provide a hearing ability measure for said user; select a specific processing algorithm comprising a directionality algorithm of said hearing aid; select a cost-benefit function for said algorithm related to said user's hearing ability in dependence of said different characteristics of said test signals, wherein said cost-benefit function provides a tradeoff between the benefits of directionality and the costs of directionality, wherein said directionality algorithm tends to provide a benefit to said specific user when a target signal is at a location that is relatively enhanced by beamforming and to incur costs to said specific user when attending to locations that are strongly attenuated by beamforming; and determine, for said user, one or more personalized parameters of said processing algorithm in dependence of said hearing ability measure and said cost-benefit function.
26. A non-transitory application according to claim 25, configured to allow the user to apply said personalized parameters to said processing algorithm.
27. A non-transitory application according to claim 26, configured to allow the user to check the result of said personalized parameters when applied to an input sound signal provided by an input unit of the hearing aid and when the resulting signal is played for the user via an output unit of the hearing aid; and accept or reject the personalized parameters.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they just show details needed to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter, in which:
(3) In the bottom graph, it is schematically illustrated that a high performing listener with low SRTs can be expected to enjoy a net benefit of beamforming at considerably lower SNRs than a lower performing listener with higher SRTs does.
(13) The figures are schematic and simplified for clarity, and they just show details which are essential to the understanding of the disclosure, while other details are left out. Throughout, the same reference signs are used for identical or corresponding parts.
(14) Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.
DETAILED DESCRIPTION OF EMBODIMENTS
(15) The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
(16) The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
(17) The present application relates to the field of hearing devices, e.g. hearing aids.
(19) The present disclosure proposes to apply a cost-benefit function as a way of quantifying each individual's costs and benefits of helping systems. The present disclosure further discloses a method for using predictive measures in order to achieve better individualization of the settings for the individual patient.
(20) Cost-Benefit Function:
(23) The cost-benefit function may relate to many aspects of hearing aid outcome, benefit, or ‘quality’, e.g. speech intelligibility, sound quality and listening effort, etc.
(25) Predictive Measures:
(26) Predictive measures may e.g. include psychoacoustic tests, questionnaires, subjective evaluations by a hearing care professional (HCP) and/or patient, etc.
(27) Potential predictive tests include, but are not limited to, the following: spectro-temporal modulation (STM) test; Triple Digit Test; personal preference questionnaire; listening preference questionnaire; HCP slider or similar HCP assessment tool; client slider or similar client self-assessment tool; acceptable noise level test; SWIR test; listening effort assessment; Reading Span test; Test of Everyday Attention; Auditory Verbal Learning Test; Text Reception Threshold; other cognitive assessment tests; speech-in-noise test; SNR loss assessment; temporal fine structure sensitivity; temporal modulation detection; frequency selectivity; critical bandwidth; notched-noise bandwidth estimation; Threshold Equalizing Noise (TEN) test (see e.g. [Moore et al.; 2000]); spectral ripple discrimination; frequency modulation detection; gap detection; cochlear compression estimates; questionnaires (e.g. SSQ, self-reported handicap); binaural masked detection; lateralization; listening effort; spatial awareness test; spatial localization test(s); test of apparent source width; and demographic information such as age, gender, languages spoken by patient, etc.
(28) Predictive measures are e.g. used to estimate the individual patient's need for help and to adjust the patient's settings of corresponding processing algorithms accordingly. Such assessment may e.g. be made during a fitting session where hearing aid processing parameters are adapted to the individual person's needs.
(29) An assessment of where (e.g. at which SNR) an individual patient's cost-benefit function crosses over from net benefit to net cost may be performed according to the present disclosure.
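As a minimal illustration of such an assessment, the sketch below (Python, with illustrative data; the function name and the measurement grid are assumptions, not part of the disclosure) locates the SNR at which a measured net-benefit curve, e.g. SI improvement for frontal targets minus SI decrement for off-axis targets, crosses zero:

```python
import numpy as np

# Sketch (not from the disclosure): locate the SNR at which a measured
# cost-benefit function crosses over from net benefit to net cost.
# 'snr_db' and 'net_benefit' are assumed to come from listening tests
# (net benefit in percentage points of speech intelligibility).

def crossover_snr(snr_db, net_benefit):
    """Return the SNR where net benefit first crosses zero (linear interp)."""
    snr_db = np.asarray(snr_db, dtype=float)
    net_benefit = np.asarray(net_benefit, dtype=float)
    sign_change = np.where(np.diff(np.sign(net_benefit)) != 0)[0]
    if sign_change.size == 0:
        return None  # no crossover within the measured range
    i = sign_change[0]
    # Linear interpolation between the two bracketing measurements.
    x0, x1 = snr_db[i], snr_db[i + 1]
    y0, y1 = net_benefit[i], net_benefit[i + 1]
    return x0 - y0 * (x1 - x0) / (y1 - y0)

# Example: directionality helps most at adverse SNRs and turns into a
# net cost above roughly +3 dB SNR for this (synthetic) listener.
snrs = [-9, -6, -3, 0, 3, 6]
benefit = [18.0, 14.0, 9.0, 4.0, 0.5, -3.0]
print(crossover_snr(snrs, benefit))  # ~ +3.4 dB
```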
(30) In the following, aspects of the present disclosure are exemplified in the context of a directionality algorithm. However, this approach is also intended for other helping systems, such as noise reduction, etc.
(31) In the present example, the benefits and costs are measured in the domain of speech intelligibility (SI) (benefits are measured as an increase in SI, costs as a decrease in SI). The benefits and costs with respect to speech intelligibility may be measured by evaluating specific listening tests for a given user wearing hearing aid(s) with parameter settings representing different modes of operation of the directional system (e.g. under different SNRs).
(32) However, this approach is also intended for use with a wide range of outcome measures, e.g. including, but not limited to: the cognitive load that listening places on the patient; listening effort; mental energy; the ability to remember what was said; spatial awareness; spatial perception; the patient's perception of sound quality; and the patient's perception of listening comfort.
(36) While the approach described above has value for optimizing hearing aid fitting for the individual, constraints on time and on the equipment available across audiological clinics will most likely require that this method be applied indirectly via a predictive test rather than by taking the direct approach of calculating full cost-benefit functions for each patient. The reason for choosing this indirect method (i.e., use of a predictive test) is that in clinical practice it is rarely if ever possible to collect the large amount of data needed to calculate full cost-benefit functions for all patients. Thus, one uses a predictive test that is correlated with one or more key features of the cost-benefit function; this could include, but is not limited to, the zero-crossing point of the cost-benefit function or an identifying feature or features of one or more of the psychometric functions from which the cost-benefit function is derived. One does this by collecting data on a test population for the cost-benefit analysis described above as well as for predictive tests, and then identifying good predictive tests with the help of correlational analysis. The predictive tests could include, for example, the Triple Digit Test, the Spectro-Temporal Modulation Test, and others.
(38) The SNR range is exemplary and may vary according to the specific application or acoustic situation.
(39) Measuring Thresholds in Predictive Test
(40) A method of estimating thresholds may comprise the following steps: run a predictive test (e.g. the Triple Digit Test and/or a spectro-temporal modulation (STM) test); vary the input parameter (e.g., modulation depth for STM or SNR for the Triple Digit Test); and find the threshold (e.g. as the modulation depth or SNR for which the listener achieves a pre-determined target level of performance, where possible target levels of performance could be 50% correct, 80% correct, or other).
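A minimal sketch of such a threshold estimate, assuming the per-level proportions correct are already available and that a logistic psychometric function is an adequate fit (the function names and the synthetic data are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch (assumed procedure): estimate a threshold from predictive-test
# data by fitting a logistic psychometric function and solving for the
# input level giving a target proportion correct.

def logistic(x, midpoint, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

def estimate_threshold(levels, prop_correct, target=0.5):
    """levels: tested SNRs (dB) or modulation depths; target: e.g. 0.5 or 0.8."""
    (midpoint, slope), _ = curve_fit(logistic, levels, prop_correct,
                                     p0=[np.median(levels), 1.0])
    # Invert the fitted logistic at the target proportion correct.
    return midpoint + np.log(target / (1.0 - target)) / slope

# Triple Digit Test example: proportion of digit triplets correct per SNR.
snr_db = np.array([-12, -9, -6, -3, 0, 3])
p_correct = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])
print(estimate_threshold(snr_db, p_correct, target=0.5))  # SRT at 50% correct
```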
(41) The Triple Digit Test is sometimes also called the “digits-in-noise” test. Target sounds are 3 digits, e.g., “2” . . . “7” . . . “5”. The SNR may be varied by varying the level of one or more ‘masker sounds’, e.g. modulated noise, a recorded scene, or other.
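For illustration, one common way to realize “SNR varied by varying the masker level” is to scale the masker relative to a fixed-level target; the sketch below assumes an RMS-power definition of SNR and uses placeholder signals:

```python
import numpy as np

# Sketch (assumption, not from the disclosure): scale a masker so that
# the mixture of a fixed-level digit recording and the masker has a
# prescribed SNR, as needed for a digits-in-noise trial.

def mix_at_snr(target, masker, snr_db):
    """Scale 'masker' so that 10*log10(P_target/P_masker) == snr_db."""
    p_target = np.mean(target ** 2)
    p_masker = np.mean(masker ** 2)
    gain = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
    return target + gain * masker

rng = np.random.default_rng(0)
digits = rng.standard_normal(16000)  # placeholder for "2 ... 7 ... 5"
noise = rng.standard_normal(16000)   # placeholder for a modulated masker
mix = mix_at_snr(digits, noise, snr_db=-6.0)
```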
(42) Mapping Predictive Test to Automatics
(43) An aim of the present disclosure is to give the hearing aid user access to the sound around the user, without removing sound unless it is considered necessary from a perception (e.g. speech intelligibility) point of view with regard to a target (speech) signal.
(44) A speech intelligibility of 50% understanding may be considered as a key marker (e.g. defining speech reception thresholds (SRT)). It may also serve as a marker of when the listener has access to sounds, a view that may be supported by pupillometry data.
(45) 1. An Example, Providing Personalization Data:
(46) In the following, alternative or supplementary schemes for collecting data, which can be used to fine tune (e.g. personalize) the parameters in the hearing instrument, are outlined.
(47) Modern hearing devices do not necessarily consist only of hearing instruments attached to the ears, but may also include or be connected to additional computational power, e.g. available via auxiliary devices such as smartphones. Other auxiliary devices, e.g. tablets, laptops, and other wired or wirelessly connected communication devices, may be available as resources for the hearing instrument(s) as well. Audio signals may be transmitted (exchanged) between the hearing instruments and auxiliary devices, and the hearing instruments may be controlled via a user interface, e.g. a touch display, on the auxiliary devices.
(48) It is proposed to use training sounds to fine tune the settings of the hearing instruments. The training sounds may e.g. represent acoustic scenes, which the listener finds difficult. Such situations may be recorded by the hearing instrument microphones, and wirelessly transmitted to the auxiliary device. The auxiliary device may analyse the recorded acoustic scene and suggest one or more improved sets of parameters to the hearing instrument, which the listener may listen to and compare to the sound processed by a previous set of parameters. Based on a (e.g. by the user) chosen set of parameters, a new set of parameters may be proposed (e.g. by the hearing instrument or the auxiliary device) and compared to the previous set of parameters. Hereby, based on the feedback from the listener, an improved set of processing parameters may be stored in the hearing instrument and/or applied whenever a similar acoustic environment is recognized. The final improved set of processing parameters may be transmitted back to the auxiliary device to allow it to update its recommendation rules, based on this user feedback.
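A schematic sketch of this compare-and-refine loop, with the listener's A/B choice simulated by a stand-in preference function (all names, the two-parameter setting vector, and the perturbation scheme are assumptions for illustration only):

```python
import numpy as np

# Minimal sketch of the proposed compare-and-refine loop: starting from
# the current parameter set, propose a perturbed alternative, keep
# whichever the listener prefers, and repeat.

rng = np.random.default_rng(1)

def listener_prefers(a, b):
    """Stand-in for the user's A/B choice; here, closer to a hidden ideal."""
    ideal = np.array([0.7, 0.3])  # e.g. [NR aggressiveness, directionality mix]
    return np.linalg.norm(a - ideal) < np.linalg.norm(b - ideal)

params = np.array([0.5, 0.5])  # previous (default) parameter set
for _ in range(20):
    candidate = np.clip(params + rng.normal(0, 0.1, size=2), 0.0, 1.0)
    if listener_prefers(candidate, params):
        params = candidate     # store the preferred set as the new baseline
print(params)                   # drifts toward the listener's hidden ideal
```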
(49) Another proposal is to estimate the hearing aid user's ability to understand speech. Speech intelligibility tests are usually too time consuming to perform during the hearing instrument fitting, but a speech intelligibility test and/or other predictive tests can instead be made available via an auxiliary device, thereby enabling the hearing instrument user to find his or her speech reception threshold (SRT). Based on the estimated or predicted speech reception threshold as well as the audiogram, the hearing instrument parameters (such as e.g. the aggressiveness of the noise reduction system) can be fine-tuned to the individual listener. Such a predictive test (e.g. the ‘triple digit test’ or a ‘spectro-temporal modulation’ (STM) test) can be performed with several different kinds of background noise, representing different listening situations. In this way hearing aid settings can be optimised to ensure the best speech intelligibility in many different situations.
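One simple way such fine-tuning could be realized is a monotonic mapping from the estimated SRT to a noise-reduction setting; the anchor points below are illustrative assumptions, not values prescribed by the disclosure:

```python
# Sketch under assumptions: map a measured SRT to a noise-reduction
# "aggressiveness" setting by linear interpolation between two anchors
# (a low SRT needing little help, a high SRT needing much help).

def nr_aggressiveness(srt_db, srt_good=-8.0, srt_poor=0.0,
                      nr_min=0.2, nr_max=1.0):
    """Return a noise-reduction setting in [nr_min, nr_max]."""
    t = (srt_db - srt_good) / (srt_poor - srt_good)
    t = min(max(t, 0.0), 1.0)  # clamp outside the anchor range
    return nr_min + t * (nr_max - nr_min)

print(nr_aggressiveness(-5.0))  # ~0.5 for a mid-range SRT
```

Per the paragraph above, such a mapping could be evaluated separately for several kinds of background noise, yielding situation-specific settings.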
(50) Other proposals involve measuring the listener's ability to localize sound sources simulated by the hearing aids, or his/her preferences for noise suppression and/or reverberation suppression, or his/her ability to segregate several sound sources etc.
(51)
(52)
(53) The personalization decision may be based on supervised learning (e.g. a neural network). The personalization parameters (e.g. the amount of noise reduction) may e.g. be determined by a trained neural network, where the input features are a set of predictive measures (e.g. measured SRTs, an audiogram, etc.).
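A small sketch of such a trained network, here using scikit-learn's MLPRegressor on synthetic training pairs (a real system would instead train on a database of measured predictive measures and user-preferred settings; all feature choices below are illustrative assumptions):

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch (assumed setup): a small network maps predictive measures
# (here SRT plus a 4-point audiogram) to a preferred amount of noise
# reduction.

rng = np.random.default_rng(2)
X = np.column_stack([
    rng.uniform(-10, 2, 200),        # SRT (dB SNR)
    rng.uniform(10, 80, (200, 4)),   # hearing thresholds at 4 freqs (dB HL)
])
# Synthetic "preferred NR" that grows with SRT and average hearing loss.
y = (0.05 * (X[:, 0] + 10) + 0.005 * X[:, 1:].mean(axis=1)
     + rng.normal(0, 0.02, 200))

net = MLPRegressor(hidden_layer_sizes=(8,), solver="adam",
                   max_iter=2000, random_state=0).fit(X, y)
print(net.predict([[-4.0, 30, 35, 45, 60]]))  # predicted NR amount
```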
(54) The joint input/preferred settings (e.g. obtained as exemplified above) may e.g. be used as training data for such supervised learning.
(56) 2. An Example, Personalization of Hearing Aid Indicators:
(57) A scheme for allowing a hearing aid user to select and personalize the tones/patterns of a hearing aid to his or her liking is proposed in the following. This can be done either during fitting of the hearing aid to the user's needs (e.g. at a hearing care professional (HCP)), or after fitting, e.g. via an APP of a mobile phone or other processing device (e.g. a computer). A collection of tones and LED patterns may be made available (e.g. in the cloud or in a local device) to the user. The user may browse, select and try out a number of different options (tone and LED patterns), before choosing the preferred ones. The selected (chosen) ones are then stored in the hearing aid of the user, replacing possible default ones. The user may further be allowed to compose and generate his or her own audio (e.g. tone patterns, music or voice clips) and/or visual (e.g. LED) patterns. This approach allows the user to select a set of personally relevant indicators with personalized indicator patterns, and it further enables more use cases than are known today, for example, but not limited to: configuring and personalizing indicators for health alerts or other notifications (utilizing hearing instrument sensor information or AI-predicted information (AI=artificial intelligence)); and integrating with “if this then that” (IFTTT) so that personalized events can trigger the indicators.
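A possible data layout for such personalized indicators is sketched below; the event names, file names and pattern identifiers are purely illustrative assumptions, not a documented hearing aid API:

```python
# Illustrative sketch only: user-selected indicator patterns keyed by
# IFTTT-style event triggers.

indicators = {
    "battery_low":       {"tone": "soft_chime.wav",  "led": "slow_blink_amber"},
    "heart_rate_alert":  {"tone": "triple_beep.wav", "led": "fast_blink_red"},
    "calendar_reminder": {"tone": "user_clip_01.wav", "led": "pulse_green"},
}

def on_event(event_name):
    """Look up and 'play' the personalized indicator for an event."""
    pattern = indicators.get(event_name)
    if pattern is not None:
        print(f"play {pattern['tone']}, show {pattern['led']}")

on_event("battery_low")
```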
(59) 3. An Example, Adaptive Personalization of Hearing Aid Parameters Using Context Information.
(60) Hearing aid fitting may e.g. be personalized by defining general preferences for low, medium or high attenuation of ambient sounds, thus determining auditory focus and noise reduction based on questionnaire input and/or listening tests (e.g. the triple digit test or an STM test, etc.). However, these settings do not adapt to the user's cognitive capabilities throughout the day; e.g. the ability to separate voices when in a meeting might be better in the morning, or the need for reducing background noise in a challenging acoustical environment could increase in the evening. These threshold values are rarely personalized due to the lack of clinical resources in hearing healthcare, although patients are known to exhibit differences of up to 15 dB (e.g. over the course of a specific time period, e.g. a day) in ability to understand speech in noise. Additionally, hearing aids are calibrated based on pure-tone hearing threshold audiograms, which do not capture the large differences in loudness functions (e.g. loudness growth functions) among users. Rationales (VAC+, NAL) converting audiograms to frequency-specific amplification are based on average loudness (growth) functions, while patients in reality vary by up to 30 dB in how they binaurally perceive the loudness of sounds. Combining internet-connected hearing aids with a smartphone app makes it feasible to dynamically adapt the thresholds for beamforming or to modify gain according to each user's loudness perception.
(61) Even though it is possible to define “if this then that” (IFTTT) rules for changing programs on hearing aids connected via Bluetooth to a smartphone, in such a configuration there is no feedback loop for assessing whether the user is satisfied with the hearing aid settings in a given context. Nor does the hearing aid learn from the data in order to automatically adapt the settings to the changing context.
(62) Furthermore, audiological individualization has so far been based on predictive methods, e.g. currently a questionnaire or a listening test. While this can be a good starting point, a more precise estimation of the individual's abilities may be achieved via a profiling of individual preferences in various sound environments. Further, an estimation of the individual's speech reception threshold (SRT), or of a full psychometric function, might be possible through a client preference profiling conducted in her/his “real” sound environments.
(63) Based on the above, a better individualized hearing instrument adjustment, using information additional to the audiogram, may become possible.
(64) Hearing aids, which are able to store alternative fitting profiles as programs, or other assemblies of settings, make it possible to adapt the auditory focus and noise reduction settings dependent on the context and the time of the day. Defining the context based on sound environment (detected by the hearing aid, including e.g. SNR and level), smartphone location and calendar data (IFTTT triggers: iOS location, Google calendar event, etc.) allows for modeling user behavior as time-series parameters, i.e. ‘trigger A’, ‘location B’, ‘event C’, ‘time D’, ‘sound environment type F’, which are associated with the preferred hearing aid action ‘setting low/medium/high’, as exemplified by:
(65) [‘exited’, ‘Mikkelborg’, ‘bike’, ‘morning’, ‘high’, ‘SNR value (dB)’]
(66) [‘entered’, ‘Eriksholm’, ‘office’, ‘morning’, ‘low’, ‘SNR value (dB)’]
(67) [‘calendar’, ‘Eriksholm’, ‘lunch’, ‘afternoon’, ‘medium’, ‘SNR value (dB)’] ...
(68) In addition to low-level signal parameters like SPL or SNR, we classify the soundscape based on audio spectrograms generated by the hearing aid signal processing. This enables not only identifying an environment, e.g. ‘office’, but also differentiating between intents, e.g. ‘conversation’ (2-3 persons, own voice) versus ‘ignore speech’ (2-3 persons, own voice not detected). The APP may be configured to
(69) 1) automatically adjust the low/medium/high thresholds (SPL, SNR) defining when the beamforming and attenuation should kick in, and
(70) 2) dynamically personalize the underlying rationales (VAC+, NAL), by adapting the frequency specific amplification dependent on the predicted environment and intents.
(71) The APP may combine the soundscape ‘environment+intent’ classification with the user selected preferences, to predict when to modify the rationale by generating an offset in amplification, e.g. +/−6 dB, which is added to or subtracted from the average rationale across e.g. 10 frequency bands from 200 Hz to 8 kHz, as exemplified by:
(72) [‘office’, ‘conversation’, −2 dB, −1 dB, 0 dB, +2 dB, +2 dB, +2 dB, +2 dB, +2 dB, +2 dB, +2 dB]
(73) [‘cafe’, ‘socializing’, +2 dB, +2 dB, +1 dB, 0 dB, 0 dB, 0 dB, −1 dB, −2 dB, −2 dB, −2 dB]
(74) That is, the APP may, in dependence on the ‘environment+intent’ classification, personalize rationales (VAC+, NAL) by overwriting them and thereby
(75) 1) shape the gain, e.g. for ‘office+conversation’ enhancing high-frequency gain to facilitate speech intelligibility, or
(76) 2) modify the gain according to individually learned loudness functions, e.g. based on ‘cafe+socializing’ preferences for reducing the perceived loudness of a given environment.
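The per-band offsets above can be represented directly as context-keyed vectors added to a base rationale; a minimal sketch, where the base gains are illustrative placeholders and only the offset vectors come from the examples above:

```python
import numpy as np

# Sketch (assumed representation): apply a classified-context offset
# vector to a base gain rationale across 10 bands (200 Hz - 8 kHz),
# mirroring the '[environment, intent, per-band dB offsets]' examples.

bands_hz = np.geomspace(200, 8000, 10)  # the 10 band center frequencies
base_gain_db = np.array([20, 22, 25, 28, 30, 32, 33, 34, 34, 35], float)

offsets_db = {
    ("office", "conversation"): [-2, -1, 0, +2, +2, +2, +2, +2, +2, +2],
    ("cafe", "socializing"):    [+2, +2, +1, 0, 0, 0, -1, -2, -2, -2],
}

def personalized_gain(environment, intent):
    """Base rationale plus the learned offset for this context."""
    off = offsets_db.get((environment, intent), np.zeros(10))
    return base_gain_db + np.asarray(off, float)

print(personalized_gain("office", "conversation"))
```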
(77) Modeling user behavior as time series parameters (‘trigger A’, ‘location B’, ‘event C’, ‘time D’, ‘setting low/medium/high’) provides a foundation for training a decision tree algorithm to predict the optimal setting when encountering a new location or event type.
(78) Applying machine learning techniques to the context data by using the parameters as input for training a classifier would enable prediction of the corresponding change of hearing aid program or change of other assemblies of settings (IFTTT action). Subsequently implementing the trained classifier as an “if this then that” algorithm in a smartphone APP (decision tree), would facilitate prediction and automatic selection of the optimal program whenever the context changes. That is, even when encountering a new location or event, the algorithm will predict the most likely setting based on previously learned behavioral patterns. As an end result, this may improve the individuals' general preference of the Hearing instrument, and/or improve the individual's objective benefit of using the hearing instruments, as e.g. speech intelligibility (SI).
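A minimal sketch of such a decision tree on logged context tuples, using scikit-learn (the encoding of categorical features and the tiny training log are assumptions for illustration; real training data would come from the APP's logs):

```python
from sklearn.preprocessing import OrdinalEncoder
from sklearn.tree import DecisionTreeClassifier

# Sketch: train a decision tree on logged context tuples
# (trigger, location, event, time) to predict the preferred setting,
# as in the ['exited', 'Mikkelborg', 'bike', 'morning', 'high'] examples.

X = [["exited",   "Mikkelborg", "bike",   "morning"],
     ["entered",  "Eriksholm",  "office", "morning"],
     ["calendar", "Eriksholm",  "lunch",  "afternoon"],
     ["entered",  "Eriksholm",  "office", "afternoon"]]
y = ["high", "low", "medium", "low"]

# Encode categorical features; unseen categories map to -1 so the tree
# can still produce a prediction for a new location or event type.
enc = OrdinalEncoder(handle_unknown="use_encoded_value", unknown_value=-1)
clf = DecisionTreeClassifier(random_state=0).fit(enc.fit_transform(X), y)

# Predict a setting for a new context; when the user declines it via the
# APP, the corrected pair is appended to the log and the tree is refit.
new_ctx = [["entered", "Eriksholm", "meeting", "morning"]]
print(clf.predict(enc.transform(new_ctx)))
```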
(79) The APP should additionally provide a simple feedback interface (accept/decline) enabling the user to indicate if the setting is not satisfactory, to ensure that the parameters are continuously updated and that the classifier is retrained. Even with little training data, the APP would thus be able to adapt the hearing aid settings to the user's cognitive capabilities and changing sound environments throughout the day. Likewise, the generated data and user feedback might provide valuable insights, such as which hearing aid settings are selected in which context. Such information may be useful in order to further optimize the embedded signal processing capabilities within the hearing aids.
(80)
(81) The database may be generated during a learning mode of the hearing aid, where the user encounters a number of relevant acoustic situations (environments) in various states (e.g. at different times of day). In the learning mode, the user may be allowed to influence processing parameters of selected algorithms, e.g. noise reduction (e.g. thresholds for attenuating noise) or directionality (e.g. thresholds for applying directionality).
(82) An algorithm (e.g. an artificial neural network, e.g. a deep neural network) may e.g. be trained using a database of ‘ground truth’ data as outlined above in an iterative process, e.g. by applying a cost function. The training may e.g. be performed by using numerical optimization methods, such as (iterative) stochastic gradient descent (or ascent), or Adaptive Moment Estimation (Adam). A thus trained algorithm may be applied to the processor of the hearing aid during its normal use. Alternatively or additionally, a trained (possibly continuously updated) algorithm may be available during normal use of the hearing aid, e.g. via a smartphone, or located in the cloud. A possible delay introduced by performing some of the processing in another device (or on a server via a network, e.g. ‘the cloud’) may be acceptable, because it is not necessary to apply modifications (personalization) of the processing of the hearing aid within milliseconds or seconds.
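For concreteness, a minimal Adam update loop is sketched below (illustrative only; the toy quadratic cost stands in for a real cost function comparing predicted and database-preferred settings):

```python
import numpy as np

# Minimal Adam sketch (not the disclosure's training code): iteratively
# adjust personalization parameters w to minimize a cost J(w).

def adam_fit(grad, w, steps=500, lr=0.05, b1=0.9, b2=0.999, eps=1e-8):
    m = np.zeros_like(w)  # first-moment estimate
    v = np.zeros_like(w)  # second-moment estimate
    for t in range(1, steps + 1):
        g = grad(w)
        m = b1 * m + (1 - b1) * g
        v = b2 * v + (1 - b2) * g ** 2
        m_hat = m / (1 - b1 ** t)  # bias-corrected moments
        v_hat = v / (1 - b2 ** t)
        w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Toy cost: squared distance to a database-derived target setting.
target = np.array([0.7, 0.3])
grad = lambda w: 2 * (w - target)
print(adam_fit(grad, np.zeros(2)))  # converges toward [0.7, 0.3]
```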
(83) During normal use, the data that are referred to in steps S3-S6 may be generated and fed to a trained algorithm whose output may be (estimated) volume and/or program settings and/or personalized parameters of a processing algorithm for the given environment and mental state of the user.
(86) The APP may comprise further screens or functions, e.g. allowing a user to evaluate the determined personalized parameters before accepting them (via the APPLY parameters ‘button’).
(87) The hearing aids (HD1, HD2) are shown in the figure.
(88) In an embodiment, the remote control APP is configured to interact with a single hearing aid (instead of with a binaural hearing aid system).
(92) The substrate (SUB) further comprises a configurable signal processor (DSP, e.g. a digital signal processor), e.g. including a processor for applying a frequency and level dependent gain, e.g. providing beamforming, noise reduction, filter bank functionality, and other digital functionality of a hearing aid, e.g. implementing features according to the present disclosure. The configurable signal processor (DSP) is adapted to access the memory (MEM) e.g. for selecting appropriate parameters for a current configuration or mode of operation and/or listening situation and/or for writing data to the memory (e.g. algorithm parameters, e.g. for logging user behavior) and/or for accessing the database of personalized parameters according to the present disclosure. The configurable signal processor (DSP) is further configured to process one or more of the electric input audio signals and/or one or more of the directly received auxiliary audio input signals, based on a currently selected (activated) hearing aid program/parameter setting (e.g. either automatically selected, e.g. based on one or more sensors, or selected based on inputs from a user interface). The mentioned functional units (as well as other components) may be partitioned in circuits and components according to the application in question (e.g. with a view to size, power consumption, analogue vs. digital processing, acceptable latency, etc.), e.g. integrated in one or more integrated circuits, or as a combination of one or more integrated circuits and one or more separate electronic components (e.g. inductor, capacitor, etc.). The configurable signal processor (DSP) provides a processed audio signal, which is intended to be presented to a user. The substrate further comprises a front-end IC (FE) for interfacing the configurable signal processor (DSP) to the input and output transducers, etc., and typically comprising interfaces between analogue and digital signals (e.g. interfaces to microphones and/or loudspeaker(s), and possibly to sensors/detectors). The input and output transducers may be individual separate components, or integrated (e.g. MEMS-based) with other electronic circuitry.
(93) The hearing aid (HD) further comprises an output unit (e.g. an output transducer) providing stimuli perceivable by the user as sound based on a processed audio signal from the processor or a signal derived therefrom.
(94) The electric input signals (from input transducers M.sub.BTE1, M.sub.BTE2, M.sub.ITE) may be processed in the time domain or in the (time-) frequency domain (or partly in the time domain and partly in the frequency domain as considered advantageous for the application in question).
(95) All three (M.sub.BTE1, M.sub.BTE2, M.sub.ITE) or two of the three microphones (M.sub.BTE1, M.sub.ITE) may be included in the ‘personalization’-procedure according to the present disclosure. The ‘front’-BTE-microphone (M.sub.BTE1) may be selected as a reference microphone.
(98) In the present disclosure a scheme for personalizing settings has been described in the framework of processing algorithms (e.g. directional or noise reduction algorithms) using predictive tests. One could, however, also use these types of tests for the prescription of physical acoustics, including for example a ventilation channel (‘vent’).
(99) It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
(100) As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
(101) It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
(102) The claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
(103) Accordingly, the scope should be judged in terms of the claims that follow.
REFERENCES
(104)
[Bernstein et al.; 2013] Bernstein, J. G. W., Mehraei, G., Shamma, S., Gallun, F. J., Theodoroff, S. M., and Leek, M. R. (2013). “Spectrotemporal modulation sensitivity as a predictor of speech intelligibility for hearing-impaired listeners,” J. Am. Acad. Audiol., 24, 293-306. doi:10.3766/jaaa.24.4.5.
[ANSI/ASA S3.5; 1997] “American National Standard Methods for the Calculation of the Speech Intelligibility Index,” ANSI/ASA S3.5, 1997 Edition, Jun. 6, 1997.
[Taal et al.; 2010] Cees H. Taal, Richard C. Hendriks, Richard Heusdens, and Jesper Jensen, “A short-time objective intelligibility measure for time-frequency weighted noisy speech,” ICASSP 2010 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 4214-4217.
[Moore et al.; 2000] Moore, B. C. J., Huss, M., Vickers, D. A., Glasberg, B. R., and Alcantara, J. I. (2000). “A test for the diagnosis of dead regions in the cochlea,” Br. J. Audiol. doi:10.3109/03005364000000131.
[Elberling et al.; 1989] C. Elberling, C. Ludvigsen, and P. E. Lyregaard, “DANTALE: A new Danish speech material,” Scand. Audiol., 18, pp. 169-175, 1989.
[Bernstein et al.; 2016] Bernstein, J. G. W., Danielsson, H., Hallgren, M., Stenfelt, S., Ronnberg, J., and Lunner, T., “Spectrotemporal modulation sensitivity as a predictor of speech-reception performance in noise with hearing aids,” Trends in Hearing, vol. 20, pp. 1-17, 2016.