HEARING SYSTEM AND A METHOD FOR PERSONALIZING A HEARING AID
20230037356 · 2023-02-09
Inventors
- Niels Henrik PONTOPPIDAN (Smørum, DK)
- James Michael HARTE (Smørum, DK)
- Hamish INNES-BROWN (Smørum, DK)
- Lorenz FIEDLER (Smørum, DK)
CPC classification (ELECTRICITY)
- H04R2225/55
- H04R25/70
- H04R2225/39
- H04R25/606
- H04R25/50
- H04R2225/41
Abstract
A hearing system includes a processing device, a hearing aid adapted to be worn by a user, and a data logger. The hearing aid includes an input transducer providing an electric input signal representing sound in the environment of the user, and a hearing aid processor executing a processing algorithm in dependence of a specific parameter setting. The data logger stores time segments of said electric input signal, and data representing a corresponding user intent. The processing device comprises a simulation model of the hearing aid. The simulation model is based on a learning algorithm configured to provide a specific parameter setting optimized to the user's needs in dependence of a hearing profile of the user, the logged data, and a cost function. A method of determining a parameter setting for a hearing aid is further disclosed.
Claims
1. A hearing system comprising a processing device, and a hearing aid adapted to be worn by a user, the hearing aid comprising an input transducer configured to provide an electric input signal representing sound in the environment of the user, a hearing aid processor configured to execute at least one processing algorithm configured to modify said electric input signal and providing a processed signal in dependence thereof, said at least one processing algorithm being configurable in dependence of a specific parameter setting, and a user interface allowing a user to control functions of the hearing aid and to indicate user intent related to a preferred processing of a current electric input signal; a data logger storing time segments of said electric input signal, or estimated parameters that characterize said electric input signal, and data representing said corresponding user intent while the user is wearing the hearing aid during normal use; said hearing system comprises a communication interface between said processing device and said hearing aid, the communication interface being configured to allow said processing device and said hearing aid to exchange data between them, the processing device comprising a simulation processor comprising a simulation model of the hearing aid, the simulation model being based on a learning algorithm configured to determine said specific parameter setting for said hearing aid in dependence of a hearing profile of the user, a multitude of time segments of electric input signals representing different sound environments, a plurality of user intentions each being related to one of said multitude of time segments, said user intentions being related to a preferred processing of said time segments of electric input signals, wherein the hearing system is configured to feed said time segments of said electric input signal and data representing corresponding user intent from said data logger, or data representative thereof, to said 
simulation model via said communication interface to thereby allow said simulation model to optimize said specific parameter setting with data from said hearing aid and said user in an iterative procedure wherein a current parameter setting for said simulation model of said hearing aid is iteratively changed in dependence of a cost function, and wherein said optimized simulation-based hearing aid setting is determined as the parameter setting optimizing said cost function.
2. A hearing system according to claim 1 wherein the processing device forms part of or constitutes a fitting system.
3. A hearing system according to claim 1 wherein the user interface of the hearing aid comprises an APP configured to be executed on a portable electronic device.
4. A hearing system according to claim 1 wherein at least a part of the functionality of the processing device is accessible via a communication network.
5. A hearing system according to claim 1 configured to determine an initial, simulation-based hearing aid setting in dependence of a) the hearing profile of the user, b) the simulation model of the hearing aid, c) a set of recorded sound segments, and to transfer the simulation-based hearing aid setting to said hearing aid via said communication interface, and to apply the simulation-based hearing aid setting to said hearing aid processor for normal use of the hearing aid, at least in an initial learning period.
6. A hearing aid system according to claim 1 wherein the simulation model comprises a model of acoustic scenes.
7. A hearing aid system according to claim 6 wherein the learning algorithm is configured to determine said specific parameter setting for said hearing aid in dependence of a variety of different acoustic scenes created by mixing said time segments of the electric input signals in accordance with said model of acoustic scenes.
8. A hearing aid system according to claim 1 comprising at least one detector or sensor for detecting a current property of the user or of the environment around the user.
9. A hearing aid system according to claim 8 wherein current data from the at least one detector are stored in the datalogger and associated with other current data stored in the data logger.
10. A hearing aid system according to claim 1 wherein the cost function comprises a speech intelligibility measure.
11. A hearing aid system according to claim 1 wherein the hearing aid is constituted by or comprises an air-conduction type hearing aid, a bone-conduction type hearing aid, a cochlear implant type hearing aid, or a combination thereof.
12. A method of determining a parameter setting for a specific hearing aid of a particular user, the method comprising S1. Providing a simulation-based hearing aid setting in dependence of a) a hearing profile of the user, b) a digital simulation model of the hearing aid, the simulation model comprising configurable processing parameters of the hearing aid, c) a set of recorded sound segments, d) determining said hearing aid setting by optimizing said processing parameters in an iterative procedure in dependence of said recorded sound segments, said hearing profile, said simulation model, and a cost function, S2. Transferring the simulation-based hearing aid setting to an actual version of said specific hearing aid, S3. Using the simulation-based hearing aid setting on said actual hearing aid, when worn by the user, S4. Logging data from the actual hearing aid, said data including data representing encountered sound environments and the user's classification thereof, S5. Transferring the logged data to the simulation model, S6. Optimizing said simulation-based hearing aid setting determined in step S1 based on said logged data, optionally mixed with said recorded sound segments, S7. Transferring the optimized simulation-based hearing aid setting to the actual version of said specific hearing aid.
13. A method according to claim 12 wherein steps S4-S7 are repeated.
14. A method according to claim 12 wherein step S4 further comprises logging data from one or more of the activities of the user, the intent of the user, and the priorities of the user.
15. A method according to claim 12 wherein the cost function comprises an auditory perception measure.
16. A data processing system comprising a processor and program code means for causing the processor to perform the method of claim 12.
17. A non-transitory computer-readable medium storing a computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 12.
18. A method of determining a hearing aid setting comprising a parameter setting, or set of parameter settings, for a specific hearing aid of a particular user, the method comprising: S1. Providing a multitude of simulated acoustic scenes in dependence of meta-data of the hearing aid characterizing sound environments encountered by the user mixed with recorded sounds from a database; S2. Providing hearing aid processed simulated acoustic scenes according to a current set of parameter settings based on a digital simulation model of the user's hearing aid and said multitude of simulated acoustic scenes from S1; S3. Providing hearing loss-deteriorated hearing aid processed simulated acoustic scenes based on a digital simulation of the direct impact on the hearing aid processed simulated acoustic scenes from S2 due to the user's hearing loss based on the hearing profile; S4. Providing a resulting listening measure of the user's perception of said simulated acoustic scenes based on a hearing model that simulates the perception of the user of said hearing loss-deteriorated hearing aid processed simulated acoustic scenes from S3; S5. Optimizing the resulting listening measure from S4 by changing the current set of parameter settings from S2 under a cost function constraint, wherein the cost function is the resulting listening measure; S6. Repetition of S2-S6 until convergence, or a set performance, is reached; S7. Transferring the optimized simulation-based hearing aid setting(s) to the actual version of said specific hearing aid; S8. Using the simulation-based hearing aid setting on said actual hearing aid, when worn by the user; S9. Logging data from the actual hearing aid, said data including data representing encountered sound environments and the user's classification thereof; S10. Transferring the logged data to the digital simulation model; S11. Optimizing said simulation-based hearing aid setting based on said logged data following steps S1-S7.
19. A method according to claim 18 wherein the resulting listening measure comprises one of a speech intelligibility measure, a listening effort measure, or other comfort based metrics.
20. A method according to claim 18 wherein the cost function constraint comprises maximizing the speech intelligibility measure or a comfort measure, or minimizing the listening effort measure.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0133] The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they show only details needed to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter.
[0145] Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.
DETAILED DESCRIPTION OF EMBODIMENTS
[0146] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon the particular application, design constraints, or other reasons, these elements may be implemented using electronic hardware, a computer program, or any combination thereof.
[0147] The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
[0148] The present application relates to the field of hearing aids, in particular to personalizing processing of a hearing aid to its current user.
[0149] In the present disclosure, the current solutions for obtaining personalized preferences from applying AI and ML to the aforementioned data types are proposed to be extended by adding at least one (e.g. a majority, or all) of four further steps (cf. I, II, III, IV, below) to the current process where manufacturers provide standard settings, audiologists fine-tune standard settings or start from scratch, and hearing instrument wearers report back to audiologist about preferences or where preferences are monitored through data logging (possibly extended with bio-signals, e.g. EEG, temperature, etc.).
[0150] I. A First Step May Comprise Determining and Verifying a Simulation-Based Hearing Aid Setting:
[0151] Ia: Simulation Based Optimization of Prescribed Hearing Aid Settings with Respect to Speech Intelligibility or Other Domains Like Audibility, Comfort, Spatial Clarity, Etc.
[0152] Consider a hearing loss and outcome simulation engine; one particular embodiment is denoted FADE (described in [Schädler et al.; 2018] and [Schädler et al.; 2016]), which handles hearing loss simulation, processing simulation, and estimation of intelligibility (involving automatic speech recognition), and which is used as the example embodiment hereafter. The simulation engine FADE takes a set of recorded and transcribed sentences (i.e. both audio and text are available), a set of background noises (as audio), parameters describing an individual's hearing loss, and an instance of a hearing aid (either a physical instance or a digital equivalent) fitted to the individual hearing loss. The process starts by processing sounds from a database with prescribed settings and passing this mixture through the hearing loss and hearing outcome simulation, where FADE predicts the speech understanding performance. By analyzing the impact on performance as a function of the hearing aid settings, a preference recommender learning tool then optimizes the settings of the hearing aid instance so that the automatic speech recognizer achieves the best understanding (as predicted by FADE) for a particular hearing loss.
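The pipeline described above can be sketched schematically as follows; all function names and the simulator/recognizer interfaces are illustrative assumptions rather than the actual FADE API:

```python
import numpy as np

def mix_scene(sentence_audio, noise_audio, snr_db):
    """Mix a transcribed sentence with background noise at a given SNR."""
    n = min(len(sentence_audio), len(noise_audio))
    s, v = sentence_audio[:n], noise_audio[:n]
    # Scale the noise so that the mixture has the requested SNR.
    gain = np.sqrt((s ** 2).mean() / ((v ** 2).mean() * 10 ** (snr_db / 10)))
    return s + gain * v

def predict_intelligibility(setting, scenes, transcripts,
                            hearing_aid_sim, hearing_loss_sim, recognizer):
    """Word-correct rate after HA processing and simulated hearing loss."""
    correct = total = 0
    for scene, text in zip(scenes, transcripts):
        processed = hearing_aid_sim(scene, setting)   # hearing aid processing
        perceived = hearing_loss_sim(processed)       # individual hearing loss
        recognized = recognizer(perceived).split()    # automatic speech recognition
        reference = text.split()
        correct += sum(r == w for r, w in zip(recognized, reference))
        total += len(reference)
    return correct / max(total, 1)
```

Because the transcripts are known, the word-correct rate serves as the intelligibility estimate that a preference recommender can then maximize over candidate settings.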
[0153] Ib: Check Optimized Hearing Aid Settings on Actual Hearing Aid(s) when Worn by the User.
[0154] The optimized settings may be subject to approval by the audiologist, or applied directly. The optimized settings from step Ia are then transferred to actual hearing aids worn by the individuals (e.g. a particular user). Here, the traditional analytical method that combines context and ratings is used to confirm or reject whether the optimized settings are indeed optimal, taking usage patterns into account.
[0155] II. A Second Step May Comprise Optimization of Hearing Aid Settings Based on Data from Actual Use.
[0156] IIa: Optimization of Hearing Aid Settings Based on Behavioral Speech- and Non-Speech-Auditory Performance Measures.
[0157] A new range of optimization metrics independent of the automatic speech recognizer used in FADE is introduced. These optimization metrics combine behavioral speech and non-speech auditory performance measures, e.g. detection thresholds for spectro-temporal modulation (STM) (like Audible Contrast Threshold (ACT)) or spectral contrasts (ripples or frequency resolution tests), transmission of auditory salient cues (interaural level, time, and phase cues, etc.), or correlated psychophysiological measures, such as EEG or objective measures of listening effort and sound quality (cf. e.g. validation step 2A in
[0158] IIb: Optimization of Hearing Aid Settings Based on User Preferences.
[0159] We also introduce a new set of scales and criteria with which the individual hearing aid user can choose to report their preferences in a given situation. In one situation, e.g., it is not the perceived speech recognition that the hearing aid user decides is of importance; instead the user reports on clarity of the sound scene, and this metric may hereafter be given more weight in the simulation of the present sound scene and possibly in similar scenes, cf. e.g. validation step 2 (2A, 2B) in
[0160] III. A Third Step May Provide Feedback to the Simulation Model of Logged Data Captured During Wear of Hearing Aid(s) by the User which May Spawn a New Round of Optimization with the Simulated Sound Scenes that Statistically Match the Encountered Scenes.
[0161] A third step may comprise that data logged from hearing aids that describe sound scenes in level, SNR, etc., are used to augment the scenes, which are used for the simulation and optimization of hearing aid settings, cf. e.g. validation step 3 in
[0162] IV. A Fourth Step May Provide Optimization of Hearing Aid Settings Based on Personality Traits.
[0163] A fourth step may comprise that the simulation model estimates personality traits of each individual from questionnaires, or indirectly from data, and uses this in the optimization of hearing aid settings. The estimated personality traits may further be used during testing and validating the proposed settings. Recently, an interesting finding has shown how especially neuroticism and extraversion among the ‘Big Five’ personality traits impact the acceptance of noise, performance in noise, and perceived performance in noise (cf. e.g. [Wöstmann et al.; 2021]; regarding the ‘Big Five personality traits’, see e.g. Wikipedia at https://en.wikipedia.org/wiki/Big_Five_personality_traits), cf. e.g. validation step 4 in
[0165] The general function of the method and hearing system illustrated in
[0166] An aim of the hearing system and method is to determine a personalized parameter setting for one or more audio processing algorithms used in the particular hearing aid to process input signals according to the user's needs (e.g. including to compensate for the user's hearing impairment). A ‘personalized parameter setting’ is intended to mean a parameter setting that allows the user to benefit optimally from the processing of an audio signal picked up in a given acoustic environment. In other words, a personalized parameter setting may be a parameter setting that provides a compromise between an optimal compensation for the user's hearing impairment (e.g. to provide maximum intelligibility of speech) while considering the user's personal properties and intentions in a current acoustic environment.
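As a minimal illustration of such a compromise (the dimension names and default weights are assumptions for illustration, not taken from the disclosure), a personalized cost could blend an intelligibility estimate with a comfort estimate according to the user's priorities in the current scene:

```python
def personalized_cost(si_estimate, comfort_estimate, priorities):
    """Trade off estimated speech intelligibility against estimated comfort,
    with the balance set by the user's personal priorities (hypothetical keys)."""
    w_si = priorities.get("speech_in_noise", 0.5)
    w_comfort = priorities.get("comfort", 1.0 - w_si)
    return w_si * si_estimate + w_comfort * comfort_estimate
```

A user who reports that speech understanding matters most would set a high `speech_in_noise` priority, pulling the optimization toward maximum intelligibility rather than comfort.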
[0168] The embodiment of a hearing system shown in
[0169] The hearing system comprises a communication interface between the processing device (hosting the model of the physical environment) and the hearing aid of the particular user to allow the processing device and the hearing aid to exchange data between them (cf. arrows ‘S7’) from ‘Model of physical environment’ (processing device) to ‘Physical environment’ (hearing aid, or an intermediate device in communication with the hearing aid)).
[0170] An HCP may be involved in the transfer of the model-based hearing aid setting to the actual hearing aid, e.g. in a fitting session (cf. ‘Hearing care professional’, and callouts indicating an exchange of information between the HCP and the user of the hearing aid, cf. ‘Particular user’ in
[0171] When the simulation-based hearing aid setting has been transferred to the actual version of said specific hearing aid and applied to the appropriate processing algorithms, the user wears the hearing aid in a learning period where data are logged. The logged data may e.g. include data representing encountered sound environments (e.g. time segments of an electric input signal, or signals or parameters derived therefrom, e.g. as meta-data) and the user's classification thereof and/or the user's intent when present in a given sound environment. After a period of time (or continuously, or according to a predefined scheme, or at a session with an HCP), data are transferred from the data logger to the simulation model via the communication interface (cf. arrow ‘Validation’ in
[0172] The 2nd loop can be repeated continuously or with a predefined frequency, or triggered by specific events (e.g. power-up, data logger full, consultation with HCP (e.g. initiated by HCP), initiated by the user via a user interface, etc.).
[0176] These data are schematically illustrated in
Current Process Example
[0177] User Alice schedules an appointment for a hearing aid fitting with audiologist Bob, and has her hearing measured and characterized by standard procedures like audiograms, questionnaires, specific speech tests, and in-clinic simulation of scenes and settings.
[0178] Alice then leaves Bob with one (or a few) distinct hearing aid settings on her hearing instruments and starts using the hearing instruments in her everyday situations.
[0179] After a while, Alice returns to Bob for a follow-up session where they talk about the situations that Alice has encountered, both the good and the less good experiences. Based on this dialogue, and possibly assisted by looking at usage data (duration, sound environments, and relative use of the different settings) as well as the experience and insights of Bob, Bob then adjusts the settings in the hearing instrument so that the palette of settings better matches what Bob believes will benefit Alice. However, Bob is not aware of an update to the noise reduction and is therefore not capable of utilizing this to increase the benefits of the hearing instruments.
[0180] Alice now returns to using her hearing instruments in her everyday situations.
[0181] After another while Alice returns to Bob again and goes through the same process as last time. Still, Bob is not aware of an update to the noise reduction and is therefore not capable of utilizing this to the full extent.
Process Example According to the Present Disclosure
[0182] User Alice schedules an appointment for a hearing aid fitting with audiologist Bob, and has her hearing measured and characterized by standard procedures like audiograms, questionnaires, specific speech tests, and in-clinic simulation of scenes and settings.
[0183] Alice then leaves Bob with one (or a few) distinct hearing aid settings on her hearing instruments and starts using the hearing instruments in her everyday situations.
[0184] While Alice uses the hearing instruments, the hearing instruments and the APP (e.g. implemented on a smartphone or other appropriate processing device comprising display and data entry functionality) collect data about the sound environments and possibly the intents of Alice in those situations (cf. ‘Data logger’ in
[0185] Meanwhile, the cloud service simulates sound environments and situations with the data that describes her hearing, her sound environments, intents, and priorities collected with the smartphone and the hearing instruments. The simulation model may be implemented as one part of the cloud service where logged data are used as inputs to the model related to the situations to be simulated. Another part of the cloud service may be the analysis of the metrics to learn the preference for the tested settings (cf. e.g. validation step 2 (2A, 2B) in
[0186] When Alice returns to Bob for a follow-up session, they talk about the situations that Alice has encountered, both the good and the less good experiences. Based on this dialogue, Bob reviews the proposals of optimal settings and selects the ones which, in his experience, together with the description of the situations, fit Alice's needs and situations the best. Since the devices were given to Alice, the noise reduction has been updated and the optimization suggested a setting that utilizes this. The hearing instrument(s) may e.g. be (firmware-)updated during use, e.g. when recharged. The hearing instrument(s) may e.g. be firmware-updated out of this cycle (e.g. at a (physical or remote) consultation with a hearing care professional). The hearing instrument(s) may not need to have firmware updates if a "new" feature is just launched by enabling a feature in the fitting software.
[0187] When Alice returns to Bob for another follow-up session, they can also see which of the individual settings Alice rated as good and which ones she has used either a lot or for specific situations.
Further Examples
[0188] Embodiments of the present disclosure may include various combinations of the following features:
[0189] 1) The cloud service may simulate sound scenes and optimize hearing instrument settings that provide the best outcome for the individual user given their hearing characteristics, sound environments, preferences, and priorities. The sound environments, preferences, and priorities are collected from features 3) and 4).
[0190] 2) A fitting interface may enable the audiologist to select among the proposed optimized hearing instrument settings and thereafter store these settings on the individual user's hearing aid.
[0191] 3) In a learning period and/or during normal use, the smartphone APP may collect user ratings (how good is this setting; how important is comfort vs. speech-in-noise understanding vs. sensing the scene) and buffer data from feature 4) for use in feature 1). Moreover, the smartphone can add missing data types, if not available from the hearing instrument, e.g. movement data (e.g. acceleration data) and/or location data (e.g. GPS coordinates). The smartphone APP may also collect intents of the user in different sound environments. This may e.g. be done during a learning period and/or continuously during normal use.

4) The hearing instrument may process the incoming audio according to the currently selected settings. The hearing instrument may also provide data describing the sound environment for feature 1).
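One way to picture the logged data combining features 3) and 4) is a simple record per time segment; all field names below are illustrative assumptions, not the actual logged format:

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import time

@dataclass
class LogEntry:
    """One hypothetical data logger entry: environment data from the hearing
    instrument plus ratings/intents/priorities added via the smartphone APP."""
    timestamp: float
    level_db_spl: float                  # broadband input level
    estimated_snr_db: float              # estimated signal-to-noise ratio
    active_setting: str                  # currently selected parameter setting
    user_rating: Optional[int] = None    # e.g. 1-5: "how good is this setting?"
    user_intent: Optional[str] = None    # e.g. "conversation", "comfort"
    priorities: dict = field(default_factory=dict)  # e.g. {"speech_in_noise": 0.7}
    location: Optional[tuple] = None     # added by the smartphone if missing

def log_environment(logger, level_db_spl, snr_db, setting, **extras):
    """Append one entry to the data logger (a plain list in this sketch)."""
    entry = LogEntry(time.time(), level_db_spl, snr_db, setting, **extras)
    logger.append(entry)
    return entry
```

In a real system the buffer would live on the instrument and APP and be transferred to the cloud service in batches; here a list stands in for that buffer.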
[0193] The hearing care professional (HCP) has access to a fitting system comprising the model of the physical environment including the AI-simulation model. A number of interfaces may be provided between the fitting system and the hearing aid and an associated processing device serving the hearing aid, e.g. a smartphone (running an APP forming part of a user interface for the hearing aid, denoted ‘HA-User interface (APP)’ in
[0198] In the embodiment of a hearing system shown in
[0200] Thereby, a highly flexible hearing system can be provided, capable of providing an initial simulation-based hearing aid setting which can be personalized during use of the hearing aid. By having access to processing power at different levels, partly in the hearing aid, partly on the handheld or portable processing device, and partly on a network server, the hearing system is capable of executing computationally demanding tasks, e.g. involving artificial intelligence, e.g. learning algorithms based on machine learning techniques, e.g. neural networks. Processing tasks may hence be allocated to an appropriate processor taking into account both the computational intensity and the timing of the outcome of the processing task, to provide a resulting output signal to the user with an acceptable quality and latency.
[0202] The method may comprise some or all of the following steps (S1-S7).
[0203] The specific hearing aid may e.g. be of a specific style (e.g. a ‘receiver in the ear’ style having a loudspeaker in the ear canal and a processing part located at or behind pinna, or any other known hearing aid style). The specific hearing aid may further be a specific model of that style that the particular user is going to wear (e.g. exhibiting particular audiological features (e.g. regarding noise reduction/directionality, connectivity, access to sensors, etc.), e.g. according to a specific price segment (e.g. a specific combination of features)).
[0204] S1. Providing a simulation-based hearing aid setting in dependence of
[0205] a) a hearing profile of the user,
[0206] b) a (digital) simulation model of the hearing aid, the simulation model comprising configurable processing parameters of the hearing aid,
[0207] c) a set of recorded sound segments (e.g. with known content, or possibly mixed with recorded sound segments experienced by the user).
[0208] The hearing profile may e.g. comprise an audiogram (showing a hearing threshold (or hearing loss) versus frequency for the (particular) user). The hearing profile may comprise further data related to the user's hearing ability (e.g. frequency and/or level resolution, etc.). A simulation model of the specific hearing aid may e.g. be configured to allow a computer simulation of the forward path of the hearing aid from an input transducer to an output transducer to be made. The set of recorded sound segments may e.g. comprise recorded and transcribed sentences (making both audio and text available), and a set of background noises (as audio). Thereby a multitude of electric input signals may be generated by mixing recorded sentences (of known content) with different noise types and levels of noise (relative to the target signal (sentence)). The simulation model may e.g. include an automatic speech recognition algorithm that estimates the content of the (noisy) sentences. Since the contents are known, the intelligibility of each noisy sentence can be estimated. The simulation model may e.g. allow the simulation-based hearing aid setting to be optimized with respect to speech intelligibility. An optimal hearing aid setting for the particular user may e.g. be determined by optimizing the processing parameters of the simulation model in an iterative procedure in dependence of the recorded sound segments, the hearing profile, the simulation model, and a cost function (see e.g.
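The iterative procedure of step S1 can be sketched as a simple gradient-free search; the disclosure does not prescribe a particular optimizer, so the random search below is only an illustrative stand-in, and all names are assumptions:

```python
import random

def optimize_setting(initial_setting, cost_function, n_iter=200, step=0.1, seed=0):
    """Iteratively perturb a parameter setting (a dict of named parameters),
    keeping each candidate that improves the cost (higher = better here,
    e.g. an estimated speech intelligibility)."""
    rng = random.Random(seed)
    best = dict(initial_setting)
    best_cost = cost_function(best)
    for _ in range(n_iter):
        # Perturb every parameter of the current best setting.
        candidate = {k: v + rng.uniform(-step, step) for k, v in best.items()}
        cost = cost_function(candidate)
        if cost > best_cost:
            best, best_cost = candidate, cost
    return best, best_cost
```

Here `cost_function` would wrap the simulation model, scoring a candidate setting by e.g. the predicted intelligibility; Bayesian optimization or other gradient-free methods would be drop-in alternatives for the same loop.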
[0209] S2. Transferring the simulation-based hearing aid setting to an actual version of said specific hearing aid.
[0210] The simulation model may e.g. run on a specific processing device, e.g. a laptop or tablet computer or a portable device, e.g. a smart phone. The processing device and the actual hearing aid may comprise antenna and transceiver circuitry allowing the establishment of a wireless link between them to provide that an exchange of data between the hearing aid and the processing device can be provided. The simulation-based hearing aid setting may be applied to a processor of the hearing aid and used to process the electric input signal provided by one or more input transducers (e.g. microphones) to provide a processed signal intended for being presented to the user, e.g. via an output transducer of the hearing aid. The actual hearing aid may have a user-interface, e.g. implemented as an APP of a portable processing device, e.g. a smartphone. The user interface may be implemented on the same device as the simulation model. The user interface may be implemented on another device than the simulation model.
[0211] S3. Using the simulation-based hearing aid setting on said actual hearing aid, when worn by the user.
[0212] The simulation-based hearing aid setting is determined solely based on the hearing profile of the user and model data (e.g. including recorded sound segments). This simulation-based hearing aid setting is intended for use during an initial (learning) period, where data during normal use of the hearing aid, when worn by the particular user for which it is to be personalized, can be captured. Thereby an automated (learning) hearing system may be provided.
[0213] S4. Logging data from the actual hearing aid, said data including data representing encountered sound environments and the user's classification thereof.
[0214] A user interface, e.g. comprising an APP executed on a portable processing device, may be used as an interface to the hearing aid (and thus to the processing device). Thereby the user's inputs may be captured. Such inputs may e.g. include the user's intent in a given sound environment, and/or a classification of such sound environment. The step S4 may e.g. further comprise logging data from the activities of the user, the intent of the user, and the priorities of the user.
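A logged record combining a time segment (or estimated parameters characterizing it), the classified sound environment, and the user's indicated intent could, purely as an illustration, be represented like this. The field names and example values are assumptions for the sketch, not taken from the disclosure:

```python
from dataclasses import dataclass

@dataclass
class LogEntry:
    """One data-logger record captured during normal use (illustrative only)."""
    timestamp: float       # seconds since start of logging
    segment_params: dict   # estimated parameters characterizing the electric input signal
    environment: str       # classification of the sound environment
    user_intent: str       # the user's indicated intent for this segment

# A toy log with two entries, as might be transferred to the simulation model
log = [
    LogEntry(12.5, {"snr_db": 3.2, "level_db_spl": 68.0}, "cafeteria", "focus on speech"),
    LogEntry(310.0, {"snr_db": 15.0, "level_db_spl": 55.0}, "quiet office", "comfort"),
]
```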
[0215] S5. Transferring the logged data to the simulation model.
[0216] Thereby data from the user's practical use of the hearing aid can be considered by the simulation model (validation).
[0217] S6. Optimizing said simulation-based hearing aid setting based on said logged data.
[0218] A second loop of the learning algorithm is executed using input data from the hearing aid reflecting acoustic environments experienced by the user while wearing the hearing aid (optionally mixed with recorded sound segments with known characteristics, see e.g. step S1), and the user's evaluation of these acoustic environments and/or his or her intent while being exposed to said acoustic environments. Again, an optimal hearing aid setting for the particular user may be determined by optimizing the processing parameters of the simulation model in an iterative procedure in dependence of the user-logged and possibly pre-recorded sound segments, the hearing profile, the simulation model, and a cost function, e.g. related to an estimated speech intelligibility.
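The iterative optimization of processing parameters against a cost function may be sketched as a simple search loop. Here a single toy gain parameter is tuned against a placeholder intelligibility estimate; both the parameter and the cost function are stand-ins for the simulation model's actual quantities (e.g. an E-STOI-based estimate), chosen only to make the loop concrete:

```python
def estimate_intelligibility(gain: float) -> float:
    """Placeholder cost function that peaks at gain = 2.0
    (a stand-in for e.g. an E-STOI-based intelligibility estimate)."""
    return 1.0 - (gain - 2.0) ** 2

def optimize_setting(lo: float, hi: float, steps: int = 100) -> float:
    """Pick the parameter value that maximizes estimated intelligibility
    over a uniform grid of candidate settings."""
    candidates = [lo + (hi - lo) * i / (steps - 1) for i in range(steps)]
    return max(candidates, key=estimate_intelligibility)

best_gain = optimize_setting(0.0, 4.0)
```

A grid search is used here only for transparency; the iterative procedure of the disclosure may equally use gradient-based or other optimizers over many parameters.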
[0219] S7. Transferring the optimized simulation-based hearing aid setting to the actual version of said specific hearing aid.
[0220] The optimized simulation-based hearing aid setting thus represents a personalized setting of parameters that builds on the initial model data and data extracted from the user's wear of the hearing aid in the acoustic environment that he or she encounters during normal use.
[0221] Steps S4-S7 may be repeated, e.g. according to a predefined or adaptively determined scheme, or initiated via a user interface (as indicated by the dashed arrow from step S7 to step S4) or continuously.
[0224] An Exemplary Method of Determining a Hearing Aid Setting:
[0226] The method is configured to determine a set of parameter settings (setting(s) for brevity in the following) for a specific hearing aid of a particular user covering encountered listening situations. The steps S1-S11 of the method are described in the following:
[0227] S1. Meta-data characterizing the encountered sound environments and listening situations (from HA data logging), leading to a set of simulated sound environments and listening situations obtained by mixing sounds from a database.
[0228] S2. A digital simulation model of the user's own hearing aid that processes the sounds from S1 according to a current set of parameter settings.
[0229] S3. A digital simulation of the user's hearing loss based on the hearing profile of the user that simulates the direct impact on the sound due to e.g. deterioration from limited audibility, limited spectral resolution, etc.
[0230] S4. An AI-Hearing model that simulates the perception of the impaired hearing, e.g. speech intelligibility based on automatic speech recognizers or metrics like E-STOI, listening effort, or comfort, based on established metrics.
[0231] S5. An optimization of outcomes from S4, e.g. maximization of intelligibility or comfort or sound quality, or minimization of listening effort, updating the parameter settings of S2.
[0232] S6. Repetition of steps S2-S5 until convergence or a set performance is reached.
[0233] S7. Transferring the optimized simulation-based hearing aid setting(s) to the actual version of said specific hearing aid.
[0234] S8. Using the simulation-based hearing aid setting on said actual hearing aid, when worn by the user.
[0235] S9. Logging data from the actual hearing aid, said data including data representing encountered sound environments and the user's classification thereof.
[0236] S10. Transferring the logged data to the simulation model.
[0237] S11. Optimizing said simulation-based hearing aid setting based on said logged data, following S1-S7.
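One pass of the simulation loop of steps S2-S5 may be sketched as follows. The hearing aid model, the hearing loss model, and the perception metric below are heavily simplified toy stand-ins (a broadband gain, an audibility threshold, and a correlation proxy, respectively); the real models are frequency dependent and based on the user's hearing profile and metrics such as E-STOI:

```python
import numpy as np

def hearing_aid_model(sound, settings):
    """S2: digital hearing aid simulation; here just a broadband gain (toy stand-in)."""
    return settings["gain"] * sound

def hearing_loss_model(sound, threshold):
    """S3: crude hearing-loss simulation; samples below an audibility
    threshold are lost (real models are frequency dependent)."""
    return np.where(np.abs(sound) >= threshold, sound, 0.0)

def perception_metric(perceived, reference):
    """S4: toy intelligibility proxy; correlation with the clean target
    (a stand-in for E-STOI or an automatic speech recognizer)."""
    return float(np.corrcoef(perceived, reference)[0, 1])

def inner_loop(sound, reference, threshold, gains):
    """S5/S6: search for the settings that maximize the perception metric."""
    return max(({"gain": g} for g in gains),
               key=lambda s: perception_metric(
                   hearing_loss_model(hearing_aid_model(sound, s), threshold),
                   reference))

rng = np.random.default_rng(1)
reference = rng.standard_normal(4000)                  # clean target signal
sound = reference + 0.3 * rng.standard_normal(4000)    # noisy electric input
best = inner_loop(sound, reference, threshold=0.5, gains=[0.5, 1.0, 2.0, 4.0])
```

In this toy setting a higher gain lifts more of the signal above the audibility threshold, so the search favors the larger gains; the real optimization would balance intelligibility against comfort and listening effort as described in S5.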
[0238] S1 can be influenced by logging data obtained with the same hearing aid or with another hearing aid, without it having been part of the loop.
[0239] The method comprises two loops: an ‘inner loop’ (steps S2-S6) running on the simulation model, and an ‘outer loop’ (steps S7-S11) involving the actual hearing aid worn by the user.
[0240] The simulation model of the hearing aid (user's or other) is a digital simulation of a hearing aid that processes sound represented in digital format with a set of hearing aid settings. It takes sounds (e.g. provided as meta-data) and current (adaptable) settings as input and outputs sound.
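The interface described above (sound and current settings in, processed sound out) could be typed roughly as follows; the class and method names are illustrative, not taken from the disclosure:

```python
from typing import Protocol, Sequence

class HearingAidSimulation(Protocol):
    """Digital simulation of a hearing aid: sound + settings in, sound out."""
    def process(self, sound: Sequence[float], settings: dict) -> list[float]: ...

class GainOnlySimulation:
    """Minimal concrete model: applies a single broadband gain (illustrative)."""
    def process(self, sound, settings):
        return [settings["gain"] * x for x in sound]

sim: HearingAidSimulation = GainOnlySimulation()
out = sim.process([0.1, -0.2, 0.3], {"gain": 2.0})
```

Keeping the settings as an explicit input is what allows the optimization loops above to re-run the same model with updated parameter settings.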
[0241] Embodiments of the disclosure may e.g. be useful in applications such as fitting of a hearing aid or hearing aids to a particular user.
[0242] It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
[0243] As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
[0244] It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
[0245] The claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
REFERENCES
[0246] [Schädler et al.; 2018] Schädler, M. R., Warzybok, A., Ewert, S. D., Kollmeier, B., 2018. Objective Prediction of Hearing Aid Benefit Across Listener Groups Using Machine Learning: Speech Recognition Performance With Binaural Noise Reduction Algorithms. Trends in Hearing, vol. 22, pp. 1-21.
[0247] [Schädler et al.; 2016] Schädler, M. R., Warzybok, A., Ewert, S. D., Kollmeier, B., 2016. A simulation framework for auditory discrimination experiments: Revealing the importance of across-frequency processing in speech perception. J. Acoust. Soc. Am. 139, 2708-2723.
[0248] [Wöstmann et al.; 2021] Wöstmann, M., Erb, J., Kreitewolf, J., Obleser, J., 2021. Personality captures dissociations of subjective versus objective noise tolerance.
[0249] [ANSI S3.5; 1995] American National Standards Institute, “ANSI S3.5, Methods for the Calculation of the Speech Intelligibility Index,” New York 1995.
[0250] [Jensen & Taal; 2016] J. Jensen and C. H. Taal, “An Algorithm for Predicting the Intelligibility of Speech Masked by Modulated Noise Maskers,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 24, no. 11, pp. 2009-2022, November 2016.