Optimization tool for auditory devices
11701516 · 2023-07-18
Assignee
Inventors
CPC classification
H04R2225/67
ELECTRICITY
H04R25/70
ELECTRICITY
International classification
Abstract
A system comprises an auditory device processor, an auditory device output mechanism, an auditory input sensor, a database including a reference bank of environmental sounds and corresponding sound profiles, and a memory. The auditory device processor is configured to: while the auditory input sensor is detecting a first environmental sound, receive a sound selection from the user, wherein the sound selection is associated with the first environmental sound; store a first sound profile in the reference bank corresponding to the first environmental sound; receive a second environmental sound detected by the auditory input sensor; analyze a frequency content of the second environmental sound; compare the frequency content of the second environmental sound with the reference bank of environmental sounds and corresponding sound profiles stored in the database; in response to the comparison, select one of the sound profiles corresponding to the second environmental sound; and automatically adjust the parameter settings.
Claims
1. A system for controlling parameter settings of an auditory device for a user comprising: an auditory device processor; an auditory device output mechanism controlled by the auditory device processor, the auditory device output mechanism including one or more modifiable parameter settings; an auditory input sensor configured to detect an environmental sound and communicate with the auditory device processor; a database in communication with the auditory device processor, the database including a reference bank of environmental sounds and corresponding sound profiles, each sound profile including an associated set of parameter settings; a memory in communication with the auditory device processor and including instructions that, when executed by the auditory device processor, cause the auditory device processor to: while the auditory input sensor is detecting a first environmental sound, receive a sound selection from the user, wherein the sound selection is a set of parameter settings, wherein the sound selection is associated with the first environmental sound; store a first sound profile in the reference bank corresponding to the first environmental sound, and wherein the set of parameter settings of the sound selection is associated with the first sound profile; receive a second environmental sound detected by the auditory input sensor; analyze a frequency content of the second environmental sound; compare the frequency content of the second environmental sound with the reference bank of environmental sounds and corresponding sound profiles stored in the database; in response to the comparison, select one of the sound profiles corresponding to the second environmental sound; and automatically adjust the parameter settings of the auditory device output mechanism to match the set of parameter settings associated with the selected sound profile.
2. The system of claim 1, wherein the auditory device processor is configured to use a wavelet scattering transform to analyze the frequency content of the second environmental sound.
3. The system of claim 1, wherein the auditory device processor is configured to use a Fourier transform to compute the frequency content of the second environmental sound.
4. The system of claim 1, wherein the frequency content of the second environmental sound includes one or more properties selected from the group comprising a signal-to-noise ratio, an amplitude range, and a pitch range.
5. The system of claim 4, wherein one or more properties selected from the group comprising a signal-to-noise ratio, an amplitude range, and a pitch range of the frequency content of the second environmental sound matches the one or more properties selected from the group comprising a signal-to-noise ratio, an amplitude range, and a pitch range of one of the sound profiles.
6. The system of claim 1 wherein the auditory device output mechanism is one of an electrode of a cochlear implant and a speaker of a hearing aid.
7. The system of claim 1 wherein the auditory input sensor is one of a microphone of a cochlear implant and a microphone of a hearing aid.
8. The system of claim 1 wherein each set of parameter settings includes amplification settings, compression settings, and directional noise rejection settings.
9. The system of claim 1 wherein each sound profile is associated with a stored geolocation, the system further comprises a location sensing mechanism in communication with the auditory device processor, and wherein the processor is further configured to: in response to selecting one of the sound profiles, compare a present geolocation of the auditory device output mechanism identified by the location sensing mechanism with the stored geolocations.
10. The system of claim 1 wherein the auditory device processor is further configured to create a further sound selection if none of the sound profiles in the reference bank corresponds to the second environmental sound, wherein the further sound selection is a further set of parameter settings, and wherein the further sound selection is associated with the second environmental sound.
11. A system for controlling parameter settings of an auditory device for a user comprising: an auditory device processor; an auditory device output mechanism controlled by the auditory device processor, the auditory device output mechanism including one or more modifiable parameter settings; an auditory input sensor configured to detect an environmental sound and communicate with the auditory device processor; a database in communication with the auditory device processor, the database including a reference bank of environmental sounds and corresponding sound profiles, each sound profile including an associated set of parameter settings, including a first set of parameter settings corresponding to a first sound profile and a second set of parameter settings corresponding to a second sound profile; a memory in communication with the auditory device processor and including instructions that, when executed by the auditory device processor, cause the auditory device processor to: while the auditory input sensor is detecting a first environmental sound, receive a sound selection from the user, wherein the sound selection is a set of parameter settings, wherein the sound selection is associated with the first environmental sound; store the first sound profile in the reference bank corresponding to the first environmental sound, and wherein the set of parameter settings of the sound selection is associated with the first sound profile; receive a second environmental sound detected by the auditory input sensor; analyze a frequency content of the second environmental sound; compare the frequency content of the second environmental sound with the first and second sound profiles of the reference bank of environmental sounds stored in the database and, in response to the comparison, select one of the first and second sound profiles; and automatically adjust the parameter settings of the auditory device output mechanism to match the set of parameter settings corresponding to the selected sound profile.
12. The system of claim 11, wherein the auditory device processor is configured to use a wavelet scattering transform to analyze the frequency content of the second environmental sound.
13. The system of claim 11, wherein the auditory device processor is configured to use a Fourier transform to compute the frequency content of the second environmental sound.
14. The system of claim 11, wherein the frequency content of the second environmental sound includes one or more properties selected from the group comprising a signal-to-noise ratio, an amplitude range, and a pitch range.
15. The system of claim 14, wherein the one or more properties selected from the group comprising a signal-to-noise ratio, an amplitude range, and a pitch range of the frequency content of the second environmental sound matches the one or more properties selected from the group comprising a signal-to-noise ratio, an amplitude range, and a pitch range of one of the sound profiles.
16. The system of claim 11 wherein the auditory device output mechanism is an electrode of a cochlear implant and the auditory input sensor is a microphone of a cochlear implant.
17. The system of claim 11 wherein the auditory device output mechanism is a speaker of a hearing aid and the auditory input sensor is a microphone of a hearing aid.
18. The system of claim 11 wherein each set of parameter settings includes amplification settings, compression settings, and directional noise rejection settings.
19. The system of claim 11 wherein the auditory device processor is further configured to create a further sound selection if none of the sound profiles in the reference bank corresponds to the second environmental sound, wherein the further sound selection is a further set of parameter settings, and wherein the further sound selection is associated with the second environmental sound.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The foregoing and other objects, features, and advantages of the present disclosure set forth herein will be apparent from the following description of particular embodiments of those inventive concepts, as illustrated in the accompanying drawings. Also, in the drawings, like reference characters refer to the same parts throughout the different views. The drawings depict only typical embodiments of the present disclosure and, therefore, are not to be considered limiting in scope.
DETAILED DESCRIPTION
(11) The present application provides an optimization system that optimizes the parameters of an auditory device based on the user's specific needs to improve the user's ability to hear.
(13) In the example shown in
(14) The auditory device processor 108 on the controller 106 controls the pulse generator 110 to deliver electrical pulses (i.e., neurostimulation) according to a selected stimulation parameter set (e.g., pulse amplitude, pulse width, pulse frequency, etc.) and/or other instructions to applicable regions of the nervous system. Neurostimulation programs or coding strategies based on variable parameters that are used in the delivery of neurostimulation therapy (i.e., stimulation) may be stored in a memory 120, in the form of executable instructions and/or software, for execution by the auditory device processor 108. The auditory device or NSD 102 may also include a global positioning system (GPS) chip 121 and a database of reference sound profiles 123, which may be utilized in the programming stored on the memory 120 as described below.
(15) In some embodiments, the controller 106 may contain a machine-learning logic unit (“MLU”) 122 that is trained to perform machine-learning operations involving the generation of various predictions that may be used to optimize the functionality of the NSD 102 and/or initiate and optimize neurostimulation therapy provided to a patient via the NSD 102. The MLU 122 may process data received from users interacting with the NSD 102 when generating such predictions. Although the controller 106 is illustrated as being included within the NSD 102, in some embodiments, it may be implemented in a computing device that is separate from the NSD 102. In such an embodiment, the controller 106 may communicate with the NSD 102 remotely, such as through a communications network 124, which may be a telecommunications network, the Internet, an intranet, a local area network, a wireless local network, a radio frequency communications protocol, or any other type of communications network, as well as combinations of networks.
(16) The NSD 102 may be communicatively connected to an optimization device 126 and/or an audiologist device 128 locally or via the communications network 124 to receive input that may be processed to optimize neurostimulation therapies and/or optimal functions of the NSD 102. Each of the optimization device 126 and the audiologist device 128 provides user interface(s) that enable a patient or user to provide the input (e.g., data) to the NSD 102 that defines, qualifies, and/or quantifies aspects of the neurostimulation therapy provided by the NSD 102. More specifically, variables of the equations that are part of a computer program stored in the memory 120 of the NSD 102 are set by the optimization interface and/or the audiologist interface of the optimization device 126 and the audiologist device 128, respectively. Each of the devices 126, 128 may include a processor-based platform that operates on an operating system, such as Microsoft® Windows®, Linux®, iOS®, Android®, and/or the like that is capable of executing and/or otherwise generating the interfaces.
(17) The user or operator of the optimization device 126 works with the patient wearing the NSD 102 to gather user feedback in response to audio tests as shown in
(18) The audiologist operates the device 128 to directly adjust the programming or instructions on the memory 120 of the NSD 102. Specifically, the audiologist may provide input in the form of a set of stimulation parameters that define various parameters, such as pulse amplitude, pulse width, pulse frequency, etc., any of which may be used to automatically determine a specific neurostimulation therapy (e.g., parameter space) for a particular patient. Based on such input, the controller 106 logically directs the pulse generator 110 to modify internal parameters and vary the characteristics of stimulation pulses transmitted to the nervous system. The audiologist may interact with the optimization device 126 to provide feedback regarding the success of the stimulation (e.g., better, same, or worse) in comparison to previous neurostimulation therapies, to modify parameters of the current stimulation, etc.
(19) Each of the above identified instructions and applications can correspond to a set of instructions for performing one or more functions described herein. These instructions need not be implemented as separate software programs, procedures, or modules. The memory 130 can include additional instructions or fewer instructions. Furthermore, various functions of the system 100 may be implemented in hardware and/or in software, including in one or more signal processing and/or application specific integrated circuits.
(20) In one example, the memory 120 includes stored instructions that, when executed by the auditory device processor 108, cause it to deconstruct acoustic waves into discrete electrical signals and to generate electrical pulses through the pulse generator. In one example, U.S. Pat. No. 9,717,901 discloses a frequency-modulated phase coding (FMPC) strategy to encode acoustical information in a cochlear implant 102. The entirety of the disclosure provided by U.S. Pat. No. 9,717,901 is incorporated herein. The FMPC strategy utilizes the following equation that describes the relationship between the sound level at the outer ear canal and the corresponding rate of action potentials that can be recorded from a single auditory nerve fiber. This function is expressed below and includes cochlear nonlinearities and depends on five critical parameters: a spontaneous rate (a.sub.0), a maximum rate (a.sub.1), a threshold for stimulation (a.sub.2), a level for nonlinear behavior (a.sub.3), and a value describing the slope after the level for nonlinear behavior (a.sub.4).
(21)
where R is the mean discharge rate, and d is
(22)
where the variables denote the following: a.sub.0=the spontaneous discharge rate of the primary afferent, a.sub.1=the maximum increase of the discharge rate, a.sub.2=the sound pressure of the half maximum discharge rate, a.sub.3=the sound pressure at which nonlinear behavior occurs, a.sub.4=the exponent of the power-law slope in the nonlinear region, p=the sound pressure level at the tympanic membrane, and
(23) p=10*log10(abs(S1(frequency))), where S1 is the Short-Time Fourier Transform (STFT) of the acoustic signal.
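The level term p in paragraph (23) can be illustrated with a short sketch. The code below uses a minimal windowed-FFT short-time Fourier transform and converts each frame's spectral magnitude to decibels via 10*log10(|S1(f)|); the frame length, hop size, and window choice are illustrative assumptions, not values from the patent.

```python
import numpy as np

def stft_level_db(signal, frame_len=256, hop=128):
    """Compute p = 10*log10(|S1(f)|) per frame, a sketch of paragraph (23)."""
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        spectrum = np.fft.rfft(frame)              # S1(frequency) for this frame
        # Floor the magnitude to avoid log of zero before converting to dB.
        magnitude = np.maximum(np.abs(spectrum), 1e-12)
        frames.append(10.0 * np.log10(magnitude))
    return np.array(frames)                        # shape: (n_frames, frame_len//2 + 1)

# Example: a 1 kHz tone sampled at 16 kHz; the spectral peak lands in the
# bin at 1000 Hz / (16000 Hz / 256) = bin 16.
fs = 16000
t = np.arange(fs) / fs
p = stft_level_db(np.sin(2 * np.pi * 1000 * t))
```

In practice a library routine such as `scipy.signal.stft` would replace the hand-rolled loop; the loop is shown only to make the p computation explicit.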
(24) Each of
(25) TABLE 1. Parameter values for FIGS. 3A-3E

              a.sub.0    a.sub.1    a.sub.2    a.sub.3      a.sub.4
    FIG. 3A   0:0.1:1    1          20         50           0.5
    FIG. 3B   0          0:0.1:1    20         50           0.5
    FIG. 3C   0          1          5:5:50     50           0.5
    FIG. 3D   0          1          20         20:10:120    0.5
    FIG. 3E   0          1          20         50           0.1:0.1:1
(26) Traces in
(27) The above variables are examples of the types of parameters that are adjusted during the audiologist tuning sessions. Any hearing device can have more or fewer parameters than those noted above, depending on the coding strategy.
(28) In the systems of the present application, the optimization system 200 is used to optimize the values of the parameters of the coding strategy programmed on the memory 120 of the NSD 102. In the primary example provided, the optimization system 200 is described as being embodied in first, second, and third modules 202, 204, 206. It is understood that any one or more of the three modules 202, 204, 206 can be used independently or in any combination to describe the features and functions described herein. It is also understood that all three modules 202, 204, 206 could be a single system, independent systems, or combinations thereof.
(29) Referring to
(30) The first condition determines the patient's threshold for detecting speech. A sound is provided to the patient and gradually increases in volume. The patient indicates when he or she first detects the sound against a quiet background.
(31) The second condition determines the patient's preference for the most comfortable decibel level. A sound bite of speech is provided to the patient and gradually increases in volume. The patient indicates when he or she first understands the speech clearly at a comfortable level, such as listening to an audiobook.
(32) The third condition determines the patient's threshold for recognizing speech. A sound bite of speech is provided to the patient at a high decibel level and gradually decreases in volume. The patient indicates when he or she can no longer understand what is being said.
(33) The fourth condition determines the patient's threshold for the most uncomfortable decibel level. A sound bite of speech is provided to the patient and gradually increases in volume. The patient indicates when the speech reaches a level that it is uncomfortable to hear.
(34) The fifth condition determines the patient's threshold for understanding speech as the signal-to-noise ratio is lowered. A sound bite of speech is played as the background noise is gradually increased (i.e., the SNR is gradually decreased). The patient indicates when the speech is no longer recognizable due to the background noise.
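The ascending-level procedure shared by several of the conditions above can be sketched in a few lines. The helper name, the level grid, and the simulated listener below are illustrative assumptions standing in for the patient's real responses, not details from the patent.

```python
def ascending_threshold(detects, levels_db):
    """Return the first presentation level at which the listener reports
    detecting (or understanding) the sound, as in conditions one and two.
    `detects(level)` stands in for the patient's yes/no response."""
    for level in levels_db:
        if detects(level):
            return level
    return None  # never detected within the tested range

# Simulated listener whose true detection threshold is 25 dB; the sound is
# presented in ascending 5 dB steps, as in the first condition.
true_threshold = 25
levels = range(0, 80, 5)
measured = ascending_threshold(lambda lvl: lvl >= true_threshold, levels)
```

A descending variant (conditions three and four) would simply iterate the level grid in reverse and stop when the response changes.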
(35) The GUI instructions 132 on the memory 130 of the optimization device 126 provide algorithmic processing that compares the patient's threshold levels 210 for each of the five conditions with the threshold levels for normal hearing listeners. The average levels of a normal hearing listener are based on a database of audiological waves representing speech having a variety of pitches and frequencies against various levels of background noise. If the threshold levels 210 are outside of an acceptable range for each condition, the patient is deemed hearing impaired. An output 212 of the first module 202 is a plurality of decibel-level ranges that the patient has indicated as acceptable for each condition.
(36)
(37) The first module 202 may be tailored to test for specific aspects of the cochlear implant NSD 102. For example, the threshold levels for the various conditions are tested for an auditory wave that is a complex summation of many different wave forms that affect a plurality of channels of the electrode array. In some embodiments, the electrode array of the cochlear implant is tested as a collective. In other embodiments, the conditions are tested separately for each channel.
(38) The second module 204 includes a plurality of user interfaces 500, 600, 700 of
(39) In the first embodiment shown in
(40) In a second embodiment shown in
(41) In user interfaces 500, 600 the search is initiated by presenting the patient with a small number of device parameters that he or she is asked to rate on a scale relative to each other. In one embodiment, about half of these initial parameters are drawn randomly and uniformly from the parameter space while the other half are drawn at random within a parameter space closely related to the original device settings of the cochlear implant user. The relative ratings for each parameter are then used as inputs for a fitness function which determines which of the settings should be ‘selected’ to be recombined with other surviving parameters to create ‘child’ parameters that will then undergo the same pruning and recombination procedure in the next generation. These iterations proceed for about 15-20 generations, at which point the majority of the recommendations made are appealing to the user.
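The prune-and-recombine search described above is a genetic algorithm, and its shape can be sketched as follows. The fitness function here is simulated as distance to a hidden preferred setting, since the real fitness comes from the user's relative ratings; the population sizes, mutation scale, and parameter vectors are illustrative assumptions only.

```python
import random

random.seed(0)

ORIGINAL = [0.5, 0.5, 0.5]   # hypothetical original device settings
TARGET = [0.8, 0.2, 0.6]     # stand-in for the user's (unknown) preference

def fitness(settings):
    # In practice this score is derived from the user's relative ratings;
    # here it is simulated as closeness to the hidden preferred setting.
    return -sum((s - t) ** 2 for s, t in zip(settings, TARGET))

def recombine(a, b):
    # One 'child': per-parameter crossover plus a small mutation.
    return [random.choice(pair) + random.gauss(0, 0.02) for pair in zip(a, b)]

# Half the initial population is uniform random, half is drawn near the
# user's original settings, as paragraph (41) describes.
pop = [[random.random() for _ in ORIGINAL] for _ in range(8)]
pop += [[s + random.gauss(0, 0.05) for s in ORIGINAL] for _ in range(8)]

for generation in range(20):      # about 15-20 generations
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:8]           # prune the lower-rated half
    children = [recombine(random.choice(survivors), random.choice(survivors))
                for _ in range(8)]
    pop = survivors + children

best = max(pop, key=fitness)
```

Because the top-rated settings always survive each generation, the best candidate can only improve over the iterations, which mirrors why most recommendations are appealing by generation 15-20.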
(42) For example, in the embodiment 500 shown in
(43) In the third embodiment shown in
(44) The second module 204, and the one or more user interfaces 500, 600, 700 employed, provide specific parameter settings 222 associated with specific sounds or environments.
(45) Referring back to
(46) Optimized parameter settings 222 associated with specific environments that are output from the second module 204 are provided as input to the third module 206. The optimized parameter settings 222 are matched to clusters within the reference bank of sounds in order to associate the parameter settings with a greater range of environments. Simultaneously, the acoustic environment received on the microphone or auditory input sensor of the cochlear implant or other hearable device is compared with the reference bank of sounds to identify a comparable environment having associated parameter settings. The associated parameter settings 230 are output to the memory of the cochlear implant and automatically factored into the coding strategy of the cochlear implant.
(47) The third module controls the parameter settings of the auditory device or the NSD. In one embodiment, the auditory device 102 includes an auditory device processor 108, an auditory device output mechanism including one or more modifiable parameter settings, and an auditory input sensor 104 that detects an environmental sound and communicates with the auditory device processor 108. The auditory device output mechanism is any output mechanism of an auditory device, such as one or more electrodes 112, 114 of a cochlear implant or a speaker on a hearing aid device. The auditory input sensor 104 may be a microphone positioned on the auditory device. The system also includes a database 123 of reference sound profiles and a plurality of sets of parameter settings, each of which is paired with a corresponding sound profile. The database 123 may be stored directly on the auditory device 102 or remotely on the patient's mobile device or on a remote server.
(48) The auditory device 102 also includes a memory 120 in communication with the processor 108 and including instructions that, when executed by the processor, cause the processor 108 to undertake certain steps that match the environmental sound detected by the auditory input sensor 104 with the reference bank of sounds 123 to identify a comparable environment having associated parameter settings.
(49) More specifically, the processor 108 first receives the environmental sound detected by the auditory input sensor 104 and analyzes a frequency content of the environmental sound. The system may determine the frequency content of the environmental sound by using a wavelet scattering transform to analyze the frequency content of the environmental sound, using a Fourier transform to compute the frequency content of the environmental sound, or any other suitable classifier algorithm to determine the frequency content of the acoustic environment.
(50) The processor 108 compares the frequency content of the environmental sound with the sound profiles stored in the database 123. In response to the comparison, the system selects one of the sound profiles and automatically adjusts the parameter settings of the auditory device output mechanism, such as electrodes 112, 114, to match the set of parameter settings associated with the selected sound profile. Each set of the plurality of sets of parameter settings may include amplification settings, compression settings, and directional noise rejection settings.
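The compare-and-select step in paragraph (50) can be sketched as a nearest-neighbor lookup over frequency-content features. The reference bank contents, feature choices (an SNR estimate and an amplitude range, two of the properties the claims mention), and parameter names below are all hypothetical, introduced only for illustration.

```python
import numpy as np

# Hypothetical reference bank: frequency-content features paired with the
# set of parameter settings to apply for that sound profile.
REFERENCE_BANK = {
    "quiet room":  {"features": np.array([30.0, 20.0]),   # [snr_db, amp_range_db]
                    "settings": {"gain_db": 10, "compression": 2.0}},
    "busy street": {"features": np.array([5.0, 45.0]),
                    "settings": {"gain_db": 20, "compression": 4.0}},
}

def select_profile(features, bank):
    """Pick the stored sound profile whose frequency-content features are
    closest (Euclidean distance) to those of the detected sound."""
    return min(bank,
               key=lambda name: np.linalg.norm(bank[name]["features"] - features))

# Features extracted from the second environmental sound (hypothetical values).
detected = np.array([6.5, 42.0])
profile = select_profile(detected, REFERENCE_BANK)
settings = REFERENCE_BANK[profile]["settings"]   # applied automatically
```

A distance threshold on the winning match would implement the no-close-match case of claim 10, in which a new sound selection is created instead.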
(52) Each sound profile may also be associated with a stored geolocation. A location sensing mechanism in communication with the auditory device processor determines the present geolocation of the auditory device. After the system selects a sound profile that corresponds to the environmental sound, the processor may further compare a present geolocation of the auditory device output mechanism identified by the location sensing mechanism with the stored geolocations. The geolocation may identify a subset sound profile with an associated set of parameter settings. The geolocation may be particularly useful in maintaining consistency in settings, as there are times the positional location will be more stable than the sound environment. As such, it may be the case that, based on a given geolocation, the processor is instructed to choose between only a limited number of settings. For example, in the “office” geolocation, the processor may be restricted to choosing between the (i) office desk, (ii) office conference room, and (iii) office cafeteria settings. A more complex application may include recognizing the geolocation (for example, the user's home) to limit the possible sound profiles from which to choose, then recognizing the background noise (for example, the user's living room with the television on), and then recognizing the user's spouse's voice to apply a sound profile with settings optimized for the user to hear the user's spouse in the user's living room with the television on in the background.
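The geolocation gating described above amounts to filtering the reference bank before the acoustic match is made. The profile names, geolocation tags, and settings below are hypothetical, chosen to mirror the office example in the text.

```python
# Hypothetical profile store: each sound profile carries a stored geolocation
# tag, and the processor chooses only among profiles matching the present one.
PROFILES = [
    {"name": "office desk",            "geo": "office", "settings": {"gain_db": 12}},
    {"name": "office conference room", "geo": "office", "settings": {"gain_db": 15}},
    {"name": "office cafeteria",       "geo": "office", "settings": {"gain_db": 18}},
    {"name": "living room, TV on",     "geo": "home",   "settings": {"gain_db": 14}},
]

def candidates_for(present_geo, profiles):
    """Restrict the choice to profiles whose stored geolocation matches the
    present geolocation reported by the location sensing mechanism."""
    return [p for p in profiles if p["geo"] == present_geo]

# In the "office" geolocation, only the three office profiles remain eligible
# for the subsequent acoustic comparison.
office_choices = candidates_for("office", PROFILES)
names = [p["name"] for p in office_choices]
```

The more complex home example in the text would apply the same filter first, then run the acoustic comparison (and, finally, a voice match) over the surviving candidates.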
(53) If the acoustic environment received by the microphone does not correspond closely with any of the reference bank of sounds 906, a new environmental setting may be created. In one embodiment, the patient could update his or her parameter preferences for the new acoustic environment either through the hearing device itself or through a mobile application associated with the optimization system of the present application, on a phone or tablet connected to his or her hearing aid or cochlear implant. In some embodiments, the first and second modules are accessible by the patient through a mobile application on a mobile device. The patient can use the mobile application to tune the parameters to the present environment and store the set of parameter settings associated with the specific environmental sound profile in the database of sound profiles 906.
(54) The patient may also add to the reference bank of sounds associated with specific parameter settings by simulating the sounds during the patient's visit to an audiologist. For example, an audiologist would place a hearing-impaired user in a sound booth and play speech-in-noise or speech-in-babble or even more specific acoustic environments, such as speech on an airplane or speech-in-wind. Using the second module of the optimization system, the patient sets his or her preferred parameters. When the patient is in the real-world environment, all parameter settings are updated based on the current environment's similarity to the previously simulated environments.
(55) The foregoing description merely illustrates the principles of the disclosure. Various modifications and alterations to the described embodiments will be apparent to those skilled in the art in view of the teachings herein. It will thus be appreciated that those skilled in the art will be able to devise numerous systems, arrangements and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope of the present disclosure. From the above description and drawings, it will be understood by those of ordinary skill in the art that the particular embodiments shown and described are for purposes of illustrations only and are not intended to limit the scope of the present disclosure. References to details of particular embodiments are not intended to limit the scope of the disclosure.