HEARING AID COMPRISING A SIGNAL PROCESSING NETWORK CONDITIONED ON AUXILIARY PARAMETERS

20230353958 · 2023-11-02

    Abstract

    A hearing aid adapted to be worn in or at an ear of a hearing aid user and/or to be fully or partially implanted in the head of the hearing aid user is disclosed. The hearing aid comprises a processing unit connected to an input unit and to an output unit, where the processing unit comprises a neural network, and where the processing unit is configured to determine signal processing parameters of the hearing aid based on weights of the neural network. A hearing system and a corresponding method are furthermore disclosed.

    Claims

    1. Hearing aid adapted to be worn in or at an ear of a hearing aid user and/or to be fully or partially implanted in the head of the hearing aid user, the hearing aid comprising: an input unit for receiving an input sound signal from an acoustic environment of a hearing aid user and providing at least one electric input signal representing said input sound signal, an output unit for providing at least one set of stimuli perceivable as sound to the hearing aid user based on processed versions of said at least one electric input signal, a processing unit connected to said input unit and to said output unit, where the processing unit comprises a neural network, and where the processing unit is configured to determine signal processing parameters of the hearing aid based on weights of the neural network, whereby the processing unit provides processed versions of said at least one electric input signal, a memory storing said weights of said neural network, and an antenna and a transceiver circuitry for establishing a communication link to an auxiliary device, wherein said weights of the neural network are adaptively adjustable weights, and wherein the hearing aid is configured to receive configuration data from the auxiliary device regarding adjustment of said adaptively adjustable weights, and wherein the processing unit is configured to adjust the adaptively adjustable weights of the neural network based on said configuration data.

    2. Hearing aid according to claim 1, wherein said configuration data is based on a hearing ability of the hearing aid user, and/or a sound scene classification of the sound environment of the hearing aid user, and/or a physiological parameter of the hearing aid user.

    3. Hearing aid according to claim 1, wherein the configuration data comprises a further neural network, and where the adaptively adjustable weights of the neural network of the processing unit are adjusted by replacing the neural network of the processing unit by said further neural network of the configuration data.

    4. Hearing aid according to claim 1, wherein the configuration data comprises weights of a neural network, and where the adaptively adjustable weights of the neural network of the processing unit are adjusted by replacing said weights by the weights of said configuration data.

    5. Hearing aid according to claim 1, wherein the configuration data comprises a plurality of coefficients, and where said adaptively adjustable weights of the neural network of the processing unit are adjusted based on weights resulting from a linear combination of said plurality of coefficients and a plurality of matrices each comprising a plurality of weights, where said plurality of matrices are stored on the memory of the hearing aid.

    6. Hearing aid according to claim 1, wherein, based on the configuration data, the processing unit is configured to determine signal processing parameters relating to noise reduction, hearing loss compensation, and/or feedback reduction of the hearing aid user.

    7. Hearing aid according to claim 1, wherein the hearing aid comprises a sound scene classifier configured to classify said acoustic environment of the hearing aid user into a number of different sound scene classes, and to provide a current sound scene class in dependence of a current representation, e.g. extracted features, of said at least one electric input signal.

    8. Hearing aid according to claim 1, wherein the auxiliary device is a hearing aid, a smart phone, or a server device such as a cloud server.

    9. Hearing aid according to claim 1, wherein the hearing aid further comprises a signal-to-noise ratio (SNR) estimator configured to determine SNR in the environment of the hearing aid user, and/or a sound pressure level (SPL) estimator for measuring the level of sound at the input unit, and/or at least one physiological sensor, and/or at least one accelerometer.

    10. Hearing system comprising a hearing aid according to claim 1 and an auxiliary device, wherein each of the hearing aid and the auxiliary device includes an antenna and a transceiver circuitry for establishing a communication link therebetween, thereby allowing the exchange of information between the hearing aid and the auxiliary device.

    11. Hearing system according to claim 10, wherein the auxiliary device comprises a weight generating network for determining said configuration data.

    12. Hearing system according to claim 11, wherein the weight generating network is configured to determine said configuration data based on one or more auxiliary parameters, where said auxiliary parameters comprise a hearing ability of the hearing aid user, and/or a sound scene classification of the sound environment of the hearing aid user, and/or a physiological parameter of the hearing aid user.

    13. Hearing system according to claim 10, wherein the auxiliary device comprises a sound scene classifier configured to classify said acoustic environment of the hearing aid user into a number of different sound scene classes, and to provide a current sound scene class in dependence of a current representation, e.g. extracted features, of a sound signal from the acoustic environment of the hearing aid user, and where the sound scene classifier is configured to provide said current sound scene class as input to said weight generating network.

    14. Hearing system according to claim 10, wherein the auxiliary device comprises an SNR estimator, an SPL estimator, at least one physiological sensor, and/or at least one accelerometer, and where the weight generating network is configured to determine said configuration data based on the one or more auxiliary parameters from said SNR estimator, SPL estimator, at least one physiological sensor, and/or at least one accelerometer.

    15. Hearing system according to claim 10, wherein the weight generating network for determining said configuration data is initiated by input from the hearing aid user.

    16. Hearing system according to claim 10, wherein the weight generating network for determining said configuration data is initiated based on the current sound scene class or data from said SNR estimator, SPL estimator, at least one physiological sensor, and/or at least one accelerometer exceeding respective threshold values.

    17. Method comprising: receiving an input sound signal from an acoustic environment of a hearing aid user and providing at least one electric input signal representing said input sound signal, by an input unit, providing at least one set of stimuli perceivable as sound to the hearing aid user based on processed versions of said at least one electric input signal, by an output unit, determining signal processing parameters of the hearing aid based on weights of a neural network, by a processing unit connected to said input unit and to said output unit and comprising the neural network, providing processed versions of said at least one electric input signal, by the processing unit, storing said weights of the neural network, by a memory, and establishing a communication link to an auxiliary device, by an antenna and a transceiver circuitry, wherein said weights of the neural network are adaptively adjustable weights, and wherein the hearing aid receives configuration data from the auxiliary device regarding adjustment of said adaptively adjustable weights, and wherein the processing unit adjusts the adaptively adjustable weights of the neural network based on said configuration data.

    18. A data processing system comprising a processor and program code means for causing the processor to perform at least some of the steps of the method of claim 17.

    19. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 17.

    Description

    BRIEF DESCRIPTION OF DRAWINGS

    [0170] The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they show only details that improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter in which:

    [0171] FIG. 1 shows an exemplary hearing system according to the present application.

    [0172] FIG. 2 shows an exemplary hearing system according to the present application.

    [0173] FIG. 3 shows an exemplary training of a neural network of the processing unit of the hearing aid according to the present application.

    [0174] FIG. 4 shows an exemplary training of a neural network of the processing unit of the hearing aid according to the present application.

    [0175] FIG. 5 shows an exemplary weight generating network according to the present application.


    [0177] Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.

    DETAILED DESCRIPTION OF EMBODIMENTS

    [0178] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon the particular application, design constraints, or other reasons, these elements may be implemented using electronic hardware, computer programs, or any combination thereof.

    [0179] FIG. 1 shows an exemplary hearing system according to the present application.

    [0180] In FIG. 1, a hearing aid 1 and an auxiliary device 2 are shown. The hearing aid 1 and the auxiliary device 2 may together form a hearing system.

    [0181] Hearing aid 1 may be adapted to be worn in or at an ear of a hearing aid user and/or to be fully or partially implanted in the head of the hearing aid user.

    [0182] The auxiliary device 2 may comprise another hearing aid located at the other ear of the hearing aid user. Alternatively, the auxiliary device 2 may comprise a smart phone or a server device.

    [0183] The hearing aid may comprise an input unit 3 for receiving an input sound signal 4 from an acoustic environment of a hearing aid user and providing at least one electric input signal 5A,5B representing said input sound signal.

    [0184] In FIG. 1, it is shown that the input unit 3 may also comprise two or more input transducers 6A,6B, e.g. microphones, for converting said input sound signals 4 to said at least one electric input signal 5A,5B.

    [0185] The hearing aid may comprise an output unit 7 for providing at least one set of stimuli 7A perceivable as sound to the hearing aid user based on processed versions of said at least one electric input signal 5A,5B.

    [0186] The hearing aid may comprise a processing unit 8 connected to said input unit 3 and to said output unit 7.

    [0187] The processing unit 8 may comprise a neural network 9 and may be configured to determine signal processing parameters of the hearing aid 1 based on weights of the neural network. The weights may be adaptively adjustable weights.

    [0188] Thereby, the processing unit 8 provides processed versions of said at least one electric input signal 5A,5B.

    [0189] The hearing aid 1 may comprise a memory 10 storing said weights of the neural network 9 of the hearing aid 1. Accordingly, the memory 10 may both send and receive the presently used weights and/or reference weights. Additionally, or alternatively, the memory 10 may send and receive weights that have been adjusted based on configuration data from the auxiliary device 2.

    [0190] The hearing aid 1 may comprise an antenna and a transceiver circuitry 11 for establishing a communication link to the auxiliary device 2.

    [0191] The hearing aid 1 may be configured to receive the configuration data from the auxiliary device 2 regarding adjustment of said adaptively adjustable weights via the antenna and transceiver circuitry 11.

    [0192] The processing unit 8 may be configured to adjust the adaptively adjustable weights of the neural network 9 based on said configuration data.
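    As a minimal sketch of how the processing unit might apply such configuration data, covering both outright weight replacement and the linear-combination mode of the claims (the dictionary keys and array shapes here are illustrative assumptions, not from the disclosure):

```python
import numpy as np

def adjust_weights(current_weights, config, stored_bases=None):
    """Apply configuration data to the adaptively adjustable weights.

    'weights' in the configuration data replaces the weights outright;
    'coefficients' forms a linear combination with weight matrices
    stored in the hearing aid memory. Field names are hypothetical.
    """
    if "weights" in config:
        # Replacement mode: use the weights received from the auxiliary device
        return config["weights"]
    if "coefficients" in config:
        coeffs = np.asarray(config["coefficients"])   # shape (K,)
        bases = np.asarray(stored_bases)              # shape (K, ...), from memory 10
        return np.tensordot(coeffs, bases, axes=1)    # sum_k c_k * W_k
    return current_weights                            # no adjustment requested
```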

    [0193] The hearing aid 1 may further comprise a sound scene classifier 12 configured to classify said acoustic environment of the hearing aid user into a number of different sound scene classes.

    [0194] The hearing aid 1 may further comprise a detector/sensor/estimator 13, such as an SNR estimator, an SPL estimator, at least one physiological sensor, and/or at least one accelerometer.

    [0195] Alternatively, or additionally, the auxiliary device 2 may comprise a detector/sensor/estimator 13, such as an SNR estimator, an SPL estimator, at least one physiological sensor, and/or at least one accelerometer, and/or a sound scene classifier.

    [0196] The auxiliary device 2 may comprise a further neural network 14, such as a weight generating network, for determining said configuration data.

    [0197] The auxiliary device 2 may comprise an antenna and a transceiver circuitry (not shown) for establishing a communication link between the hearing aid 1 and the auxiliary device 2, and thereby allowing the exchange of information (e.g. the configuration data) between the hearing aid 1 and the auxiliary device 2.

    [0198] FIG. 2 shows an exemplary hearing system according to the present application.

    [0199] In FIG. 2, the neural network 9 of the processing unit is shown to be a Wave-U-Net.

    [0200] As also shown, the neural network 9 of the processing unit may receive and process at least one electric input signal 5A,5B from the input unit 3. At least one set of stimuli perceivable as sound to the hearing aid user based on processed versions of said at least one electric input signal 5A,5B may be provided as a result of the processing of the at least one electric input signal 5A,5B in the processing unit.

    [0201] A further neural network 14 (the weight generating network) may be an MLP, i.e. a 3-layer fully connected network. Each layer may be formed by a weighted sum over three kernels, and the output of the further neural network 14 generates weights (‘w’). The further neural network 14 may be trained on input-output-audiogram pairs 15 generated by a reference model and provided as input to the further neural network (Θ denotes the network parameter space).

    [0202] For example, consider the case of a hearing aid user going to a Hearing Care Professional (HCP) to get a hearing aid fitted. The hearing aid may have a neural network that provides compensation for hearing loss, as illustrated in FIG. 2. This hearing loss can be measured using an audiogram, but it might also be characterized by other suprathreshold measures or physiological estimates, such as fiber distributions in the auditory nerve synapse.

    [0203] In order to train the further neural network, one needs to generate a dataset consisting of input-output pairs that covers the distribution of audio and audiograms; here, the inputs may be both speech and audiograms.

    [0204] FIG. 3 shows an exemplary training of a neural network of the processing unit of the hearing aid according to the present application.

    [0205] In addition to the features already described in connection with FIG. 2, FIG. 3 shows the training of the neural network 9 of the processing unit, which may be carried out e.g. at the hearing care professional (HCP) before the hearing aid user starts using the hearing aid, or during service.

    [0206] During training, the neural network 9 of the processing unit provides a hearing-impaired representation of the at least one electric input signal 5A,5B, based on an auditory model of deficient hearing 16 of the hearing aid user. Additionally, a normal-hearing representation of the at least one electric input signal 5A,5B is provided, based on an auditory model of normal hearing 17. An objective function 18 may provide an error measure. The training may be based on a plurality of different electric input signals until the error measure is below a preset threshold, at which point the neural network 9 of the processing unit can be considered sufficiently trained. To further adjust the precision of the auditory model of deficient hearing 16 of the hearing aid user, the auditory model 16 may be trained on, e.g., the audiogram of the hearing aid user 19 or on one or more suprathreshold measures.
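    The training principle above can be illustrated with a toy example in which both auditory models reduce to fixed broadband gains and the network 9 reduces to a single trainable gain; all values are illustrative stand-ins, not from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)            # stand-in for the electric input signal

attenuation = 0.25                       # toy auditory model of deficient hearing 16:
impaired = lambda s: attenuation * s     # a simple broadband attenuation
normal = lambda s: s                     # toy auditory model of normal hearing 17

gain = 1.0                               # the "network" is one trainable gain
lr = 0.5
for _ in range(200):
    err = impaired(gain * x) - normal(x)          # objective function 18: error measure
    loss = np.mean(err ** 2)
    if loss < 1e-8:                               # stop once below a preset threshold
        break
    grad = np.mean(2 * err * attenuation * x)     # dLoss/dGain
    gain -= lr * grad

# gain converges toward 1/attenuation = 4, compensating the modelled loss
```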

    [0207] The configuration data may be based on a frequency dependent gain parameter.

    [0208] For example, for the HCP to finetune the hearing aid manually, the configuration data (e.g. auxiliary parameters input to the further neural network) may include a frequency dependent gain parameter. This may be done by generating a new set of parameters that parameterizes the loss function, i.e. a frequency weighting of the different channels in the loss function. For example, if the hearing aid user wants more brightness, one can put an emphasis on the higher frequency channels. These parameters may then also be used as inputs for the further neural network. There might also be a parameter related to the amount and type of compression, which could be parameterized in the loss function.
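    A sketch of such a frequency-weighted loss, assuming a simple magnitude-spectrum MSE (the disclosure does not specify the loss at this level of detail):

```python
import numpy as np

def weighted_spectral_loss(processed, target, channel_weights):
    """Frequency-weighted MSE across FFT channels; channel_weights
    parameterizes the loss, e.g. emphasizing high-frequency channels
    when the user asks for a 'brighter' sound."""
    diff = np.fft.rfft(processed) - np.fft.rfft(target)
    per_channel = np.abs(diff) ** 2
    return np.sum(channel_weights * per_channel) / per_channel.size

n_bins = 513                              # rfft of a 1024-sample frame
flat = np.ones(n_bins)                    # neutral weighting
bright = np.linspace(0.5, 2.0, n_bins)    # emphasis on higher channels
```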

    [0209] FIG. 4 shows an exemplary training of a neural network of the processing unit of the hearing aid according to the present application.

    [0210] In FIG. 4, the bold path denotes the electric input signal paths, the dashed lines the parameter (weight) paths, and the blue line the backpropagation path.

    [0211] In FIG. 4, an example is considered where a hearing aid user has a hearing aid with a speech enhancement system (e.g. including noise reduction, dereverberation, etc.) that changes as a function of conditions. These might be measured conditions (e.g. SNR, type of environment, EEG, or some combination of these features) or a choice of the hearing aid user. This might be evaluated on the go, and the further neural network (e.g. the weight generating network) may run on a co-processor, potentially located on an auxiliary device.

    [0212] The degradations might be simulated or recorded degradations of an input speech signal, e.g. a recording of speech in a noisy café, a simulation of speech in a reverberant room, or a combination thereof.

    [0213] In this example, a training set consists of data that covers the distribution of the input audio signal and the degraded audio signal 21. The loss function 22 might consist of different terms that may be parameterized to generate softer/harder noise reduction, some specific form of beamforming, softer/harder dereverberation, frequency specific noise reduction, etc. For the noise reduction case, this may be done by having a term that minimizes speech distortion versus another term that optimizes SNR and parameterizing these, or even a loss function that trades off speech quality against speech intelligibility.
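    One way such a parameterized trade-off might look, sketched for a single broadband gain; the β parameter and the closed-form minimizer are illustrative assumptions, not the disclosed loss 22:

```python
import numpy as np

def nr_loss(gain, clean, noise, beta):
    """Parameterized noise-reduction loss: a speech-distortion term
    traded off against a residual-noise term. Larger beta (0..1)
    corresponds to 'harder' noise reduction."""
    distortion = np.mean((gain * clean - clean) ** 2)   # speech distortion
    residual = np.mean((gain * noise) ** 2)             # residual noise power
    return (1.0 - beta) * distortion + beta * residual

def best_gain(clean, noise, beta):
    # Closed-form minimizer of nr_loss over a single broadband gain
    sc, sn = np.mean(clean ** 2), np.mean(noise ** 2)
    return (1.0 - beta) * sc / ((1.0 - beta) * sc + beta * sn)
```

A harder setting (larger β) drives the optimal gain down, suppressing more noise at the cost of more speech distortion.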

    [0214] The parameters 23 related to the given degradation could be categorical, e.g. in a car, at a restaurant, or a music program, and could be implemented as a one-hot-encoded variable over the categorical distribution or embedded in a continuous space. The parameters 23 might also be continuous (e.g. a measurement of SNR, a beamform pattern, statistical parameters) or ordinal (e.g. low NR, medium NR, high NR). These parameters 23 could be related to a program or be optimal under different cognitive loads. The cognitive load could be measured by, for example, Ear-EEG, and if the load is large, one might want to apply a specific form of noise reduction; the further neural network may then generate weights that handle this situation better, e.g. a strategy that favours speech intelligibility over speech quality.
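    A sketch of how categorical, continuous, and ordinal parameters 23 might be packed into one conditioning vector for the weight generating network (the category set and field layout are hypothetical):

```python
import numpy as np

SCENES = ["car", "restaurant", "music"]   # hypothetical category set
NR_LEVELS = ["low", "medium", "high"]     # ordinal noise-reduction setting

def encode_condition(scene, snr_db, nr_level):
    """Pack auxiliary parameters into one conditioning vector:
    categorical scene -> one-hot encoding, continuous SNR -> scalar,
    ordinal noise-reduction level -> integer scale."""
    one_hot = np.zeros(len(SCENES))
    one_hot[SCENES.index(scene)] = 1.0
    return np.concatenate([one_hot, [snr_db], [NR_LEVELS.index(nr_level)]])
```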

    [0215] FIG. 5 shows an exemplary weight generating network according to the present application.

    [0216] The weight generating network 14 of FIG. 5 may be a 3-layer multi-layer-perceptron (fully connected neural network). However, the weight generating network 14 may be any neural network.

    [0217] In FIG. 5, the weight generating network 14 may parameterize a distribution of possible candidate tensors (matrices) w_{n,k} containing the parameters of the neural network, where n indexes the n-th parameter block of the neural network, and k indexes the candidate parameter tensor.

    [0218] The α_{n,k} may be generated by the weight generating network 14 by feeding the model parameters found from the audiogram through a 3-layer Multi-Layer Perceptron (MLP) 24, e.g. a fully connected feedforward network with 3 layers. The output of the MLP has dimensions (1, KN) and may be reshaped 25 into a matrix of dimension (N, K). This matrix may be split into N different K-dimensional vectors, and a Softmax function (‘Weight block 1’, etc.) may be computed across the K elements in each vector, outputting 0 ≤ α_{n,k} ≤ 1, which may be used to generate one single weight tensor:

    [00001] w_n = Σ_{k=1}^{K} α_{n,k} w_{n,k}
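    A numerical sketch of this combination step (the sizes N, K and the stand-in MLP output are assumptions; in the system described, the input to the softmax would come from the MLP 24):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: N parameter blocks, K candidate tensors per block
N, K = 4, 3
rng = np.random.default_rng(42)
candidates = rng.standard_normal((N, K, 8, 8))   # w_{n,k}: stored candidate tensors

def generate_weights(mlp_output):
    """Combine candidate tensors per the equation above: reshape the
    (1, K*N) MLP output to (N, K), softmax each row over K, and form
    w_n = sum_k alpha_{n,k} * w_{n,k}."""
    alphas = mlp_output.reshape(N, K)
    alphas = np.apply_along_axis(softmax, 1, alphas)  # rows sum to 1, 0 <= alpha <= 1
    # For each block n, weighted sum of its K candidate tensors
    return np.einsum("nk,nkij->nij", alphas, candidates)

w = generate_weights(rng.standard_normal(N * K))     # one weight tensor per block
```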

    [0219] It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.

    [0220] As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.

    [0221] It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.

    [0222] The claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
