METHOD OF OPERATING A HEARING DEVICE AND A HEARING DEVICE PROVIDING SPEECH ENHANCEMENT BASED ON AN ALGORITHM OPTIMIZED WITH A SPEECH INTELLIGIBILITY PREDICTION ALGORITHM

20190222943 · 2019-07-18

Abstract

A method of training an algorithm for optimizing intelligibility of speech components of a sound signal in hearing aids, headsets, etc., comprises a) providing a first database comprising a multitude of predefined time segments of first electric input signals representing sound and corresponding measured speech intelligibilities; b) determining optimized first parameters of a first algorithm by optimizing it with said predefined time segments and said corresponding measured speech intelligibilities, the first algorithm providing corresponding predicted speech intelligibilities; c) providing a second database comprising a multitude of time segments of second electric input signals representing sound; and d) determining optimized second parameters of a second algorithm by optimizing it with said multitude of time segments, said second algorithm being configured to provide processed second electric input signals exhibiting respective predicted speech intelligibilities estimated by said first algorithm, said optimizing being conducted under a constraint of maximizing said predicted speech intelligibility.

Claims

1. A method of training an algorithm for optimizing intelligibility of speech components of a sound signal, the method comprising: providing a first database (MSI) comprising a multitude of predefined time segments PDTS.sub.i, i=1, . . . , N.sub.PDTS, of first electric input signals representing sound, each time segment comprising a speech component representing at least one phoneme, or syllable, or word, or a processed or filtered version of said speech component, and/or a noise component, and corresponding measured speech intelligibilities P.sub.i, i=1, . . . , N.sub.PDTS, of each of said predefined time segments PDTS.sub.i; determining optimized first parameters of a first algorithm by optimizing it with at least some of said predefined time segments PDTS.sub.i and said corresponding measured speech intelligibilities P.sub.i of said first database (MSI), the first algorithm providing corresponding predicted speech intelligibilities P.sub.est,i, said optimizing being conducted under a constraint of minimizing a cost function of said predicted speech intelligibilities; providing a second database (NSIG) comprising, or otherwise providing access to, a multitude of time segments TS.sub.j, j=1, . . . , N.sub.TS, of second electric input signals representing sound, each time segment comprising a speech component representing at least one phoneme, or syllable, or word, or a processed or filtered version of said speech component, and/or a noise component; determining optimized second parameters of a second algorithm by optimizing it with at least some of said multitude of time segments TS.sub.j, where said second algorithm is configured to provide processed versions of said second electric input signals exhibiting respective predicted speech intelligibilities P.sub.est,j estimated by said first algorithm, said optimizing being conducted under a constraint of maximizing said predicted speech intelligibility P.sub.est,j, or a processed version thereof.

2. A method according to claim 1 wherein said first database (MSI) comprises two sets of predefined time segments PDTS.sub.L,i, PDTS.sub.R,i of first electric input signals representing sound at respective left and right ears of a user (i=1, . . . , N.sub.PDTS), and corresponding measured speech intelligibilities P.sub.i, i=1, . . . , N.sub.PDTS, of each of said sets of predefined time segments PDTS.sub.L,i, PDTS.sub.R,i.

3. A method according to claim 1 wherein said first and/or second algorithm is or comprises a neural network.

4. A method according to claim 1 wherein the training of the first and/or second algorithm(s) comprise(s) a random initialization and a subsequent iterative update of parameters of the algorithm in question.

5. A method according to claim 1 wherein the training of the first and/or second algorithm(s) comprises minimizing a cost function.

6. A method according to claim 5 wherein the cost function is minimized using an iterative stochastic gradient descent or ascent approach.

7. A method according to claim 5 wherein the cost function of the first algorithm comprises a prediction error e.sub.i.

8. A method according to claim 1 wherein the predefined time segments PDTS.sub.i of the first database, which are used to train the first algorithm, and/or the time segments TS.sub.j of the second database, which are used to train the second algorithm, are arranged to comprise a number of consecutive time frames of the time segments in question, which are fed to the first and/or to the second algorithm, respectively, at a given point in time.

9. A method according to claim 1 wherein said first electric input signals representing sound, and/or said second electric input signals representing sound are each provided as a number of frequency sub-band signals.

10. A method according to claim 1 comprising using said optimized second algorithm in a hearing device for optimizing speech intelligibility of noisy or processed electric input signals comprising speech, and to provide optimized electric sound signals.

11. A method according to claim 1 comprising providing at least one set of output stimuli perceivable as sound by the user and representing processed versions of said noisy or processed electric input signals comprising speech.

12. A hearing device adapted to be worn in or at an ear of a user, and/or to be fully or partially implanted in the head of the user, and comprising: an input unit providing at least one electric input signal representing sound comprising speech components; an output unit for providing at least one set of stimuli representing said sound and perceivable as sound to the user based on processed versions of said at least one electric input signal; and a processing unit connected to said input unit and to said output unit and comprising a second algorithm optimized according to the method of claim 1 to provide processed versions of said at least one electric input signal exhibiting an optimized speech intelligibility.

13. A hearing device according to claim 12 constituting or comprising a hearing aid, a headset, an earphone, an ear protection device or a combination thereof.

14. A hearing system comprising left and right hearing devices according to claim 12, the left and right hearing devices being configured to be worn in or at left and right ears, respectively, of said user, and/or to be fully or partially implanted in the head at left and right ears, respectively, of the user, and being configured to establish a wired or wireless connection between them allowing data to be exchanged between them, optionally via an intermediate device.

15. A computer program comprising instructions which, when the program is executed by a computer, cause the computer to carry out the method of claim 1.

16. A hearing aid adapted to be worn in or at an ear of a user, and/or to be fully or partially implanted in the head of the user, and adapted to improve the user's intelligibility of speech, the hearing aid comprising: an input unit providing at least one electric input signal representing sound comprising speech components; an output unit for providing at least one set of stimuli representing said sound perceivable as sound to the user, said stimuli being based on processed versions of said at least one electric input signal; and a processing unit connected to said input unit and to said output unit and comprising a second deep neural network, which is trained in a procedure to maximize an estimate of the user's intelligibility of said speech components, and which, in an operating mode of operation after the second deep neural network has been trained, is configured to provide a processed signal based on said at least one electric input signal or a signal derived therefrom, wherein said estimate of the user's intelligibility of said speech components is provided by a first deep neural network which has been trained in a supervised procedure with predefined time segments comprising speech components and/or noise components and corresponding measured speech intelligibilities, said training being conducted under a constraint of minimizing a cost function.

17. The hearing aid of claim 16 wherein said first deep neural network has been trained in an offline procedure, before the hearing aid is taken into use by the user.

18. The hearing aid of claim 16 wherein said minimization of a cost function comprises a minimization of a mean squared prediction error e.sub.i.sup.2 of said predicted speech intelligibilities using an iterative stochastic gradient descent, or ascent, based method.

19. The hearing aid of claim 16 wherein said stimuli are based on said processed signal from said second neural network or further processed versions thereof.

20. The hearing aid of claim 16 wherein said second neural network is configured to be trained in a specific training mode of operation of the hearing aid, while the user is wearing the hearing aid.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0110] The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they just show details to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter in which:

[0111] FIG. 1A illustrates a hearing device according to an embodiment of the present disclosure, the hearing device comprising a forward path comprising an input unit, a signal processor and an output unit, wherein the signal processor is configured to execute an algorithm for enhancing an intelligibility of the electric input signal before it is presented to the user via the output unit, and

[0112] FIG. 1B illustrates a forward path of a hearing device according to an embodiment of the present disclosure, wherein the forward path comprises a filter bank, allowing the signal processor, which comprises a neural network configured to enhance the intelligibility of the electric input signal, to operate in the (time-)frequency domain,

[0113] FIG. 2 illustrates a scheme for training of a Speech Intelligibility Prediction (SIP) unit based on a Neural Network (NN), as proposed in the present disclosure,

[0114] FIG. 3 illustrates an embodiment of the proposed system for training a neural network for speech intelligibility enhancement,

[0115] FIG. 4A schematically shows a scenario for generating a first database of measured speech intelligibilities for a binaural hearing system according to the present disclosure, and

[0116] FIG. 4B schematically shows a system for training a first neural network with binaural data comprising predefined time segments representing a mixture of speech and noise, and corresponding measured speech intelligibilities of the first database (Bin-MSI) as shown in FIG. 4A, the first neural network providing corresponding estimated speech intelligibilities, while minimizing a prediction error, thereby providing a first optimized (trained) neural network (Bin-SIP-NN*);

[0117] FIG. 4C schematically illustrates a system for training a second neural network with binaural data comprising (arbitrary) noisy time segments representing left and right electric input signals, determining optimized second weights of a second neural network (Bin-SE-NN), while maximizing a speech intelligibility P.sub.bin,est, estimated by the first optimized (trained) neural network (Bin-SIP-NN*), where the second neural network (Bin-SE-NN) is configured to provide modified left and right electric input signals exhibiting an improved speech intelligibility, thereby providing a second optimized (trained) neural network (Bin-SE-NN*);

[0118] FIG. 4D schematically illustrates a first embodiment of a binaural hearing system comprising a second optimized (trained) neural network (Bin-SE-NN*) according to the present disclosure; and

[0119] FIG. 4E schematically illustrates a second embodiment of a binaural hearing system comprising left and right hearing devices, and a second optimized (trained) neural network (Bin-SE-NN*) according to the present disclosure, where the speech intelligibility enhancement is performed in a separate auxiliary device,

[0120] FIG. 5A schematically shows a system for training a first neural network with multi-input data comprising predefined time segments representing a mixture of speech and noise, and corresponding measured speech intelligibilities of the first database (MM-MSI), the first neural network providing corresponding estimated speech intelligibilities, while minimizing a prediction error, thereby providing a first optimized (trained) neural network (MM-SIP-NN*);

[0121] FIG. 5B schematically shows a system for training a second neural network with data comprising (arbitrary) noisy time segments representing a multitude of electric input signals picked up at different locations at or around a user, thereby determining optimized second weights of a second neural network (MM-SE-NN), while maximizing a speech intelligibility P.sub.MM,est, estimated by a first optimized (trained) neural network (MM-SIP-NN*);

[0122] FIG. 5C schematically shows a first embodiment of a hearing device comprising a multitude of input units and a second optimized (trained) neural network (MM-SE-NN*) according to the present disclosure; and

[0123] FIG. 5D schematically shows a second embodiment of a hearing device comprising a multitude of input units, a beamformer and a second optimized (trained) neural network (SE-NN*) according to the present disclosure,

[0124] FIG. 6A schematically shows a system for training a first neural network with multi-input, binaural data comprising predefined time segments representing a mixture of speech and noise, and corresponding measured speech intelligibilities of the first database (MM-Bin-MSI), the first neural network providing corresponding estimated speech intelligibilities, while minimizing a prediction error, thereby providing a first optimized (trained) neural network (MM-Bin-SIP-NN*);

[0125] FIG. 6B schematically shows a system for training a second neural network with binaural data comprising (arbitrary) noisy time segments representing a multitude of electric input signals picked up at different locations at or around a user, thereby determining optimized second weights of a second neural network (MM-Bin-SE-NN), while maximizing a speech intelligibility P.sub.MM,bin,est, estimated by a first optimized (trained) neural network (MM-Bin-SIP-NN*);

[0126] FIG. 6C illustrates a third embodiment of a binaural hearing system comprising left and right hearing devices, each comprising a multitude of input units according to the present disclosure; and

[0127] FIG. 6D illustrates a fourth embodiment of a binaural hearing system comprising left and right hearing devices, each comprising a multitude of input units according to the present disclosure,

[0128] FIG. 7A shows a use case of a binaural hearing system comprising left and right hearing devices and an auxiliary processing device according to the present disclosure, and

[0129] FIG. 7B illustrates a user interface implemented as an APP according to the present disclosure running on the auxiliary device, and

[0130] FIG. 8 shows the (squared, average) estimated prediction error <e.sup.2> of speech intelligibility versus time for a (first) neural network (SIP-NN) during training with a predefined database (MSI) comprising predefined time segments representing a mixture of speech and noise, and corresponding measured speech intelligibilities of the first database, the (first) neural network providing corresponding estimated speech intelligibilities, while minimizing the prediction error <e>, using (different) training data and test data, respectively.

[0131] FIG. 9A schematically illustrates a time variant analogue signal (Amplitude vs time) and its digitization in samples, the samples being arranged in a number of time frames, each comprising a number N.sub.s of samples,

[0132] FIG. 9B schematically illustrates a time-frequency representation of the time variant electric signal of FIG. 9A, and

[0133] FIG. 9C schematically illustrates a neural network for determining an output signal with enhanced intelligibility from a noisy input signal in a time-frequency representation, and

[0134] FIG. 10 schematically shows an embodiment of a RITE-type hearing device according to the present disclosure comprising a BTE-part, an ITE-part and a connecting element.

[0135] The figures are schematic and simplified for clarity, and they just show details which are essential to the understanding of the disclosure, while other details are left out. Throughout, the same reference signs are used for identical or corresponding parts.

[0136] Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.

DETAILED DESCRIPTION OF EMBODIMENTS

[0137] The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as elements). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.

[0138] The electronic hardware may include microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, and other suitable hardware configured to perform the various functionality described throughout this disclosure. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.

[0139] The present application relates to the field of hearing devices, e.g. hearing aids.

[0140] In the following, a single-microphone system is used to exemplify the concepts of the present disclosure. Multi-microphone systems (as outlined in slightly more detail below) are straightforward generalizations of the single-microphone system.

[0141] FIG. 1A shows a hearing device (HD) according to an embodiment of the present disclosure. The hearing device (HD) comprises a forward path comprising an input unit (IU), a signal processor (SPU) and an output unit (OU), wherein the signal processor (SPU) is configured to execute an algorithm for enhancing an intelligibility of the electric input signal X(n) before it is presented to the user via the output unit (OU). The signal processor (SPU) may process the electric input signal X(n) in the time domain and provide the processed signal Y(n) (preferably exhibiting an improved intelligibility of speech components), which is presented to the user as stimuli perceivable as sound. The input unit may comprise an input transducer (e.g. a microphone), and may further comprise an analogue-to-digital converter to provide the electric input signal X(n) as a digital signal. The output unit (OU) may comprise an output transducer, e.g. a vibrator of a bone conduction hearing device, or a loudspeaker of an air conduction hearing device. Alternatively (or additionally), the output unit may comprise a multi-electrode array of a cochlear implant hearing device adapted for electrically stimulating a hearing nerve of the user.

[0142] FIG. 1B illustrates a forward path of a hearing device according to an embodiment of the present disclosure, wherein the forward path comprises a filter bank, allowing the signal processor, which comprises a neural network configured to enhance the intelligibility of the electric input signal, to operate in the (time-)frequency domain.

[0143] The input signal to the system, X(n), where n is a time index, may be a noisy or otherwise degraded speech signal, i.e. a typical hearing aid input signal. This signal may be analyzed with a filter bank (cf. Analysis filterbank in FIG. 1B), or a similar analysis structure. The resulting time-frequency coefficients are denoted as x.sub.k,m, k=1, . . . , K, and m=1, . . . , M, where k is the frequency band index and m is the time frame index. The coefficients of one time frame (and possibly coefficients from earlier and/or later time frames; these generalizations are not shown in the figure) are passed through a neural network for speech enhancement (cf. SE-DNN in FIG. 1B). The SE-DNN processes the input and outputs enhanced time-frequency coefficients y.sub.k,m, k=1, . . . , K, and m=1, . . . , M, (or any other abstraction of an enhanced speech signal), which can be synthesized (cf. Synthesis filterbank in FIG. 1B) into an audio signal Y(n). The aim of the neural network is to process the input signal, X(n), such as to improve its intelligibility to either normal-hearing or hearing-impaired listeners. To do so, the SE-DNN is trained as described below.
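For illustration only (not part of the patent text), the forward path of FIG. 1B can be sketched in Python/PyTorch. The network se_dnn, the filter bank parameters n_fft and hop, and the choice of letting the network output a gain per time-frequency tile (rather than outputting the enhanced coefficients directly) are assumptions of this sketch:

```python
import torch

n_fft, hop = 128, 64                  # assumed analysis filter bank parameters
window = torch.hann_window(n_fft)

def enhance(x, se_dnn):
    """x: (n_samples,) noisy time signal X(n) -> enhanced Y(n)."""
    X = torch.stft(x, n_fft, hop_length=hop, window=window,
                   return_complex=True)      # coefficients x_{k,m}, shape (K, M)
    gain = se_dnn(X.abs().T).T               # one gain per time-frequency tile
    Y = gain * X                             # enhanced y_{k,m}; noisy phase kept
    return torch.istft(Y, n_fft, hop_length=hop, window=window,
                       length=x.shape[-1])   # synthesis filter bank -> Y(n)
```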

[0144] Training of the Proposed System

[0145] The proposed system is trained in two stages as illustrated in exemplary embodiments of FIGS. 2 and 3.

[0146] 1) A neural network for predicting speech intelligibility (SIP-DNN) is trained using a database of measured intelligibility P (cf. unit Measured intelligibility in FIG. 2), i.e., the result of a listening test involving human subjects, along with the stimuli of the listening test. The SIP-DNN parameters/weights are initialized randomly. The SIP-DNN is then trained (in that appropriate parameters/weights are determined algorithmically) using a database of measured intelligibility values (i.e. a database containing noisy/distorted/processed speech signals and corresponding measured intelligibility values, e.g. in percentages of correctly understood words). This is done by use of an iterative procedure (e.g. stochastic gradient descent (or ascent)) so as to minimize a cost function, e.g. the prediction error (or the squared prediction error). The input to the SIP-DNN is a noisy or degraded speech signal X(n) (e.g. provided in a time-frequency representation by the Analysis filterbank as a number K of frequency sub-band signals X.sub.1,m, . . . , X.sub.K,m, where K is the number of frequency sub-bands and m is a time index), and the output is a prediction {circumflex over (P)} of the intelligibility of the input signal X(n), measured e.g. as a percentage of correctly understood words (or syllables or other linguistic elements). The (adaptive) training process for the SIP-DNN is illustrated in FIG. 2, where the SIP-DNN training is driven by a comparison measure, or cost function, e (e.g. the squared difference) between a measured speech intelligibility P provided by the Measured intelligibility database and an estimated speech intelligibility {circumflex over (P)} provided by the neural network SIP-DNN. Such a system is described in [3], which is incorporated herein by reference (and referred to for further details). The (trained) SIP-DNN is assumed to be a reliable estimator of intelligibility within all considered acoustical environments and for all types of degradation (e.g. types of noise (e.g. its spectro-temporal and/or spatial distribution), signal-to-noise ratios (SNR), etc.) or processing (e.g. beamforming and/or other noise reduction) applied to the signals of interest. The estimated speech intelligibility {circumflex over (P)} is e.g. based on data representing a certain time segment of the input signal, e.g. comprising a minimum number of time frames, e.g. corresponding to more than 100 ms of the electric input signal, such as more than 0.5 s, such as of the order of 1 s (or more). The minimum length of the time segments of the electric input signal on which to base an estimated speech intelligibility {circumflex over (P)} is related to the basic building blocks of speech, e.g. syllables, words, sentences (or the like).
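A minimal sketch of this first training stage, assuming the database MSI is available as an iterable msi of (features, P) pairs; the architecture SIPNet, its layer sizes and the sigmoid output (keeping the prediction in [0, 1]) are illustrative choices, not the patent's implementation:

```python
import torch
import torch.nn as nn

class SIPNet(nn.Module):
    """Maps sub-band features of a time segment to one predicted intelligibility."""
    def __init__(self, n_in):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_in, 64), nn.ReLU(),
                                 nn.Linear(64, 1), nn.Sigmoid())

    def forward(self, x):
        return self.net(x).squeeze(-1)        # predicted P in [0, 1]

sip_dnn = SIPNet(n_in=129)                            # weights initialized randomly
opt = torch.optim.SGD(sip_dnn.parameters(), lr=1e-3)  # stochastic gradient descent
mse = nn.MSELoss()                                    # squared prediction error e^2

for feats, P in msi:              # batches (segment features, measured P) from MSI
    P_hat = sip_dnn(feats)        # predicted intelligibility
    loss = mse(P_hat, P)          # mean squared prediction error over the batch
    opt.zero_grad()
    loss.backward()
    opt.step()                    # iterative weight update
```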

[0147] 2) The trained SIP-DNN is, in turn, used as a proxy for real listening tests (see [3] for details), to train the SE-DNN. This is done as shown in FIG. 3. A database of noisy/distorted speech signals is used for this. It is important to note that this database does not have to include the corresponding values of measured intelligibility, as these are simply estimated using the SIP-DNN (in other words, this database does not require additional listening tests to be conducted). Hence, this database can be generated offline and can in principle be much larger than the database of intelligibility test results used to train the SIP-DNN; from a practical perspective, this is a big advantage, because large training databases are necessary to train large DNNs robustly. In order to train the SE-DNN (i.e. to determine the values of its weights), the SE-DNN may be randomly initialized and may thereafter be updated iteratively. This is done by using numerical optimization methods, e.g. (iterative) stochastic gradient descent (or ascent). An advantage of this approach stems from the observation that, because both the SE-DNN and the SIP-DNN are neural networks and hence differentiable, gradient steps can be applied to the SE-DNN so as to increase the predicted intelligibility, {circumflex over (P)}. The result is a neural network, SE-DNN, which can increase predicted intelligibility.
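The second stage can be sketched as follows, assuming the trained, frozen SIP network sip_dnn from the stage above, a speech enhancement network se_dnn, and a loader nsig over the second (noisy) database; minimizing the negative of the predicted intelligibility implements the gradient steps that increase it:

```python
import torch

sip_dnn.requires_grad_(False)        # trained SIP-DNN is frozen: a proxy listener
opt = torch.optim.SGD(se_dnn.parameters(), lr=1e-3)

for X in nsig:                       # noisy segments from the second database
    Y = se_dnn(X)                    # enhanced time-frequency coefficients
    P_hat = sip_dnn(Y)               # predicted intelligibility of the output
    loss = -P_hat.mean()             # minimizing -P_hat maximizes P_hat
    opt.zero_grad()
    loss.backward()                  # both nets are differentiable: gradients flow
    opt.step()                       # only the SE-DNN weights are updated
```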

[0148] Generalizations:

[0149] The description above involves training of a single-microphone system, the SE-DNN, for speech intelligibility enhancement (see, e.g., FIG. 1A, 1B, 2, 3). However, the presented idea can straightforwardly be extended to a multi-microphone situation. To do so, consider the training scheme in FIG. 3 for finding the parameters of the SE-DNN, but extended for multiple inputs, X.sub.1(n), . . . , X.sub.M(n), where M≥2 denotes the number of microphones/sensors. In this situation, an analysis filter bank would be applied to each of the M microphone signals. The resulting time-frequency coefficients would then be input to an extended, multi-microphone SE-DNN. As before, the output of this multi-microphone SE-DNN would still be the time-frequency coefficients of a single intelligibility-enhanced signal (see e.g. FIG. 5C). The training of the extended SE-DNN (i.e. the determination of its parameters) would be conducted exactly as for the single-microphone situation sketched in FIG. 3 (cf. e.g. FIG. 5A, 5B): Numerical methods such as stochastic gradient descent (or ascent) would be applied to determine the weights of the extended SE-DNN, which would be optimal for a large range of different input signals (different speech signals, speakers, speaker locations, noise types, spatial noise distributions, signal-to-noise ratios (SNRs), etc.).
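A sketch of the multi-microphone extension under the same caveats: per-channel analysis filter banks, magnitudes stacked into one feature vector per frame, and, as one possible choice not specified above, reconstruction with the phase of a reference microphone. The network mm_se_dnn is an assumed placeholder:

```python
import torch

def enhance_multi(xs, mm_se_dnn, n_fft=128, hop=64):
    """xs: (M, n_samples) microphone signals -> one enhanced time signal."""
    window = torch.hann_window(n_fft)
    Xs = torch.stack([torch.stft(x, n_fft, hop_length=hop, window=window,
                                 return_complex=True) for x in xs])  # (M, K, frames)
    feats = Xs.abs().permute(2, 0, 1).reshape(Xs.shape[-1], -1)      # frames x (M*K)
    Y_mag = mm_se_dnn(feats).T                      # (K, frames) enhanced magnitudes
    Y = Y_mag * torch.exp(1j * torch.angle(Xs[0]))  # phase of reference microphone 0
    return torch.istft(Y, n_fft, hop_length=hop, window=window)
```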

[0150] In a similar manner, the proposed scheme can straightforwardly be extended to a system with binaural outputs (i.e., systems with two outputs, a left and a right, Y.sub.L(n) and Y.sub.R(n), cf. FIG. 4A, 4B, 4C, 4D, 4E).

[0151] Furthermore, in a similar manner, the proposed scheme may be applied to other aspects of speech signals than speech intelligibility. For example, one could envision a listening effort predictor based on neural networks (LEP-DNN) and the training of a speech enhancement neural network (SE-DNN) which minimizes listening effort.

[0152] FIG. 4A shows a scenario for (a listening test) generating a first database (Bin-MSI) of measured speech intelligibilities for a binaural hearing system according to the present disclosure. A (e.g. normally hearing) test user (TSTU) is exposed to a listening test, where a number (N.sub.PDTS) of predefined time segments PDTS.sub.i, i=1, . . . , N.sub.PDTS, each comprising a speech component (S(n), e.g. a sentence) representing a multitude of syllables and/or words (from target sound source S, e.g. a loudspeaker, or a person), is mixed with a noise component (from noise sources N1, N2, N3, e.g. from respective loudspeakers or real noise sources). The user is asked to repeat the contents of the time segment (e.g. a sentence), which is compared to the (predefined) contents of the time segment, and corresponding (measured) speech intelligibilities P.sub.bin,i, i=1, . . . , N.sub.PDTS, of each of said predefined time segments PDTS.sub.i of an electric input signal are determined. The exemplary predefined sentence S.sub.i(n)="The children play with the toys", as received and interpreted by the user, is repeated as X*.sub.i(n)="The child plays with the toy", and a corresponding intelligibility measure P.sub.bin,i is determined. The mixture of the target signal (S(n)) and the noise signals (N1(n), N2(n), N3(n)) as received at the left and right hearing devices (HD.sub.L and HD.sub.R) is recorded as X.sub.L(n), X.sub.R(n), respectively (e.g. by ear pieces comprising one or more microphones, here two are shown, in the form of respective front (FM.sub.L, FM.sub.R) and rear (RM.sub.L, RM.sub.R) microphones of behind-the-ear (BTE) parts of the left and right hearing devices). The sound source S is located in front of the test person in a look direction (LOOK-DIR), at a known distance d from the user.
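For illustration, a simple scoring rule consistent with the example above; the patent only specifies intelligibility as a percentage of correctly understood words, so the exact protocol of the listening test is an assumption here:

```python
def measured_si(target, response):
    """Score one listening-test trial as the percentage of target words
    that appear in the user's repeated sentence (illustrative rule)."""
    tgt, rsp = target.lower().split(), response.lower().split()
    correct = sum(1 for w in tgt if w in rsp)
    return 100.0 * correct / len(tgt)

# The example of FIG. 4A: "the" (twice) and "with" match, i.e. 3 of 6 words:
print(measured_si("The children play with the toys",
                  "The child plays with the toy"))   # -> 50.0
```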

[0153] By varying the spatial arrangement of the sound source S and the noise sources N and their mutual loudness (relative output levels) in different relevant setups (providing different signal to noise ratios), a large amount of data is preferably recorded. By spatially rearranging the sound source relative to the user, and recording data at the different locations (e.g. to the side(s), to the rear, etc.), training data for relevant acoustic situations are picked up. By varying the types of noise (e.g. noise having different spectro-temporal distributions) provided by the noise source(s), relevant acoustic environments can be emulated, e.g. car noise, flight noise, babble, etc.

[0154] In the case of a multi-microphone situation, as illustrated in FIG. 5A-5D and FIG. 6A-6D, where the multitude of electric input signals of a given hearing device is processed before an estimate of the speech intelligibility of the resulting signal is provided, it is also of interest to include different processing configurations in the training data (e.g. using different programs, or different parameters of a program).

[0155] In the example above, the first database (Bin-MSI) was indicated to be generated using normally hearing test persons. The first database (MSI) may in general be generated for a number of different characteristic hearing profiles (e.g. for different groups of substantially equal audiograms), i.e. each version of the first database being based on a multitude of test persons having substantially identical hearing capability (e.g. normally hearing or with equal hearing loss). In case of hearing impaired test persons of a given hearing profile, it is assumed that during test they are all provided with the same linear amplification of the input signal (i.e. providing a level independent but frequency dependent hearing compensation of the hearing loss in question).

[0156] FIG. 4B schematically shows a system (TD1-bin) for training a first neural network (Bin-SIP-NN) with binaural data X.sub.L(n), X.sub.R(n) comprising predefined time segments representing a mixture of speech and noise, and corresponding measured speech intelligibilities P.sub.bin of the first database (Bin-MSI) as shown in FIG. 4A. The first neural network provides corresponding estimated speech intelligibilities P.sub.bin,est, while minimizing a prediction error e.sub.bin, thereby providing a first optimized (trained) neural network (Bin-SIP-NN*). The method of optimizing the neural network (Bin-SIP-NN) is similar to the method described above, e.g. in relation to FIG. 2 for the monaural situation. Binaural (time domain) stimuli X.sub.L,i(n), X.sub.R,i(n) from the database Bin-MSI are provided to respective left and right input units (IU.sub.L, IU.sub.R). The time segments are converted to frequency sub-band signals X.sub.L,i(k,m) and X.sub.R,i(k,m) by respective analysis filter banks (FBA), here indicated to include analogue-to-digital conversion (AD) (if not provided elsewhere). Index i for time segment i (or training data i) has been omitted in the input part of FIG. 4B (and likewise in subsequent drawings). The frequency sub-band signals X.sub.L,i(k,m) and X.sub.R,i(k,m) are fed to the first neural network (Bin-SIP-NN), which estimates a speech intelligibility P.sub.est,bin (for the i.sup.th data set) based thereon. The estimated speech intelligibility P.sub.est,bin is compared with the measured speech intelligibility P.sub.bin (cf. indication Provide true SI on the signal from the database Bin-MSI to the combination unit +) in sum unit +, providing a corresponding prediction error e.sub.bin. The (possibly averaged, and/or squared) prediction error is minimized in an iterative procedure in which the parameters of the neural network Bin-SIP-NN are modified (e.g. according to a steepest descent procedure), as further discussed in connection with FIG. 8.

[0157] FIG. 4C schematically illustrates a system (TD2-bin) for training a second neural network (Bin-SE-NN) with binaural data comprising (arbitrary) noisy time segments representing left and right electric input signals X.sub.L(n) and X.sub.R(n), determining optimized second weights of a second neural network (Bin-SE-NN), while maximizing a speech intelligibility P.sub.bin,est, estimated by the first optimized (trained) neural network (Bin-SIP-NN*) based on modified left and right electric input signals Y.sub.L(k,m) and Y.sub.R(k,m) provided by the second neural network (Bin-SE-NN). Thereby a second optimized (trained) neural network (Bin-SE-NN*) is provided. The training data X.sub.L(n) and X.sub.R(n) may be stored in a database and loaded into the input units in subsequent batches (e.g. controlled by a control unit) or be picked up by the input units, e.g. corresponding microphones. The training data X.sub.L(n) and X.sub.R(n) are converted to the time-frequency domain X.sub.L(k,m) and X.sub.R(k,m) by respective analysis filter banks (and prior to that digitized, e.g. stored in a database in digitized form or digitized in the respective input units). The database may be stored in the training system TD2-bin (or be accessible from the training system, e.g. via a wired or wireless link). The training system TD2-bin may form part of a hearing device according to the present disclosure.

[0158] FIG. 4D schematically illustrates a first embodiment of a binaural hearing system (HS) comprising a second optimized (trained) neural network (Bin-SE-NN*) according to the present disclosure. The hearing system comprises left and right input units adapted for being located at or in left and right ears of a user to pick up left and right electric input signals X.sub.L(n) and X.sub.R(n), respectively. The time domain signals X.sub.L(n) and X.sub.R(n) are converted to respective frequency sub-band signals X.sub.L(k,m) and X.sub.R(k,m) by respective analysis filter banks (FBA), e.g. including analogue-to-digital conversion units (AD) (if not provided elsewhere). The second optimized (trained) neural network (Bin-SE-NN*) provides enhanced left and right electric input signals Y.sub.L(k,m) and Y.sub.R(k,m) with optimized speech intelligibility, which are fed to respective synthesis filter banks (FBS) and optional digital-to-analogue converters (DA). The resulting left and right time domain output signals Y.sub.L(n) and Y.sub.R(n) are fed to output units OU.sub.L and OU.sub.R, respectively, for presentation to the user wearing the hearing system.

[0159] The binaural hearing system (HS) may be configured in a number of different ways, including partitioned in a number of separate devices in communication with each other. One such solution is schematically illustrated in FIG. 4E.

[0160] FIG. 4E schematically illustrates a second embodiment of a binaural hearing system (HS) comprising left and right hearing devices (HD.sub.L, HD.sub.R), and a second optimized (trained) neural network (Bin-SE-NN*) according to the present disclosure, where the speech intelligibility enhancement is performed in a separate auxiliary device (AD). The hearing system is configured to allow communication between the left and right hearing devices (HD.sub.L, HD.sub.R) and the auxiliary device (AD). The auxiliary device (AD) and the left and right hearing devices (HD.sub.L, HD.sub.R) comprise respective transceivers (TU2L, TU2R in AD, and TU.sub.L and TU.sub.R in HD.sub.L, HD.sub.R, respectively) allowing the exchange of one or more audio signals between them. The left and right hearing devices (HD.sub.L, HD.sub.R) additionally comprise input units (IU.sub.L, IU.sub.R) providing respective noisy left and right electric input signals X.sub.L and X.sub.R, and output units (OU.sub.L, OU.sub.R) for providing stimuli perceivable as sound to the user's left and right ears based on respective processed left and right output signals OUT.sub.L, OUT.sub.R. The left and right hearing devices (HD.sub.L, HD.sub.R) may be mere ear pieces comprising only input and output units, in which case all processing is performed in the auxiliary device. In the embodiment of FIG. 4E, however, the left and right hearing devices (HD.sub.L, HD.sub.R) additionally comprise respective processors (PR.sub.L, PR.sub.R), e.g. for applying one or more processing algorithms to the respective enhanced input signals Y.sub.L, Y.sub.R (e.g. for applying a frequency and/or level dependent gain (e.g. attenuation) to the enhanced signal to compensate for the user's hearing impairment).

[0161] In addition to the transceivers for receiving noisy input signals X.sub.L and X.sub.R from, and for delivering enhanced input signals Y.sub.L and Y.sub.R to, the left and right hearing devices (HD.sub.L, HD.sub.R), respectively, the auxiliary device (AD) comprises the speech intelligibility enhancement unit (Bin-SE-NN*) according to the present disclosure. The speech intelligibility enhancement unit is connected to a user interface UI (e.g. a touch sensitive display) via signals UIS (e.g. for displaying relevant information to the user regarding current acoustic environments and speech intelligibility, and for allowing the user to influence the hearing system, e.g. the configuration of the speech intelligibility enhancement unit). The auxiliary device also comprises a further transceiver unit TU1, e.g. for communicating with another device or a network (e.g. a telephone or data network).

[0162] In FIG. 4E, the processing (including the optimized neural network Bin-SE-NN*) of the electric input signals to improve speech intelligibility is performed in a separate auxiliary device (AD). This processing may be located fully or partially in one of the left and right hearing devices (HD.sub.L, HD.sub.R) when appropriately modified to allow transmission of electric input signals (e.g. X.sub.L) from a first one (e.g. HD.sub.L) of the hearing devices to the other (processing) hearing device (e.g. HD.sub.R) and to allow a resulting enhanced electric signal (e.g. Y.sub.L) with improved intelligibility to be transmitted back to the first hearing device (e.g. HD.sub.L). In an embodiment, the processing is fully or partially performed on a server accessible to the hearing device or hearing system, e.g. via a network (e.g. located in the cloud).

[0163] FIG. 5A shows a system (TD1-MM) for training a first neural network (MM-SIP-NN) with multi-input data comprising predefined time segments representing a mixture of speech and noise (cf. Apply stimuli X.sub.1,i(n), . . . , X.sub.M,i(n) in FIG. 5A) and corresponding measured speech intelligibilities (cf. Provide measured SI in FIG. 5A) stored in the first database (MM-MSI). The first neural network (MM-SIP-NN) provides corresponding estimated speech intelligibilities P.sub.MM,est, while minimizing a prediction error e.sub.MM (or rather the squared prediction error e.sup.2.sub.MM), cf. Minimize e.sup.2.sub.MM in FIG. 5A. Thereby a first optimized (trained) neural network (MM-SIP-NN*) is provided. Compared to the system of FIG. 2, the system TD1-MM of FIG. 5A comprises M input units IU.sub.1, . . . , IU.sub.M (instead of one), where M≥2. Each of the multitude of corresponding input signals X.sub.1(n), . . . , X.sub.M(n) is converted to a time-frequency representation X.sub.1(k,m), . . . , X.sub.M(k,m) by respective analysis filter banks (AFB) (and possibly analogue-to-digital conversion (AD) circuitry, if not provided elsewhere in the system). The multitude of electric input signals X.sub.1(k,m), . . . , X.sub.M(k,m) is fed to a processor (PRO) for generating a single processed electric input signal Y.sub.P(k,m), which is used as input to the first trainable neural network MM-SIP-NN. The processor may apply relevant processing algorithms to the multitude of electric input signals, e.g. beamforming for providing a combination (e.g. a linear combination, e.g. a weighted sum) of the input signals, as sketched below. The relevant processing algorithms may also comprise noise reduction, e.g. de-reverberation. To include a variation of the processing in the training data, a number of relevant processing parameter variations (cf. Apply processing parameters PROi in FIG. 5A) are included in addition to the previously mentioned variations of spatial configuration of target sound source and noise, types of noise, etc.
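The weighted-sum beamforming mentioned above can be sketched as a per-band linear combination of the M channels; the fixed complex weights w are an assumption of this sketch (a practical system may compute them adaptively):

```python
import torch

def beamform(Xs, w):
    """Xs: (M, K, frames) complex sub-band inputs; w: (M, K) complex weights.
    Returns Y_P(k, m) as a weighted sum over the M microphone channels."""
    return torch.einsum('mkf,mk->kf', Xs, w.conj())
```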

[0164] Alternatively, a multitude of time segments of the processed signal Y.sub.P,i may be stored together with corresponding measured speech intelligibilities P.sub.MM,i in the first database MM-MSI, where the time segments of Y.sub.P,i are generated for a multitude of values of the M electric input signals (and types of noise, and mutual spatial configurations of target and noise sound sources), and a variety of processing conditions. Thereby a reduced amount of data has to be stored in the database, and only the resulting processed signal (Y.sub.P,i) has to be fed from the database to the first neural network (MM-SIP-NN).

[0165] FIG. 5B schematically shows a system (TD2-MM) for training a second neural network (MM-SE-NN) with data comprising (arbitrary) noisy time segments representing a multitude of electric input signals X.sub.1(n), . . . , X.sub.M(n) (X.sub.1(k,m), . . . , X.sub.M(k,m)) picked up at different locations at or around a user (e.g. in one and the same hearing device, e.g. located at or in an ear of the user), thereby determining optimized second weights w.sub.opt of the second neural network (MM-SE-NN), while maximizing a speech intelligibility P.sub.MM,est, estimated by the first optimized (trained) neural network (MM-SIP-NN*) (cf. Maximize P.sub.MM,est in FIG. 5B). The determination of the optimized second weights w.sub.opt of the second neural network (MM-SE-NN) is in principle equivalent to the determination of the optimized second weights w.sub.opt of the second neural networks (SE-DNN and Bin-SE-NN) described above in connection with FIG. 3 and FIG. 4C, respectively.

[0166] FIG. 5C schematically shows a first embodiment of a hearing device (HD) comprising a multitude of input units (IU.sub.1, . . . , IU.sub.M) for providing corresponding noisy electric input signals X.sub.1(n), . . . , X.sub.M(n), each being converted to the time-frequency domain by analysis filter banks FBA, cf. signals X.sub.1(k,m), . . . , X.sub.M(k,m), which are fed to the second optimized (trained) neural network (MM-SE-NN*) according to the present disclosure. The embodiment of FIG. 5C is similar to the embodiment of FIG. 1B. The difference is that the embodiment of FIG. 5C comprises more than one input unit, and hence more than one input signal to the optimized neural network. The second optimized (trained) neural network (MM-SE-NN*) provides an enhanced electric input signal Y(k,m) with improved speech intelligibility. This signal is fed to a synthesis filter bank FBS (and optional digital-to-analogue (DA) conversion circuitry) to provide a corresponding time domain signal for presentation to the user via output unit OU, e.g. a vibrator of a bone anchored hearing aid or a loudspeaker of a hearing device, e.g. an air conduction hearing aid.

[0167] FIG. 5D schematically shows a second embodiment of a hearing device (HD) comprising a multitude of input units (IU.sub.1, . . . , IU.sub.M), as described in connection with FIG. 5C. The difference of the embodiment of FIG. 5D is that it comprises a processor (here a beamformer (BF)) for providing a single (beamformed) signal from the multitude of electric input signals X.sub.1(k,m), . . . , X.sub.M(k,m). The processed (beamformed) signal Y.sub.BF(k,m) is fed to a second optimized (trained) neural network (SE-NN*) according to the present disclosure. This is e.g. trained as suggested in connection with the single input system of FIG. 3 (but where training data for the network (SE-NN) representing different processing (beamformer) settings are added to complement the normal training data).

[0168] FIG. 6A shows a system (TD1-MM-bin) for training a first neural network (MM-Bin-SIP-NN) with multi-input, binaural data comprising predefined time segments representing a mixture of speech and noise, and corresponding measured speech intelligibilities P.sub.MM,bin of the first database (MM-Bin-MSI). The first neural network (MM-Bin-SIP-NN) provides corresponding estimated speech intelligibilities P.sub.MM,bin,est, while minimizing a prediction error, thereby providing a first optimized (trained) neural network (MM-Bin-SIP-NN*). The training method illustrated in FIG. 6A is equivalent to a combination of the systems of FIGS. 4B and 5A for binaural (one input) and monaural (multi-input) systems, respectively, as discussed above.

[0169] As described in connection with FIG. 5A, alternatively, a multitude of time segments of the left and right processed signals Y.sub.P,L,i and Y.sub.P,R,i may be stored together with corresponding measured speech intelligibilities P.sub.MM,bin,i, in the first database MM-bin-MSI, where the time segments of Y.sub.P,L,i and Y.sub.P,R,i are generated for a multitude of values of the M electric input signals (and types of noise, and mutual spatial configurations of target and noise sound sources), and a variety of processing conditions. Thereby a reduced number of data has to be stored in the database, and only the resulting processed signals (Y.sub.P,L,i and Y.sub.P,R,i) have to be fed from the database to the first neural network (MM-Bin-SIP-NN).

[0170] FIG. 6B schematically shows a system (TD2-MM-bin) for training a second neural network (MM-Bin-SE-NN) with binaural data comprising (arbitrary) noisy time segments representing a multitude of electric input signals picked up at different locations at or around a user, thereby determining optimized second weights of the second neural network (MM-Bin-SE-NN), while maximizing a speech intelligibility P.sub.MM,bin,est, estimated by the first optimized (trained) neural network (MM-Bin-SIP-NN*), as discussed in connection with FIG. 6A. The training method illustrated in FIG. 6B is equivalent to a combination of the systems of FIGS. 4C and 5B for binaural (one input) and monaural (multi-input) systems, respectively, as discussed above.

[0172] FIG. 6C illustrates a third embodiment of a binaural hearing system comprising left and right hearing devices (HD.sub.L, HD.sub.R) according to the present disclosure. The left and right hearing devices of FIG. 6C comprise the same elements as the hearing device shown in connection with FIG. 5C and discussed above. Additionally, each of the left and right hearing devices (HD.sub.L, HD.sub.R) of FIG. 6C comprises a processing unit (PR), which processes the enhanced electric input signal (Y.sub.L(k,m) and Y.sub.R(k,m), respectively), including taking into account the enhanced electric input signal received from the opposite hearing device via an interaural link (IA-WL) established by respective transceiver units (TU.sub.L, TU.sub.R). The respective processors (PR) may provide further enhanced signals OUT.sub.L(k,m) and OUT.sub.R(k,m), respectively, by binaural adjustments (e.g. related to level differences and/or spatial cues based on a comparison of the monaurally generated enhanced left and right signals (Y.sub.L(k,m) and Y.sub.R(k,m))). The further enhanced signals are fed to the respective synthesis filter banks and output units for presentation to the user, as previously indicated in connection with FIG. 5C.

[0173] In another embodiment, as illustrated in FIG. 6D, a fully binaural hearing system as described in FIG. 4D or 4E, with a multitude of inputs at each ear, can be envisioned. Such a system would require an exchange of a multitude of audio signals, though, and thus a large bandwidth link (and thus a relatively large power consumption). FIG. 6D schematically illustrates an embodiment of a binaural hearing system (HS) comprising a second optimized (trained) neural network (MM-Bin-SE-NN*) according to the present disclosure. The hearing system comprises a multitude of left and right input units (IU.sub.L,1, . . . , IU.sub.L,M and IU.sub.R,1, . . . , IU.sub.R,M, respectively) adapted for being located at, in or around left and right ears of a user to pick up respective multitudes of left and right electric input signals X.sub.L,1(n), . . . , X.sub.L,M(n), and X.sub.R,1(n), . . . , X.sub.R,M(n), respectively. This multitude of time domain signals is converted to respective frequency sub-band signals X.sub.L,1(k,m), . . . , X.sub.L,M(k,m), and X.sub.R,1(k,m), . . . , X.sub.R,M(k,m), by respective analysis filter banks (FBA), e.g. including analogue-to-digital conversion units (AD) (if not provided elsewhere). The second optimized (trained) neural network (MM-Bin-SE-NN*) provides enhanced left and right electric input signals Y.sub.L(k,m) and Y.sub.R(k,m) providing optimized speech intelligibility for the user. These enhanced signals are fed to respective synthesis filter banks (FBS) and optionally to respective digital-to-analogue converters (DA). The resulting left and right time domain output signals Y.sub.L(n) and Y.sub.R(n) are fed to output units OU.sub.L and OU.sub.R, respectively, for presentation to the user wearing the hearing system as stimuli perceivable as sound (e.g. as mechanical vibrations propagated via bone conduction or air conduction).

[0174] The binaural hearing system (HS) may be configured in a number of different ways, including being partitioned into a number of separate devices in communication with each other (cf. e.g. FIG. 4E). Likewise, the number of input units (here indicated to be M) in each of the left and right hearing devices may be equal or different, as required by the application in question. The same is true for the multi-input systems illustrated in FIGS. 5A-5D and 6A-6C.

[0175] FIG. 7A shows a use case of a binaural hearing system comprising left and right hearing devices (HD.sub.L, HD.sub.R) and an auxiliary processing device (AD) according to the present disclosure. FIG. 7A, 7B show an exemplary application scenario of an embodiment of a hearing system according to the present disclosure. FIG. 7A illustrates a user (U), a binaural hearing aid system (HD.sub.L, HD.sub.R) and an auxiliary device (AD). FIG. 7B illustrates the auxiliary device (AD) running an APP for configuring the speech intelligibility enhancement unit. The APP is a non-transitory application (APP) comprising executable instructions configured to be executed on the auxiliary device to implement a user interface (UI) for the hearing device(s) (HD.sub.L, HD.sub.R) or the hearing system. In the illustrated embodiment, the APP is configured to run on a smartphone, or on another portable device allowing communication with the hearing device(s) or the hearing system.

[0176] FIG. 7B illustrates a user interface (UI) implemented as an APP according to the present disclosure running on the auxiliary device (AD). The user interface comprises a display (e.g. a touch sensitive display). Via the display of the user interface, the user can interact with the hearing system and hence control functionality of the system. The illustrated screen of the Speech intelligibility enhancement SIE-APP allows the user to activate (or deactivate) a speech intelligibility enhancement mode (according to the present disclosure), cf. grey shaded button denoted SI enhancement mode (the grey shading indicating that the mode is activated). The screen further allows the user to choose between Monaural SIE and Binaural SIE (where Binaural SIE is activated in the example). Monaural and Binaural SIE (speech intelligibility enhancement) refer to speech enhancement based only on local input signals (monaural, cf. e.g. FIG. 1A, 1B, 2, 3, 5A-5D) and speech enhancement based on input signals from both sides of the head (binaural, cf. e.g. FIG. 4A-4E, 6A-6B). The screen informs the user about a current (average) estimated binaural speech intelligibility P.sub.bin,est=95% (which is indicated to be satisfactory by the smiley).

[0177] The auxiliary device (AD) comprising the user interface (UI) is preferably adapted for being held in a hand of a user (U).

[0178] In the embodiment of FIG. 7A, wireless links denoted IA-WL (e.g. an inductive link between the left and right hearing devices) and WL-RF (e.g. RF-links (e.g. Bluetooth) between the auxiliary device (AD) and the left (HD.sub.L) and between the auxiliary device (AD) and the right (HD.sub.R) hearing device, respectively) are indicated (implemented in the devices by corresponding antenna and transceiver circuitry, indicated in FIG. 7A in the left and right hearing devices as RF-IA-Rx/Tx-L and RF-IA-Rx/Tx-R, respectively).

[0179] In an embodiment, the auxiliary device (AD) is or comprises an audio gateway device adapted for receiving a multitude of audio signals (e.g. from an entertainment device, e.g. a TV or a music player, a telephone apparatus, e.g. a mobile telephone, or a computer, e.g. a PC) and adapted for selecting and/or combining an appropriate one of the received audio signals (or a combination of signals) for transmission to the hearing device. In an embodiment, the auxiliary device (AD) is or comprises a remote control for controlling functionality and operation of the hearing device(s). In an embodiment, the function of a remote control is implemented in a smartphone, the smartphone possibly running an APP allowing the user to control the functionality of the audio processing device via the smartphone (the hearing device(s) comprising an appropriate wireless interface to the smartphone, e.g. based on Bluetooth or some other standardized or proprietary scheme).

[0180] In an embodiment, the hearing system, including the user interface (UI), is configured to allow a user to indicate a location of, or a direction to, a sound source of current interest to the user. In an embodiment, the hearing system, including the user interface (UI), is configured to allow a user to indicate a current acoustic environment of the user. Thereby, predefined specifically optimized (second) neural networks (e.g. SE-DNN*x, x=location 1, . . . , location N.sub.L, or x=environment 1, . . . , environment N.sub.E) may be loaded in the hearing system, e.g. the hearing device(s), cf. the sketch below. This has the advantage of enabling a less complicated optimized neural network (thereby saving memory and processing power). Different spatial locations of the sound source of current interest may e.g. include one or more of in front, to the left, to the right, to the rear, in the left front quarter plane, in the right front quarter plane, in the rear half plane, etc. Different acoustic environments may e.g. include speech in quiet, speech in a car, speech in a multi talker environment (cocktail party), speech in reverberation, etc. In an embodiment, predefined specifically optimized (second) neural networks (e.g. SE-DNN*y, y=P1, . . . , P.sub.NP) are automatically loaded when a specific hearing aid program is chosen by the user (e.g. via the user interface, or automatically chosen via an environment detector (classification unit)). In an embodiment, a specific optimized (second) neural network is automatically loaded when the user (wearer of the hearing system) is talking, as e.g. detected by an own voice detector of the hearing system.
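A minimal sketch of such environment-dependent loading; the environment names, the weight files and the existence of one stored parameter set per environment are assumptions for illustration:

```python
import torch

# hypothetical stored parameter sets, one per acoustic environment
WEIGHTS = {"speech_in_quiet": "se_dnn_quiet.pt",
           "speech_in_car":   "se_dnn_car.pt",
           "cocktail_party":  "se_dnn_party.pt"}

def load_for_environment(se_dnn, env):
    """Swap in the predefined optimized network (SE-DNN*x) for environment x."""
    se_dnn.load_state_dict(torch.load(WEIGHTS[env]))
    return se_dnn
```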

[0181] FIG. 8 shows the (squared, average) estimated prediction error <e.sup.2> of speech intelligibility versus time of a (first) neural network (SIP-NN) during training with a predefined database (MSI) comprising predefined time segments representing a mixture of speech and noise, and corresponding measured speech intelligibilities, the (first) neural network providing corresponding estimated speech intelligibilities, while minimizing the prediction error <e.sup.2>, using (different) training data and test data, respectively. The prediction error is defined as the difference between a measured speech intelligibility (P) of a known speech element (e.g. a sentence), e.g. provided by a listening test, and an estimated speech intelligibility (P.sub.est or {circumflex over (P)}), e.g. provided by the neural network SIP-DNN (cf. e.g. FIG. 2). The iterative algorithm (Minimize e.sup.2, cf. e.g. FIG. 2) comprises applying a batch of data (or all data) of the training set of the database MSI, comprising predefined time segments of sound comprising speech (and typically additional noise) and corresponding speech intelligibilities obtained from a listening test (of a normally hearing person). After each epoch, the average estimated prediction error <e.sup.2>.sub.epoch is evaluated, and a new set of weights of the neural network is determined (e.g. according to a steepest descent algorithm). This procedure is continued until a minimum in average estimated prediction error <e.sup.2>.sub.epoch has been reached. In parallel or subsequently, the same weights are used on a test data set (different from the training data set) and the average estimated prediction error <e.sup.2>.sub.epoch is evaluated. When (if) the average estimated prediction error <e.sup.2>.sub.epoch starts to increase (as indicated by the dotted ellipse and the arrow to N.sub.opt on the Epochs (time) axis), the weights w corresponding to the preceding minimum (at epoch N.sub.opt) in average prediction error are chosen as the optimized weights. In other words, the weights w of the neural network used in the N.sub.opt.sup.th epoch are frozen, thereby providing a first optimized (trained) neural network (SIP-NN*) represented by optimized weights w.sub.opt. Preferably (to minimize the need for storing optimized parameters for all epochs), the average estimated prediction error <e.sup.2>.sub.epoch using the test data is evaluated right after the corresponding evaluation of the training data. Preferably, a small number of sets of optimized parameters of the neural network for a number of previous epochs (e.g. 4) are stored to allow easy backtracking (e.g. in connection with identification of a minimum in the estimated prediction error <e.sup.2>.sub.epoch of the test data). Thereby an early stopping procedure can be implemented.
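
The early-stopping procedure of FIG. 8 can be summarized in a few lines of code. The sketch below uses a toy linear predictor and random data so that it is self-contained; the model, data, learning rate and patience value are assumptions of the example, not the disclosed SIP network.

```python
import numpy as np

rng = np.random.default_rng(0)
X_train, P_train = rng.normal(size=(200, 10)), rng.uniform(0, 1, 200)  # training set
X_test, P_test = rng.normal(size=(50, 10)), rng.uniform(0, 1, 50)      # test set

w = np.zeros(10)               # weights of the toy predictor
lr, patience = 1e-3, 4         # step size; allowed epochs without improvement
best_e2, w_opt, n_opt = np.inf, w.copy(), 0

for epoch in range(1000):
    # Steepest-descent step on the average squared prediction error <e^2>
    # over the training data.
    e_train = X_train @ w - P_train
    w -= lr * 2.0 * X_train.T @ e_train / len(P_train)

    # Evaluate <e^2>_epoch on the (different) test data right afterwards.
    e2_test = np.mean((X_test @ w - P_test) ** 2)

    if e2_test < best_e2:                # new minimum: remember these weights
        best_e2, w_opt, n_opt = e2_test, w.copy(), epoch
    elif epoch - n_opt >= patience:      # test error no longer improves:
        break                            # freeze the weights from epoch N_opt
```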

[0182] FIG. 9A schematically illustrates a time variant analogue signal (Amplitude vs time) and its digitization in samples, the samples being arranged in a number of time frames, each comprising a number N.sub.s of samples. FIG. 9A shows an analogue electric signal (solid graph), e.g. representing an acoustic input signal, e.g. from a microphone, which is converted to a digital audio signal in an analogue-to-digital (AD) conversion process, where the analogue signal is sampled with a predefined sampling frequency or rate f.sub.s, f.sub.s being e.g. in the range from 8 kHz to 48 kHz (adapted to the particular needs of the application), to provide digital samples y(n) at discrete points in time n, as indicated by the vertical lines extending from the time axis with solid dots at their endpoints coinciding with the graph, each representing the digital sample value at the corresponding distinct point in time n. Each (audio) sample y(n) represents the value of the acoustic signal at n (or t.sub.n) by a predefined number N.sub.b of bits, N.sub.b being e.g. in the range from 1 to 48 bits, e.g. 24 bits. Each audio sample is hence quantized using N.sub.b bits (resulting in 2.sup.Nb different possible values of the audio sample).
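
As a worked illustration of the sampling and quantization just described: at f.sub.s=20 kHz and N.sub.b=24 bits, each sample takes one of 2.sup.24=16,777,216 values. The sketch below, which assumes amplitudes normalized to [−1, 1), shows the corresponding operations on a synthetic tone.

```python
import numpy as np

f_s, N_b = 20_000, 24                      # sampling rate and bits per sample
t = np.arange(0, 0.01, 1 / f_s)            # sample instants t_n (10 ms)
y = 0.5 * np.sin(2 * np.pi * 440 * t)      # "analogue" signal, sampled at f_s

# Uniform quantization to 2**N_b levels (amplitudes assumed in [-1, 1)).
step = 2.0 ** -(N_b - 1)
y_q = np.round(y / step) * step
```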

[0183] In an analogue-to-digital (AD) process, a digital sample y(n) has a length in time of 1/f.sub.s, e.g. 50 μs for f.sub.s=20 kHz. A number of (audio) samples N.sub.s are e.g. arranged in a time frame, as schematically illustrated in the lower part of FIG. 9A, where the individual (here uniformly spaced) samples are grouped in time frames (1, 2, . . . , N.sub.s). As also illustrated in the lower part of FIG. 9A, the time frames may be arranged consecutively to be non-overlapping (time frames 1, 2, . . . , m, . . . , M, where m is the time frame index). Alternatively, the frames may be overlapping (e.g. by 50%). In an embodiment, a time frame comprises 64 audio data samples; other frame lengths may be used depending on the practical application. A time frame may e.g. have a duration of 3.2 ms (64 samples at f.sub.s=20 kHz).
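
A short sketch of the framing step: grouping the digital samples into time frames of N.sub.s=64 samples, either non-overlapping or with 50% overlap. The helper function below is an assumption of this example.

```python
import numpy as np

def frames(y: np.ndarray, n_s: int = 64, overlap: float = 0.0) -> np.ndarray:
    """Split y into frames of n_s samples; overlap=0.5 gives 50% overlap."""
    hop = max(int(n_s * (1.0 - overlap)), 1)
    n_frames = 1 + (len(y) - n_s) // hop
    return np.stack([y[m * hop : m * hop + n_s] for m in range(n_frames)])

y = np.random.default_rng(0).normal(size=2000)
frames_m = frames(y, 64)             # non-overlapping: time frames 1, ..., M
frames_50 = frames(y, 64, 0.5)       # 50% overlapping frames
```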

[0184] FIG. 9B schematically illustrates a time-frequency map representation of the time variant electric signal y(n) of FIG. 9A. The time-frequency representation comprises an array or map of corresponding complex or real values of the signal in a particular time and frequency range. The time-frequency representation may e.g. be a result of a Fourier transformation converting the time variant input signal y(n) to a (time variant) signal Y(k,m) in the time-frequency domain. In an embodiment, the Fourier transformation comprises a discrete Fourier transform algorithm (DFT). The frequency range considered by a typical hearing device (e.g. a hearing aid), from a minimum frequency f.sub.min to a maximum frequency f.sub.max, comprises a part of the typical human audible frequency range from 20 Hz to 20 kHz, e.g. a part of the range from 20 Hz to 12 kHz. In FIG. 9B, the time-frequency representation Y(k,m) of signal y(n) comprises complex values of magnitude and/or phase of the signal in a number of DFT-bins (or tiles) defined by indices (k,m), where k=1, . . . , K represents a number K of frequency values (cf. vertical k-axis in FIG. 9B) and m=1, . . . , N.sub.M represents a number N.sub.M of time frames (cf. horizontal m-axis in FIG. 9B). A time frame is defined by a specific time index m and the corresponding K DFT-bins (cf. indication of Time frame m in FIG. 9B). A time frame m represents a frequency spectrum of signal y at time m. A DFT-bin or tile (k,m) comprising a real or complex value X(k,m) of the signal in question is illustrated in FIG. 9B by hatching of the corresponding field in the time-frequency map (cf. DFT-bin=time-frequency unit (k,m): X(k,m)=|X|·e.sup.iφ in FIG. 9B, where |X| represents a magnitude and φ represents a phase of the signal in that time-frequency unit). Each value of the frequency index k corresponds to a frequency range f.sub.k, as indicated in FIG. 9B by the vertical frequency axis f. Each value of the time index m represents a time frame. The time t.sub.m spanned by consecutive time indices depends on the length of a time frame and the degree of overlap between neighbouring time frames (cf. horizontal time-axis in FIG. 9B).
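
The time-frequency map Y(k,m) can be obtained by applying a DFT per (windowed) time frame. The sketch below uses a Hann window and a real-input FFT, for which K=N.sub.s/2+1 bins result per frame; the window choice and parameters are assumptions of the example.

```python
import numpy as np

def tf_map(y: np.ndarray, n_s: int = 64) -> np.ndarray:
    """DFT per non-overlapping frame; returns complex bins indexed (k, m)."""
    n_frames = len(y) // n_s
    frames = y[: n_frames * n_s].reshape(n_frames, n_s)
    Y = np.fft.rfft(frames * np.hanning(n_s), axis=1)  # shape (M, K)
    return Y.T                                         # shape (K, M): (k, m)

Y = tf_map(np.random.default_rng(0).normal(size=4000))
magnitude, phase = np.abs(Y), np.angle(Y)   # |X| and φ per DFT bin (k, m)
```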

[0185] The m.sup.th time frame is denoted 'now', and the m.sup.th time frame together with a number N.sub.h of preceding time frames (denoted 'history') are enclosed by a bold frame and used as inputs to the neural network illustrated in FIG. 9C. The inputs may alternatively be a number of consecutive time domain time frames.

[0186] FIG. 9C schematically illustrates a neural network for determining an output signal Y(k,m) with enhanced intelligibility from a noisy input signal X(k,m) in a time-frequency representation. A present time frame and a number N.sub.h of preceding time frames are stacked into a vector and used as the input layer of a neural network. Each frame comprises K (e.g. K=64 or K=128) values of a (noisy) electric input signal, e.g. X(k,m), k=1, . . . , K in FIG. 1B. The signal may be represented by its magnitude |X(k,m)| (e.g. by ignoring its phase φ). An appropriate number of time frames is related to the correlation inherent in speech. In an embodiment, the number N.sub.h of previous time frames considered together with the present one may e.g. correspond to a time segment of a duration of more than 20 ms, e.g. more than 50 ms, such as more than 100 ms. In an embodiment, the number of time frames considered (=N.sub.h+1) is larger than or equal to 4, e.g. larger than or equal to 10, such as larger than or equal to 24. The width of the neural network is in the present application equal to K(N.sub.h+1), which for K=64 and N.sub.h=9 amounts to N.sub.L1=640 nodes of the input layer L1 (representing a time segment of the audio input signal of 32 ms, for a sampling frequency of 20 kHz, 64 samples per frame, and non-overlapping time frames). The number of nodes (N.sub.L2, . . . , N.sub.LN) in subsequent layers (L2, . . . , LN) may be larger or smaller than the number of nodes N.sub.L1 of the input layer L1, and is in general adapted to the application (in view of the available number of input data sets and the number of parameters to be estimated by the neural network). In the present case the number of nodes N.sub.LN in the output layer LN is K (e.g. 64), in that it comprises the K time-frequency tiles of a frame of the enhanced output signal Y(k,m).
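
Assembling the input layer from the 'now' frame and its history is a simple stacking operation, as sketched below for K=64 and N.sub.h=9 (640 input nodes); the magnitude map here is random placeholder data.

```python
import numpy as np

K, N_h = 64, 9
X_mag = np.abs(np.random.default_rng(0).normal(size=(K, 100)))  # |X(k, m)|

def input_vector(X: np.ndarray, m: int, n_h: int = N_h) -> np.ndarray:
    """Stack frame m and its n_h predecessors into one vector, frame by frame."""
    return X[:, m - n_h : m + 1].reshape(-1, order="F")

x_L1 = input_vector(X_mag, m=50)   # shape (640,): input layer L1
```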

[0187] FIG. 9C is intended to illustrate a general multi-layer neural network of any type, e.g. a deep neural network, here embodied as a standard feed-forward neural network. The depth of the neural network (the number of layers), denoted N in FIG. 9C, may be any number, typically adapted to the application in question (e.g. limited by the size and/or power supply capacity of the device in question, e.g. a portable device, such as a hearing aid). In an embodiment, the number of layers in the neural network is larger than or equal to two or three. In an embodiment, the number of layers in the neural network is smaller than or equal to four or five.

[0188] The nodes of the neural network illustrated in FIG. 9C are intended to implement the standard functions of a neural network: to multiply the values of branches from preceding nodes to the node in question with weights associated with the respective branches, and to add the contributions together to a summed value Y.sub.i,j for node i in layer j. The summed value Y.sub.i,j is subsequently subject to a non-linear function f, providing a resulting value Z.sub.i,j=f(Y.sub.i,j) for node i in layer j. This value is fed to the next layer (j+1) via the branches connecting node i in layer j with the nodes of layer j+1. In FIG. 9C the summed value Y.sub.i,j for node i in layer j (i.e. before the application of the non-linear (activation) function to provide the resulting value for node i of layer j) is expressed as:

$$Y_{i,j} = \sum_{p=1}^{N_{L(j-1)}} w_{p,i}(j-1,j)\, Z_p(j-1)$$

where w.sub.p,i(j−1,j) denotes the weight applied to the branch from node p in layer j−1 to node i in layer j, and Z.sub.p(j−1) is the signal value of the p.sup.th node in layer j−1. In an embodiment, the same activation function is used for all nodes (this may not necessarily be the case, though). An exemplary non-linear activation function Z=f(Y) is schematically illustrated in the insert in FIG. 9C. Typical functions used in neural networks are the sigmoid function and the hyperbolic tangent function (tan h). Other functions may be used, though, as the case may be. Further, the activation function may be parametrized.
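
Written out in code, the node update above is a weighted sum over the previous layer followed by the non-linear activation. The layer sizes and the tanh activation in this sketch are illustrative choices.

```python
import numpy as np

def layer_forward(Z_prev: np.ndarray, W: np.ndarray, f=np.tanh) -> np.ndarray:
    """Z_prev holds Z_p(j-1); W[p, i] holds the weight w_{p,i}(j-1, j)."""
    Y = Z_prev @ W      # summed values Y_{i,j} for all nodes i of layer j
    return f(Y)         # resulting values Z_{i,j} = f(Y_{i,j})

rng = np.random.default_rng(0)
Z1 = rng.normal(size=640)                  # layer L1 values (e.g. 640 nodes)
W12 = 0.05 * rng.normal(size=(640, 128))   # weights from L1 to L2
Z2 = layer_forward(Z1, W12)                # layer L2 values
```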

[0189] Together, the (possibly parameterized) activation function and the weights w of the different layers of the neural network constitute the parameters of the neural network. They represent the parameters that (together) are optimized in the respective iterative procedures for the first and second neural networks of the present disclosure. In an embodiment, the same activation function f is used for all nodes (in that case, the parameters of the neural network are constituted by the weights of the layers alone).

[0190] The neural network of FIG. 9C may e.g. represent a (second) neural network according to the present disclosure (cf. e.g. SE-DNN in FIG. 1B, or BIN-SE-NN* in FIG. 4D, 4E, etc.).

[0191] The structure of a first neural network according to the present disclosure (cf. e.g. SIP-DNN in FIG. 2, or BIN-SIP-NN in FIG. 4B, etc.) is equivalent to the one illustrated in FIG. 9C. A difference is that the output layer consists of a single node providing as an output an estimated intelligibility P.sub.est (also denoted {circumflex over (P)}) of speech components in the input signal(s). Likewise, the input layer of the first neural network may be different in width, adapted to the basic building blocks of the language in question (e.g. comprising a time segment comparable in duration to one or more words, e.g. a sentence, e.g. comprising a number of time frames of the electric input signals corresponding to 0.5 s or 1 s of speech, or more). Also, the depth of the two neural networks may be different.
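
The structural contrast between the two networks can be made concrete as follows: both are plain feed-forward stacks, but the second (enhancement) network ends in K output nodes per frame, whereas the first (prediction) network ends in a single node giving P.sub.est. All sizes below are illustrative, not taken from the disclosure.

```python
import numpy as np

def mlp(x: np.ndarray, weights: list, f=np.tanh) -> np.ndarray:
    """Feed-forward pass through all layers; linear output layer."""
    for W in weights[:-1]:
        x = f(x @ W)
    return x @ weights[-1]

rng = np.random.default_rng(0)
se_dnn = [0.05 * rng.normal(size=s) for s in [(640, 256), (256, 64)]]   # K=64 out
sip_dnn = [0.05 * rng.normal(size=s) for s in [(2000, 128), (128, 1)]]  # 1 out

y_frame = mlp(rng.normal(size=640), se_dnn)          # enhanced frame, K values
P_est = mlp(rng.normal(size=2000), sip_dnn).item()   # single intelligibility value
```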

[0192] Typically, the first neural network according to the present disclosure is optimized (trained) in an offline procedure (e.g. as indicated in FIG. 2, 4B, 5A, 6A), e.g. using a model of the head and torso of a human being (e.g. the Head and Torso Simulator (HATS) 4128C from Brüel & Kjær Sound & Vibration Measurement A/S). Likewise, the second neural network according to the present disclosure may be optimized (trained) in an offline procedure (e.g. as indicated in FIG. 3, 4C, 5B, 6B), e.g. using an average model. Alternatively or additionally, the second neural network according to the present disclosure may be optimized (trained) or fine-tuned in a specific training mode, while the user wears a hearing device or hearing system according to the present disclosure. In an embodiment, data for training the second neural network (possibly in an offline procedure) may be picked up and stored while the user wears the hearing device or hearing system, e.g. over a longer period of time, e.g. days, weeks or even months. Such data may e.g. be stored in an auxiliary device (e.g. a dedicated, e.g. portable, storage device, or in a smartphone). This has the advantage that the training data are relevant for the user's normal behaviour and experience of acoustic environments.

[0193] FIG. 10 schematically shows an embodiment of a hearing device according to the present disclosure. The hearing device (HD), e.g. a hearing aid, is of a particular style (sometimes termed receiver-in-the-ear, or RITE, style) comprising a BTE-part (BTE) adapted for being located at or behind an ear of a user, and an ITE-part (ITE) adapted for being located in or at an ear canal of the user's ear and comprising a receiver (loudspeaker). The BTE-part and the ITE-part are connected (e.g. electrically connected) by a connecting element (IC) and internal wiring in the ITE- and BTE-parts (cf. e.g. wiring Wx in the BTE-part).

[0194] In the embodiment of a hearing device in FIG. 10, the BTE part comprises two input units (e.g. IU.sub.1, IU.sub.M (for M=2) in FIG. 5C, 5D) comprising respective input transducers (e.g. microphones) (M.sub.BTE1, M.sub.BTE2), each for providing an electric input audio signal representative of an input sound signal (S.sub.BTE) (originating from a sound field S around the hearing device). The BTE part further comprises two wireless receivers (WLR.sub.1, WLR.sub.2) (or transceivers) for providing respective directly received auxiliary audio and/or control input signals (and/or allowing transmission of audio and/or control signals to other devices). The hearing device (HD) comprises a substrate (SUB) whereon a number of electronic components are mounted, including a memory (MEM), e.g. storing different hearing aid programs (e.g. parameter settings defining such programs, or parameters of algorithms, e.g. optimized parameters of a neural network) and/or hearing aid configurations, e.g. input source combinations (M.sub.BTE1, M.sub.BTE2, WLR.sub.1, WLR.sub.2), e.g. optimized for a number of different listening situations. The substrate further comprises a configurable signal processor (DSP, e.g. a digital signal processor, including the processor (HLC), feedback suppression (FBC), beamformers (BFU), and other digital functionality of a hearing device according to the present disclosure). The configurable signal processor (DSP) is adapted to access the memory (MEM) and to select and process one or more of the electric input audio signals and/or one or more of the directly received auxiliary audio input signals, based on a currently selected (activated) hearing aid program/parameter setting (e.g. selected automatically, based on one or more sensors, and/or selected via inputs from a user interface). The mentioned functional units (as well as other components) may be partitioned in circuits and components according to the application in question (e.g. with a view to size, power consumption, analogue vs. digital processing, etc.), e.g. integrated in one or more integrated circuits, or as a combination of one or more integrated circuits and one or more separate electronic components (e.g. inductor, capacitor, etc.). The configurable signal processor (DSP) provides a processed audio signal, which is intended to be presented to the user. The substrate further comprises a front-end IC (FE) for interfacing the configurable signal processor (DSP) to the input and output transducers, etc., typically comprising interfaces between analogue and digital signals. The input and output transducers may be individual separate components, or integrated (e.g. MEMS-based) with other electronic circuitry.

[0195] The hearing device (HD) further comprises an output unit (e.g. an output transducer) providing stimuli perceivable by the user as sound, based on a processed audio signal from the processor (HLC) or a signal derived therefrom. In the embodiment of a hearing device in FIG. 10, the ITE part comprises the output unit in the form of a loudspeaker (receiver) for converting an electric signal to an acoustic (air-borne) signal, which (when the hearing device is mounted at an ear of the user) is directed towards the ear drum (Ear drum), where the sound signal (S.sub.ED) is provided. The ITE-part further comprises a guiding element, e.g. a dome (DO), for guiding and positioning the ITE-part in the ear canal (Ear canal) of the user. The ITE-part further comprises a further input transducer, e.g. a microphone (M.sub.ITE), for providing an electric input audio signal representative of an input sound signal (S.sub.ITE).

[0196] The electric input signals (from input transducers M.sub.BTE1, M.sub.BTE2, M.sub.ITE) may be processed according to the present disclosure in the time domain or in the (time-) frequency domain (or partly in the time domain and partly in the frequency domain as considered advantageous for the application in question).

[0197] The hearing device (HD) exemplified in FIG. 10 is a portable device and further comprises a battery (BAT), e.g. a rechargeable battery, e.g. based on Li-Ion battery technology, e.g. for energizing electronic components of the BTE- and possibly ITE-parts. In an embodiment, the hearing device, e.g. a hearing aid (e.g. the processor (HLC)), is adapted to provide a frequency dependent gain and/or a level dependent compression and/or a transposition (with or without frequency compression) of one or more frequency ranges to one or more other frequency ranges, e.g. to compensate for a hearing impairment of a user.
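
As a rough illustration of the frequency dependent gain and level dependent compression mentioned above, the sketch below applies per-band gains that are reduced above a compression knee; the gain values, knee point and compression ratio are placeholders of this example, not a fitting prescription.

```python
import numpy as np

def apply_gain(levels_db: np.ndarray, gains_db: np.ndarray,
               knee_db: float = 65.0, ratio: float = 2.0) -> np.ndarray:
    """Per-band output levels: full gain below the knee, compressed above."""
    over = np.maximum(levels_db - knee_db, 0.0)          # dB above the knee
    eff_gain = gains_db - over * (1.0 - 1.0 / ratio)     # reduced gain above knee
    return levels_db + eff_gain

levels = np.array([55.0, 60.0, 70.0, 80.0])   # input band levels (dB SPL)
gains = np.array([10.0, 15.0, 20.0, 25.0])    # prescribed per-band gains (dB)
out_levels = apply_gain(levels, gains)
```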

[0198] It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.

[0199] As used, the singular forms "a", "an", and "the" are intended to include the plural forms as well (i.e. to have the meaning "at least one"), unless expressly stated otherwise. It will be further understood that the terms "includes", "comprises", "including", and/or "comprising", when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, "connected" or "coupled" as used herein may include wirelessly connected or coupled. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.

[0200] It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" or "an aspect", or features included as "may", means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects. As an example, it should be noted that although the embodiments illustrated in FIG. 1B, 2, 3, 4B, 4C, 4D, 5A, 5B, 5C, 5D, 6A, 6B, 6C, 6D, 9C all comprise an analysis filter bank to provide an electric input signal in a time-frequency (or frequency sub-band) representation, other embodiments according to the present disclosure may be provided without separate dedicated analysis filter banks. In such embodiments, it is left to the first and second algorithms (e.g. the first and second neural networks) to work directly on the raw time domain signal samples (or on time frames comprising a specific number of time samples generated therefrom).

[0201] The claims are not intended to be limited to the aspects shown herein, but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean "one and only one" unless specifically so stated, but rather "one or more". Unless specifically stated otherwise, the term "some" refers to one or more.

[0202] Accordingly, the scope should be judged in terms of the claims that follow.
