Hearing device comprising a recurrent neural network and a method of processing an audio signal
11696079 · 2023-07-04
Assignee
Inventors
- Zuzana Jelčicová (Smørum, DK)
- Rasmus Jones (København NV, DK)
- David Thorn Blix (Smørum, DK)
- Michael Syskind Pedersen (Smørum, DK)
- Jesper Jensen (Smørum, DK)
- Asger Heidemann Andersen (Frederikssund, DK)
Cpc classification
- H04R2460/03
- H04R25/407
- H04R25/554
- H04R2430/03
- H04R2225/43
- H04R2201/107
Abstract
A hearing device, e.g. a hearing aid or a headset, configured to be worn by a user comprises an input unit for providing at least one electric input signal in a time-frequency representation; and a signal processor comprising a target signal estimator for providing an estimate of the target signal; a noise estimator for providing an estimate of the noise; and a gain estimator for providing respective gain values in dependence of said target signal estimate and said noise estimate. The gain estimator comprises a trained neural network, wherein the outputs of the neural network comprise real or complex valued gains, or separate real valued gains and real valued phases. The signal processor is configured—at a given time instance t—to calculate changes Δx(i,t) = x(i,t) − x̂(i,t−1) and Δh(j,t−1) = h(j,t−1) − ĥ(j,t−2) to an input vector x(t) and to the hidden state vector h(t−1), respectively, from one time instance, t−1, to the next, t, where x̂(i,t−1) and ĥ(j,t−2) are estimated values of x(i,t−1) and h(j,t−2), respectively, where indices i, j refer to the i-th input neuron and the j-th neuron of the hidden state, respectively, where 1 ≤ i ≤ N_{ch,x} and 1 ≤ j ≤ N_{ch,oh}, and wherein N_{ch,x} and N_{ch,oh} are the numbers of processing channels of the input vector x and the hidden state vector h, respectively. The signal processor is further configured to provide that the number of updated channels among said N_{ch,x} and said N_{ch,oh} processing channels of the modified gated recurrent unit for said input vector x(t) and said hidden state vector h(t−1), respectively, at said given time instance t is limited to a number of peak values N_{p,x} and N_{p,oh}, respectively, where N_{p,x} is smaller than N_{ch,x} and N_{p,oh} is smaller than N_{ch,oh}.
Claims
1. A hearing device configured to be worn by a user, the hearing device comprising an input unit for providing at least one electric input signal in a time-frequency representation k, t, where k and t are frequency and time indices, respectively, and k represents a frequency channel, k = 1, . . . , K, K > 1, the at least one electric input signal being representative of sound and comprising target signal components and noise components, a signal processor connected to the input unit and configured to receive the at least one electric input signal or a signal originating therefrom, the signal processor comprising a target signal estimator for providing an estimate of the target signal; a noise estimator for providing an estimate of the noise; a gain estimator for providing respective gain values in dependence of said target signal estimate and said noise estimate, wherein said gain estimator comprises a neural network comprising at least one layer defined as a gated recurrent unit, the gated recurrent unit comprising memory in the form of a hidden state vector h, wherein the weights of the neural network have been trained with a plurality of training signals, and wherein the outputs of the neural network comprise real or complex valued gains, or separate real valued gains and real valued phases; wherein the signal processor is configured—at a given time instance t—to calculate changes Δx(i,t) = x(i,t) − x̂(i,t−1) and Δh(j,t−1) = h(j,t−1) − ĥ(j,t−2) to an input vector x(t) and to the hidden state vector h(t−1), respectively, from one time instance, t−1, to the next, t, where x̂(i,t−1) and ĥ(j,t−2) are estimated values of x(i,t−1) and h(j,t−2), respectively, where indices i, j refer to the i-th input neuron and the j-th neuron of the hidden state, respectively, where 1 ≤ i ≤ N_{ch,x} and 1 ≤ j ≤ N_{ch,oh}, wherein N_{ch,x} and N_{ch,oh} are the numbers of processing channels of the input vector x and the hidden state vector h, respectively, and wherein the signal processor is further configured to provide that a number of updated channels among said N_{ch,x} and said N_{ch,oh} processing channels of the modified gated recurrent unit for said input vector x(t) and said hidden state vector h(t−1), respectively, at said given time instance t is limited to a number of peak values N_{p,x} and N_{p,oh}, respectively, where N_{p,x} is smaller than N_{ch,x} and N_{p,oh} is smaller than N_{ch,oh}.
2. A hearing device according to claim 2 wherein the signal processor is configured to determine said estimated values of the input vector and the hidden state vector as x̂(i,t) = x(i,t) if |x(i,t) − x̂(i,t−1)| is among the N_{p,x} largest values across i, and otherwise x̂(i,t) = x̂(i,t−1); and ĥ(j,t−1) = h(j,t−1) if |h(j,t−1) − ĥ(j,t−2)| is among the N_{p,oh} largest values across j, and otherwise ĥ(j,t−1) = ĥ(j,t−2).
3. A hearing device according to claim 2 wherein the signal processor is configured to determine said changes to values of the input vector and the hidden state vector as Δx(i,t) = x(i,t) − x̂(i,t−1) if |x(i,t) − x̂(i,t−1)| is among the N_{p,x} largest values across i, and otherwise Δx(i,t) = 0; and Δh(j,t−1) = h(j,t−1) − ĥ(j,t−2) if |h(j,t−1) − ĥ(j,t−2)| is among the N_{p,oh} largest values across j, and otherwise Δh(j,t−1) = 0.
4. A hearing device according to claim 1 wherein the input unit comprises a multitude of input transducers and a beamformer filter, wherein the beamformer filter is configured to provide said at least one electric input signal based on signals from said multitude of input transducers.
5. A hearing device according to claim 1 comprising a voice activity detector configured to estimate whether or not, or with what probability, an input signal comprises a voice signal at a given point in time, and to provide a voice activity control signal indicative thereof.
6. A hearing device according to claim 1 comprising an output unit configured to provide output stimuli to the user in dependence of said at least one electric input signal.
7. A hearing device according to claim 1 wherein the signal processor is configured to apply the gain values G(k,t) provided by the gain estimator to the at least one electric input signal, or to a signal derived therefrom.
8. A hearing device according to claim 1 wherein the signal processor is configured to discard processing of a channel among said N_p channels at a given time instant t in case its absolute value |x(t) − x̂(t−1)| or |h(t−1) − ĥ(t−2)| is smaller than a threshold value Θ_p.
9. A hearing device according to claim 1 wherein the number of peak values N_{p,x} and N_{p,oh}, respectively, is adaptively determined in dependence of the at least one electric input signal.
10. A hearing device according to claim 1 wherein parameters of the neural network have been trained with a plurality of training signals.
11. A hearing device according to claim 1 being constituted by or comprising an air-conduction type hearing aid, a bone-conduction type hearing aid, a cochlear implant type hearing aid, or a combination thereof.
12. A hearing device according to claim 1 being constituted by or comprising a headset or an earphone.
13. A hearing device according to claim 1 comprising a hardware block specifically adapted to process elements of the gated recurrent unit as vectorized operations.
14. A hearing device according to claim 1 wherein the input vector of the neural network is based on or comprises said estimate of the target signal and said estimate of the noise, or a signal or signals originating therefrom.
15. A hearing device according to claim 14 wherein the input vector of the neural network comprises a gain of a noise reduction algorithm.
16. A hearing device according to claim 1 wherein the number of input nodes is in the range from 16 to 500, and wherein the number of output nodes is in the range from 1 to 150.
17. A hearing device according to claim 1 wherein the number of hidden layers in the neural network is in the range from two to ten.
18. A hearing device according to claim 1 wherein the number of layers implemented as a modified gated recurrent unit is in the range from one to three.
19. A method of operating a hearing device, the method comprising providing at least one electric input signal in a time-frequency representation k, t, where k and t are frequency and time indices, respectively, and k represents a frequency channel, k = 1, . . . , K, the at least one electric input signal being representative of sound and comprising target signal components and noise components; and providing an estimate of the target signal; providing an estimate of the noise; providing, by a gain estimator, respective gain values in dependence of said target signal estimate and said noise estimate, wherein said gain estimator comprises a neural network, wherein the weights of the neural network have been trained with a plurality of training signals, and wherein the outputs of the neural network comprise real or complex valued gains, or separate real valued gains and real valued phases; providing that the neural network comprises at least one layer defined as a gated recurrent unit, the gated recurrent unit comprising memory in the form of a hidden state vector h, and wherein an output vector o(t) is provided by said gated recurrent unit in dependence of an input vector x(t) and said hidden state vector h(t−1), wherein said output o(t) at a given time step t is stored as said hidden state h(t) and used in the calculation of the output vector o(t+1) in the next time step t+1, and wherein the method further comprises determining changes Δx(i,t) = x(i,t) − x̂(i,t−1) and Δh(j,t−1) = h(j,t−1) − ĥ(j,t−2) to the input vector x(t) and to the hidden state vector h(t−1), respectively, from one time instance, t−1, to the next, t, where x̂(i,t−1) and ĥ(j,t−2) are estimated values of x(i,t−1) and h(j,t−2), respectively, where indices i, j refer to the i-th input neuron and the j-th neuron of the hidden state, respectively, where 1 ≤ i ≤ N_{ch,x} and 1 ≤ j ≤ N_{ch,oh}, wherein N_{ch,x} and N_{ch,oh} are the numbers of processing channels of the input vector x and the hidden state vector h, respectively, and providing that the number of updated channels among said N_{ch,x} and said N_{ch,oh} processing channels of the modified gated recurrent unit for said input vector x(t) and said hidden state vector h(t−1), respectively, at said given time instance t is limited to a number of peak values N_{p,x} and N_{p,oh}, respectively, where N_{p,x} is smaller than N_{ch,x} and N_{p,oh} is smaller than N_{ch,oh}.
20. A method of operating an audio processing device comprising at least an input unit and a signal processor for processing outputs of the input unit and for providing a processed output, the signal processor comprising a neural network comprising at least one layer implemented as a modified gated recurrent unit (modified GRU) comprising memory in the form of a hidden state vector (h(t−1)), the method comprising providing by the input unit at least one electric input signal in a time-frequency representation k, t, where k and t are frequency and time indices, respectively, and k represents a frequency channel, k = 1, . . . , K, the at least one electric input signal being representative of sound or image data; and providing an input vector x(t) to the at least one layer implemented as a gated recurrent unit (GRU) based on or comprising said at least one electric input signal, or a signal originating therefrom; calculating by the signal processor—at a given time instance t—changes Δx(i,t) = x(i,t) − x̂(i,t−1) and Δh(j,t−1) = h(j,t−1) − ĥ(j,t−2) to the input vector x(t) and to the hidden state vector h(t−1), respectively, from one time instance, t−1, to the next, t, where x̂(i,t−1) and ĥ(j,t−2) are estimated values of x(i,t−1) and h(j,t−2), respectively, where indices i, j refer to the i-th input neuron and the j-th neuron of the hidden state, respectively, where 1 ≤ i ≤ N_{ch,x} and 1 ≤ j ≤ N_{ch,oh}, wherein N_{ch,x} and N_{ch,oh} are the numbers of processing channels of the input vector x and the hidden state vector h, respectively; and providing by the signal processor that the number of updated channels among said N_{ch,x} and said N_{ch,oh} processing channels of the modified gated recurrent unit for said input vector x(t) and said hidden state vector h(t−1), respectively, at said given time instance t is limited to a number of peak values N_{p,x} and N_{p,oh}, respectively, where N_{p,x} is smaller than N_{ch,x} and N_{p,oh} is smaller than N_{ch,oh}; calculating by the signal processor—at a given time instance—an output vector o(t) from the at least one layer implemented as a gated recurrent unit (GRU) in dependence of said input vector x(t) and said hidden state vector h(t−1), wherein the output o(t) at a given time step t is stored as the hidden state h(t) and used in the calculation of the output vector o(t+1) in the next time step t+1; wherein the processed output is determined by the signal processor in dependence of said output vector o(t) or a signal originating therefrom; and wherein the processed output is used for controlling processing in the audio processing device and/or transmitted by the audio processing device to another device.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) The aspects of the disclosure may be best understood from the following detailed description taken in conjunction with the accompanying figures. The figures are schematic and simplified for clarity, and they just show details to improve the understanding of the claims, while other details are left out. Throughout, the same reference numerals are used for identical or corresponding parts. The individual features of each aspect may each be combined with any or all features of the other aspects. These and other aspects, features and/or technical effects will be apparent from and elucidated with reference to the illustrations described hereinafter.
(21) Further scope of applicability of the present disclosure will become apparent from the detailed description given hereinafter. However, it should be understood that the detailed description and specific examples, while indicating preferred embodiments of the disclosure, are given by way of illustration only. Other embodiments may become apparent to those skilled in the art from the following detailed description.
DETAILED DESCRIPTION OF EMBODIMENTS
(22) The detailed description set forth below in connection with the appended drawings is intended as a description of various configurations. The detailed description includes specific details for the purpose of providing a thorough understanding of various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. Several aspects of the apparatus and methods are described by various blocks, functional units, modules, components, circuits, steps, processes, algorithms, etc. (collectively referred to as “elements”). Depending upon particular application, design constraints or other reasons, these elements may be implemented using electronic hardware, computer program, or any combination thereof.
(23) The electronic hardware may include micro-electronic-mechanical systems (MEMS), integrated circuits (e.g. application specific), microprocessors, microcontrollers, digital signal processors (DSPs), field programmable gate arrays (FPGAs), programmable logic devices (PLDs), gated logic, discrete hardware circuits, printed circuit boards (PCB) (e.g. flexible PCBs), and other suitable hardware configured to perform the various functionality described throughout this disclosure, e.g. sensors, e.g. for sensing and/or registering physical properties of the environment, the device, the user, etc. Computer program shall be construed broadly to mean instructions, instruction sets, code, code segments, program code, programs, subprograms, software modules, applications, software applications, software packages, routines, subroutines, objects, executables, threads of execution, procedures, functions, etc., whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise.
(24) The present application relates to the field of signal processing, e.g. audio and/or image processing, in particular to the use of neural networks in audio processing, specifically to algorithms implementing learning algorithms or machine learning techniques in (e.g. portable) audio processing devices.
(25) So-called learning algorithms, e.g. in the form of neural networks, have found increasing application in all sorts of signal processing tasks, including (natural) speech processing in hearing aids or headsets. In a hearing aid, noise reduction is a key feature in solving the problem of providing or increasing a user's (e.g. a hearing-impaired user's) access to an acceptable perception of speech and other sounds in the environment. In the present disclosure, a (e.g. deep) neural network (NN, DNN) is applied to the task of determining a postfilter gain of a single-channel noise reduction system of a hearing aid or headset. The postfilter is configured to reduce remaining noise in a spatially filtered signal. The spatially filtered signal is e.g. determined as a linear combination of a multitude of electric input signals, e.g. from microphones of the hearing aid or headset. A postfilter with this purpose has been implemented in different ways, e.g. as a Wiener filter (or a modification thereof). The use of neural networks to implement the postfilter has been proposed in the prior art, see e.g. EP3694229A1.
(26) In hearing aids, headsets, and similar portable audio processing devices, a) small size, b) low latency, and c) low power consumption are important design parameters. Hence, the ‘power efficiency’ of processing algorithms is paramount. The low-power requirement limits the type and size of the neural networks that are feasible, e.g. in a hearing aid, and thus the functional tasks that are suitable for being implemented by such networks. In practice, the number of network parameters is limited to the order of 10,000 (with current processing power in a hearing aid). This number is expected to increase with time (e.g. due to developments in the field of integrated circuits).
(27) The present disclosure deals with a (single- or multi-layer) recurrent neural network (RNN) architecture comprising memory and so-called gated recurrent units (GRU). Although attractive from a functional point of view, such networks may be computationally rather demanding and not generally applicable in practice in low-power applications, such as portable electronic devices, e.g. audio processing devices, such as hearing aids or headsets, or the like. In an attempt to limit the processing complexity of this kind of neural network, an RNN architecture called a delta network has been proposed (cf. e.g. [Neil et al.; 2018]). In the Delta Recurrent Neural Network (Delta RNN), each neuron transmits its value only when the change in its activation exceeds a threshold. This can be particularly efficient when the signals to be processed are relatively stable over time (change relatively slowly). This is e.g. the case for some audio and video signals.
(28) Even the Delta RNN might not be suitable for use in a low-power (portable) device, such as a hearing aid, or the like, because the number of operations performed by the network may be substantial. Above all, however, the number of operations is unknown in advance, because it depends on the input signals to the network and the threshold parameter(s).
(29) The present disclosure proposes a modification to the Delta RNN.
(30) In the following, the modified, so-called Peak GRU RNN is described. But first, key elements of the baseline GRU and the Delta RNN are outlined.
(31) A baseline GRU, a Delta GRU, or a Peak GRU RNN may be used as a single layer (recurrent) neural network.
(32) A baseline deep neural network (DNN) consists of several layers: it comprises at least one (hidden) layer between an input layer and an output layer. At least one of the layers may comprise a gated recurrent unit (GRU), e.g. a Peak GRU according to the present disclosure.
(33) The baseline GRU performs computations on all inputs.
(34) Sparsity (e.g. zeros) in network activations or input data (i.e. sparsity in network parameters (weights, bias, or non-linear function) or sparsity in input data to the network, respectively) is a property that can be used to achieve high power efficiency. By ignoring computations involving zeros, computational complexity (and thus power) can be reduced.
(35) The basic parts of the gated recurrent unit (GRU) are the following (adapted from section 2.1. of [Neil et al.; 2018]):
Gated Recurrent Units (GRU)
(36) The GRU neuron model has two gates—a reset gate r and an update gate u—and a candidate hidden state c. The reset gate r determines the amount of information from the previous hidden state that will be added to the candidate hidden state c. The update gate u decides to what extent the activation h should be updated by the candidate hidden state c to enable a long-term memory. The GRU formulation is shown below:
r(t) = σ[W_{xr} x(t) + W_{hr} h(t−1) + b_r]  (1)
u(t) = σ[W_{xu} x(t) + W_{hu} h(t−1) + b_u]  (2)
c(t) = tanh[W_{xc} x(t) + r(t) ⊙ (W_{hc} h(t−1)) + b_c]  (3)
h(t) = (1 − u(t)) ⊙ h(t−1) + u(t) ⊙ c(t)  (4)
(37) where x is the (external) input vector, h the activation vector, W the weight matrices, b the biases, σ signifies the logistic sigmoid function, and ⊙ indicates element-wise multiplication. The data flow of the GRU node model is schematically illustrated in the figures.
(39) The parameter x(t) is the (external) input and the parameter h(t) is the output (also denoted o(t)).
Update Gate u
(41) In the update gate u, the input x(t) is multiplied with its corresponding weights represented by the matrix W_{xu} (W_{xu} x(t)). In the examples of the present disclosure the weight matrices W_{xu}, W_{hu}, etc., are K×K matrices, where K is the number of frequency bands (e.g. K = N_ch) of the input signals to the GRU. In general, however, the dimension of the weight matrices is adapted to the specific case in dependence of the dimensions of the input and output vectors of the GRU. If e.g. there are 64 inputs and 256 outputs (the dimensionality of the hidden state), then the kernels (matrices) for x (W_{xu}, etc.) will have dimensions 64×256 each, and the dimensionality of the h matrices (W_{hu}, etc.) will be 256×256 each. The same applies to h(t−1), which is multiplied with the matrix W_{hu} (W_{hu} h(t−1)), where h(t−1) contains information from the previous time step. Both results are added together (W_{xu} x(t) + W_{hu} h(t−1)) (and possibly a bias parameter b_u is added), and a sigmoid activation function (σ) is applied to limit the result to be between 0 and 1 (u(t) = σ[W_{xu} x(t) + W_{hu} h(t−1) + b_u]), see equation (2). Other activation functions than sigmoid may be applied.
Reset Gate r
(42) The reset gate r is used to decide how much of the previous information should be forgotten. The formula is the same as for the update gate, except for the weight matrices (W_{xr}, W_{hr}) and a possible bias parameter (b_r), which are different (r(t) = σ[W_{xr} x(t) + W_{hr} h(t−1) + b_r]), see equation (1).
Candidate State c
(43) As previously, the multiplications with the inputs (W_{xc} x(t)) and (W_{hc} h(t−1)) are performed (see equation (3)). Then the element-wise product with the reset gate (r(t) ⊙ (W_{hc} h(t−1))) determines what information will be removed from previous time steps. The closer the value of r is to 0, the more information will be forgotten. Finally (after optionally adding a bias b_c), the tanh activation function, limiting the results to a range between −1 and 1, is applied (c(t) = tanh[W_{xc} x(t) + r(t) ⊙ (W_{hc} h(t−1)) + b_c]). Other activation functions than tanh may be applied.
Hidden State h
(44) The last step is to calculate the hidden state h(t) that holds information for the current unit. In order to obtain h(t), the update gate (u) is needed. The update gate determines what to keep from both the current memory content (c) and the previous memory content h(t−1). Therefore, we again need to perform element-wise multiplications, (1 − u(t)) ⊙ h(t−1) and u(t) ⊙ c(t)—this time using the update gate u(t). Finally, we sum the results and obtain h(t) (h(t) = (1 − u(t)) ⊙ h(t−1) + u(t) ⊙ c(t)), which will serve as an input to both the next layer and the next time step.
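As a concrete illustration of equations (1)-(4), the following is a minimal NumPy sketch of a single baseline GRU time step. The function name gru_step, the dictionary layout of the weights and biases, and the vector shapes are illustrative assumptions, not part of the present disclosure.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W, b):
    """One baseline GRU time step, equations (1)-(4).

    x_t    : input vector x(t), shape (N_in,)
    h_prev : previous hidden state h(t-1), shape (N_h,)
    W      : dict of weight matrices ('xr', 'hr', 'xu', 'hu', 'xc', 'hc');
             shapes (N_h, N_in) for the x matrices, (N_h, N_h) for the h matrices
    b      : dict of bias vectors ('r', 'u', 'c'), shape (N_h,)
    """
    r = sigmoid(W['xr'] @ x_t + W['hr'] @ h_prev + b['r'])        # (1) reset gate
    u = sigmoid(W['xu'] @ x_t + W['hu'] @ h_prev + b['u'])        # (2) update gate
    c = np.tanh(W['xc'] @ x_t + r * (W['hc'] @ h_prev) + b['c'])  # (3) candidate state
    return (1.0 - u) * h_prev + u * c                             # (4) new hidden state
```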
(45) The basic idea of the gated recurrent unit (GRU) is common to both the Delta GRU RNN and the Peak GRU RNN approaches. The Peak GRU RNN is derived from the Delta GRU RNN method, and they have many calculations in common.
(47) The basic parts of the Delta GRU RNN are the following (adapted from section 2.2. of [Neil et al.; 2018]):
Delta GRU RNN Algorithm
(48) The Delta GRU RNN algorithm reduces both memory accesses and arithmetic operations by exploiting the temporal stability of RNN inputs, states and outputs. Computations associated with a neuron activation that has a small amount of change from its previous time step can be skipped. Skipping a single neuron saves multiplications of an entire column in all related weight matrices (cf. the weight matrices W_{xr}, W_{hr}, W_{xc}, W_{hc}, W_{xu}, W_{hu} of the GRU as described above) as well as fetches of the corresponding weight elements. In a Matrix-Vector Multiplication (M×V) between a neuron activation vector (e.g. x or h) and a weight matrix (W), zero elements in the vector result in zero partial sums that do not contribute to the final result.
(49) The largest elements are determined and placed in delta vectors (Δx, Δh). The smallest elements are skipped in hardware (set to 0 in the equations). These delta vectors are then used for the multiplications with the matrices. So instead of the original product (e.g. for the external input x(t)) W_{xr} x(t) for the reset gate r, we get W_{xr} Δx(t), where Δx(t) is sparse. The same goes for the activation h.
(50) The key feature of the Delta GRU RNN is that it updates the output of a neuron only when the neuron's activation changes by more than a delta threshold Θ. To skip the computations related to any small Δx(t), the delta threshold Θ is introduced to decide when a delta vector element can be ignored. The change of a neuron's activation is only memorized when it is larger than Θ. Furthermore, to prevent the accumulation of error with time, only the last activation value that has a change larger than the delta threshold is memorized. This is defined by the following equation sets (node indices have been omitted):
(51) Δx(t) = x(t) − x̂(t−1) if |x(t) − x̂(t−1)| > Θ, otherwise Δx(t) = 0  (5)
Δh(t−1) = h(t−1) − ĥ(t−2) if |h(t−1) − ĥ(t−2)| > Θ, otherwise Δh(t−1) = 0  (6)
x̂(t) = x(t) if |x(t) − x̂(t−1)| > Θ, otherwise x̂(t) = x̂(t−1)  (7)
ĥ(t−1) = h(t−1) if |h(t−1) − ĥ(t−2)| > Θ, otherwise ĥ(t−1) = ĥ(t−2)  (8)
(52) where the last changes (x̂(t), ĥ(t−1)) are memorized and used as internal inputs in the next time cycle, t+1.
(53) In a modified Delta GRU, asymmetric thresholds may be applied. In another modified Delta GRU, different thresholds across different neurons in a layer may be applied.
(54) Next, using equations (5), (6), (7) and (8), the conventional GRU equation set ((1)-(4) above) can be transformed into its delta network version:
M_r(t) = W_{xr} Δx(t) + W_{hr} Δh(t−1) + M_r(t−1)  (9)
M_u(t) = W_{xu} Δx(t) + W_{hu} Δh(t−1) + M_u(t−1)  (10)
M_{xc}(t) = W_{xc} Δx(t) + M_{xc}(t−1)  (11)
M_{hc}(t) = W_{hc} Δh(t−1) + M_{hc}(t−1)  (12)
r(t) = σ[M_r(t)]  (13)
u(t) = σ[M_u(t)]  (14)
c(t) = tanh[M_{xc}(t) + r(t) ⊙ M_{hc}(t)]  (15)
h(t) = (1 − u(t)) ⊙ h(t−1) + u(t) ⊙ c(t)  (16)
(55) where M_r, M_u, M_{xc}, and M_{hc} refer to stored (memory) values for the reset gate (r), the update gate (u), the candidate state (c), and the hidden state (h), respectively, and where M_r(0) = b_r, M_u(0) = b_u, M_{xc}(0) = b_c, and M_{hc}(0) = 0. The above processing relations for time index t (eq. (9) to (16)) are graphically illustrated in the figures.
(56) There may e.g. be 6 different weight matrices in the Delta GRU (other representations may be possible). These could be W_{xr}, W_{hr}, W_{xu}, W_{hu}, W_{xc}, and W_{hc}.
(57) In summary, the Delta GRU RNN algorithm sets a specific threshold Θ, such as 0.1 (cf. Θ in the equations (5)-(8) above), that will be used for comparisons with the element values. If the value (absolute value of the subtraction) of the currently processed element is below this threshold, the element will not be used for further computations in that time step—it will be skipped (set to 0). The number of elements that will further be processed can hence vary from one time step to another depending on their values.
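The following sketch illustrates, under the same illustrative naming assumptions as the GRU sketch above, how equations (5)-(16) may be combined into one Delta GRU time step: delta vectors are thresholded against Θ, the last transmitted activations are memorized, and pre-activations are accumulated in the memory terms M.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def delta_gru_step(x_t, h_prev, state, W, theta):
    """One Delta GRU time step, equations (5)-(16).

    state carries the memorized quantities between time steps:
      'x_hat', 'h_hat'             : last transmitted activations (eqs. (7), (8))
      'M_r', 'M_u', 'M_xc', 'M_hc' : accumulated pre-activations, initialized
                                     to b_r, b_u, b_c and 0, respectively
    """
    # Eqs. (5)-(8): sparse delta vectors; memorize only above-threshold changes.
    dx = x_t - state['x_hat']
    big_x = np.abs(dx) > theta
    dx = np.where(big_x, dx, 0.0)
    state['x_hat'] = np.where(big_x, x_t, state['x_hat'])

    dh = h_prev - state['h_hat']
    big_h = np.abs(dh) > theta
    dh = np.where(big_h, dh, 0.0)
    state['h_hat'] = np.where(big_h, h_prev, state['h_hat'])

    # Eqs. (9)-(12): accumulate partial sums; zero deltas would be skipped in hardware.
    state['M_r']  += W['xr'] @ dx + W['hr'] @ dh
    state['M_u']  += W['xu'] @ dx + W['hu'] @ dh
    state['M_xc'] += W['xc'] @ dx
    state['M_hc'] += W['hc'] @ dh

    # Eqs. (13)-(16): gates, candidate state, and new hidden state.
    r = sigmoid(state['M_r'])
    u = sigmoid(state['M_u'])
    c = np.tanh(state['M_xc'] + r * state['M_hc'])
    return (1.0 - u) * h_prev + u * c, state
```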
Peak GRU RNN
(58) The Peak GRU RNN—a modified version of GRU and Delta GRU RNN—according to the present disclosure is described in the following.
(59) An advantage of the Peak GRU RNN algorithm is that it reduces computations in a deterministic manner (i.e. it is always known in advance how many operations will be performed by a given configuration of the algorithm). This is achieved by setting a hard limit that determines how many values (peaks) should be processed in each input feature vector to the Peak GRU RNN (e.g. in each time frame of an audio or video signal). Instead of (as in the Delta GRU) setting a threshold on the rate of change of the elements of the input vector, one chooses a pre-determined number (N_p) of elements to process in each iteration. Thus, also only N_p columns of the involved matrices (e.g. W_{xr}, W_{hr}, W_{xu}, W_{hu}, W_{xc}, W_{hc}) are required. In each iteration the N_p elements of the input vector with the largest rate of change (delta) are chosen for further processing. This allows flexibility in not having to process the whole vector-matrix multiplication, yet it is also deterministic (overall N_p dot products). The Peak GRU RNN (as well as the Delta GRU RNN, if e.g. there are no zeros in the (delta) input vector) can perform the whole vector-matrix multiplication (e.g. N_p = N_ch), or a subset thereof, down to zero multiplications (e.g. N_p = 0). The threshold value, N_p, corresponds to the number of largest absolute values/magnitudes of the (delta) input vector (Δx(t), Δh(t−1)) to the Peak GRU RNN layer that should (or may) be processed. As indicated above, the number of peak values N_p (N_{p,x}, N_{p,oh}) may be equal (N_p = N_{p,x} = N_{p,oh}) or different for the input vector (x) (N_{p,x}) and the output (and hidden state) vector (o, h) (N_{p,oh}).
(60) Updating the neurons corresponding to the (e.g. N_p) largest deltas (Δx, Δh) assumes that all neurons are equally salient. The deltas may, however, be weighted by a ‘saliency’ factor, where the factor is related to the resulting change of a cost function during training; see also the section on ‘training’ below.
(61) The weight matrices (e.g. W_{xr}, W_{hr}, W_{xu}, W_{hu}, W_{xc}, W_{hc}) of the Peak GRU RNN (and similarly of the Delta GRU RNN and the baseline GRU RNN) are typically fixed to values optimized in a training procedure.
(62) The lower the number of peaks, the less computations and memory accesses are necessary, and hence the lower the power consumption.
(63) The modification is rooted in equations (5) to (8) of the Delta RNN algorithm. Instead of requiring that |x(t) − x̂(t−1)| and |h(t−1) − ĥ(t−2)| be larger than a threshold value Θ, as in the Delta RNN algorithm, the Peak GRU RNN algorithm according to the present disclosure identifies only the N_p largest values of |x(t) − x̂(t−1)| and |h(t−1) − ĥ(t−2)|, respectively, at a given point in time t, and only these values are processed. At different points in time, . . . , t−1, t, t+1, . . . , the N_p largest values of |x(t) − x̂(t−1)| and |h(t−1) − ĥ(t−2)| may be associated with different elements/positions (indices) in the input vector and hidden state vector (e.g. different frequencies in the example of the present disclosure).
(64) The absolute value |⋅| is used so that we can compare the numbers based on their magnitudes; the delta vectors are then assigned the actual result of the subtraction (for the elements whose magnitudes are above the threshold). This is also valid for the Delta GRU (but the number of processed values may be different).
(65) Instead of the equations (5), (6) for the Delta GRU RNN, the following expressions may be used for the Peak GRU RNN:
(66) Δx(i,t) = x(i,t) − x̂(i,t−1) if |x(i,t) − x̂(i,t−1)| is among the N_{p,x} largest values across i, otherwise Δx(i,t) = 0; and Δh(j,t−1) = h(j,t−1) − ĥ(j,t−2) if |h(j,t−1) − ĥ(j,t−2)| is among the N_{p,oh} largest values across j, otherwise Δh(j,t−1) = 0,
(67) where indices i, j refer to the i-th input and the j-th neuron of the hidden state, respectively (where 1 ≤ i ≤ K and 1 ≤ j ≤ K, and K is the number of frequency bands in the exemplary input and output feature vectors of the use case (audio processing) of the present disclosure).
(68) Likewise, the equations (7), (8) for the Delta GRU RNN may be used for the Peak GRU RNN, including the modification related to the pre-determined number (N_p) of elements to process in each iteration:
(69) x̂(i,t) = x(i,t) if |x(i,t) − x̂(i,t−1)| is among the N_{p,x} largest values across i, otherwise x̂(i,t) = x̂(i,t−1); and ĥ(j,t−1) = h(j,t−1) if |h(j,t−1) − ĥ(j,t−2)| is among the N_{p,oh} largest values across j, otherwise ĥ(j,t−1) = ĥ(j,t−2).
(70) Elements at different indices in the vector might be selected in different time steps. E.g. in time step (t−1), we can have the following exemplary vector (two largest peaks chosen based on their magnitudes, N_p = 2, N_ch = 5): [1, 0, −9, 8, −4], where −9 and 8 are chosen, while in time step t, we can have the vector [2, −5, 0, 1, −1], where two other indices in the vector (the elements 2 and −5) are chosen, because they exhibit the (numerically) largest values.
(71) The largest values are chosen for x and h separately, which means that we will have N_p peaks in Δx and N_p (or a different value N′_p) peaks in Δh.
(72) In summary, the Peak GRU RNN works with a different type of limit (however, the subtractions in equations (5)-(8) above are kept). It uses a hard limit equal to the number of elements that should always (or maximally) be processed in every time step. The elements that will be processed are selected based on their absolute value. If we e.g. set the number of peaks to 48 in a 64-input-node system, the 64 − 48 = 16 smallest absolute values will always be skipped in every time step.
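A minimal sketch of the peak-selection step, assuming NumPy's argpartition as one possible way to find the N_p largest magnitudes; the function name peak_mask is an illustrative choice. It reproduces the N_p = 2, N_ch = 5 example of paragraph (70).

```python
import numpy as np

def peak_mask(delta, n_p):
    """Boolean mask selecting the n_p entries of `delta` with the
    largest magnitudes (the 'peaks' to be processed)."""
    mask = np.zeros(delta.shape, dtype=bool)
    if n_p > 0:
        idx = np.argpartition(np.abs(delta), -n_p)[-n_p:]
        mask[idx] = True
    return mask

# The example of paragraph (70): N_p = 2, N_ch = 5.
d1 = np.array([1.0, 0.0, -9.0, 8.0, -4.0])   # time step t-1
d2 = np.array([2.0, -5.0, 0.0, 1.0, -1.0])   # time step t
print(np.where(peak_mask(d1, 2), d1, 0.0))   # [ 0.  0. -9.  8.  0.]
print(np.where(peak_mask(d2, 2), d2, 0.0))   # [ 2. -5.  0.  0.  0.]
```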
(73) An example of the values of x(t) − x̂(t−1) (or h(t−1) − ĥ(t−2)) (referred to as ‘Elements after subtraction’) and |x(t) − x̂(t−1)| (or |h(t−1) − ĥ(t−2)|) (referred to as ‘Elements after ABS(subtraction)’), illustrating the two methods, is shown below (simplified by using only 10 elements, N_ch = 10):
(74) Elements after subtraction: [0, −0.6, 0.09, 0.8, 0.1, 0, 0, −1.0, 0.05, 0.22]
(75) Elements after ABS(subtraction): [0, 0.6, 0.09, 0.8, 0.1, 0, 0, 1.0, 0.05, 0.22]
Delta GRU RNN
(76) If, e.g., the threshold is Θ = 0.06:
(77) The elements with absolute values below or equal to 0.06 will be set to 0 (here the element 0.05): [0, −0.6, 0.09, 0.8, 0.1, 0, 0, −1.0, 0, 0.22]
Peak GRU RNN
(78) If, e.g., the number of peaks we want to process is N_p = 6, then we will skip the 10 − 6 = 4 smallest absolute values (set to 0): [0, −0.6, 0.09, 0.8, 0.1, 0, 0, −1.0, 0, 0.22]
(79) In this example (at this time instant), the Delta GRU method and the Peak GRU method are equally computationally efficient. It illustrates the difficulty in comparing the methods based only on the number of data values removed by the method (at a given time instant). This depends on the values of Θ and N_p, respectively, and on the data being processed by the respective algorithm. Therefore, it is not possible to say which method is better only by looking at how many values were removed, without knowing the dataset/how it impacted the final result. If we set the number of peaks in this example to e.g. only 3, then the Peak GRU RNN method would look more efficient.
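The 10-element example above can be reproduced with the following sketch (variable names are illustrative); it verifies that, for these particular values, Θ = 0.06 and N_p = 6 retain exactly the same six elements.

```python
import numpy as np

diff = np.array([0, -0.6, 0.09, 0.8, 0.1, 0, 0, -1.0, 0.05, 0.22])

# Delta GRU RNN: zero out elements whose magnitude is <= theta.
theta = 0.06
delta_kept = np.where(np.abs(diff) > theta, diff, 0.0)

# Peak GRU RNN: keep only the N_p = 6 largest magnitudes.
n_p = 6
idx = np.argpartition(np.abs(diff), -n_p)[-n_p:]
peak_kept = np.zeros_like(diff)
peak_kept[idx] = diff[idx]

print(delta_kept)  # [ 0.  -0.6  0.09  0.8  0.1  0.  0.  -1.  0.  0.22]
print(peak_kept)   # identical here: both methods keep the same six elements
```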
Potential Advantages of Peak GRU
(80) The Delta GRU RNN algorithm works with an actual numerical threshold that the elements are compared against. It can therefore happen that, if all the values are above the threshold (the threshold was not set sufficiently high), all the elements will be processed, and all the additional computations compared to the baseline GRU will contribute to higher power consumption instead of reducing it. It is expected that the Delta GRU RNN works well in situations where changes between the time steps are not so sudden (e.g. quieter environments).
(81) In general, the Delta GRU sets a threshold on the rate of change of the elements of the input vector, whereas the Peak GRU chooses a predetermined number of elements from the input vector, where the chosen elements have the largest rate of change.
(82) The Peak GRU RNN algorithm may also contribute to an increase of power consumption if the number of peaks to be skipped is not sufficiently high. In general, however, the Peak GRU RNN algorithm always cuts a specific number of elements, no matter whether they are above or below a specific numerical threshold. Therefore, compared to the Delta GRU RNN, the Peak GRU RNN is deterministic, i.e. we can always define in advance how many computations will be performed/executed. This is a very important aspect for low-power devices such as hearing instruments. Since the Peak GRU RNN algorithm does not use a static threshold value like the Delta GRU RNN, but rather works with a number of top elements only (i.e. the “numerical threshold” is dynamic and adapted every time step), it is robust to preprocessing and can process a dataset without a prior analysis of the dataset values. If we need to apply preprocessing to the data, causing the data to be changed from one representation to another (e.g. quantization, simple filtering, normalization, etc.), the Delta GRU threshold will no longer work; it will have to be remapped as well. However, no additional adjustments are required for the Peak GRU, as the order of the preprocessed data elements will remain the same. Finally, like the Delta GRU RNN algorithm, the Peak GRU RNN algorithm saves computations compared to the baseline GRU RNN algorithm.
An Example: An SNR to Gain Estimator
(84) The multi-dashed-line arrows of the (fully connected, feedforward) input and output layers (NN-layer) of the SNR2G estimator are intended to indicate that the gain in the k-th frequency channel (at a given point in time t) depends on at least one of the estimated SNR values, e.g. on some or all of the K channels, e.g.
G(k,t)=f(SNR(1,t), . . . ,SNR(k,t), . . . ,SNR(K,t)).
(85) This property may also be inherent in the GRU-layer. In other words, the (deep) neural network according to the present disclosure may be optimized to find the best mapping from a set of SNR estimates across frequency (SNR(k,t)) to a set of frequency dependent gain values (G(k,t)).
(88) The GRU RNN (GRU) may be a Peak GRU RNN according to the present disclosure. In the present context, the number of peaks of the Peak GRU RNN algorithm corresponds to a number of channels (N_p) out of the total number of processing channels (N_ch), N_p ≤ N_ch, that are processed by the RNN (at a given point in time). The number of (input or output) channels (e.g. N_ch (or N_{ch,x}, N_{ch,oh})) can be any number (larger than two), e.g. between 2 and 1024, e.g. between 4 and 512, e.g. between 10 and 500. It means that we can decide to process N_ch, N_ch − 1, N_ch − 2, . . . down to 0 channels in dependence of the value of N_p.
(89) The reduction of the number of channels being processed from N_ch to N_p ≤ N_ch (e.g. N_p < N_ch), e.g. the number of units (k,t) being considered (updated) at any given point in time t, will lead to power savings, but, naturally, it may also have an impact on the resulting audio quality. Therefore, a reasonable number of peaks needs to be found, e.g. in dependence of the current acoustic environment. One option is to select a fixed number N_p, e.g. based on simulations of different acoustic environments (e.g. using speech intelligibility (SI) measures to qualify the result of a given choice of the number of peaks N_p). Thereby, an optimal number of peaks N_p for each environment may be found (each being e.g. applied in a different hearing aid program dedicated to a given acoustic environment or listening situation, e.g. speech in noise, speech in quiet, party, music, in a car, in an airplane, in an open office, in an auditorium, in a church, etc.). Alternatively, a general number of peaks N_p may be found as the one that provides a maximum average SNR over all environments.
(90) The number of peaks N_p may be constant for a given application or adaptively determined in dependence of the input signal (e.g. evaluated over a time period). The number of peaks N_p may be constant for a given acoustic environment (e.g. for a given program of a hearing aid). The hearing aid or headset may comprise a classifier of the acoustic environment. The number of peaks at a given time may be dependent on a control signal from the acoustic environment classifier indicative of the current acoustic environment.
(91) To optimize the number of peaks for a given application, a simulation using a relevant data set may be performed, e.g. by increasing the number of peaks we want to skip and observing how this impacts SNR, estimated gain, speech intelligibility measures (and/or other measures). The Peak GRU RNN (and also the Delta GRU RNN) has exactly the same performance as the baseline GRU when all peaks are processed (Peak GRU RNN) or when the threshold is 0 (Delta GRU RNN). When we start throwing out values, the measures will, at some point, gradually start deteriorating and reach the point where measures such as SNR will be too low and unacceptable.
(92) An optimal number of layers of the neural network depends on the dataset, the complexity of the problem we are trying to solve, the number of neurons per layer, the signal we are feeding into the network, etc. Three layers might be sufficient; four or five layers might be needed, etc. In the example of the present disclosure, the number of neurons per layer is kept at the number of frequency channels K. This need not be so, however.
(94) The signal-to-noise-ratio estimator (SNR-EST) may be implemented as a neural network. The signal-to-noise-ratio estimator (SNR-EST) may be included in the signal-to-noise-ratio-to-gain converter (SNR2G) and implemented by a recurrent neural network according to the present disclosure.
(95) The signal-to-noise-ratio estimator (SNR-EST) provides an SNR estimate (SNR(k,t)) to the signal-to-noise-ratio-to-gain converter (SNR2G, RNN). Successive time frames of the signal-to-noise ratio (SNR(k,t)) for the respective frequency channels (k = 1, . . . , K) are used as input vectors to the SNR-to-gain estimator (SNR2G), implemented as a deep neural network, in particular as a Peak GRU recurrent neural network according to the present disclosure. The neural network (RNN) of the SNR-to-gain estimator (SNR2G, RNN) comprises an input layer, a number of hidden layers, and an output layer.
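A minimal sketch of such an SNR-to-gain stack, reusing the gru_step sketch from earlier (passed in as a parameter, so a Peak or Delta variant could be substituted); the layer sizes, the activation choices (tanh for the input layer, a sigmoid squashing the output gains to [0, 1]) and the parameter names are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def snr2g_step(snr_t, h_prev, params, gru_step):
    """One time frame of a K-channel SNR-to-gain estimator:
    feedforward input layer -> (Peak) GRU layer -> feedforward output layer.

    snr_t    : SNR estimates SNR(k,t), k = 1..K, shape (K,)
    h_prev   : GRU hidden state from the previous frame, shape (K,)
    params   : dict of layer weights/biases (illustrative layout)
    gru_step : a GRU step function, e.g. the gru_step sketch above
    """
    z = np.tanh(params['W_in'] @ snr_t + params['b_in'])       # input layer
    h = gru_step(z, h_prev, params['W_gru'], params['b_gru'])  # recurrent layer
    g = sigmoid(params['W_out'] @ h + params['b_out'])         # gains G(k,t) in [0, 1]
    return g, h
```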
(98) In the present disclosure, a hearing aid or a headset comprising a noise reduction system comprising an SNR-to-gain conversion block (SNR2G) implemented as a recurrent (possibly deep) neural network (RNN) according to the present disclosure is described. The hearing aid comprises a forward (audio) signal path comprising at least one input transducer (e.g. a microphone M) providing at least one electric input signal IN(t) representing sound in the environment of the hearing aid or headset. The forward path further comprises an analysis filter bank (FB-A) converting the (time domain) electric input signal IN(t) to a multitude (K) of frequency channels IN(k,t), where k = 1, . . . , K is a frequency index, and t is a time (frame) index. The hearing aid further comprises an analysis path comprising a noise reduction system (NRS) configured to reduce noise in the (noisy) electric input signal (IN(k,t), or in a signal derived therefrom) to thereby provide the user with a better quality of a target signal (the target signal being e.g. a speech signal from a communication partner) assumed to be present in the noisy electric input signal. The noise reduction system comprises an SNR estimator (SNR-EST) configured to estimate a signal-to-noise ratio (SNR(k,t)) for respective frequency channels (k) of the (frequency domain) electric input signal IN(k,t).
(99) An example recurrent neural network comprising three layers for implementing the SNR-to-gain estimator (SNR2G, RNN) is shown in the figures.
(102) The hearing aid/headset may comprise further circuitry to implement other functionality of a hearing aid/headset, e.g. an audio processor for applying a frequency and level dependent gain to a signal of the forward (audio) path, e.g. to compensate for a hearing impairment of the user. The audio processor may e.g. be located in the forward path, e.g. between the combination units (‘x’) and the synthesis filter bank (FB-S). The hearing aid may further comprise analogue to digital and digital to analogue converters as appropriate for the application in question. The hearing aid may further comprise antenna and transceiver circuitry allowing the hearing aid or headset to communicate with other devices (e.g. a contra-lateral device, e.g. a contralateral hearing aid or headset part), e.g. to establish a link to a remote communication partner, e.g. via a cellular telephone.
(104) The hearing aid further comprises a noise reduction system (NRS) according to the present disclosure, e.g. as described above.
Training
(111) In general, a neural network comprising the Peak GRU RNN algorithm according to the present disclosure may be trained with a baseline GRU RNN providing optimal weights for a trained network (e.g. the weight matrices W_{xr}, W_{hr}, W_{xc}, W_{hc}, W_{xu}, W_{hu}).
(112) The neural network may be trained on examples of estimated signal-to-noise ratios as input, obtained from a noisy input mixture, and its corresponding output as a vector across frequency of a noise-reduced input signal mainly containing the desired signal. The neural network may be trained on examples of (digitized) electric input signals IN(k,t), e.g. taken directly from an analysis filter bank (cf. e.g. FB-A in the figures).
(113) The neural network (RNN) may be trained on examples of estimated signal-to-noise ratios as input, obtained from a noisy input mixture, and its corresponding output as a vector across frequency of noise-reduction gains (attenuations) which, when applied to (a digital representation of) the electric input signal, provides a signal mainly containing the desired (target) signal. The Peak GRU RNN may be trained using a regular GRU, i.e. the trained weights and biases are transferred from the GRU to the Peak GRU.
(114) As discussed in EP3694229A1, the SNR-to-gain converter (cf. e.g. SNR2G in the drawings) may comprise the neural network, wherein the weights of the neural network have been trained with a plurality of training signals.
(116) The training data may be delivered to the neural network on a time-frame-by-time-frame basis (SNR(k,t) followed by SNR(k,t+1), or IN(k,t) followed by IN(k,t+1)), where one time step represents a frame length (e.g. N_s (e.g. N_s = 64) samples/frame divided by the sampling frequency f_s (e.g. f_s = 20 kHz), providing an exemplary frame length of 3.2 ms) or a fraction thereof (in case of overlap between neighbouring time frames). Likewise, the output of the neural network G(k,t) may be delivered on a time-frame-by-time-frame basis (G(k,t) followed by G(k,t+1)).
(117) The neural network may be randomly initialized and may thereafter be updated iteratively. The optimized neural network parameters (e.g. weights and a bias value) for each node may be found using standard, iterative stochastic gradient methods, e.g. steepest-descent or steepest-ascent methods, e.g. implemented using back-propagation minimizing a cost function, e.g. the mean-squared error (cf. signal ΔG_q(k,t)), in dependence of the neural network output G-EST_q(k,t) and the optimal gain G-OPT_q(k,t). The cost function (e.g. the mean-squared error) is computed across many training pairs (q = 1, . . . , Q, where Q may be ≥10, e.g. ≥50, e.g. ≥100 or more) of the input signals.
(118) The optimized neural network parameters may be stored in the SNR-to-gain estimator (SNR2G) implemented in the hearing device and used to determine frequency dependent gain from frequency dependent input SNR-values, e.g. from an ‘a posteriori SNR’ (simple SNR, e.g. (S+N)/<N>), or from an ‘a priori SNR’ (improved SNR, e.g. <S>/<N>), or from both (where <⋅> denotes estimate).
(119) It may be advantageous that the thresholds for selecting the neurons to be processed are different across the neurons (within a given layer). The threshold for a given neuron may be adapted during training.
(120) Thresholds may also be different between layers, if more than one RNN-layer is present.
(121) Other training methods may be used, see e.g. [Sun et al.; 2017]. See also section ‘Statistical RNN (StatsRNN)’ below.
(122) The neural network preferably has an input vector and an output vector with a dimension of K (as a time frame of the input signal IN(k,t) (or SNR(k,t)) and a corresponding (estimated) noise reduction gain G(k,t)), where K is the number of frequency bands of the analysis path of the hearing device.
(124) Furthermore, the elements may be reorganized such that adjacent frequency bands are spread across different groups (in order not to lose the fundamental information from lower bands if entire vector groups are discarded).
(125) Several ways of processing the elements in hardware are possible. We can either process the elements sequentially, one by one, or we can use vectorized operations (operating on a set of values (a vector) at one time, providing a speed-up), e.g. in groups of four elements. In this case, we would need to decide whether we want to process or discard all four elements, since they would be received and output in vectors. This could be based on determining whether most (or some number) of the values are above a threshold/among the top N_p values in a vector. If this is the case, the entire vector will be processed. If not, all values will be discarded. It is then important to consider how the individual frequency bands would be grouped in vectors. Grouping the frequency bands in the original order (band 1, 2, 3, 4, etc.) could result in losing e.g. lower frequency information if most values within the vector were too small. Therefore, regrouping could be considered (e.g. based on experiments), as sketched below.
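The following sketch illustrates one possible all-or-nothing group decision and a simple interleaved regrouping of frequency bands; the group size of four, the majority criterion and the interleaving scheme are illustrative assumptions, not prescribed by the present disclosure.

```python
import numpy as np

def interleave(n, group=4):
    """Reorder band indices so adjacent frequency bands fall in different
    groups (assumes n divisible by group), e.g. n = 16, group = 4 gives
    [0, 4, 8, 12, 1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15]."""
    return np.arange(n).reshape(-1, group).T.ravel()

def group_mask(delta, n_p, group=4):
    """All-or-nothing processing of `delta` in vectors of `group` elements:
    a group is processed only if at least half of its elements are among
    the top n_p magnitudes overall; otherwise the whole group is discarded."""
    n = delta.size
    top = np.zeros(n, dtype=bool)
    if n_p > 0:
        top[np.argpartition(np.abs(delta), -n_p)[-n_p:]] = True
    keep = np.zeros(n, dtype=bool)
    for g in range(0, n, group):
        size = min(group, n - g)
        if top[g:g + size].sum() * 2 >= size:   # majority criterion
            keep[g:g + size] = True             # process the entire vector
    return keep
```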
Combination of Peak GRU and Delta GRU
(126) Furthermore, the Peak GRU RNN and Delta GRU RNN approaches may be combined, i.e. we can first determine whether the values are above a specific delta threshold, and then apply additional filtering based on the Peak GRU RNN (or vice versa). This combines the advantages of both approaches, i.e. to only update while exceeding a threshold, but not to update more than N_p neurons.
(127) A combination of the Peak GRU RNN and Delta GRU RNN may comprise the option that, if N_p is not reached in one layer (based on a Delta GRU threshold), the ‘saved’ calculations may be moved to another layer (or time step) where N_p has been exceeded (presuming the neural network has more than one Peak GRU layer).
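A sketch of one way the two criteria may be combined: the delta threshold Θ acts as a first filter, and the number of updates is then capped at N_p. The function name and the tie-breaking via argpartition are illustrative assumptions.

```python
import numpy as np

def delta_then_peak(delta, theta, n_p):
    """Combined criterion: update an element only if its magnitude exceeds
    the delta threshold `theta`, and update at most `n_p` such elements
    (the largest ones) per time step."""
    if n_p <= 0:
        return np.zeros_like(delta)
    mag = np.abs(delta)
    keep = mag > theta                       # Delta GRU criterion
    if keep.sum() > n_p:                     # Peak GRU cap
        ranked = np.where(keep, mag, -1.0)   # non-candidates rank lowest
        idx = np.argpartition(ranked, -n_p)[-n_p:]
        keep = np.zeros(delta.shape, dtype=bool)
        keep[idx] = True
    return np.where(keep, delta, 0.0)
```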
Statistical RNN (StatsRNN) (Training, and Setting of Peak and/or Threshold Values, N_p, Θ)
(128) Another way to obtain/approximate top elements, as in the Peak GRU RNN, is to look at the data from a statistical perspective, to investigate whether the |x(t) − x̂(t−1)| and |h(t−1) − ĥ(t−2)| computations, which determine whether the elements should be zeroed out, have any underlying statistical property that could be exploited. We can create histograms of the above x and h computations (the absolute values of the subtractions) separately, across all the acoustic environments or for each environment individually, based on the training dataset, and explore whether the data has random behavior or not.
(129) Furthermore, we can apply adaptive thresholding, i.e. the thresholds for x and h are not static and can be adjusted e.g. from one time step to another (either across all the environments or for each environment individually). If too many elements are being processed in the current time step t with the current thresholds (more elements of the vector are above the threshold than necessary, e.g. 80 instead of 50 out of 100), the threshold can be increased in the next time step. The new threshold value may be determined based on the histogram again, i.e. we may take the boundary of the adjacent histogram bar to the right (bigger value) of the current boundary, or any other boundary of the histogram that is greater than the current one. Regarding the elements themselves, we can either proceed with doing further computations with all of them or perform additional filtering to obtain a desired (or close to desired) number of elements (50 out of 100, dropping 30 elements from the initial 80). This filtering may be done based on sorting and selecting the biggest values, selecting the first/last elements that were above the threshold, selecting random elements that were above the threshold, etc. Alternatively, the elements which are above the threshold, but not selected, may be prioritized in the next time step.
(130) Likewise, if too few elements are being processed in the current time step t (e.g. 20 instead of 50 out of 100), the threshold can be lowered in the next time step and may likewise be determined based on the histogram. We can take the boundary of the adjacent histogram bar to the left (smaller value) of the current boundary, or any other boundary of the histogram that is smaller than the current one. Regarding the elements themselves, we may either proceed with doing further computations with the few obtained elements or perform an additional selection of elements to obtain a desired (or close to desired) number of elements (in this example, an additional 30 elements to obtain 50 out of 100). This selection may be done based on sorting and selecting the biggest values, or by setting a second, lower threshold already in the current time step and selecting e.g. the first/last/random elements that are above this threshold, etc., from the sets of elements that were initially discarded using the original thresholds.
(131) The starting threshold itself may be (and ideally is) based on the histogram (it could be random as well), which provides the best starting point and a more predictable way of nudging the thresholds. The same x and h thresholds can be used for all acoustic environments, or different x and h thresholds can be applied to each environment individually. This method may be used primarily for the StatsRNN, but also for the DeltaRNN (and modified versions thereof), in both software and hardware implementations.
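A sketch of such histogram-based nudging, assuming precomputed histogram bin edges of the magnitudes from training data; the target count and the one-bin step per time frame are illustrative choices, and the function name is hypothetical.

```python
import numpy as np

def adapt_threshold(theta, mags, target, bin_edges):
    """Nudge the delta threshold toward a target number of processed
    elements per time step.

    theta     : current threshold
    mags      : |x(t) - x_hat(t-1)| (or |h(t-1) - h_hat(t-2)|) for this step
    target    : desired number of elements above the threshold
    bin_edges : histogram bin boundaries of such magnitudes, precomputed
                from training data (sorted ascending)
    """
    processed = int((mags > theta).sum())
    i = int(np.searchsorted(bin_edges, theta))
    if processed > target and i + 1 < len(bin_edges):
        return bin_edges[i + 1]   # too many elements: raise the threshold
    if processed < target and i >= 1:
        return bin_edges[i - 1]   # too few elements: lower the threshold
    return theta
```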
(134) The estimated gain may be applied to the target estimate (TE(k,m)) instead of to the output of the analysis filter bank (the electric input signal, IN(k,m)). This is especially relevant in the case where the target estimate is a beamformed signal. Instead of applying the noise reduction gains (G(k,m)) from the neural network (NN) to the electric input signal (IN(k,m)), they may be applied to a further processed version of the electric input signal, as e.g. described in the following.
(135) In the case of multiple microphones (cf. e.g. M1, M2 in the figures), the target estimate and the noise estimate may be provided by beamformers.
(137) The beamformers may be fixed or adaptive. Multiple target cancelling beamformers may be used as input features to the neural network (NN) simultaneously. E.g. two target cancelling beamformers, each having a single null direction, but having nulls pointing towards different possible targets.
(138) The magnitudes, or the squared magnitudes, or the logarithm of the magnitudes of the target and the noise estimates may be used as input to the neural network (NN). The outputs of the neural network (NN) may comprise real or complex valued gains, or separate real valued gains and real valued phases.
(139) The maximum amount of noise reduction provided by the neural network may be controlled by level, or modulation (e.g. SNR), or a degree of sparsity of the inputs to the neural network. A degree of sparsity may e.g. be represented by a degree of overlap in time and/or frequency of background noise with (target) speech.
(141) It is intended that the structural features of the devices described above, either in the detailed description and/or in the claims, may be combined with steps of the method, when appropriately substituted by a corresponding process.
(142) As used, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well (i.e. to have the meaning “at least one”), unless expressly stated otherwise. It will be further understood that the terms “includes,” “comprises,” “including,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. It will also be understood that when an element is referred to as being “connected” or “coupled” to another element, it can be directly connected or coupled to the other element, but an intervening element may also be present, unless expressly stated otherwise. Furthermore, “connected” or “coupled” as used herein may include wirelessly connected or coupled. As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items. The steps of any disclosed method are not limited to the exact order stated herein, unless expressly stated otherwise.
(143) It should be appreciated that reference throughout this specification to “one embodiment” or “an embodiment” or “an aspect” or features included as “may” means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure. Furthermore, the particular features, structures or characteristics may be combined as suitable in one or more embodiments of the disclosure. The previous description is provided to enable any person skilled in the art to practice the various aspects described herein. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects.
(144) The claims are not intended to be limited to the aspects shown herein but are to be accorded the full scope consistent with the language of the claims, wherein reference to an element in the singular is not intended to mean “one and only one” unless specifically so stated, but rather “one or more.” Unless specifically stated otherwise, the term “some” refers to one or more.
(145) The Peak GRU RNN algorithm is exemplified in the field of audio processing, in particular in a noise reduction system framework of a hearing aid or headset. The algorithm can, however, be applied to other tasks in a hearing aid or headset, or in other audio processing devices, e.g. for direction-of-arrival estimation, feedback path estimation, (own) voice activity detection, keyword detection, or other (e.g. acoustic) scene classification. But the algorithm may also be applied to other fields than audio processing, e.g. the processing of images or other data comprising a certain amount of redundancy (e.g. relatively slow variation over time, e.g. relative to a sampling time t_S = 1/f_S, where f_S is the sampling frequency of such data), e.g. to the processing of financial data, weather data, etc.
REFERENCES
(146) [Neil et al.; 2018] Daniel Neil, Jun Haeng Lee, Tobi Delbruck, Shih-Chii Liu, “DeltaRNN: A Power-efficient Recurrent Neural Network Accelerator,” in FPGA '18: Proceedings of the 2018 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, February 2018, pages 21-30, https://doi.org/10.1145/3174243.3174261
EP3694229A1 (Oticon), published 12.08.2020
[Kostadinov; 2018] https://towardsdatascience.com/understanding-gru-networks-2ef37df6c9be