Signal processing method for cochlear implant
10245430 · 2019-04-02
Assignee
Inventors
CPC classification
H04R2225/67
ELECTRICITY
G10L19/00
PHYSICS
International classification
A61N1/05
HUMAN NECESSITIES
G10L19/00
PHYSICS
Abstract
A signal processing method for a cochlear implant is performed by a speech processor and comprises a noise reduction stage and a signal compression stage. The noise reduction stage efficiently reduces noise in an electrical speech signal of normal speech. The signal compression stage compresses the denoised signal into a predetermined amplitude range to enhance the signals that stimulate the cochlear nerves of a patient with hearing loss. A patient who uses a cochlear implant performing the signal processing method of the present disclosure can accurately hear normal speech.
Claims
1. A signal processing method for a cochlear implant, the cochlear implant comprising a microphone and a speech processor, the signal processing method being executed by the speech processor and comprising: receiving an electrical speech signal from the microphone; segmenting the electrical speech signal into a plurality of time-sequenced noisy frames; reducing noise in each of the plurality of time-sequenced noisy frames to obtain a plurality of clean frames, the plurality of clean frames comprising a (t-1)-th clean frame x.sub.t-1 and a t-th clean frame x.sub.t; obtaining a (t-1)-th compression factor α.sub.t-1 according to the (t-1)-th clean frame x.sub.t-1; obtaining a t-th compression factor α.sub.t for the t-th clean frame x.sub.t according to the (t-1)-th compression factor α.sub.t-1 and the t-th clean frame x.sub.t; obtaining a t-th output frame z.sub.t based on the t-th compression factor α.sub.t; and outputting the t-th output frame z.sub.t.
2. The signal processing method of claim 1, further comprising: obtaining a (t-1)-th amplitude envelope of the (t-1)-th clean frame x.sub.t-1 and calculating a (t-1)-th upper boundary and a (t-1)-th lower boundary of the (t-1)-th amplitude envelope; wherein the (t-1)-th compression factor α.sub.t-1 for the (t-1)-th clean frame x.sub.t-1 is obtained based on the (t-1)-th upper boundary and the (t-1)-th lower boundary.
3. The signal processing method of claim 2, further comprising: obtaining a t-th amplitude envelope of the t-th clean frame x.sub.t and calculating a t-th upper boundary and a t-th lower boundary of the t-th amplitude envelope; wherein the t-th compression factor α.sub.t for the t-th clean frame x.sub.t is obtained based on the (t-1)-th compression factor α.sub.t-1, the t-th upper boundary, and the t-th lower boundary.
4. The signal processing method of claim 3, wherein when the t-th output frame z.sub.t falls within a range between the t-th upper boundary and the t-th lower boundary, the t-th compression factor α.sub.t is calculated by: α.sub.t=α.sub.t-1+ε.sub.1, wherein ε.sub.1 is a positive value.
5. The signal processing method of claim 3, wherein when the t-th output frame z.sub.t falls beyond a range between the t-th upper boundary and the t-th lower boundary, the t-th compression factor α.sub.t is calculated by: α.sub.t=α.sub.t-1+ε.sub.2, wherein ε.sub.2 is a negative value.
6. The signal processing method of claim 1, wherein the t-th output frame z.sub.t is obtained by: z.sub.t=α.sub.t·x.sub.t.
7. The signal processing method of claim 1, wherein the t-th clean frame x.sub.t is calculated by:
x.sub.t=InvF{(W.sub.2h(F{y.sub.t})+b.sub.2)} wherein F{ } is a Fourier transform function to transfer the t-th noisy frame y.sub.t from the time domain to the frequency domain; h( ) is a function including W.sub.1 and b.sub.1; W.sub.1 and W.sub.2 are default connection weights in the frequency domain; b.sub.1 and b.sub.2 are default vectors of biases of hidden layers of a DDAE-based NR structure in the frequency domain; and InvF{ } is an inverse Fourier transform function.
8. The signal processing method of claim 7, wherein the h(F{y.sub.t}) is calculated by: h(F{y.sub.t})=σ(W.sub.1F{y.sub.t}+b.sub.1), wherein σ( ) is a logistic sigmoid function.
9. The signal processing method of claim 1, wherein the t-th upper boundary (UB) is calculated by UB=
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present disclosure is illustrated by way of embodiments and accompanying drawings.
DETAILED DESCRIPTION
(9) With reference to the accompanying drawings, a cochlear implant comprises a microphone 11, a speech processor 12, a transmitter 13, a receiver 14, a pulse generator 15, and an electrode array 16.
(10) The microphone 11 is an acoustic-to-electric transducer that converts a normal speech sound into an electrical speech signal. The speech processor 12 receives the electrical speech signal and converts the electrical speech signal into multiple output sub-speech signals in different frequencies. The transmitter 13 receives the output sub-speech signals from the speech processor 12 and wirelessly sends the output sub-speech signals to the receiver 14. The pulse generator 15 receives the output sub-speech signals from the receiver 14, generates different electrical pulses based on the output sub-speech signals, and provides the electrical pulses to the electrode array 16. The electrode array 16 includes a plurality of electrodes 161, and each of the electrodes 161 is electrically connected to different cochlear nerves of the patient's inner ear. The electrodes 161 output the electrical pulses to stimulate the cochlear nerves, such that the patient can hear something approximating normal speech.
(11) The present disclosure provides a signal processing method for a cochlear implant and a cochlear implant using the same. The signal processing method is performed by a speech processor of the cochlear implant, comprises a noise reduction stage and a signal compression stage, and is configured to compress an input speech signal into a predetermined amplitude range.
(12) In more detail, the speech processor 12 comprises a noise reduction unit 126 and a plurality of channels. Each of the channels comprises a band-pass filter 121, an envelope detection unit 122, and a signal compressor 123, and the noise reduction unit 126 provides noise-reduced frames to the band-pass filters 121 of the channels.
(13) Based on the above configuration, the band-pass filter 121 of each one of the channels sequentially receives the frames of the electrical speech signal from the noise reduction unit 126. The band-pass filter 121 of each one of the channels can preserve elements of each one of the frames of the electrical speech signal within a specific frequency band and remove elements beyond the specific frequency band from such frame. The specific frequency bands of the band-pass filters 121 of the channels are different from each other. Afterwards, the amplitude envelopes of the frames of the electrical speech signal are detected by the envelope detection units 122 and are provided to the signal compressors 123.
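As one way to realize the envelope detection units 122 described above, the amplitude envelope of a band-passed frame can be estimated by full-wave rectification followed by a moving-average low-pass filter. The following sketch illustrates this common technique; the function name, the window length, and the rectify-and-average method itself are illustrative assumptions, since the disclosure does not specify how the envelope is detected.

```python
def detect_envelope(frame, window=8):
    """Estimate the amplitude envelope of one band-passed frame by
    full-wave rectification followed by a moving-average low-pass."""
    rectified = [abs(s) for s in frame]       # full-wave rectification
    envelope = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)           # causal averaging window
        segment = rectified[lo:i + 1]
        envelope.append(sum(segment) / len(segment))
    return envelope
```

The envelope has the same length as the input frame, so it can be passed sample-by-sample to the signal compressor 123.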
(14) The present disclosure relates to a noise reduction stage performed by the noise reduction unit 126 and a signal compression stage performed by the signal compressor 123. The noise reduction stage and the signal compression stage are described below.
(15) 1. Noise Reduction Stage
(16) The noise reduction stage is performed by the noise reduction unit 126 using a deep denoising autoencoder (DDAE)-based noise reduction (NR) structure. The DDAE-based NR structure is widely used in building deep neural architectures for robust feature extraction and classification. In brief, the DDAE-based NR structure comprises an input layer 21, hidden layers 22, and an output layer 23.
(17) The input layer 21 receives an electrical speech signal y from the microphone 11 and segments the electrical speech signal y into a first noisy frame y.sub.1, a second noisy frame y.sub.2, . . . , a t-th noisy frame y.sub.t, . . . , and a T-th noisy frame y.sub.T, wherein T is a length of the current utterance. In other words, the present disclosure may segment an input speech signal, such as the electrical speech signal y, into a plurality of time-sequenced frames, such as the noisy frames y.sub.1, y.sub.2, . . . , and y.sub.T. For the elements in the t-th noisy frame y.sub.t, the noise reduction unit 126 reduces noise in the t-th noisy frame y.sub.t to form a t-th clean frame x.sub.t. Afterwards, the output layer 23 sends the t-th clean frame x.sub.t to the channels of the speech processor 12.
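The segmentation of the electrical speech signal y into the time-sequenced frames y.sub.1, . . . , y.sub.T performed by the input layer 21 can be sketched as follows; the frame length and the optional hop size are illustrative parameters not specified in the disclosure.

```python
def segment_into_frames(signal, frame_len, hop=None):
    """Split an input signal y into time-sequenced frames y_1 ... y_T.
    hop defaults to frame_len (non-overlapping frames, the simplest case)."""
    hop = hop or frame_len
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append(signal[start:start + frame_len])
    return frames
```

With a hop smaller than the frame length, the same routine produces overlapping frames, which is common in speech processing.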
(18) A relationship between the t-th noisy frame y.sub.t and the t-th clean frame x.sub.t can be represented as:
x.sub.t=W.sub.2h(y.sub.t)+b.sub.2 (equation (1))
wherein h(y.sub.t) is a function including W.sub.1 and b.sub.1 in the time domain; W.sub.1 and W.sub.2 are default connection weights in the time domain; and b.sub.1 and b.sub.2 are default vectors of biases of the hidden layers 22 of the DDAE-based NR structure in the time domain.
(19) In another embodiment, the relationship between the t-th noisy frame y.sub.t and the t-th clean frame x.sub.t can be represented as:
x.sub.t=InvF{(W.sub.2h(F{y.sub.t})+b.sub.2)} (equation (2))
wherein F{ } is a Fourier transform function to transfer the t-th noisy frame y.sub.t from the time domain to the frequency domain; h( ) is a function including W.sub.1 and b.sub.1; W.sub.1 and W.sub.2 are default connection weights in the frequency domain; b.sub.1 and b.sub.2 are default vectors of biases of the hidden layers 22 of the DDAE-based NR structure in the frequency domain; and InvF{ } is an inverse Fourier transform function to obtain the t-th clean frame x.sub.t.
(20) According to experiment results, the t-th clean frame x.sub.t obtained using the Fourier transform and the inverse Fourier transform as mentioned above has better performance than that obtained without the Fourier transform and the inverse Fourier transform.
(21) For the time-domain-based method shown in equation (1), h(y.sub.t) can be represented as:
(22) h(y.sub.t)=σ(W.sub.1y.sub.t+b.sub.1) (equation (3))
wherein σ( ) is a logistic sigmoid function, σ(w)=1/(1+e.sup.-w).
(23) For the frequency-domain-based method shown in equation (2), h(F{y.sub.t}) can be represented as:
(24) h(F{y.sub.t})=σ(W.sub.1F{y.sub.t}+b.sub.1) (equation (4))
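A minimal sketch of the time-domain forward pass of equation (1), assuming a single hidden layer with a logistic-sigmoid activation; the weights W.sub.1, W.sub.2 and biases b.sub.1, b.sub.2 stand for the preset values obtained from training, and the matrices are represented as plain lists for illustration.

```python
import math

def sigmoid(w):
    """Logistic sigmoid: 1 / (1 + e^-w)."""
    return 1.0 / (1.0 + math.exp(-w))

def matvec(M, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(m_ij * v_j for m_ij, v_j in zip(row, v)) for row in M]

def ddae_denoise(y_t, W1, b1, W2, b2):
    """One forward pass of the time-domain DDAE mapping:
    x_t = W2 * h(y_t) + b2, with h(y_t) = sigmoid(W1 * y_t + b1)."""
    h = [sigmoid(a + b) for a, b in zip(matvec(W1, y_t), b1)]
    return [a + b for a, b in zip(matvec(W2, h), b2)]
```

The frequency-domain variant of equation (2) would apply the same mapping to the Fourier coefficients of y.sub.t and invert the transform afterwards.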
(25) The parameters W.sub.1, W.sub.2, b.sub.1, and b.sub.2, whether in the time domain or in the frequency domain, are preset in the speech processor 12.
(26) For example, in time domain, the parameters including W.sub.1, W.sub.2, b.sub.1 and b.sub.2 in equations (1) and (3) are obtained from a training stage. Training data includes a clean speech sample u and a corresponding noisy speech sample v. Likewise, the clean speech sample u is segmented into several clean frames u.sub.1, u.sub.2, . . . , u.sub.T, and the noisy speech sample v is segmented into several noisy frames v.sub.1, v.sub.2, . . . , v.sub.T, wherein T is a length of a training utterance.
(27) The parameters W.sub.1, W.sub.2, b.sub.1 and b.sub.2 of equation (1) and equation (3) are optimized based on the following objective function:
(28) F(θ)=(1/T)Σ.sub.t=1.sup.T∥u.sub.t−ũ.sub.t∥.sup.2+η(∥W.sub.1∥.sup.2+∥W.sub.2∥.sup.2) (equation (5))
(29) In equation (5), θ is a parameter set {W.sub.1, W.sub.2, b.sub.1, b.sub.2}, T is the total number of the clean frames u.sub.1, u.sub.2, . . . , u.sub.T, and η is a constant used to control the tradeoff between reconstruction accuracy and regularization on the connection weights (for example, η can be set as 0.0002). The noisy frames v.sub.1, v.sub.2, . . . , v.sub.T of the training data and the training parameters W.sub.1-test, W.sub.2-test, b.sub.1-test, and b.sub.2-test can be substituted into equation (1) and equation (3) to obtain a reference frame ũ.sub.t. When the training parameters W.sub.1-test, W.sub.2-test, b.sub.1-test, and b.sub.2-test make the reference frame ũ.sub.t approximate the clean frame u.sub.t, such training parameters are taken as the parameters W.sub.1, W.sub.2, b.sub.1 and b.sub.2 of equation (1) and equation (3). When the noisy speech sample v approximates the electrical speech signal y, the training result of the parameters W.sub.1, W.sub.2, b.sub.1 and b.sub.2 is optimal. The optimization of equation (5) can be done by any unconstrained optimization algorithm; for example, a Hessian-free algorithm can be applied in the present disclosure.
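The training objective can be evaluated as below, assuming it takes the common form of mean squared reconstruction error over the T frames plus L2 regularization on the connection weights weighted by the constant η; the reference frames would come from a forward pass over the noisy training frames, and the function name is illustrative.

```python
def objective(clean_frames, reference_frames, W1, W2, eta=0.0002):
    """Mean squared reconstruction error between clean frames u_t and
    reference frames, plus L2 regularization on the connection weights."""
    T = len(clean_frames)
    mse = sum(
        sum((u - r) ** 2 for u, r in zip(u_t, r_t))
        for u_t, r_t in zip(clean_frames, reference_frames)
    ) / T
    reg = (sum(w ** 2 for row in W1 for w in row)
           + sum(w ** 2 for row in W2 for w in row))
    return mse + eta * reg
```

An unconstrained optimizer then searches for the weights and biases that minimize this quantity.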
(30) After training, optimized parameters including W.sub.1, W.sub.2, b.sub.1 and b.sub.2 are obtained, to be applied to equation (1) and equation (3) for real noise reduction application.
(31) In the frequency domain, the parameters W.sub.1, W.sub.2, b.sub.1 and b.sub.2 of equation (2) and equation (4) are optimized based on the following objective function:
(32) F(θ)=(1/T)Σ.sub.t=1.sup.T∥u.sub.t−ũ.sub.t∥.sup.2+η(∥W.sub.1∥.sup.2+∥W.sub.2∥.sup.2) (equation (6))
(33) In equation (6), θ is a parameter set {W.sub.1, W.sub.2, b.sub.1, b.sub.2}, T is the total number of the clean frames u.sub.1, u.sub.2, . . . , u.sub.T, and η is a constant used to control the tradeoff between reconstruction accuracy and regularization on the connection weights (for example, η can be set as 0.0002). The noisy frames v.sub.1, v.sub.2, . . . , v.sub.T of the training data and the training parameters W.sub.1-test, W.sub.2-test, b.sub.1-test, and b.sub.2-test can be substituted into equation (2) and equation (4) to obtain a reference frame ũ.sub.t. When the training parameters W.sub.1-test, W.sub.2-test, b.sub.1-test, and b.sub.2-test make the reference frame ũ.sub.t approximate the clean frame u.sub.t, such training parameters are taken as the parameters W.sub.1, W.sub.2, b.sub.1 and b.sub.2 of equation (2) and equation (4). When the noisy speech sample v approximates the electrical speech signal y, the training result of the parameters W.sub.1, W.sub.2, b.sub.1 and b.sub.2 is optimal. The optimization of equation (6) can be done by any unconstrained optimization algorithm; for example, a Hessian-free algorithm can be applied in the present disclosure.
(34) After training, optimized parameters including W.sub.1, W.sub.2, b.sub.1 and b.sub.2 are obtained, to be applied to equation (2) and equation (4) for real noise reduction application.
(35) With reference to the experiment results, the noise reduction performance of the DDAE-based NR structure of the present disclosure is compared with those of a conventional log-MMSE estimator and a KLT estimator.
(36) According to the experiment results mentioned above, the signal performances of the conventional log-MMSE estimator and the KLT estimator are not as good as those obtained by the procedures of the present disclosure. The procedures of the present disclosure have better noise-reduction efficiency.
(37) 2. Signal Compression Stage
(38) With reference to the signal compression stage, the signal compressor 123 of each of the channels compresses the amplitude envelope of each clean frame into the predetermined amplitude range, as described below.
(39) The signal compressor 123 of the present disclosure comprises a compression unit 127, a boundary calculation unit 128, and a compression-factor-providing unit 129. The compression unit 127 and the boundary calculation unit 128 are connected to the envelope detection unit 122 to receive the amplitude envelope 30 of the t-th clean frame x.sub.t in real-time. The boundary calculation unit 128 calculates an upper boundary UB and a lower boundary LB of the amplitude envelope 30 of the t-th clean frame x.sub.t as follows:
UB=
LB=
wherein .sub.0 is an initial value.
(40) The compression unit 127 receives the amplitude envelope 30 of the t-th clean frame x.sub.t and outputs a t-th output frame z.sub.t. Inputs of the compression-factor-providing unit 129 are connected to an input of the compression unit 127, an output of the compression unit 127, and an output of the boundary calculation unit 128, such that the compression-factor-providing unit 129 receives the upper boundary UB and the lower boundary LB from the boundary calculation unit 128 and the t-th output frame z.sub.t from the compression unit 127. An output of the compression-factor-providing unit 129 is connected to the input of the compression unit 127, such that the compression-factor-providing unit 129 provides a compression factor α.sub.t to the compression unit 127. The compression factor α.sub.t is determined according to a previous compression factor α.sub.t-1, the upper boundary UB, the lower boundary LB, and the t-th output frame z.sub.t. In brief, the compression factor α.sub.t for a frame is determined based on the frame's amplitude upper boundary UB and lower boundary LB. When the t-th output frame z.sub.t is in a monitoring range between the upper boundary UB and the lower boundary LB, the compression factor α.sub.t can be expressed as:
α.sub.t=α.sub.t-1+ε.sub.1 (equation (9))
wherein ε.sub.1 is a positive value (for example, ε.sub.1=1).
(41) In contrast, when the t-th output frame z.sub.t is beyond the monitoring range, the compression factor .sub.t can be expressed as:
α.sub.t=α.sub.t-1+ε.sub.2 (equation (10))
wherein ε.sub.2 is a negative value (for example, ε.sub.2=−0.1).
(42) The t-th output frame z.sub.t can be expressed as:
z.sub.t=α.sub.t·x.sub.t (equation (11))
(43) According to equations (9) and (10), a present compression factor α.sub.t is obtained from a previous compression factor α.sub.t-1. It can be understood that the compression factor for the next frame can be modified based on the next frame's amplitude upper boundary UB and lower boundary LB. According to equation (11), the t-th output frame z.sub.t is repeatedly adjusted by the t-th clean frame x.sub.t and the results of calculating UB, LB, and α.sub.t. According to the experiment results as illustrated in the accompanying drawings, the signal compression capability of the present disclosure is good.
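The adaptive update of equations (9) and (10) can be sketched as the following loop, in which each clean frame is reduced to a single envelope sample for simplicity, z.sub.t=α.sub.t·x.sub.t is used as a simplified form of equation (11), and the values of alpha0, eps1, and eps2 as well as the per-frame [LB, UB] boundaries are illustrative assumptions.

```python
def compress_frames(clean_frames, boundaries, alpha0=1.0, eps1=0.1, eps2=-0.1):
    """Adaptively compress a sequence of clean frames x_t into output
    frames z_t, carrying the compression factor alpha from frame to frame."""
    alpha = alpha0
    outputs = []
    for x_t, (lb, ub) in zip(clean_frames, boundaries):
        z_t = alpha * x_t          # simplified form of equation (11)
        if lb <= z_t <= ub:
            alpha += eps1          # equation (9): eps1 is a positive value
        else:
            alpha += eps2          # equation (10): eps2 is a negative value
        outputs.append(z_t)
    return outputs
```

While the output stays inside the monitoring range the factor keeps growing, and once an output falls outside the range the factor is pulled back, which keeps the compressed signal within the predetermined amplitude range.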