A transformer noise suppression method

20170249955 · 2017-08-31

Abstract

The noise suppression method for an individual active noise reduction system comprises the steps of: (1) receiving a converted initial noise digital signal as the input signal of a BP neural network; (2) processing the input signal to generate a secondary digital signal; (3) outputting the secondary digital signal to a loudspeaker to generate secondary noise; (4) receiving the residual noise digital signal obtained by superimposing the initial noise and the secondary noise, and judging whether the residual noise digital signal has remained unchanged for a set number of consecutive times; if so, continuing to output the secondary digital signal; (5) if not, optimizing and adjusting the BP neural network parameters under the optimality principle of minimizing the amplitude of the residual noise digital signal, taking the residual noise digital signal of the previous step as the new input signal, and returning to step (2).

Claims

1. A method of suppressing noise of a transformer, characterized by the following steps: using a system consisting essentially of a controller including an intelligent chip, an initial noise measuring microphone, a residual noise measuring microphone, and a loudspeaker, wherein the controller performs noise suppression in the following steps: receiving, in a first step, an initial noise digital signal transmitted and converted by the initial noise measuring microphone in the vicinity of a noise source as an input signal of a BP (Back Propagation) neural network; processing, in a second step, the input signal by the BP neural network to generate a secondary digital signal whose phase deviates from that of the input signal; converting, in a third step, the secondary digital signal into an analog signal, amplifying it, and outputting it to the loudspeaker to generate a secondary noise having an inhibitory effect on the initial noise; receiving, in a fourth step, the residual noise digital signal obtained by superimposing the initial noise picked up by the residual noise measuring microphone and the secondary noise and converting the result, and determining whether the amplitude of the residual noise digital signal has remained unchanged for a set number of consecutive times; if so, keeping the secondary digital signal output; otherwise, proceeding to the next step; and optimizing and adjusting, in a fifth step, the BP neural network parameters under the optimization principle of minimizing the residual noise digital signal amplitude, taking the initial noise digital signal of the next time as the new BP neural network input signal, and then repeating from the second step, wherein the first to fifth steps are performed sequentially in that order.

2. The transformer noise suppression method according to claim 1, characterized in that the optimization and adjustment of the parameters of the BP neural network in the fifth step is carried out by an improved particle swarm optimization algorithm which follows the procedure below: Step 1: determining the dimension of the particles according to the structure of the BP neural network and randomly generating N initial particles; Step 2: real-number encoding, in a predetermined order, the weight coefficient ω.sub.hi(n) between input layer neuron i and hidden layer neuron h of the neural network, the weight coefficient W.sub.h(n) between hidden layer neuron h and the output layer, the threshold value GE.sub.h(n) of hidden layer neuron h, and the threshold value ge(n) of the output layer neuron at the nth time, thereby forming the corresponding real-number particles, wherein each particle corresponds to a group of network parameters and the encoding format is: TABLE-US-00002 ω.sub.11 ω.sub.12 . . . ω.sub.1K W.sub.1 GE.sub.1 . . . ω.sub.J1 ω.sub.J2 . . . ω.sub.JK W.sub.J GE.sub.J ge Step 3: selecting the residual noise signal e(n) of the system as the judgment criterion of the network parameters, and evaluating each particle position coding with the following fitness function F(n):

F(n)=(x(n)h.sub.1(n)−y(n)h.sub.2(n)).sup.2/2   [1]

wherein in formula [1] x(n) is the initial noise digital signal input at the nth time; H.sub.1(z) and H.sub.2(z) are the transfer functions of the primary channel and the secondary channel respectively; y(n) is the digital signal output by the network at the nth time, given by:

y(n)=Σ.sub.h=1.sup.J W.sub.h(n)f(Σ.sub.i=1.sup.K ω.sub.hi(n)x.sub.i(n)−GE.sub.h(n))−ge(n)   [2]

wherein in formula [2] x.sub.i(n)=x(n−i+1) represents the input of neuron i in the input layer; f(x)=2/(1+exp(−2x))−1 denotes the activation function of the hidden layer of the network; K denotes the total number of neurons in the input layer; J denotes the total number of neurons in the hidden layer; Step 4: calculating the fitness value of each particle according to the particle position coding and formula [1]; comparing each particle's current fitness value with its fitness value before the iteration and, if the former is smaller, updating the particle's optimal position P.sub.i=[p.sub.i1, p.sub.i2, . . . , p.sub.i(J(K+2)+1)], otherwise leaving the particle's optimal position unchanged; at the same time, comparing each particle's fitness value with the fitness value of the particle group before the iteration and, if the former is smaller, updating the optimal position of the entire particle group P.sub.g=[p.sub.g1, p.sub.g2, . . . , p.sub.g(J(K+2)+1)], otherwise leaving the optimal position of the entire particle group unchanged; Step 5: stopping the iteration if the number of iterations has reached the maximum iteration number, and decoding the optimal position of the particle swarm to obtain the corresponding BP neural network parameters; otherwise proceeding to Step 6; Step 6: defining the evolution degree of the particle swarm:

σ(k)=|(f.sub.gbest(k)−f.sub.avg(k))/(f.sub.gbest(1)−f.sub.avg(1))|   [3]

wherein in formula [3] f.sub.gbest(k) and f.sub.avg(k) are respectively the global optimal fitness value and the average fitness value of the particle group at the kth iteration; defining the dynamically changing inertia factor as:

w(k)=(1−1/(1+exp(σ(k))))w.sub.0   [4]

wherein in formula [4] w.sub.0 is an initial value; defining the adaptive mutation probability as:

p.sub.m(k)={(p.sub.mmin+(p.sub.mmax−p.sub.mmin)|f.sub.gbest(k)−f.sub.avg(k)|/|f.sub.gbest(k)−f.sub.i(k)|)ξ(k), if f.sub.i(k)<f.sub.avg(k); p.sub.mmax ξ(k), if f.sub.i(k)≥f.sub.avg(k)}   [5]

ξ(k)=1/(1+σ(k))∈[0,1]   [6]

wherein p.sub.m(k) is the mutation probability at the kth iteration of the particle swarm; p.sub.mmax and p.sub.mmin are the maximum and minimum values of the mutation rate respectively; f.sub.i(k) is the fitness value of particle i at the kth iteration; ξ(k) is a scaling factor given by formula [6]; defining the new velocity and position after particle mutation as:

v.sub.id(k+1)=w(k)v.sub.id(k)+c.sub.1r.sub.1(p.sub.id(k)−x.sub.id(k))+c.sub.2r.sub.2(p.sub.gd(k)−x.sub.id(k))   [7]

X.sub.i(k+1)=X.sub.i(k)+p.sub.m(k)(X.sub.R.sub.1(k)−X.sub.R.sub.2(k))+V.sub.i(k)   [8]

wherein v.sub.id(k) is the velocity of the dth dimension of particle i in the kth iteration; V.sub.i(k) is the velocity vector of particle i in the kth iteration; p.sub.id(k) is the best position of the dth dimension of particle i at the kth iteration; p.sub.gd(k) is the best position of the dth dimension of the whole population at the kth iteration; x.sub.id(k) is the position of the dth dimension of particle i in the kth iteration; X.sub.i(k) is the position vector of particle i at the kth iteration; c.sub.1 and c.sub.2 are nonnegative acceleration constants; r.sub.1 and r.sub.2 are random numbers uniformly distributed in the range [0,1]; R.sub.1 and R.sub.2 are unequal positive integers in the range [1, N]; and Step 7: calculating the inertia factor and particle mutation rate of the current iteration by formulas [4] and [5], then updating the velocity and position of all particles using formulas [7] and [8] to generate a new generation of the particle group, and returning to Step 4.

3. The transformer noise suppression method according to claim 2, wherein the intelligent chip is a DSP chip; the initial noise measuring microphone and the residual noise measuring microphone are connected with corresponding input ports of the DSP chip via the automatic gain adjustment module and the A/D converter module of the sound processing chip; and the output port of the DSP chip is connected with the loudspeaker via the D/A conversion module of the sound processing chip and a power amplifier.

4. The transformer noise suppressing method according to claim 3, wherein the BP neural network structure is set as a 2-10-1 filtering algorithm, and the excitation functions of the hidden layer and the output layer are a Sigmoid function and a Purelin function, respectively.

5. The transformer noise suppressing method according to claim 4, wherein the size of the particle group is 30, the maximum genetic algebra is 10, and the dimension of the particle is 78.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0014] FIG. 1 is a schematic structural view of an embodiment of the present invention.

[0015] FIG. 2 is a flowchart of a digital signal basic processing procedure of the embodiment in FIG. 1.

[0016] FIG. 3 is a system block diagram of an improved particle BP neural network of the embodiment in FIG. 1.

[0017] FIG. 4 is a time-domain comparison of the residual noise measurement microphone pick-up signals before and after noise reduction of the embodiment in FIG. 1.

[0018] FIG. 5 is a frequency domain diagram of the residual noise measurement microphone pickup signal before the noise reduction of the embodiment of FIG. 1.

[0019] FIG. 6 is a frequency domain diagram of a residual noise measurement microphone pickup signal after noise reduction in the embodiment of FIG. 1.

SPECIFIC EMBODIMENTS

[0020] The technical solution of the present invention will be described in further detail with reference to the accompanying drawings.

[0021] The present embodiment is a transformer noise suppression method based on an improved-particle BP neural network. It not only suppresses noise effectively, but also overcomes the drawbacks that the BP neural network converges slowly and easily falls into local minima. As shown in FIG. 1, the main components of the system are: a controller containing a TMS320VC5509 DSP smart chip; two PCM6110 microphones serving as the initial noise measuring microphone and the residual noise measuring microphone respectively; and a Swans S6.5 low-frequency speaker. The initial noise measurement microphone and the residual noise measurement microphone are connected with the corresponding input ports of the DSP chip via the automatic gain adjustment module of the sound processing chip and the A/D converter module. The output port of the DSP chip is connected with the speaker via the D/A conversion module of the sound processing chip and the power amplifier. The sound processing chip is a TLV320AIC23 audio chip, which contains the automatic gain module, the A/D converter module and the D/A converter module.

[0022] Specific to a transformer site, the signal acquisition frequency is set to 5000 Hz. The BP neural network structure is set as a 2-10-1 filtering algorithm; the excitation functions of the hidden layer and the output layer are a Sigmoid function and a Purelin function, respectively. The size of the particle group is 30 and the maximum genetic algebra is 10. According to the particle coding scheme, the dimension of the particle is 78. Further, c.sub.1=c.sub.2=1.3, w.sub.0=0.9, p.sub.mmax=0.1, p.sub.mmin=0.05. The distance between the initial noise measuring microphone and the transformer is 10 cm, the distance between the speaker and the transformer is 30 cm, and the distance between the residual noise measuring microphone and the transformer is 20 cm.

[0023] The controller follows the steps below to achieve noise suppression in operation (see FIG. 2):

[0024] The first step is to receive the initial noise digital signal transmitted and converted by the initial noise measurement microphone in the vicinity of the noise source as the input signal of the BP (Back Propagation) neural network.

[0025] The second step is to process the input signal with the BP neural network to generate the secondary digital signal, whose phase deviates from that of the input signal.

[0026] The third step is to convert the secondary digital signal into an analog signal, amplify and output the secondary signal to the loudspeaker to generate a secondary noise having an inhibitory effect on the initial noise.

[0027] The fourth step is to receive the residual noise digital signal, obtained by superimposing the initial noise picked up by the residual noise measurement microphone and the secondary noise and converting the result, and to determine whether the amplitude of the residual noise digital signal has remained unchanged for a set number of consecutive times; if so, the secondary digital signal output is kept; otherwise, proceed to the next step.

[0028] The fifth step is to optimize and adjust the BP neural network parameters with minimization of the residual noise digital signal amplitude as the optimization principle, take the initial noise digital signal of the next time as the new BP neural network input signal, and then repeat from the second step.
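The five steps above can be sketched as a control loop. The following is a minimal Python sketch, not the patent's DSP firmware: the I/O callbacks (`read_initial_noise`, `read_residual_noise`, `output_to_speaker`) and the `optimize_network` hook are hypothetical placeholders standing in for the microphone/loudspeaker hardware and the particle swarm optimizer described below.

```python
def noise_suppression_loop(network, read_initial_noise, read_residual_noise,
                           output_to_speaker, optimize_network,
                           set_times=3, max_steps=100):
    """Sketch of the five-step control loop; runs at most max_steps cycles."""
    x = read_initial_noise()              # step 1: initial noise digital signal
    unchanged, last_amp = 0, None
    for _ in range(max_steps):
        y = network(x)                    # step 2: generate secondary signal
        output_to_speaker(y)              # step 3: D/A, amplify, drive speaker
        e = read_residual_noise()         # step 4: pick up residual noise
        amp = abs(e)
        unchanged = unchanged + 1 if amp == last_amp else 0
        last_amp = amp
        if unchanged >= set_times:
            continue                      # amplitude stable: keep current output
        network = optimize_network(network, e)   # step 5: re-optimize parameters
        x = read_initial_noise()          # new input signal, back to step 2
    return network
```

With stub callbacks the loop simply cycles through steps 2 to 5 until the residual amplitude has been stable for `set_times` consecutive readings, after which it keeps the current output, exactly as the fourth step prescribes.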

[0029] In order to overcome the drawbacks of the current technology, namely slow convergence and a tendency to fall into local minima, the optimization and adjustment of the BP neural network parameters in the fifth step of the transformer noise suppression method of this embodiment is carried out by the improved particle swarm optimization algorithm, which follows the procedure below:

[0030] Step 1: According to the structure of the BP neural network, determine the dimension of the particles and randomly generate N initial particles.

[0031] Step 2: The weight coefficient ω.sub.hi(n) between the input layer neuron i and the hidden layer neuron h in the neural network, the weight coefficient W.sub.h(n) between the hidden layer neuron h and the output layer, the threshold value GE.sub.h(n) of hidden layer neuron h and the threshold value ge(n) of the output layer neuron at the nth time are real-number encoded according to the predetermined order, and the corresponding real-number particles are formed. Each particle corresponds to a group of network parameters. The encoding format is:

TABLE-US-00001 ω.sub.11 ω.sub.12 . . . ω.sub.1K W.sub.1 GE.sub.1 . . . ω.sub.J1 ω.sub.J2 . . . ω.sub.JK W.sub.J GE.sub.J ge
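The encoding above can be illustrated with a small sketch (the helper names are illustrative, only the ordering is fixed by the patent): each hidden neuron h contributes its K input weights, its output weight W.sub.h and its threshold GE.sub.h, followed by the single output threshold ge, giving a particle of dimension J(K+2)+1, which matches the index of P.sub.i in Step 4 below.

```python
def encode(w_ih, w_ho, ge_hidden, ge_out):
    """Flatten network parameters into one particle, neuron by neuron:
    [w_h1 .. w_hK, W_h, GE_h] for each hidden neuron h, then ge."""
    particle = []
    for h in range(len(w_ih)):
        particle.extend(w_ih[h])        # omega_h1 .. omega_hK
        particle.append(w_ho[h])        # W_h
        particle.append(ge_hidden[h])   # GE_h
    particle.append(ge_out)             # ge
    return particle

def decode(particle, K, J):
    """Inverse of encode for a K-input, J-hidden-neuron network."""
    w_ih, w_ho, ge_hidden = [], [], []
    for h in range(J):
        base = h * (K + 2)
        w_ih.append(particle[base:base + K])
        w_ho.append(particle[base + K])
        ge_hidden.append(particle[base + K + 1])
    return w_ih, w_ho, ge_hidden, particle[-1]
```

Decoding the swarm's best position with `decode` is exactly what Step 5 below requires once the iteration stops.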

[0032] Step 3: Select the residual noise signal e(n) of the system as the judgment criterion of the network parameters, and evaluate each particle position coding with the following fitness function F(n):

[00001] F(n)=(x(n)h.sub.1(n)−y(n)h.sub.2(n)).sup.2/2   [1]

[0033] In the above formula, x(n) is the initial noise digital signal input at the nth time; H.sub.1(z) and H.sub.2(z) are the transfer functions of the primary channel and the secondary channel respectively; y(n) is the digital signal of the network output at the n-th time, and the formula is:

[00002] y(n)=Σ.sub.h=1.sup.J W.sub.h(n)f(Σ.sub.i=1.sup.K ω.sub.hi(n)x.sub.i(n)−GE.sub.h(n))−ge(n)   [2]

[0034] In the above formula, x.sub.i(n)=x(n−i+1) represents the input of neuron i in the input layer; f(x)=2/(1+exp(−2x))−1 denotes the activation function of the hidden layer of the network; K denotes the total number of neurons in the input layer; J denotes the total number of neurons in the hidden layer.
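Formula [2] translates directly into a short sketch (function names are illustrative; the tapped-delay construction of x.sub.i(n) is left to the caller). Note that the stated activation f(x)=2/(1+exp(−2x))−1 is exactly the hyperbolic tangent:

```python
import math

def activation(x):
    """Hidden-layer activation f(x) = 2/(1+exp(-2x)) - 1 (equals tanh(x))."""
    return 2.0 / (1.0 + math.exp(-2.0 * x)) - 1.0

def network_output(x_in, w_ih, w_ho, ge_hidden, ge_out):
    """Formula [2]: y = sum_h W_h * f(sum_i w_hi * x_i - GE_h) - ge."""
    y = 0.0
    for h in range(len(w_ho)):
        s = sum(w * xi for w, xi in zip(w_ih[h], x_in)) - ge_hidden[h]
        y += w_ho[h] * activation(s)
    return y - ge_out
```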

[0035] Step 4: The fitness value of each particle is calculated according to the particle location code and formula [1]. Each particle's current fitness value is compared with its fitness value before the iteration; if the former is smaller, the particle's optimal position P.sub.i=[p.sub.i1, p.sub.i2, . . . , p.sub.i(J(K+2)+1)] is updated; otherwise it remains unchanged. At the same time, each particle's fitness value is compared with the fitness value of the particle group before the iteration; if the former is smaller, the optimal position of the entire particle group P.sub.g=[p.sub.g1, p.sub.g2, . . . , p.sub.g(J(K+2)+1)] is updated; otherwise it remains unchanged.

[0036] Step 5: If the iteration times has reached the maximum iteration number, stop the iteration and decode the optimal position of the particle swarm to get the corresponding BP neural network parameters. Otherwise, proceed to the next step.

[0037] Step 6: Define the evolution degree of particle swarm:

[00003] σ(k)=|(f.sub.gbest(k)−f.sub.avg(k))/(f.sub.gbest(1)−f.sub.avg(1))|   [3]

[0038] In the above formula, f.sub.gbest(k) and f.sub.avg(k) are respectively the global optimal fitness value and average fitness value of the particle group at the kth iteration.

[0039] The dynamically changing inertia factor is defined as:

[00004] w(k)=(1−1/(1+exp(σ(k))))w.sub.0   [4]

[0040] In the above formula, w.sub.0 is the initial value.

[0041] The adaptive mutation probability is defined by the following formula:

[00005] p.sub.m(k)={(p.sub.mmin+(p.sub.mmax−p.sub.mmin)|f.sub.gbest(k)−f.sub.avg(k)|/|f.sub.gbest(k)−f.sub.i(k)|)ξ(k), if f.sub.i(k)<f.sub.avg(k); p.sub.mmax ξ(k), if f.sub.i(k)≥f.sub.avg(k)}   [5]

ξ(k)=1/(1+σ(k))∈[0,1]   [6]

where p.sub.m(k) is the mutation probability at the kth iteration of the particle swarm; p.sub.mmax and p.sub.mmin are the maximum and minimum values of the mutation rate respectively; f.sub.i(k) is the fitness value of particle i at the kth iteration; ξ(k) is a scaling factor given by formula [6].
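Formulas [3] to [6] can be sketched in a few lines (the function names are illustrative, and the sketch assumes the reading ξ(k)=1/(1+σ(k)) of formula [6]; the embodiment's defaults w.sub.0=0.9, p.sub.mmax=0.1, p.sub.mmin=0.05 are used as parameter defaults):

```python
import math

def evolution_degree(f_gbest_k, f_avg_k, f_gbest_1, f_avg_1):
    """Formula [3]: gap between global-best and average fitness at iteration k,
    normalized by the same gap at the first iteration."""
    return abs((f_gbest_k - f_avg_k) / (f_gbest_1 - f_avg_1))

def inertia_factor(sigma_k, w0=0.9):
    """Formula [4]: dynamically changing inertia weight."""
    return (1.0 - 1.0 / (1.0 + math.exp(sigma_k))) * w0

def mutation_probability(f_i, f_gbest, f_avg, sigma_k,
                         pm_min=0.05, pm_max=0.1):
    """Formulas [5]-[6]: adaptive per-particle mutation probability."""
    xi = 1.0 / (1.0 + sigma_k)               # formula [6]
    if f_i < f_avg:                           # better-than-average particle
        return (pm_min + (pm_max - pm_min)
                * abs(f_gbest - f_avg) / abs(f_gbest - f_i)) * xi
    return pm_max * xi                        # at or below average
```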

[0042] The new velocity and position after the particle mutation is defined as:


v.sub.id(k+1)=w(k)v.sub.id(k)+c.sub.1r.sub.1(p.sub.id(k)−x.sub.id(k))+c.sub.2r.sub.2(p.sub.gd(k)−x.sub.id(k))   [7]


X.sub.i(k+1)=X.sub.i(k)+p.sub.m(k)(X.sub.R.sub.1(k)−X.sub.R.sub.2(k))+V.sub.i(k)   [8]

where v.sub.id(k) is the velocity of the dth dimension of particle i in the kth iteration; V.sub.i(k) is the velocity vector of particle i in the kth iteration; p.sub.id(k) is the best position of the dth dimension of particle i at the kth iteration; p.sub.gd(k) is the best position of the dth dimension of the whole population at the kth iteration; x.sub.id(k) is the position of the dth dimension of particle i in the kth iteration; X.sub.i(k) is the position vector of particle i at the kth iteration; c.sub.1 and c.sub.2 are nonnegative acceleration constants; r.sub.1 and r.sub.2 are random numbers uniformly distributed in the range [0,1]; R.sub.1 and R.sub.2 are unequal positive integers in the range [1, N].
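Update rules [7] and [8] for a single particle can be sketched as follows (the function name and the use of Python's random module are assumptions; the patent draws R.sub.1, R.sub.2 from [1, N], while this sketch uses the equivalent 0-based indices):

```python
import random

def pso_mutation_update(i, X, V, P, p_g, w_k, p_m, c1=1.3, c2=1.3, rng=random):
    """Formulas [7]-[8]: new velocity and position of particle i.

    X, V: position/velocity vectors of all N particles;
    P: per-particle best positions; p_g: global best position;
    w_k: inertia factor from [4]; p_m: mutation probability from [5]."""
    N, D = len(X), len(X[i])
    R1, R2 = rng.sample(range(N), 2)   # two distinct particle indices
    v_new, x_new = [], []
    for d in range(D):
        r1, r2 = rng.random(), rng.random()
        v = (w_k * V[i][d] + c1 * r1 * (P[i][d] - X[i][d])
             + c2 * r2 * (p_g[d] - X[i][d]))                     # formula [7]
        v_new.append(v)
        x_new.append(X[i][d] + p_m * (X[R1][d] - X[R2][d]) + v)  # formula [8]
    return v_new, x_new
```

When p.sub.m(k) is zero, formula [8] collapses to the classical PSO position update; the mutation term perturbs a particle toward the difference of two randomly chosen swarm members.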

[0043] Step 7: First calculate the inertia factor and particle mutation rate of the current iteration by the above formulas [4] and [5], then update the velocity and position of all particles using formulas [7] and [8] to generate a new generation of the particle group, and return to Step 4.

[0044] In operation this works as follows: when the digital signal x(n) at the nth time is input to the controller, it is filtered by a BP neural network with random initial parameters to give an output value y(n), which becomes s(n) after passing through the secondary acoustic channel transfer function; that is, the secondary digital signal at the nth time. Superposition with the initial noise produces the error digital signal e(n). When e(n) is fed back into the controller, it is combined with the reference digital signal x(n) at the nth time, and the relevant parameters of the BP neural network are modified using the improved particle swarm optimization algorithm. This yields the new weight coefficients and threshold coefficients of the BP neural network at the nth time, which then replace the original parameters. When the initial noise digital signal x(n+1) at the (n+1)th time is input to the controller, it is filtered by the updated BP neural network to obtain a new output value, which becomes s(n+1) after the secondary acoustic channel transfer function; that is, the secondary digital signal at the (n+1)th time.

[0045] FIG. 4, FIG. 5 and FIG. 6 compare the signal picked up by the residual noise measurement microphone before and after noise reduction; the effect is significant. The present embodiment not only constructs a reasonable transformer noise suppression system, but also integrates the particle swarm optimization algorithm with BP neural network filtering, using the improved particle swarm optimization algorithm to improve the way the weights and thresholds of the BP neural network are corrected. It overcomes the inherent defects of algorithms such as gradient descent optimization, and can effectively suppress the noise of all frequency components of the transformer.

[0046] The above description is merely a specific embodiment of the present invention, but the scope of the present invention is not limited thereto. Any transformation or substitution conceivable within the technical scope disclosed by the present invention falls within the scope of the present invention.