Adaptive filter for system identification

09837991 · 2017-12-05

Abstract

The adaptive filter for sparse system identification is an adaptive filter that uses an algorithm in the feedback loop designed to provide better performance when the unknown system model is sparse, i.e., has only a few non-zero coefficients, as is typical of digital TV transmission channels and echo paths. The algorithm is a least mean square algorithm in which both the filter coefficients and the step size are updated at each iteration. The adaptive filter may be implemented on a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).

Claims

1. A method of adaptive filtering for sparse system identification of an unknown system where the unknown system is sparse, comprising the steps of: tapping input signals to the unknown system; inputting the tapped input signals to a digital filter, the digital filter implementing a transfer function to produce corresponding output signals; tapping output signals of the unknown system corresponding to the input signals; summing the output signals of the unknown system with the inverse of the corresponding output signals of the digital filter to produce a feedback error signal; processing the feedback error signal with a least mean square algorithm having a variable step size computed according to:
μ(i+1)=αμ(i)+γ|e(i)| where μ is the step size, α and γ are positive control parameters subject to 0<α<1, e is the estimation error, and i is an index correlating the input signal and output signals in order to recalculate coefficients of the transfer function; and updating the transfer function applied by the digital filter with the recalculated coefficients for application to the next succeeding input signal.

2. The method of adaptive filtering according to claim 1, wherein the digital filter is a finite impulse response (FIR) filter.

3. The method of adaptive filtering according to claim 1, wherein the digital filter is an infinite impulse response (IIR) filter.

4. The method of adaptive filtering according to claim 1, wherein the least mean square algorithm is a sparse variable step size normalized least mean square (SVSSNLMS) algorithm and the algorithm produces a filter coefficient vector according to: w(i+1) = w(i) + μ(i)e(i)u^T(i)/‖u(i)‖², where w is the filter coefficient vector, u is the input row vector of the unknown system, T denotes transposition, and ‖·‖ denotes the Euclidean norm.

5. The method of adaptive filtering according to claim 1, wherein the least mean square algorithm is a variable step size RZA-LMS (VSSRZA-LMS) algorithm and the algorithm produces a filter coefficient vector according to: w(i+1) = w(i) + μ(i)e(i)u^T(i)/‖u(i)‖² − ρ·sgn(w(i))/(1 + ε|w(i)|), where w is the filter coefficient vector, u is the input row vector of the unknown system, T denotes transposition, ρ and ε are each control parameters greater than zero, sgn represents the sign function, |·| denotes the element-wise absolute value, and ‖·‖ denotes the Euclidean norm.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a block diagram of an adaptive filter for system identification according to the present invention.

(2) FIG. 2 is a plot of mean square deviation (MSD) vs. iterations, comparing performance of the two embodiments of the present adaptive filter for system identification against the Normalized Least Mean Square (NLMS) algorithm and the Reweighted Zero Attracting Normalized Least Mean Square (RZA-NLMS) algorithm for a simulated 16-tap system with varying sparsity and white input.

(3) FIG. 3 is a plot of mean square deviation (MSD) vs. iterations, comparing performance of the two embodiments of the present adaptive filter for system identification against the Normalized Least Mean Square (NLMS) algorithm and the Reweighted Zero Attracting Normalized Least Mean Square (RZA-NLMS) algorithm for a simulated 16-tap system with varying sparsity and correlated input.

(4) FIG. 4 is a plot of mean square deviation (MSD) vs. iterations, comparing performance of the two embodiments of the present adaptive filter for system identification against the Normalized Least Mean Square (NLMS) algorithm and the Reweighted Zero Attracting Normalized Least Mean Square (RZA-NLMS) algorithm for a simulated 256-tap system with white input.

(5) Similar reference characters denote corresponding features consistently throughout the attached drawings.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

(6) The adaptive filter for sparse system identification is an adaptive filter that uses an algorithm in the feedback loop designed to provide better performance when the unknown system model is sparse, i.e., has only a few non-zero coefficients, as is typical of digital TV transmission channels and echo paths. In a first embodiment, the algorithm is a variable step size Normalized Least Mean Square (NLMS) algorithm in which the filter coefficients are updated at each iteration according to:

(7) w(i+1) = w(i) + μ(i)e(i)u^T(i)/‖u(i)‖²,
where the step size μ is varied according to μ(i+1)=αμ(i)+γ|e(i)|. In a second embodiment, the algorithm is a Reweighted Zero Attracting LMS (RZA-LMS) algorithm in which the filter coefficients are updated at each iteration according to:

(8) w(i+1) = w(i) + μ(i)e(i)u^T(i)/‖u(i)‖² − ρ·sgn(w(i))/(1 + ε|w(i)|),
where the step size μ is varied according to μ(i+1)=αμ(i)+γ|e(i)|. The adaptive filter may be implemented on a digital signal processor (DSP), an application-specific integrated circuit (ASIC), or a field-programmable gate array (FPGA).
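The step-size recursion shared by both embodiments is simple to state in code. The following is an illustrative sketch, not part of the patent itself; the function name and the default values α=0.99 and γ=0.01 are assumptions chosen to match the simulations reported later in the description.

```python
def update_step_size(mu, e, alpha=0.99, gamma=0.01):
    """Variable step-size recursion mu(i+1) = alpha*mu(i) + gamma*|e(i)|.

    alpha (0 < alpha < 1) slowly forgets the starting step size, while
    gamma weights the magnitude (l1-norm) of the current estimation error.
    """
    return alpha * mu + gamma * abs(e)
```

A large error keeps the step size up for fast convergence; as the error shrinks near steady state, the step size decays, lowering the steady-state error.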

(9) FIG. 1 shows an exemplary adaptive filter for system identification, designated generally as 10 in the drawing, and how it may be connected to an unknown system 12. It will be understood that the configuration in FIG. 1 is exemplary, and that other configurations are possible. For example, the unknown system 12 may be placed in series at the input of the adaptive filter 10, with the adaptive filter 10 configured to produce a response that is the inverse of the unknown system response; in that case, the input signal, after passing through a delay, is summed with the adaptive filter output to produce an error feedback signal to the adaptive filter 10.

(10) In the configuration shown in FIG. 1, a series of input signals x(n) (or, equivalently, a single continuous input signal and the corresponding output signal of the unknown system, sampled by the adaptive filter at a predetermined sampling rate) enter the unknown system and produce a series of output signals y(n). Each input signal x(n) is also input to the adaptive filter 10 and is processed by a filter 14, which is programmed or configured with a transfer function that produces a corresponding output signal z(n) that is an estimate of the desired signal. Assuming that the adaptive filter is implemented by a DSP, the filter 14 may be a Finite Impulse Response (FIR) filter or an Infinite Impulse Response (IIR) filter. The filter output signal z(n) is passed through an inverter 16 and is summed with the output signal y(n) of the unknown system 12 by summer 18 to produce an error signal e(n). The error signal e(n) is processed by an algorithm unit 20, which calculates corrections to the coefficients of the transfer function being implemented by the filter 14. The algorithm unit 20 passes the corrected coefficients to a coefficient updater circuit 22, which updates the transfer function coefficients applied to the next succeeding input signal x(n) input to the filter 14. Gradually, the feedback loop adjusts the transfer function until the adaptive filter 10 produces an output signal z(n) that correlates closely with the desired output signal y(n).
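The signal flow of FIG. 1 can be summarized as a loop: filter the tapped input, form the error against the unknown system's output, and let an update rule correct the coefficients. The sketch below is illustrative only; the function name, the tapped-delay-line convention (newest sample first), and the callback interface are assumptions, not part of the patent.

```python
import numpy as np

def identify(x, y, M, adapt):
    """Generic system-identification loop per FIG. 1.

    x     : input samples x(n) tapped at the unknown system's input
    y     : corresponding unknown-system outputs y(n)
    M     : number of filter coefficients
    adapt : callback adapt(w, u, e) returning the updated coefficients
    """
    w = np.zeros(M)                    # transfer-function coefficients (filter 14)
    u = np.zeros(M)                    # tapped delay line, newest sample first
    errors = np.empty(len(x))
    for n in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[n]                    # shift in the new input sample
        z = u @ w                      # filter output z(n)
        e = y[n] - z                   # error e(n) = y(n) - z(n) (inverter 16 + summer 18)
        w = adapt(w, u, e)             # algorithm unit 20 / coefficient updater 22
        errors[n] = e
    return w, errors
```

Any of the update rules developed below can be plugged in as the `adapt` callback.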

(11) In a first embodiment, the algorithm applied by the algorithm unit 20 is a variation of the normalized least mean squares (NLMS) algorithm. Given the input row vector u(i) to an unknown system defined by a column vector w^o of length M, the output of the system, d(i), is given by:
d(i)=u(i)w^o+v(i)  (2)
where i is the time index and v(i) is the additive noise. If the estimate of w^o at time instant i is denoted by the vector w(i), then the estimation error is given by:
e(i)=d(i)−u(i)w(i)  (3)
The conventional normalized LMS (NLMS) algorithm is then given by:

(12) w(i+1) = w(i) + μe(i)u^T(i)/‖u(i)‖²,  (4)
where T and ‖·‖ denote transposition and the Euclidean norm, respectively, and μ is the step size, defined in the range 0<μ<2. This normalization gives the NLMS algorithm a much more flexible choice of step size than the LMS algorithm, particularly when the unknown system is large and sparse.
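For reference, the conventional fixed step-size NLMS update of equation (4) can be sketched as follows. This is an illustrative sketch; the function name and the small regularization constant `eps` (which guards the norm against an all-zero regressor) are implementation assumptions, not part of equation (4).

```python
import numpy as np

def nlms_update(w, u, e, mu=0.5, eps=1e-8):
    """Conventional NLMS update: w(i+1) = w(i) + mu*e(i)*u(i)/||u(i)||^2.

    mu is a fixed step size in the range 0 < mu < 2; eps regularizes
    the Euclidean-norm normalization to avoid division by zero.
    """
    return w + mu * e * u / (eps + u @ u)
```

The division by ‖u(i)‖² is what makes the correction insensitive to the input signal's power, and hence to the filter length.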

(13) However, in the present adaptive filter, equation (4) is modified. Previous variable step size algorithms were all based on the l₂-norm of the error; for a sparse environment, a more suitable basis is the l₁-norm. Therefore, the following variable step size recursion is applied:
μ(i+1)=αμ(i)+γ|e(i)|  (5)
where α and γ are positive control parameters, and α lies strictly between 0 and 1; i.e., 0<α<1. The parameter α slowly filters out the starting value of the step size μ, while the γ|e(i)| term of the step size update of equation (5) uses the l₁-norm of the error, which gives the algorithm its desired performance in a sparse environment. Thus, the algorithm is a sparse variable step size NLMS (SVSSNLMS) algorithm in which the filter coefficient vector is given by:

(14) w(i+1) = w(i) + μ(i)e(i)u^T(i)/‖u(i)‖²  (6)
and μ(i) is updated according to equation (5). This gives a wider range of choice in the step size than the conventional NLMS algorithm.
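Combining the SVSSNLMS coefficient update above with the variable step-size recursion gives the complete iteration. The sketch below is illustrative; the function name, the initial step size μ(0)=0.5, and the regularization constant are assumptions (the initial value matches the simulations reported later).

```python
import numpy as np

def svss_nlms(x, d, M, mu0=0.5, alpha=0.99, gamma=0.01, eps=1e-8):
    """Sparse variable step-size NLMS (SVSSNLMS), illustrative sketch.

    Runs the NLMS-style coefficient update with the step size varied as
    mu(i+1) = alpha*mu(i) + gamma*|e(i)|, returning the final estimate.
    """
    w = np.zeros(M)
    u = np.zeros(M)
    mu = mu0
    for i in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[i]
        e = d[i] - u @ w                     # estimation error
        w = w + mu * e * u / (eps + u @ u)   # normalized coefficient update
        mu = alpha * mu + gamma * abs(e)     # l1-norm based step-size update
    return w
```

Because the step size tracks |e(i)|, adaptation stays fast while the error is large and slows automatically as the filter converges.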

(15) In a second embodiment, the algorithm unit 20 applies a modified version of the RZA-LMS algorithm proposed by Chen et al., discussed above. In compressed sensing problems, the l₀-norm performs better than the l₂-norm in sparse environments. Since direct use of the l₀-norm is not computationally feasible, an approximation (such as the l₁-norm) can be used instead. Thus, Chen et al. developed the Reweighted Zero Attracting LMS (RZA-LMS) algorithm, which is based on an approximation of the l₀-norm and is given by:

(16) w_k(i+1) = w_k(i) + μe(i)u_k(i) − ρ·sgn{w_k(i)}/(1 + ε|w_k(i)|),  (7)
which can be written in vector form as:

(17) w(i+1) = w(i) + μe(i)u^T(i) − ρ·sgn{w(i)}/(1 + ε|w(i)|),  (8)
where ρ and ε are control parameters greater than 0, sgn denotes the sign or signum function (an odd mathematical function that extracts the sign of a real number), and the attractor quotient is evaluated element-wise. The algorithm performs better than the standard LMS algorithm in sparse systems.

(18) However, in the present adaptive filter, equation (8) is modified. Since fast convergence and low steady-state error are usually two conflicting requirements for an adaptive filter, the step size needs to be varied such that the convergence rate is initially fast, but as the algorithm approaches steady state, the step size becomes small in order to give a low error response. Thus, the variable step size recursion of equation (5) is applied to form a variable step size RZA-LMS (VSSRZA-LMS) algorithm in which the filter coefficient vector is given by:

(19) w(i+1) = w(i) + μ(i)e(i)u^T(i)/‖u(i)‖² − ρ·sgn(w(i))/(1 + ε|w(i)|),  (9)
and μ(i) is updated according to equation (5). As above, ρ and ε are each control parameters greater than zero.
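Putting the pieces together, a sketch of the complete VSSRZA-LMS iteration follows. The function name, the initial step size, and the regularization constant `reg` are assumptions; the defaults for ρ, ε, α, and γ are of the kind used in the simulations described below.

```python
import numpy as np

def vssrza_lms(x, d, M, mu0=0.5, alpha=0.99, gamma=0.01,
               rho=5e-4, eps_w=1.0, reg=1e-8):
    """Variable step-size RZA-LMS (VSSRZA-LMS), illustrative sketch.

    Adds a reweighted zero attractor -rho*sgn(w)/(1 + eps_w*|w|) to the
    normalized update, pulling small coefficients toward zero while
    leaving large coefficients nearly untouched.
    """
    w = np.zeros(M)
    u = np.zeros(M)
    mu = mu0
    for i in range(len(x)):
        u = np.roll(u, 1)
        u[0] = x[i]
        e = d[i] - u @ w
        w = (w + mu * e * u / (reg + u @ u)
             - rho * np.sign(w) / (1.0 + eps_w * np.abs(w)))   # zero attractor
        mu = alpha * mu + gamma * abs(e)                       # step-size update
    return w
```

The reweighting 1/(1 + ε|w_k|) is what distinguishes RZA from plain zero attraction: taps that are clearly non-zero feel almost no shrinkage.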

(20) The two embodiments of the adaptive filter were tested by simulation using conventional computer software. Three scenarios are considered here. In the first, the unknown system is represented by a 16-tap FIR filter. For the first 500 iterations, only one tap weight is non-zero. For the next 500 iterations, all odd-numbered taps are 1, while the even-numbered taps are zero. For the last 500 iterations, the even-numbered taps are changed to −1. The input sequence is zero-mean Gaussian with variance 1. The value for μ is taken as 0.5 for both the NLMS and the RZA-NLMS algorithms, while μ(0) is 0.5 for the variable step size (VSS) algorithms. The value for ρ is taken as 5e-4 for both the RZA-NLMS algorithm and the proposed VSSRZA-LMS algorithm. The step size control parameters are chosen as α=0.99 and γ=0.01 for both VSS algorithms. Results are shown for a signal-to-noise ratio (SNR) of 20 dB. As depicted in FIG. 2, the proposed algorithms clearly outperform both fixed step size algorithms.
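The first 500-iteration phase of this scenario (a single non-zero tap, white unit-variance input, 20 dB SNR) can be reproduced in outline as follows. This is an illustrative reconstruction, not the authors' simulation code: the noise standard deviation of 0.1 is derived from the 20 dB SNR assuming unit output power, and MSD here means the squared Euclidean deviation ‖w − w^o‖².

```python
import numpy as np

rng = np.random.default_rng(42)
M, N = 16, 500
wo = np.zeros(M)
wo[0] = 1.0                                  # only one tap weight is non-zero

x = rng.standard_normal(N)                   # zero-mean Gaussian input, variance 1
noise = 0.1 * rng.standard_normal(N)         # 20 dB SNR for unit output power

def run(variable_step):
    """NLMS with mu = 0.5, optionally with the variable step-size recursion."""
    w = np.zeros(M)
    u = np.zeros(M)
    mu = 0.5
    for i in range(N):
        u = np.roll(u, 1)
        u[0] = x[i]
        d = u @ wo + noise[i]                # unknown-system output plus noise
        e = d - u @ w
        w = w + mu * e * u / (1e-8 + u @ u)
        if variable_step:
            mu = 0.99 * mu + 0.01 * abs(e)   # alpha = 0.99, gamma = 0.01
    return np.sum((w - wo) ** 2)             # mean-square deviation (MSD)

msd_nlms = run(False)
msd_vss = run(True)
```

On a run like this, the variable step-size filter typically reaches a lower final MSD, because its step size decays once the error approaches the noise floor.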

(21) The second experiment is performed with the same unknown system as the first. However, the input in this case is modeled by u(i)=0.8u(i−1)+x(i), where x(i) is a zero-mean random sequence with variance 1; the variance of the resulting correlated sequence is normalized to 1. The values for μ and ρ are taken to be the same as before. The value for γ is the same as in the previous case, while α=0.995 for both algorithms. The unknown system is modified slightly so that the three variants now run for 1500 iterations each instead of 500. The results are reported in FIG. 3. The performance gap between the proposed algorithms and the previous algorithms is smaller in this case, but the proposed algorithms still clearly outperform the previous algorithms.

(22) In the third scenario, the unknown system is modeled using a 256-tap FIR filter with only 28 non-zero taps. The value for μ is kept the same as previously, showing that the normalization factor makes the algorithms independent of the filter length. The value for ρ is changed to 1e-5. The values for α and γ are kept the same as in the first experiment. As can be seen from FIG. 4, the proposed algorithms outperform the previous algorithms, even at a low SNR.

(23) It is to be understood that the present invention is not limited to the embodiments described above, but encompasses any and all embodiments within the scope of the following claims.