ADAPTIVE FILTER METHOD, SYSTEM AND APPARATUS
20190109581 · 2019-04-11
CPC classification: G06F17/16 (Physics)
International classification: G06F17/16; G06F17/11 (Physics)
Abstract
The present disclosure relates to adaptive filtering optimization methods based on a hyperbolic sine cost function. While the adaptive filtering optimization methods belong to the variable step-size class, the present disclosure describes a new approach requiring the tuning of only one parameter. The present disclosure is further related to a family of higher order hyperbolic sine cost functions.
Claims
1. A method for adaptive filtering, the method comprising: receiving, via processing circuitry, an input signal; generating, via the processing circuitry, an initial output signal based upon an initial set of one or more coefficients; determining, via the processing circuitry, an error signal based upon the difference between the initial output signal and a desired response signal; calculating, via the processing circuitry, a solution to a function based upon the error signal; and generating, via the processing circuitry, a subsequent output signal based upon a subsequent set of one or more coefficients, wherein the subsequent set of one or more coefficients is determined by adjusting the initial set of one or more coefficients based upon the calculation of the solution to the function, wherein the initial set of one or more coefficients is adjusted in order to minimize the function, wherein the function is a hyperbolic sine-based function.
2. The method according to claim 1, wherein the function is a second order hyperbolic sine-based function.
3. The method according to claim 1, wherein the function is a fourth order hyperbolic sine-based function.
4. The method according to claim 1, wherein the function is defined as J(k) = sinh(Ae^2(k)).
5. The method according to claim 1, wherein the function is defined as J(k) = sinh(Ae^4(k)).
6. The method according to claim 1, wherein a stochastic gradient of the function comprises a time-varying step-size.
7. The method according to claim 6, wherein the stochastic gradient of the function further comprises an upper bound step-size and a lower bound step-size.
8. The method according to claim 6, wherein the stochastic gradient of the function further comprises a generic upper bound step-size.
9. The method according to claim 8, wherein the generic upper bound step-size is defined as
10. The method according to claim 8, wherein the generic upper bound step-size is defined as
11. A device for adaptive filtering, comprising a processing circuitry configured to: receive an input signal; generate an initial output signal based upon an initial set of one or more coefficients; determine an error signal based upon the difference between the initial output signal and a desired response signal; calculate a solution to a function based upon the error signal; and generate a subsequent output signal based upon a subsequent set of one or more coefficients, wherein the subsequent set of one or more coefficients is determined by adjusting the initial set of one or more coefficients based upon the calculation of the solution to the function, wherein the initial set of one or more coefficients is adjusted in order to minimize the function, wherein the function is a hyperbolic sine-based function.
12. The device according to claim 11, wherein the function is a second order hyperbolic sine-based function, a fourth order hyperbolic sine-based function, or a combination thereof.
13. The device according to claim 11, wherein the function is defined as J(k) = sinh(Ae^2(k)).
14. The device according to claim 11, wherein the function is defined as J(k) = sinh(Ae^4(k)).
15. The device according to claim 11, wherein a stochastic gradient of the function comprises a time-varying step-size.
16. The device according to claim 15, wherein the stochastic gradient of the function further comprises an upper bound step-size and a lower bound step-size.
17. The device according to claim 15, wherein the stochastic gradient of the function further comprises a generic upper bound step-size.
18. The device according to claim 17, wherein the generic upper bound step-size is defined as
19. The device according to claim 17, wherein the generic upper bound step-size is defined as
20. A non-transitory computer-readable medium comprising a set of instructions, which, when executed by a processing circuitry, cause the processing circuitry to perform a method for adaptive filtering, comprising: receiving, via processing circuitry, an input signal; generating, via the processing circuitry, an initial output signal based upon an initial set of one or more coefficients; determining, via the processing circuitry, an error signal based upon the difference between the initial output signal and a desired response signal; calculating, via the processing circuitry, a solution to a function based upon the error signal; and generating, via the processing circuitry, a subsequent output signal based upon a subsequent set of one or more coefficients, wherein the subsequent set of one or more coefficients is determined by adjusting the initial set of one or more coefficients based upon the calculation of the solution to the function, wherein the initial set of one or more coefficients is adjusted in order to minimize the function, wherein the function is a hyperbolic sine-based function.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] A more complete appreciation of the disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
DETAILED DESCRIPTION
[0029] The terms "a" or "an", as used herein, are defined as one or more than one. The term "plurality", as used herein, is defined as two or more than two. The term "another", as used herein, is defined as at least a second or more. The terms "including" and/or "having", as used herein, are defined as comprising (i.e., open language). Reference throughout this document to "one embodiment", "certain embodiments", "an embodiment", "an implementation", "an example" or similar terms means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases or terms in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments without limitation.
[0030] Because of their mathematical tractability and convenient analysis, most gradient optimization methods are based on quadratic cost functions, referred to as linear-based or second order-statistics (SOS) cost functions. Least-mean-square (LMS) and normalized-least-mean-square (NLMS) are members of this class of optimization methods. Higher order-statistics (HOS) cost functions, stemming from a higher order power of the adaptation error, and of which least-mean-fourth (LMF) and mixed-norm are examples, are another class of adaptive filters. HOS optimization methods, while demonstrating superior convergence speed compared with SOS optimization methods, have a higher misadjustment level when the noise is Gaussian. This is due, in part, to the steeper error surface of the HOS optimization method, which allows faster convergence while severely penalizing high deviations from the optimal solution. Recently, to improve the speed of convergence of SOS optimization methods while maintaining a sufficient level of convergence, a new class of stochastic gradient optimization methods has been developed wherein the cost function has an exponential dependence on the adaptation error. These optimization methods have a steeper surface than the quadratic cost function and can be seen as a linear combination of all the even moments. This type of optimization method outperforms LMS optimization methods with respect to convergence speed, offering increased robustness against an impulsive noise environment.
[0031] Mixed-norm optimization methods employ different error norms in order to achieve improved convergence performance. While this combination of different norms delivers an extra degree of freedom, the approach requires optimizing the mixture between the norms based on prior information of the input signal and noise statistics.
[0032] In the present disclosure, a new cost function, the least-hyperbolic-sine, is described. The least-hyperbolic-sine cost function non-linearly adapts the squared error as its driving argument. Accordingly, a stochastic gradient based optimization method, the hyperbolic sine error squared (HSS) optimization method, is described. HSS is classified as a variable step-size optimization method, with improvements in speed of convergence, adaptation to sudden changes, computational cost, and number of tuning parameters. Additionally, a derivation of the HSS optimization method is provided with supporting analysis to determine the required conditions for convergence, the steady-state excess mean-square error (EMSE), and the optimal solution with respect to the least-hyperbolic-sine cost function.
[0033] The following notations are used below: a boldface x denotes a column vector, x is a scalar, (·)^T is the transpose operator, E[·] is the mathematical expectation, and Tr[·] is the trace operator.
I. Optimization Method Formulation
The considered optimization method formulation is developed with reference to application in a system identification scheme, wherein the error signal is defined as
e(k) = d(k) - x^T(k)w(k-1)  (1)
where the desired signal, d(k), is defined as
d(k) = x^T(k)w^o + v(k)  (2)
[0035] v(k) is a zero-mean independent random variable, and w^o is the vector of optimal time-varying filter coefficients. Additionally, w = [w_0, w_1, . . . , w_{M-1}]^T describes the set of filter coefficients, M is the filter length, (·)^T is the transpose operator, and x(k) = [x(k), x(k-1), . . . , x(k-M+1)]^T is the input signal vector.
[0036] According to an embodiment of the present disclosure, the cost function is a hyperbolic sine with an error square argument, defined as
J(k) = sinh(e^2(k))  (3)
[0037] The cost function is a convex and unimodal function. Its gradient with respect to the filter coefficients yields
∇_w J(k) = -2e(k)cosh(e^2(k))x^T(k)  (4)
[0038] where x(k) is the regression vector. To improve the convergence speed, one can introduce a scale parameter A, where A > 0, to scale the squared error in the argument of the hyperbolic sine. The resulting, modified cost function will be
J(k) = sinh(Ae^2(k))  (5)
[0039] Accordingly, the gradient with the new cost function will be
∇_w J(k) = -2e(k)cosh(Ae^2(k))x^T(k)  (6)
[0040] Hence, the stochastic recursive form of the coefficients estimate is given as
w(k+1) = w(k) + 2μe(k)cosh(Ae^2(k))x(k)  (7)
[0041] It is observed that the hyperbolic cosine scales up the step-size in cases of high instantaneous error, resulting in rapid convergence. This may, however, lead to optimization method instability. In order to utilize the large gradient property while maintaining a bounded gradient, thus preserving optimization method stability, a selecting function can be used such that
w(k+1) = w(k) + 2 min{μ_min cosh(Ae^2(k)), μ_max} e(k)x(k)  (8)
where μ_max and μ_min are the upper and lower bounds of the step-size, μ, respectively.
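For illustration, a minimal NumPy sketch of one iteration of the update in (7)-(8) follows. It is a sketch only; the function name hss_update and its arguments are illustrative and not taken from the disclosure.

```python
import numpy as np

def hss_update(w, x, d, mu_min, mu_max, A):
    """One iteration of the hyperbolic sine error squared (HSS) update."""
    e = d - x @ w  # a priori error, eq. (1)
    # Selecting function of eq. (8): the cosh-scaled step is clipped at
    # mu_max so the gradient stays bounded and the update remains stable.
    mu_k = 2.0 * min(mu_min * np.cosh(A * e ** 2), mu_max)
    return w + mu_k * e * x, e  # stochastic update, eq. (7)
```

For small errors, cosh(Ae^2(k)) ≈ 1 and the step reduces to 2μ_min, recovering LMS-like behavior; for large errors the clip at μ_max takes over.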
II. Generic Upper Bound of μ_max
[0043] According to an embodiment of the present disclosure, the upper bound of the step-size, μ_max, is a generic value rather than a fixed number. μ_max can be evaluated at each iteration as follows:
[0044] where 1 is used to avoid the case when Tr[R_x] approaches zero, hence pushing the denominator to a value of zero. By using this generic value, stability of the optimization method is guaranteed and convergence speed is improved. This can be confirmed through experimental validation, whereby a simulation is performed comparing a fixed maximum step-size with a generic maximum step-size. A generic value allows the optimization method to adapt to abrupt changes in signal power.
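Equation (9) is not reproduced in this text. As an illustration only, the sketch below assumes a bound of the form c/(1 + Tr[R_x]), consistent with the role of the unity term described in [0044]; the constant c, the smoothing factor beta, and the running trace estimate are assumptions rather than the disclosed formula.

```python
import numpy as np

def generic_mu_max(x, trace_est, c=1.0, beta=0.99):
    """Per-iteration upper bound (assumed form) using a running Tr[R_x] estimate."""
    # Exponentially weighted estimate of Tr[R_x] = M * sigma_x^2 from the
    # instantaneous regressor energy x^T x.
    trace_est = beta * trace_est + (1.0 - beta) * float(x @ x)
    # The unity term keeps the denominator away from zero as Tr[R_x] -> 0.
    return c / (1.0 + trace_est), trace_est
```

Because the estimate tracks the input power, the bound tightens automatically when the signal power rises abruptly.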
III. An Optimal Solution
[0045] In evaluating the optimization method, according to an embodiment of the present disclosure, it should be verified that the behavior of the optimization method is controllable. To this end, an optimal solution is found based on the gradient of the hyperbolic sine cost function, as follows:
.sub.wJ(k)=2e(k)cos h(Ae.sup.2(k))x.sup.T(k)=0 (10)
[0046] To express the equation in terms of the optimal tap weights w^o, and substituting for e(k) from (1), the following is derived:
x(k)d(k)cosh(Ae^2(k)) = x(k)x^T(k)w^o cosh(Ae^2(k))  (11)
[0047] Then, taking the mathematical expectation of both sides leads to
E[x(k)d(k)cosh(Ae^2(k))] = E[x(k)x^T(k)w^o cosh(Ae^2(k))]  (12)
[0048] Substituting a Taylor series expansion of the cosh function in (12) yields
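Equation (13) is not reproduced in this text; the display below (a LaTeX reconstruction from the even-power Taylor series of cosh, using only symbols already defined above) illustrates the substituted expansion:

```latex
\cosh\!\left(Ae^{2}(k)\right)
  = \sum_{n=0}^{\infty}\frac{\left(Ae^{2}(k)\right)^{2n}}{(2n)!}
  = \sum_{n=0}^{\infty}\frac{A^{2n}\,e^{4n}(k)}{(2n)!}
```

Substituting this series into (12) is what produces the moments of the form E[x(k)x^T(k)e^{4n}(k)] treated in the following paragraph.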
[0050] Assuming both the input vector sequence {x(k)} and the error signal sequence {e(k)} to be asymptotically uncorrelated, E[x(k)x^T(k)e^{4n}(k)] = R_x E[e^{4n}(k)]. Moreover, since the error signal is small in the steady-state scenario, the terms that include higher order powers of the error e(k) can be neglected. Hence, this results in an expression for the optimal tap weight given as
w^o = R_x^{-1} P_xd  (14)
[0051] Therefore, the optimal solution, similar to that of the LMS optimization method, is the Wiener solution. Upon close investigation of the gradient component of (8), this similarity is observable. In fact, when the error signal e(k) is very small, for instance at steady-state, the hyperbolic cosine can be approximated around the origin as cosh(e^2(k)) ≈ 1; hence the cost function is, effectively, a match with the quadratic cost function, which is the LMS optimization method in the standard form.
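As an illustrative numerical check (not part of the disclosure), the following sketch verifies that the sample Wiener solution of (14) recovers the unknown system under the white-input conditions assumed above:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 200_000
w_o = rng.standard_normal(M)                 # "unknown" system coefficients

X = rng.standard_normal((N, M))              # white Gaussian regressors
d = X @ w_o + 0.01 * rng.standard_normal(N)  # desired signal, eq. (2)

R_x = X.T @ X / N                            # sample autocorrelation matrix
P_xd = X.T @ d / N                           # sample cross-correlation vector
w_wiener = np.linalg.solve(R_x, P_xd)        # eq. (14): w_o = R_x^{-1} P_xd
print(np.allclose(w_wiener, w_o, atol=1e-2)) # True up to estimation noise
```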
IV. Steady-State Analysis
[0052] In performing a steady-state analysis, an analytical expression for the EMSE must be derived. This approach is understood in the art, as evidenced in Fundamentals of Adaptive Filtering, by Sayed, published in 2003, and A Variable Step Size LMS Algorithm, by Kwong and Johnston, published in IEEE Transactions on Signal Processing, vol. 40, no. 7, pp. 1633-1642, incorporated by reference herein in their entirety. In addition to the wide sense stationary channel model assumption, the following standard assumptions are introduced: [0053] A1. There exists a vector w^o such that d(k) = x^T(k)w^o + v(k) [0054] A2. The additive noise sequence {v(k)} is i.i.d. with variance σ_v^2 = E[v^2(k)] [0055] A3. The sequence v(i) is independent of the input vector x(j) for all i, j. [0056] A4. The initial condition w_{-1} is independent of all {d(j), x(j), v(j)} [0057] A5. The input signal auto-correlation matrix R_x = E[x(k)x^T(k)] > 0 [0058] A6. The random variables {d(k), x(k), v(k)} are centralized, with zero means
[0059] According to the energy conservation framework, the steady-state EMSE, S, is given by (15), where N_S is given by
N_S = E[f^2(e(k))]  (16)
[0061] and D_S is given by
[0062] where e_a(k) is the a priori error defined as
e_a(k) = [w^o - w(k)]^T x(k)  (18)
f(e), in the steady-state zone, is defined from (8) as
f(e(k)) = e(k)cosh(Ae^2(k))  (19)
[0063] Accordingly, N_S becomes
N_S = E[e^2(k)cosh^2(Ae^2(k))]  (20)
[0064] For brevity, and owing to the steady-state analysis, the time index k is dropped. The estimation error e can be represented in terms of the a priori error and the noise signal as e = e_a + v. Accordingly, (20) becomes
N_S = E[e_a^2 cosh^2(Ae^2)] + σ_v^2 E[cosh^2(Ae^2)]  (21)
[0065] where σ_v^2 is the variance of the noise. By applying the Cauchy-Schwarz inequality, (21) is further simplified as
N_S ≤ √(E[e_a^4]·E[cosh^4(Ae^2)]) + σ_v^2 E[cosh^2(Ae^2)]  (22)
[0066] Furthermore, assuming the a priori error to be zero-mean Gaussian, Jensen's inequality can be applied to solve the expectation of the hyperbolic cosine function. Thus, N_S can be written in closed form as:
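Equation (23) is not reproduced in this text. The display below (a reconstruction using the convexity of cosh and E[e^2] = S + σ_v^2 at steady state) illustrates the Jensen step that converts the remaining expectations into closed-form terms of the kind appearing in (23) and (29):

```latex
E\!\left[\cosh\left(Ae^{2}\right)\right]
  \;\ge\; \cosh\!\left(A\,E\!\left[e^{2}\right]\right)
  \;=\; \cosh\!\left(A\left(S+\sigma_v^{2}\right)\right)
```

The analysis uses this bound as an approximate closed-form value for the expectation.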
[0067] In a similar way, D_S in (15) can be written as follows:
[0068] Substituting e = e_a + v into (25) forms
[0069] Based on the assumptions (A1-A6), it can be shown that e_a is a zero-mean Gaussian variable and is independent of the noise v. Therefore,
[0070] As before, applying the Cauchy-Schwarz inequality gives
[0071] Applying Jensen's inequality to (28), assuming the a priori error to be zero-mean Gaussian, leads to
D_S ≈ √3 cosh(A(S + σ_v^2))  (29)
[0072] Eventually, using a Taylor series expansion of the hyperbolic cosine function, an approximate closed form expression for the steady-state EMSE in (15) is written as
[0073] The following remarks are determined from the derived EMSE in (30): [0074] The EMSE depends on the even powers of the noise power. [0075] The EMSE also depends on the tuning parameter A and is usually coupled with the high order even power of the noise variance σ_v^2.
V. Convergence Analysis
[0078] The updated selecting function (8) belongs to the general update equation of the error adaptive filtering optimization method, otherwise referred to as the general class error adaptive filter:
[0079] Due to the lack of differentiability of the min function in (32), a first derivative cannot be obtained at all points of f(e), and therefore, f[e(k)] can no longer be described as a Taylor series expansion. The approximation, however,
w(k+1) ≈ w(k) + μ{f[e(k)]x(k) - f'[e(k)]x(k)w^T(k)x(k) + (1/2)f''[e(k)]x(k)[w^T(k)x(k)]^2}  (33)
holds at every point except when e(k) = θ, where θ is the error value at which the two branches of the min function in (8) coincide, i.e., cosh(Aθ^2) = μ_max/μ_min.
[0080] Therefore, the analysis can proceed under the assumption that noise values rarely equal θ.
i. Convergence Speed
[0081] For a small step size μ, the time-constant τ_i of the optimization method of the present disclosure associated with λ_i(R_x) (the i-th eigenvalue of the auto-correlation matrix R_x) is given by
[0082] Next, assuming e(k) ≠ θ, then
[0083] Accordingly,
[0084] Eventually, based on (34) and (36), the optimization method of the present disclosure presents the following two cases:
which matches the LMS case for μ = μ_max^LMS.
[0085] Since the time-constant in the first case is smaller than the LMS time-constant, the convergence of the optimization method of the present disclosure, as compared with LMS and certain LMS variants, will be faster. If the tuning parameter A is not properly chosen, then the first case may not occur and, as a result, the optimization method of the present disclosure will behave similarly to the standard LMS optimization method with μ = μ_max^LMS at each point.
ii. Step-Size Bounds for Stability
[0087] As is common in all gradient-descent optimization methods, the choice of step-size is critical. To guarantee stability, the step-size must satisfy several bounds.
[0088] (8) can be rewritten as
w(k+1) = w(k) + μ(k)e(k)x(k)  (39)
[0089] where μ(k) is given as
μ(k) = min{2μ_min^LMS cosh(Ae^2(k)), 2μ_max^LMS}  (40)
[0090] It is sufficient to state that the mean value of μ(k), i.e., E[μ(k)], must satisfy the following condition:
[0092] Based on (40), the following two cases are presented: [0093] 1) if cosh(Ae^2(k)) < μ_max^LMS/μ_min^LMS, then
μ(k) = 2μ_min^LMS · cosh(Ae^2(k))  (42)
[0094] Taking expectations of (42), and using the Taylor series expansion, we can approximate E[μ(k)] as
E[μ(k)] ≈ 2μ_min^LMS{1 + (3/2)A^2 F^2(k) + 3A^2 F(k)σ_v^2 + (3/2)A^2 σ_v^4}  (43)
[0095] where F = E[e_a^2] is the instantaneous EMSE and σ_v^2 is the variance of the noise. At steady-state, and ignoring the higher order powers of S, a new bound of μ is as follows:
2) if cosh(Ae^2(k)) ≥ μ_max^LMS/μ_min^LMS, then
μ(k) = 2μ_max^LMS  (45)
[0097] and the new bound will be as follows:
[0098] While this bound matches the LMS case, μ_min^LMS is first chosen as a lower bound.
VI. Computational Cost
[0099] Compared to a standard LMS optimization method, the optimization method of the present disclosure presents additional computational burden per iteration: one comparison, three multiplications, and one hyperbolic cosine evaluation. Rather than calculating the hyperbolic cosine, an appropriate look-up table may be used in order to reduce the computational load.
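As an illustration of the look-up-table idea in [0099], the sketch below precomputes cosh(Ae^2) over a clipped range; the table size and range are assumptions chosen only for the example.

```python
import numpy as np

A, E2_MAX, TABLE_SIZE = 10.0, 4.0, 4096
_grid = np.linspace(0.0, E2_MAX, TABLE_SIZE)
_cosh_table = np.cosh(A * _grid)  # precomputed once, offline

def cosh_lut(e2):
    """Approximate cosh(A * e^2) by nearest-entry table lookup."""
    # Squared errors beyond E2_MAX are clipped to the last entry; in the
    # selecting function (8) such values are clipped to mu_max anyway.
    idx = min(int(e2 / E2_MAX * (TABLE_SIZE - 1)), TABLE_SIZE - 1)
    return _cosh_table[idx]
```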
[0100] Table I shows the computational complexity of different optimization methods, where M is the order of the filter and N is the total number of samples. We assume that the optimization method of the present disclosure uses a generic μ_max rather than the fixed maximum step-size (the generic μ_max is explained in Section II above).
TABLE I
Optimization Method    Multiplication    Addition    Comparison    Look-Up
Present Disc.          3N + 2MN          MN          1             cosh
Ang                    5N + 2MN          MN          1             0
MVSS                   8N + 8            2N + 2      2             0
MRVSS                  14N + 10          4N + 2      2             0
ECVSS                  3N                0           1             exp
[0101] It is clear that the computational costs of the optimization method of the present disclosure are still of O(M). However, when a variable step size is considered, the optimization method of the present disclosure matches ECVSS with respect to computational burden. Additionally, the relatively low number of parameters that must be tuned makes it efficiently deployable and attractive in a broad variety of applications.
VII. Higher Order Hyperbolic Sine Cost Function
[0102] LMF optimization methods use even powers of the instantaneous error as a cost function. Though plagued by stability issues, these optimization methods generally provide a better compromise between convergence speed and steady-state error.
[0103] According to an embodiment of the present disclosure, the cost function is a hyperbolic sine cost function which non-linearly adapts the fourth power of the error as the driving argument, defined as
J(k) = sinh(e^4(k))  (47)
[0104] This is a convex and unimodal function. Its gradient with respect to the filter coefficients yields
∇_w J(k) = -4e^3(k)cosh(e^4(k))x^T(k)  (48)
[0105] where x(k) is the regression vector. To improve the convergence speed, a scale parameter (A > 0) can be introduced to scale the error term in the argument of the hyperbolic sine. Therefore, the modified cost function will be
J(k) = sinh(Ae^4(k))  (49)
while the gradient with the new cost function will be
∇_w J(k) = -4e^3(k)cosh(Ae^4(k))x^T(k)  (50)
[0106] Hence, the stochastic recursive form of the coefficient estimate is given as
w(k+1) = w(k) + μe^3(k)cosh(Ae^4(k))x(k)  (51)
[0107] When there is high instantaneous error, the hyperbolic cosine scales up the step-size and, as a result, leads to fast convergence. However, this may also produce optimization method instability. In order to utilize the large gradient property while maintaining a bounded gradient, thus preserving optimization method stability, the following selecting function may be used:
w(k+1) = w(k) + min{μ_min cosh(Ae^4(k)), μ_max} e^3(k)x(k)  (52)
where μ_max and μ_min are the upper and lower bounds of the step-size, μ, respectively.
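A minimal sketch of the fourth-order update (51)-(52), analogous to the hss_update sketch given earlier; the function name and defaults are again illustrative.

```python
import numpy as np

def hss4_update(w, x, d, mu_min, mu_max, A):
    """One iteration of the fourth-order hyperbolic sine update."""
    e = d - x @ w
    # Selecting function (52): bound the cosh-scaled step at mu_max.
    mu_k = min(mu_min * np.cosh(A * e ** 4), mu_max)
    return w + mu_k * e ** 3 * x, e  # eq. (51) with a bounded step
```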
i. Steady-State Analysis (LMF Family)
[0108] In order to evaluate the steady-state performance of the above-described fourth order hyperbolic sine cost function-based optimization method, derivation of an approximate EMSE may follow the same approach described for the hyperbolic sine error squared cost function, including the associated assumptions (e.g., the energy conservation relation framework and the wide sense stationary channel model assumption).
[0109] Following the derivation, the approximate EMSE for the hyperbolic sine of order four argument is as follows:
[0110] where the leading constant is (45/2^9)√3 + (32/2^15). From the derived EMSE in (53), the following are determined: [0111] The EMSE depends on the even powers of the noise power. [0112] The EMSE is dependent on the tuning parameter A and is further coupled with the high order even power of the noise variance, σ_v^2. [0113] If the tuning parameter A increases such that cosh(Ae^4) > μ_max^LMF/μ_min^LMF for all e^4, then the optimization method will behave like the LMF optimization method with a fixed μ = μ_max. [0114] When A = 0, the EMSE of the optimization method of the present disclosure, according to (53), is equal to the EMSE of the LMF. Hereafter, μ_min will be described as μ_min^LMF and μ_max will be described as μ_max^LMF to reflect the relationship between the optimization method of the present disclosure and the standard LMF optimization method.
ii. Convergence Analysis
[0115] A convergence analysis, performed via the same methodology as described previously for the hyperbolic sine error squared cost function, yields the following two approximate cases for the cost function of the fourth order hyperbolic sine: [0116] 1) if cosh(Ae^4(k)) < μ_max^LMF/μ_min^LMF, then the time-constant is smaller than that of LMF (54); 2) otherwise, the time-constant matches the LMF case (55).
[0119] Since the time-constant in the first case is smaller than the LMF time-constant, the convergence of the optimization method of the present disclosure is faster than the convergence of the LMF optimization method and certain other LMF variants. If the tuning parameter A is not properly chosen, then the first case may not occur and the optimization method of the present disclosure will operate according to the standard LMF optimization method with a fixed μ = μ_max^LMF for all points.
iii. Step-Size Bounds for Stability
[0120] As is common in all gradient-descent optimization methods, the choice of step-size is critical to proper function. To guarantee stability, the step-size should satisfy a series of bounds.
[0121] Equation (52) can be rewritten as:
w(k+1) = w(k) + μ(k)e^3(k)x(k)  (56)
[0122] where μ(k) is given as
μ(k) = min{μ_min^LMF cosh(Ae^4(k)), μ_max^LMF}  (57)
[0123] As before, the mean value of μ(k), i.e., E[μ(k)], must satisfy the following condition:
[0124] Based on (57), the following two cases are presented:
[0125] 1) if cosh(Ae^4(k)) < μ_max^LMF/μ_min^LMF, then
μ(k) = μ_min^LMF · cosh(Ae^4(k))  (59)
[0126] Taking expectations of (59), and using the Taylor series expansion, E[μ(k)] can be approximated as
2) if cosh(Ae^4(k)) ≥ μ_max^LMF/μ_min^LMF, then
μ(k) = μ_max^LMF  (62)
[0129] and the new bound is:
[0130] While this bound matches the LMF case, μ_min^LMF is first chosen as a lower bound.
VIII. Mixed-Norm Hyperbolic Sine Cost Function
[0131] According to an embodiment of the present disclosure, the cost function is a hyperbolic sine cost function which non-linearly adapts the second and fourth error moments as the driving argument. The pursuant optimization method embodies the concept of normalizing the error and combining it with a generic upper bound value for the step-size, wherein μ_max is a generic value rather than a fixed number. One can evaluate μ_max at each iteration as follows:
[0132] where 1 is used to avoid the case of a zero value denominator. By using this generic value, the stability of the optimization method is ensured while improving the convergence speed. Moreover, by introducing a normalized error, the optimization method of the present disclosure is imbued with the stability of LMS and the decreased steady-state error achieved by LMF.
[0133] Normalizing the error will work to balance the mixture between the second order and fourth order moments, compared with traditional approaches where the balance factor is always fixed. A steady-state analysis can be conducted with an approach similar to that deployed for the second order hyperbolic sine cost function and the fourth order hyperbolic sine cost function, independently. Accordingly, the convergence analysis renders the following two cases: [0134] 1) if cosh(Ae^4(k)) < μ_max/μ_min, then the equation will be
w(k+1) = w(k) + μe^3(k)cosh(Ae^4(k))x(k)  (65)
[0135] which matches the LMF equation. [0136] 2) if cosh(Ae^4(k)) ≥ μ_max/μ_min, then the equation will be (66).
[0137] In the second case, the normalized error is as follows: [0138] a) if the error is small, then (67); b) if the error is large, then (68).
[0142] The above approximates a natural mix between the second and fourth moments of the error, driven by the instantaneous error. Specifically, from (67) and (68), instead of controlling the amount of mixing via a single parameter, the error becomes the driver of the mixed norm, where the cost function is a fourth order moment and the error function is normalized.
IX. Simulation Results
[0143] Simulations were carried out for system identification scenarios.
[0144] In the present disclosure, and further in the context of the generalized system identification model, where w^o = [w_0^o, w_1^o, . . . , w_{M-1}^o]^T denotes the unknown system coefficient vector.
[0145] In order to fairly evaluate the optimization method of the present disclosure against others in the literature, different experiments will be conducted based on the simulation environment used in the referenced optimization methods.
Example 1: LMS Family (see Ang and Farhang-Boroujeny, A New Class of Gradient Adaptive Step-Size LMS Algorithms, IEEE Transactions on Signal Processing, Vol. 49, No. 1, 2001; referred to herein as the Ang method and incorporated by reference in its entirety)
[0146] According to an embodiment of the present disclosure, the adaptive filter and the unknown system are both of order 16, the input signal is zero-mean white Gaussian noise (of unit variance), the desired signal is corrupted by AWGN with zero-mean, and the SNR is 20 dB. The multiplicative optimization method, as recommended by Ang and Farhang-Boroujeny, was used in place of the linear counterpart optimization method. The Ang method parameter values are as follows: α = 0.95 and ρ = 2×10^-4. Further, the Ang method was initialized with μ_max to provide a high initial convergence speed. The optimization method of the present disclosure used a tuning parameter A = 10, and μ_min^LMS was chosen such that a steady-state misadjustment level similar to that of the Ang method was achieved. Similar to the Ang method, μ_max was used as the upper bound.
In testing the optimization method's tracking ability, a sudden change was introduced at iteration 4000 by multiplying all filter coefficients of the unknown system by -1.
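A hypothetical harness reproducing the Example 1 conditions with the hss_update sketch given earlier; the step-size bounds, random seed, and the unit-norm scaling of the unknown system are assumptions made only so the sketch runs.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, A = 16, 8000, 10.0
w_o = rng.standard_normal(M)
w_o /= np.linalg.norm(w_o)     # unit-norm system so the signal power is 1
w = np.zeros(M)
x_buf = np.zeros(M)
mu_min, mu_max = 1e-4, 1e-2    # placeholder step-size bounds
sigma_v = 10 ** (-20 / 20)     # 20 dB SNR against unit signal power

sq_err = np.empty(N)
for k in range(N):
    if k == 4000:
        w_o = -w_o             # sudden change: all coefficients times -1
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = rng.standard_normal()  # zero-mean, unit-variance white input
    d = x_buf @ w_o + sigma_v * rng.standard_normal()
    w, e = hss_update(w, x_buf, d, mu_min, mu_max, A)
    sq_err[k] = e ** 2         # averaged over runs, this yields the MSE curve
```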
Example 2: LMS Family (see Aboulnasr and Mayyas, A Robust Variable Step-Size LMS-Type Algorithm: Analysis and Simulations, IEEE Transactions on Signal Processing, Vol. 45, No. 3, 1997; referred to herein as the MVSS method and incorporated by reference in its entirety)
[0148] According to an embodiment of the present disclosure, the adaptive filter and the unknown system are both of order 4, the input signal is zero-mean white Gaussian noise, the desired signal is corrupted by AWGN with zero-mean, and the SNR is 30 dB. The parameters of the MVSS method are assigned as follows: α = 0.97, β = 0.99, γ = 1, μ_max = 0.1, μ_min = 5×10^-4. The MVSS method was initialized with μ_max to provide a high initial convergence speed. For the optimization method of the present disclosure, the tuning parameter was set to A = 120, and μ_min^LMS was chosen to give a steady-state misadjustment level similar to that obtained by the MVSS method. At iteration 3000, an abrupt change was introduced, similar to Example 1.
Example 3: LMS Family (see Zhao, et al., A Fast Variable Step-Size LMS Algorithm with System Identification, 2nd IEEE Conference on Industrial Electronics and Applications, 2007; referred to herein as the MRVSS method and incorporated by reference in its entirety)
[0150] According to an embodiment of the present disclosure, the adaptive filter and the unknown system are both of order 4, the input signal is zero-mean white Gaussian noise, the desired signal is corrupted by AWGN with zero-mean, and the SNR is 30 dB. The parameters of the MRVSS method are assigned as follows: α = 0.97, β = 0.995, b = 1×10^-5, μ_max = 0.1. The MRVSS method was initialized with μ_max to provide a high initial convergence speed. For the optimization method of the present disclosure, the tuning parameter was set to A = 100, and μ_min^LMS was chosen to give an acceptable steady-state misadjustment level, while μ_max^LMS matched the μ_max used in the MRVSS method. At iterations 3000, 5000, 7000, and 9000, all coefficients of the unknown system were multiplied by -1 in order to test the optimization method's ability to track sudden changes, similar to Example 1.
Example 4: LMS Family (see Rusu and Cowan, The Exponentiated Convex Variable Step-Size (ECVSS) Algorithm, Signal Processing, Vol. 90, No. 9, 2010; referred to herein as the ECVSS method and incorporated by reference in its entirety)
[0152] According to an embodiment of the present disclosure, the adaptive filter and the unknown system are both of order 32, with an impulse response of H(z) = Σ_{n=0}^{31} a^n z^{-n}, where a = 0.80025. All coefficients were normalized to |H(z)|. The input signal is a zero-mean random bipolar sequence from {-1, +1}, the desired signal is corrupted by AWGN with zero-mean, and the SNR is 30 dB. In order to establish a steady-state misadjustment level similar to the optimization method of the present disclosure, while maintaining rapid convergence, the A parameter of the ECVSS method was set to 35. Per Rusu and Cowan, μ_max = 0.008565 and μ_min = 0.0008565.
[0153] The optimization method of the present disclosure was given a tuning parameter A = 100, with the same μ_min^LMS as the ECVSS method, such that a similar steady-state misadjustment level would be achieved. Following 4000 iterations, all the coefficients of the unknown system were multiplied by -1 in order to test each optimization method's ability to track sudden changes, similar to Example 1.
Example 5: LMF Family (Standard LMF)
[0155] According to an embodiment of the present disclosure, the adaptive filter and the unknown system are both of order 5. The input signal is a uniform zero-mean random bipolar sequence from {-1, +1}, the desired signal is corrupted by sub-Gaussian noise, and the SNR is 10 dB. The step size, μ, was set to 0.001. The optimization method of the present disclosure was assigned a scaling parameter A = 100 and μ_max = 0.01. In order to achieve the same steady-state misadjustment across the optimization methods, μ_min was set to 0.001, matching the LMF μ.
Example 6: LMF Family (see bin Mansoor, et al., Stochastic Gradient Algorithm Based on an Improved Higher Order Exponentiated Error Cost Function, Asilomar Conference on Signals, Systems and Computers, 2014; referred to herein as the EELMF method and incorporated by reference in its entirety)
[0157] According to an embodiment of the present disclosure, the adaptive filter and the unknown system are both of order 5. The input signal is bipolar {-1, +1}, the desired signal is corrupted by sub-Gaussian noise with zero-mean, and the SNR is 10 dB. In order to maintain stability, the maximum scaling parameter used in the EELMF method is k = 0.14. The step size, μ, was set to 0.001. In the optimization method of the present disclosure, the tuning parameter was set to A = 100, and μ_min^LMF was chosen to be similar to that of the EELMF method to ensure a similar steady-state misadjustment level. μ_max^LMF was set to 0.01.
[0160] To test this tracking ability, the maximum scaling parameter of the EELMF method was set to k=0.009 and an experiment following the same approach to
Example 7: LMF Family (see Zerguine, et al., A Hybrid LMS-LMF Scheme for Echo Cancellation, IEEE International Conference on Acoustics, Speech, and Signal Processing, 1997; referred to herein as the LMS-LMF Type II method and incorporated by reference in its entirety)
[0162] According to an embodiment, and similar to the optimization method of the present disclosure, the LMS-LMF Type II method employs two different step sizes. For the evaluation, the adaptive filter and the unknown system are both of order 16. The input signal is white Gaussian with zero-mean, the desired signal is corrupted by sub-Gaussian noise with zero-mean, and the SNR is 10 dB. For the LMS-LMF Type II method, μ_1 is set to 0.03 and μ_2 is set to 0.001. For the optimization method of the present disclosure, the tuning parameter A is set to 100, μ_max^LMF = 0.002 and μ_min^LMF = 0.001. At iteration 7,000, all filter coefficients were multiplied by -1.
Example 8: Mixed-Norm Family (see LMS-LMF Type II; see Sayin, et al., A Novel Family of Adaptive Filtering Algorithms Based on the Logarithmic Cost, IEEE Transactions on Signal Processing, Vol. 62, No. 17, 2014; referred to herein as the Logarithmic method and incorporated by reference in its entirety)
[0164] Next, an evaluation of the optimization method of the present disclosure, compared with a logarithmic cost function method (the Logarithmic method) and the LMS-LMF Type II method, is presented.
[0165] According to an embodiment of the present disclosure, for all systems, the adaptive filter and the unknown system are of order 5. The input signal is a uniform zero-mean random bipolar sequence from {-1, +1}, the desired signal is corrupted by sub-Gaussian noise, and the SNR is 10 dB. For the reference optimization methods, the step-size, μ, was set to 0.001. For the optimization method of the present disclosure, the scaling parameter A was set to 100 and μ was tuned such that all three optimization methods would have the same steady-state misadjustment level in order to have a fair comparison. For the LMS-LMF Type II method, μ_1 was selected to maximize convergence speed while μ_2 was selected in order to reach the same steady-state misadjustment level as the other optimization methods.
Example 9: Mixed-Norm Family
[0167] Next, according to an embodiment of the present disclosure, the robustness of the optimization method of the present disclosure under three types of noise distributions was evaluated. All noise distributions (Gaussian, Uniform, and Laplacian) have the same noise power. A similar evaluation approach to Example 5 was employed.
[0168] According to an embodiment of the present disclosure, the adaptive filter is of order 5. The input signal is a uniform zero-mean random bipolar sequence from {-1, +1}, the desired signal is corrupted with the noise distributions described above, and the SNR is 10 dB.
[0171] In an example, the optimization method of the present disclosure can be applied to estimating the impulse response of a small audio loudspeaker for determining the combined impulse response of a loudspeaker/room/microphone sound propagation path, wherein the loudspeaker and microphone are to be used in active noise control tasks. In other instances, and in a non-limiting manner, the optimization method of the present disclosure can be applied to adaptive control, electrical power, adaptive noise cancelling, echo cancellation for long-distance transmission, and acoustic echo cancellation.
[0172] In another embodiment, the optimization methods of the present disclosure may be applied to channel identification. For example, it is known that communications are often transmitted from one point to another via a medium such as an electrical wire, optical fiber, or wireless radio link. Non-idealities of the transmission medium, or channel, distort the fidelity of the transmitted signals, making deciphering the received information difficult. In cases where the effects of the distortion can be modeled as a linear filter, the resulting smearing of the transmitted symbols is known as inter-symbol interference (ISI). In these cases, an adaptive filter can be used to model the effects of the channel ISI for the purpose of deciphering the received information in an optimal manner. In this scenario, the transmitter sends to the receiver a sample sequence x(n) that is known to both the transmitter and receiver. The receiver then attempts to model the received signal d(n) using an adaptive filter whose input is the known transmitted sequence x(n). After a suitable period of adaptation, or optimization, via the selected optimization method of the present disclosure, the parameters of the adaptive filter W(n) may be fixed and then used to decode future signals transmitted across the channel.
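As an illustration of this channel identification scenario (hypothetical channel taps, noise level, and parameters, reusing the hss_update sketch defined earlier):

```python
import numpy as np

rng = np.random.default_rng(2)
channel = np.array([0.9, 0.4, -0.2])        # hypothetical ISI channel taps
M = len(channel)
train = rng.choice([-1.0, 1.0], size=5000)  # x(n), known to both ends
received = np.convolve(train, channel)[: train.size]  # d(n) before noise
received += 0.03 * rng.standard_normal(train.size)    # channel noise

w = np.zeros(M)
x_buf = np.zeros(M)
for n in range(train.size):
    x_buf = np.roll(x_buf, 1)
    x_buf[0] = train[n]                     # most recent transmitted symbol
    w, _ = hss_update(w, x_buf, received[n], 1e-4, 1e-2, 10.0)
print(np.round(w, 2))                       # converges toward the channel taps
```

Once adapted, w would be frozen and used to model the channel when decoding subsequent transmissions.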
[0173] Next, with reference to
[0174] Further, the claimed advancements may be provided as a utility application, background daemon, or component of an operating system, or combination thereof, executing in conjunction with CPU 1700 and an operating system such as Microsoft Windows 7, UNIX, Solaris, LINUX, Apple MAC-OS and other systems known to those skilled in the art.
[0175] The hardware elements in order to achieve the device may be realized by various circuitry elements, known to those skilled in the art. For example, CPU 1700 may be a Xeon or Core processor from Intel of America or an Opteron processor from AMD of America, or may be other processor types that would be recognized by one of ordinary skill in the art. Alternatively, the CPU 1700 may be implemented on an FPGA, ASIC, PLD or using discrete logic circuits, as one of ordinary skill in the art would recognize. Further, CPU 1700 may be implemented as multiple processors cooperatively working in parallel to perform the instructions of the inventive processes described above.
[0177] The device in
[0178] The device further includes a display controller 1708, such as a NVIDIA GeForce GTX or Quadro graphics adaptor from NVIDIA Corporation of America for interfacing with display 1710, such as a Hewlett Packard HPL2445w LCD monitor. A general purpose I/O interface 1712 interfaces with a keyboard and/or mouse 1714 as well as a touch screen panel 1716 on or separate from display 1710. General purpose I/O interface also connects to a variety of peripherals 1718 including printers and scanners, such as an OfficeJet or DeskJet from Hewlett Packard.
[0179] A sound controller 1720 is also provided in the device, such as Sound Blaster X-Fi Titanium from Creative, to interface with speakers/microphone 1722 thereby providing sounds and/or music.
[0180] The general purpose storage controller 1724 connects the storage medium disk 1704 with communication bus 1726, which may be an ISA, EISA, VESA, PCI, or similar, for interconnecting all of the components of the device. A description of the general features and functionality of the display 1710, keyboard and/or mouse 1714, as well as the display controller 1708, storage controller 1724, network controller 1706, sound controller 1720, and general purpose I/O interface 1712 is omitted herein for brevity as these features are known.
[0181] Obviously, numerous modifications and variations are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.
[0182] Thus, the foregoing discussion discloses and describes merely exemplary embodiments of the present invention. As will be understood by those skilled in the art, the present invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting of the scope of the invention, as well as other claims. The disclosure, including any readily discernible variants of the teachings herein, defines, in part, the scope of the foregoing claim terminology such that no inventive subject matter is dedicated to the public.