Adaptive filter and method of operating an adaptive filter
10363765 · 2019-07-30
Assignee
Inventors
CPC classification
B41M3/144
PERFORMING OPERATIONS; TRANSPORTING
G10L19/018
PHYSICS
H04B3/23
ELECTRICITY
International classification
H04L25/03
ELECTRICITY
H04B3/23
ELECTRICITY
Abstract
The present application relates to an adaptive filter using resource sharing and a method of operating the adaptive filter. The filter comprises at least one computational block, a monitoring block and an offset calculation block. The computational block is configured for adjusting a filter coefficient, c_i(n), in an iterative procedure according to an adaptive convergence algorithm. The monitoring block is configured for monitoring the development of the determined filter coefficient, c_i(n), during the performing of the iterative procedure. The offset calculation block is configured for determining an offset, Off_i, based on a monitored change of the filter coefficient, c_i(n), each first time period, T_1, and for outputting the determined offset, Off_i, to the computational block if the determined filter coefficient, c_i(n), has not reached the steady state. The computational block is configured to accept the determined offset, Off_i, and to inject the determined offset, Off_i, into the iterative procedure.
Claims
1. A method of iterative determination of an offset filter coefficient of an adaptive filter using resource sharing, comprising: adjusting a filter coefficient to form the offset filter coefficient in an iterative procedure comprising: determining a determined filter coefficient from the filter coefficient by sharing a shared resource configured to execute an adaptive convergence algorithm on the filter coefficient and at least one other filter coefficient, wherein a sharing factor equals a number of filter coefficients of the at least one other filter coefficient plus one; determining an offset based on a monitored change of the determined filter coefficient and the sharing factor; and adding the offset to the determined filter coefficient, at a first time period, if the determined filter coefficient has not reached a steady state.
2. The method according to claim 1, wherein the determined filter coefficient is determined over a second time period, wherein the first time period is an integer multiple of the second time period.
3. The method according to claim 2, wherein the second time period, T_2, is an integer multiple N of a sampling period, T_s, wherein T_2=N·T_s, N>1; and wherein the first time period, T_1, is an integer multiple M of the second time period, T_2, wherein T_1=M·T_2, M>1.
4. The method according to claim 1, further comprising: determining the determined filter coefficient at a beginning of the first time period.
5. The method according to claim 1, further comprising: determining a change to the determined filter coefficient at an ending of the first time period and at the ending of at least one other first time period; and comparing the change to a threshold value to determine if the determined filter coefficient has reached the steady state.
6. The method according to claim 1, wherein the adaptive filter has a filter order; and the adaptive filter comprises a number of computational blocks being less than the filter order, wherein the computational blocks are configured to adjust a respective filter coefficient.
7. An adaptive filter using resource sharing, said filter comprising: at least one shared computational block configured for adjusting a filter coefficient and at least one other filter coefficient in an iterative procedure according to an adaptive convergence algorithm, wherein adjusting the filter coefficient determines a determined filter coefficient, and a sharing factor equals a number of filter coefficients of the at least one other filter coefficient plus one; a monitoring block configured for determining a change to the determined filter coefficient during the iterative procedure; and an offset calculation block configured to determine an offset based on the sharing factor and a monitored change of the determined filter coefficient during a first time period and to output the offset to the at least one shared computational block if the change to the determined filter coefficient has not reached a steady state, wherein the computational block is configured to receive the offset and to inject the offset into the iterative procedure.
8. The adaptive filter according to claim 7, wherein the monitoring block is further configured for monitoring the development of the determined filter coefficient over a second time period, wherein the first time period is an integer multiple of the second time period.
9. The adaptive filter according to claim 7, wherein the second time period, T_2, is an integer multiple N of a sampling period, T_s, wherein T_2=N·T_s, N>1; and wherein the first time period, T_1, is an integer multiple M of the second time period, T_2, wherein T_1=M·T_2, M>1.
10. The adaptive filter according to claim 7, wherein the monitoring block is further configured for determining the change to the determined filter coefficient at the beginning of the first time period.
11. The adaptive filter according to claim 7: wherein the monitoring block is further configured for determining the change of the determined filter coefficient at the ending of the first time period; and comparing the change to a threshold value to determine if the determined filter coefficient has reached the steady state.
12. The adaptive filter according to claim 7: wherein the adaptive filter has a filter order; and the adaptive filter comprises a number of shared computational blocks being less than the filter order.
13. A computer program product comprising a non-transitory computer readable medium carrying instructions, which, when executed on one or more processing devices, cause the one or more processing devices to perform a method according to claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The accompanying drawings, which are incorporated herein and form a part of the specification, illustrate the present invention and, together with the description, further serve to explain the principles of the invention and to enable a person skilled in the pertinent art to make and use the invention.
DETAILED DESCRIPTION
(16) Embodiments of the present disclosure will be described below in detail with reference to drawings. Note that the same reference numerals are used to represent identical or equivalent elements in figures, and the description thereof will not be repeated. The embodiments set forth below represent the necessary information to enable those skilled in the art to practice the invention. Upon reading the following description in light of the accompanying drawing figures, those skilled in the art will understand the concepts of the invention and will recognize applications of these concepts not particularly addressed herein. It should be understood that these concepts and applications fall within the scope of the disclosure and the accompanying claims.
(17) Referring now to
(18) Referring now to
(19) Assuming a linear relationship between the input signal s(n) and the output signal y(n), the adaptive filter can take the form of a finite-impulse-response (FIR) filter as exemplified herein with reference to the figures.
(20) y(n)=C(n)^T·S(n)
(21) where S(n)=[s(n), s(n−1), . . . , s(n−L+1)]^T is the input signal vector.
(22) As shown in
(23) The adaptive convergence algorithm of an adaptive filter for adjusting the filter coefficients c.sub.i(n) is performed to minimize a cost function selected with respect to a respective use case of the adaptive filter. The adjusting of the filter coefficients c.sub.i(n) is performed in an iterative procedure:
C(n+1)=C(n)+μ(n)·G(e(n),S(n),Θ(n))
(24) where G(e(n),S(n),Θ(n)) is a nonlinear vector function, μ(n) is the so-called step size, e(n) is the error signal and S(n) is the input signal vector. Θ(n) is a vector of states that may be used to describe pertinent information about the characteristics of the input signal, error signal and/or filter coefficients.
(25) The adaptive filter comprises a coefficient-adjusting module 125, which performs the aforementioned adaptive convergence algorithm. At least the error signal e(n) and the input signal vector S(n) are input to the coefficient-adjusting module 125, which may further comprise at least L memory storage locations to store the filter coefficients c_i(n) and to supply the stored filter coefficients c_i(n) for generating the output signal y(n). Further parameters required by the adaptive convergence algorithm implemented in the coefficient-adjusting module 125, such as the step size μ(n), may be predefined and/or configurable.
(26) Least mean squares (LMS) functions are used in a class of adaptive filters to mimic a desired filter by finding the filter coefficients that relate to producing the least mean squares of the error signal e(n) (difference between the desired and the actual signal). It is a stochastic gradient descent methodology in that the filter coefficients are only adapted based on the error signal at a current time.
(27) In particular, LMS algorithms are based on the steepest descent methodology to find filter coefficients, which may be summarized as follows:
C(n+1)=C(n)+μ·e(n)·S(n)
c_i(n+1)=c_i(n)+μ·e(n)·s(n−i)
(28) where C(n)=[c_0(n), c_1(n), . . . , c_{L−1}(n)]^T, S(n)=[s(n), s(n−1), . . . , s(n−L+1)]^T, μ is the step size and L is the order of the filter.
(29) The filter coefficients are determined in an iterative procedure starting with initial values of the filter coefficients C^init(n)=[c_0^init(n), c_1^init(n), . . . , c_{L−1}^init(n)]^T. The initial values are predefined. In a non-limiting example, the initial values of the filter coefficients c_i^init(n) may be set to zero, i.e. C^init(n)=[0, 0, . . . , 0]^T=zeros(L), but non-zero initial values are likewise possible. As the LMS algorithm does not use the exact values of the expectations, the filter coefficients never reach the optimal convergence values in an absolute sense, but convergence in the mean is possible. Even though the filter coefficients may change by small amounts, they change about the convergence values. The value of the step size μ should be chosen properly. In the following, a filter coefficient changing by small amounts about its optimal convergence value will be referred to as a filter coefficient which has reached steady state.
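The iterative LMS procedure described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: the unknown system, the white-noise input and the step size μ=0.05 are hypothetical values chosen for the sketch.

```python
import numpy as np

def lms_step(c, s_vec, d, mu):
    """One LMS iteration: y = c^T S(n), e = d - y, c <- c + mu * e * S(n)."""
    e = d - float(np.dot(c, s_vec))
    return c + mu * e * s_vec, e

rng = np.random.default_rng(0)
target = np.array([0.5, -0.3, 0.1])   # hypothetical unknown system to identify
L = len(target)
c = np.zeros(L)                       # zero initial coefficients, C^init = zeros(L)
x = rng.standard_normal(2000)         # white-noise input signal s(n)
for n in range(L - 1, len(x)):
    s_vec = x[n - L + 1:n + 1][::-1]  # S(n) = [s(n), s(n-1), ..., s(n-L+1)]
    d = float(np.dot(target, s_vec))  # desired signal from the unknown system
    c, e = lms_step(c, s_vec, d, mu=0.05)

print(np.round(c, 2))                 # coefficients have converged toward target
```

With a properly chosen step size the coefficient vector settles about the optimal convergence values, i.e. it reaches steady state in the sense defined above.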
(30) An LMS computational block 120.0 to 120.L−1 may be arranged for each filter coefficient c_i(n) of the adaptive filter shown in the figures.
(31) Referring now to
(32) For the sake of illustration, the exemplary adaptive filter schematically shown in
(33) Referring now to
(34) With reference to
(35) This means that, when using computational resource sharing, each filter coefficient is updated every k-th iteration in time, here k=2. In general, the computational resource sharing may be implemented with higher values of k, which will be denoted as the sharing factor k in the following, wherein k is an integer and k>1. The number of LMS computational blocks corresponds to the filter order L=6 divided by the sharing factor k=2: L/k=3. The exemplified adaptive filter comprises the three LMS computational blocks 120.1, 120.3 and 120.5.
(36) Those skilled in the art understand that the above-described resource sharing scheme is only an illustrative scheme to improve the understanding of the concept of the present application but not intended to limit the present application.
(37) The adjusting of filter coefficients in an adaptive filter requires computational blocks configured to carry out the adaptive convergence algorithm. Each computational block is enabled to perform the adjusting procedure of one filter coefficient c_i(n) in one cycle. Therefore, the number of computational blocks in traditional adaptive filters corresponds to the order L of the adaptive filter or the number of tapped delay signals s(n−i) provided by the tapped-delay-line. In adaptive filters using computational resource sharing, the number of computational blocks is less than the order L of the adaptive filter. Accordingly, only a subset of the filter coefficients is adjusted in one cycle. In an example of the present application, the number of filter coefficients is an integer multiple of the number of filter coefficients in each subset. The integer multiple corresponds to the sharing factor k.
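The subset scheduling described above can be sketched as follows. The filter order L=6 and sharing factor k=2 mirror the example in this disclosure, while the concrete round-robin block-to-coefficient assignment is a hypothetical scheme for illustration only.

```python
# Hypothetical round-robin schedule: L/k shared blocks update a subset of
# L/k coefficients per cycle, so each coefficient is updated every k-th cycle.
L, k = 6, 2                      # filter order and sharing factor (example values)
blocks = L // k                  # number of shared LMS computational blocks
cycles = 8
updates = {i: [] for i in range(L)}
for cycle in range(cycles):
    for b in range(blocks):
        i = b * k + (cycle % k)  # coefficient served by block b in this cycle
        updates[i].append(cycle)

# every coefficient is touched cycles/k times, i.e. at 1/k of the full rate
print({i: len(v) for i, v in updates.items()})
```

The reduced update rate per coefficient is exactly what lowers the convergence rate and motivates the offset injection described below in the disclosure.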
(38) As schematically illustrated in and described with reference to
(39) The reduced rate of convergence is further illustrated with reference to
(40) In order to improve the convergence rate, an offset is determined based on the monitored development of the filter coefficient c_i(n). The determined offset is injected into the filter coefficient c_i(n) at predefined periods of time. The injection of the determined offset is stopped in case the filter coefficient c_i(n) varies about the convergence value, which is indicative of the filter coefficient c_i(n) having reached the steady state.
(41) The offset Off_i is determined based on a value difference of the filter coefficient c_i(n). The value difference Δ_i is determined over a period of time N·T_s, where T_s is the sampling time and f_s is the sampling frequency of the adaptive filter (T_s=1/f_s) and N is a predetermined integer value, N>1, wherein
(42) Δ_i=c_i(n+N)−c_i(n)
(43) The value difference Δ_i is first determined with n=0, wherein n is a sampling index relating to the sampling time T_s. The sampling index n=0 is indicative of the start of the filter coefficient adjusting procedure.
(44) The value difference Δ_i is further determined each period of time M·N·T_s after the potential injection of the offset Off_i. The value difference Δ_i(j) is hence determined at n=j·M·N, where j=1, 2, 3 . . . (t=n·T_s). Accordingly,
(45) Δ_i(j)=c_i(j·M·N)−c_i(j·M·N−N)
(46) At each period of time M·N·T_s, the offset Off_i is added to the filter coefficient c_i(n) provided that the current slope of the development of the filter coefficient c_i(n) is not below a predefined threshold. A current slope of the development of the filter coefficient c_i(n) below the predefined threshold is considered to be indicative of a filter coefficient c_i(n) slightly varying about the optimal convergence value, in which case no offset is added.
(47) The offset Off_i(j) is determined based on the value difference Δ_i(j) and the sharing factor k:
(48) Off_i(j)=(k−1)·M·Δ_i(j)
(49) When using the above vector representation, the offset Off(j) can be described as follows:
(50) Off(j)=(k−1)·M·Δ(j)
(51) wherein C(n)=[c_0(n), c_1(n), . . . , c_{L−1}(n)]^T and Δ(j)=[Δ_0(j), Δ_1(j), . . . , Δ_{L−1}(j)]^T.
(52) Hence, the offset can be written as follows:
(53) Off(j)=(k−1)·M·(C(j·M·N)−C(j·M·N−N))
(54) and the injection can be written as follows:
C(n)=C(n)+Off(j)
(55) As schematically illustrated in
(57) The adaptive convergence algorithm is performed at each cycle, whereas offsets are injected on a periodic basis having a period greater than the iteration cycle of the adaptive convergence algorithm.
(58) It should be noted that the current slopes of the development of the filter coefficient c_i(n) at the above-mentioned points in time (j+1)·M·N·T_s, j=0, 1, 2 . . . are positive such that the offset Off_i(j) is added. Otherwise, if the slope is negative, the offset Off_i(j) would be subtracted. As described below, the slope may be approximated by a difference quotient determined with respect to a predefined period of time, which may be shorter than the injection period.
(60) The periodic injection of the offset is well recognizable in
(61) The convergence rate of the adaptive filter with computational resource sharing and offset injection substantially corresponds to the convergence rate of the adaptive filter without computational resource sharing. The convergence rate of the adaptive filter with computational resource sharing (and without offset injection) is significantly lower.
(62) Referring now to
(63) The improved convergence rate of a filter coefficient c_i(n) is obtained by an offset value Off_i(j), which is added or subtracted at a predetermined period of time. The adding or subtracting of the offset value Off_i(j) depends on a current slope of the development of the filter coefficient c_i(n); in particular, the adding or subtracting depends on whether the slope is positive (rising filter coefficient) or negative (falling filter coefficient). The offset value Off_i(j) is based on a periodically determined value difference Δ_i(j), wherein j is an index to the periods.
(64) The method operates with respect to the sampling time T_s and the sampling index n, respectively, wherein time t=n·T_s and n=0, 1, 2 . . . .
(65) In an initial operation S100, the adaptive convergence algorithm is initialized and values are assigned for a first time period T_1 and a second time period T_2. Typically, the sample index n is set to n=n_0 and the initial value of the filter coefficient c_i(n) is set to c_i(n)=c_i^init(n). In an example, the sample index n is set to n_0=0. In an example, the initial value of the filter coefficient c_i(n) is set to c_i(n)=0. The sharing factor k is predefined by the implementation of the adaptive filter with computational resource sharing. A threshold value TH is assigned, which allows determining whether or not the filter coefficient c_i(n) has reached the steady state.
(66) The first time period T_1 is defined with respect to two parameters N and M, where N and M are integers and N>1, M>1. For instance, the first time period T_1=N·M·T_s. The second time period T_2 is defined with respect to the parameter N. For instance, the second time period T_2=N·T_s. In an example, the parameter N is greater than the sharing factor k (N>k). The second time period T_2 occurs M times in the first time period T_1.
(67) In an operation S110, the sample index is increased by one (n=n+1).
(68) In an operation S120, the development of the filter coefficient c.sub.i(n) is monitored. The development is monitored on the basis of the change of the value of the filter coefficient c.sub.i(n) developing over time. For instance, a slope is determined from the value of the filter coefficient c.sub.i(n), in particular with respect to the second time period T.sub.2.
(69) In an operation S130, it is determined whether or not an offset is injected into the iteration of the filter coefficient c_i(n). In particular, such an offset is injected only each first time period T_1. More particularly, the offset is only injected in case the filter coefficient c_i(n) has not reached the steady state, e.g. in case an absolute value of the monitored slope exceeds the predefined threshold TH, which is considered as indicative of the filter coefficient c_i(n) still significantly differing from the optimal convergence value. The offset to be injected is based on the monitored slope and further on the sharing factor.
(70) In an operation S140, the iteration to calculate the filter coefficient c.sub.i(n) is performed. In accordance with the present example, the filter coefficient c.sub.i(n) is determined using the LMS algorithm:
c_i(n+1)=c_i(n)+μ·e(n)·s(n−i)
(71) The step size μ may be predefined in the initial operation. For the sake of completeness, it should be noted that the step size may be a variable parameter dependent on the sampling index n: μ=μ(n).
(72) Referring now to
(73) In an operation S200, the filter coefficient c.sub.i(n) is monitored based on the development of the value of the filter coefficient c.sub.i(n) within the first time period T.sub.1. For instance, a slope or a value difference is determined at least at the beginning of each first time period T.sub.1 and the ending of each first time period T.sub.1. The slope or value difference is determined from the change of the values of the filter coefficient c.sub.i(n), e.g. over the second time period T.sub.2.
(74) In an operation S210, it is determined whether or not the second time period T_2 has lapsed. For instance, if the current sampling index n is a multiple of N and n is not zero (n>0), then the second time period T_2 has lapsed, as indicated by the following condition:
n mod N=0
(75) In case the second time period T_2 has lapsed, a slope or value difference is determined in an operation S220. The slope is determined based on the change of the filter coefficient value c_i(n) over time/sampling index. In an example, the slope ∇c_i is determined based on the values of the filter coefficient c_i(n) and the filter coefficient c_i(n−N) at sampling indices n and n−N:
(76) ∇c_i=(c_i(n)−c_i(n−N))/N
(77) Alternatively, the value difference Δ_i may be determined, which should be considered as an equivalent value to the aforementioned slope:
Δ_i=c_i(n)−c_i(n−N)=N·∇c_i
(78) In an operation S230, it is determined whether or not the determined slope ∇c_i or change Δ_i relates to the beginning of the first time period T_1, for instance the first occurrence of the second time period T_2 in the first time period T_1:
(n−N) mod (N·M)=0
(79) If the determined slope or value difference relates to the beginning of the first time period T_1, then the slope ∇c_i or change Δ_i may be stored in an operation S240 for later use. The stored slope ∇c_i* or change Δ_i* is used for determining the offset.
(80) In an operation S250, the monitoring of the development of the filter coefficient c.sub.i(n) is completed.
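The monitoring schedule of operations S210 to S240 can be sketched as follows; N=4 and M=3 are illustrative values chosen for the sketch, not values taken from the disclosure.

```python
# Every T2 = N*Ts samples (n mod N == 0) a slope/value difference is formed;
# the one at the beginning of each T1 = M*N*Ts period, identified by
# (n - N) mod (N*M) == 0, is the stored value used later for the offset.
N, M = 4, 3                     # illustrative parameters, N > 1, M > 1
computed, stored = [], []
for n in range(1, 2 * N * M + 1):
    if n % N == 0:              # operation S210: T2 has lapsed
        computed.append(n)      # operation S220: compute slope / difference
        if (n - N) % (N * M) == 0:
            stored.append(n)    # operation S240: store for the offset computation

print(computed, stored)
```

The value difference is thus formed M times per first time period, but only the first one of each period is retained for the offset determination.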
(81) Referring now to
(82) In an operation S300, injecting an offset into the iteration of the filter coefficient c.sub.i(n) is performed provided that the filter coefficient c.sub.i(n) has not reached steady state.
(83) In an operation S310, it is determined whether or not the first time period T_1 has lapsed. For instance, if the current sampling index n is a multiple of N·M and n is not zero (n>0), then the first time period T_1 has lapsed, as indicated by the following condition:
n mod (N·M)=0
(84) If the first time period T_1 has lapsed, the offset Off_i is determined in an operation S320. The offset Off_i is based on the stored slope ∇c_i* or change Δ_i* to consider the development of the filter coefficient c_i(n) over the first time period T_1. The offset Off_i is further based on the sharing factor k, which makes it possible to take into account the reduced convergence rate caused by the computational resource sharing of the adaptive filter. For instance,
Off_i=(k−1)·∇c_i*·M·N; or
Off_i=(k−1)·Δ_i*·M
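Since the value difference and the slope are equivalent via Δ_i*=N·∇c_i*, the two offset expressions above agree. A quick numeric check, where k, M, N and the coefficient samples are illustrative assumptions:

```python
k, M, N = 4, 8, 16                      # sharing factor and period parameters (example)
c_before, c_now = 0.20, 0.26            # c_i(n-N) and c_i(n), illustrative values
delta_star = c_now - c_before           # stored value difference over T2
slope_star = delta_star / N             # equivalent stored slope
off_from_slope = (k - 1) * slope_star * M * N
off_from_delta = (k - 1) * delta_star * M
print(off_from_slope, off_from_delta)   # both forms yield the same offset
```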
(85) As aforementioned, the offset Off.sub.i is injected into the iteration of the filter coefficient c.sub.i(n) if the filter coefficient c.sub.i(n) has not reached steady state.
(86) In an operation S330, the current slope ∇c_i or the current value difference Δ_i is compared against the predefined threshold TH. The current slope ∇c_i is for instance determined from a difference quotient based on the filter coefficient c_i(n) at different points in time, e.g. points in time n and (n−N). The current value difference Δ_i is for instance determined from a value difference based on the filter coefficient c_i(n) at different points in time, e.g. points in time n and (n−N). In an example, the current slope ∇c_i is the slope determined by the previous operation relating to the monitoring of the filter coefficient c_i(n). In an example, the current value difference Δ_i is the value difference determined by the previous operation relating to the monitoring of the filter coefficient c_i(n).
|∇c_i|<TH_∇c; or
|Δ_i|<TH_Δ
(87) wherein TH_Δ=TH_∇c·N in the present example.
(88) If the absolute value of the current slope ∇c_i or the current value difference Δ_i is less than (or equal to) the predefined threshold (TH_∇c and TH_Δ, respectively), it is assumed that the filter coefficient c_i(n) has reached the steady state and only slightly varies about the optimal convergence value. In this case, the offset Off_i is not injected.
(89) Otherwise, if the absolute value of the current slope ∇c_i or the current value difference Δ_i is greater than the predefined threshold, the offset Off_i is injected into the iterative calculation of the filter coefficient c_i(n) in an operation S340; for instance:
c_i(n)=c_i(n)+Off_i
(90) In an operation S350, the injecting of an offset is completed.
(91) Referring now to
(92) In an operation S300, injecting an offset into the iteration of the filter coefficient c.sub.i(n) is performed provided that the filter coefficient c.sub.i(n) has not reached steady state.
(93) In an operation S310, it is determined whether or not the first time period T.sub.1 has lapsed. If the first time period T.sub.1 has lapsed, the offset Off.sub.i is determined in an operation S320.
(94) In an operation S330, the current slope ∇c_i or the current change Δ_i is compared against the predefined threshold TH (TH_∇c and TH_Δ, respectively).
(95) If the absolute value of the current slope ∇c_i or the current value difference Δ_i is less than (or equal to) the predefined threshold, it is assumed that the filter coefficient c_i(n) has reached the steady state and only slightly varies about the optimal convergence value. In this case, the offset Off_i is not injected.
(96) Otherwise, if the absolute value of the current slope ∇c_i or the current value difference Δ_i is greater than the predefined threshold, the offset Off_i is injected into the iterative calculation of the filter coefficient c_i(n).
(97) The operations S310 to S330 correspond to the respective operations described above with reference to
(98) In an operation S340, it is determined whether the development of the filter coefficient c_i(n) over time shows an ascending or descending behavior. Whether the filter coefficient c_i(n) ascends or descends over time can be determined from the current slope ∇c_i or the current value difference Δ_i. If the current slope ∇c_i or the current change Δ_i is greater than 0, the filter coefficient c_i(n) ascends over time; otherwise, if the current slope ∇c_i or the current change Δ_i is less than 0, the filter coefficient c_i(n) descends over time:
∇c_i, Δ_i>0: ascending; or ∇c_i, Δ_i<0: descending.
(99) If the filter coefficient c_i(n) ascends over time, the offset Off_i is added in an operation S370:
c_i(n)=c_i(n)+Off_i
(100) If the filter coefficient c_i(n) descends over time, the offset Off_i is subtracted in an operation S380:
c_i(n)=c_i(n)−Off_i
(101) In an operation S390, the injecting of an offset is completed.
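The decision chain of operations S320 to S380 can be summarized in one function. This is a minimal sketch under stated assumptions: it uses the slope-based offset form, folds the sign handling into one branch, and all names and values are hypothetical.

```python
def inject_offset(c_i, slope, k, M, N, th):
    """Sketch of S320-S380: skip injection in steady state (S330), form
    Off_i = (k-1)*|slope|*M*N from the monitored slope (S320), then add it
    for a rising coefficient (S370) or subtract it for a falling one (S380)."""
    if abs(slope) <= th:                 # steady state reached: no injection
        return c_i
    off = (k - 1) * abs(slope) * M * N   # offset magnitude from the slope
    return c_i + off if slope > 0 else c_i - off

# rising coefficient above threshold: offset added; tiny slope: unchanged
print(inject_offset(0.2, 0.004, k=4, M=8, N=16, th=0.001))
print(inject_offset(0.2, 0.0005, k=4, M=8, N=16, th=0.001))
```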
(102) Referring now to
(103) According to the filter order L, the tapped-delay-line has L−1 delay elements 110.1 to 110.L−1 and provides L tapped delay signals s(n−i), i=0, . . . , L−1.
(104) The exemplified adaptive filter of
(105) The LMS computational block 120.1 is for instance used to adjust the filter coefficients c_0(n) to c_2(n) and the LMS computational block 120.L/k is for instance used to adjust the filter coefficients c_{L−3}(n) to c_{L−1}(n). Those skilled in the art will understand that the exemplary adaptive filter using computational resource sharing of
(106) The adaptive filter further comprises L multipliers 130 for multiplying each tapped delay signal s(n−i) with the respective filter coefficient c_i(n), where i=0 to L−1, and L−1 adders 140 for adding the weighted output signal contributions Y_i(n) to obtain the output signal y(n). The adaptive filter further comprises at least L memory locations to store the L filter coefficients c_i(n).
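The multiply-accumulate structure described above computes y(n)=Σ c_i(n)·s(n−i). A minimal sketch, where the coefficient and tap values are illustrative assumptions:

```python
def fir_output(coeffs, taps):
    """L multipliers and L-1 adders: sum of c_i(n) * s(n-i) over i = 0..L-1."""
    assert len(coeffs) == len(taps)
    return sum(c * s for c, s in zip(coeffs, taps))

# taps hold [s(n), s(n-1), s(n-2)] from the tapped-delay-line (example values)
print(fir_output([0.5, -0.3, 0.1], [1.0, 2.0, 3.0]))
```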
(107) The adaptive filter further comprises a monitoring block 200, which has access to the filter coefficients c_i(n) and which is arranged to monitor the development of the filter coefficients c_i(n). In particular, the monitoring block 200 is configured to carry out the method of monitoring as described above with reference to the flow diagrams shown in
(108) The adaptive filter further comprises an offset calculation block 210, which receives information from the monitoring block 200 about the development of the values of the filter coefficients c_i(n) and is arranged to compute offset values Off_i for the filter coefficients c_i(n) on a periodic time scale and to inject the computed offsets Off_i into the adjusting procedure of the filter coefficients c_i(n). In particular, the offset calculation block 210 is configured to carry out the method of offset determination and injection as described above with reference to the flow diagrams shown in
(109) It should be noted that the offset injection should not be understood to be limited to the LMS (least mean squares) algorithm for adjusting the filter coefficients, with regard to which the methodology to improve the convergence rate has been illustratively explained above. The LMS algorithm is but one of an entire family of algorithms which are based on approximations to steepest descent procedures. The family of algorithms further comprises, for instance, the sign-error algorithm, the sign-delta algorithm, the sign-sign algorithm, the zero-forcing algorithm and the power-of-two quantized algorithm. The steepest descent procedures are based on the mean-squared error (MSE) cost function, which has been shown to be useful for adaptive FIR filters. However, further algorithms are known which are based on non-MSE criteria. The illustrated offset injection is in principle applicable to iteratively determined filter coefficients in the above-mentioned general form:
C(n)=C(n)+Off
C(n+1)=C(n)+μ(n)·G(e(n),S(n),Θ(n))
(110) Referring now to
(111) The exemplary adaptive filter comprises a number of computational blocks 260. In particular, the number of computational blocks 260 is determined at the implementation or design stage. Each of the computational blocks 260 is enabled to perform the adjusting procedure of a filter coefficient c_i(n) in one cycle. The adjusting procedure is carried out in accordance with an adaptive convergence algorithm; the computational blocks 260 are configured accordingly. The computational blocks 260 are not fixedly assigned to one or more tapped delay signals s(n−i). A symbol routing logic 300 is provided in the adaptive filter, which is configurable to selectively route any tapped delay signal s(n−i) to any computational block 260. Hence, each of the computational blocks 260 is freely assignable to one tapped delay signal s(n−i) in one cycle.
(112) For managing the computational blocks 260, each of the computational blocks 260 is allocated to one of a number of w clusters 250.j, wherein j=1, . . . , w and w is a positive non-zero integer. The number w of clusters is configurable. Each of the plurality of clusters 250.1 to 250.w comprises an individual set of C_j computational blocks 260, wherein j=1, . . . , w. The number C_j of computational blocks 260 comprised in each cluster 250.1 to 250.w may differ. For instance, the cluster 250.1 comprises a set of C_1 computational blocks CB 260.1.1 to 260.1.C_1, the cluster 250.2 comprises a set of C_2 computational blocks CB 260.2.1 to 260.2.C_2 and the cluster 250.w comprises a set of C_w computational blocks CB 260.w.1 to 260.w.C_w.
(113) The symbol routing logic 300 routes each one of a number of w sets of tapped delay signals {s(n−i)}.1 to {s(n−i)}.w to a respective one of the clusters 250.1 to 250.w. Each set of tapped delay signals {s(n−i)}.j comprises M_j tapped delay signals s(n−i), wherein j=1, . . . , w. The number of tapped delay signals s(n−i) comprised by each set may differ. For instance, a first set {s(n−i)}.1 of tapped delay signals s(n−i) is routed to the cluster 250.1 and comprises M_1 tapped delay signals s(n−i), a second set {s(n−i)}.2 is routed to the cluster 250.2 and comprises M_2 tapped delay signals s(n−i), and a w-th set {s(n−i)}.w is routed to the cluster 250.w and comprises M_w tapped delay signals s(n−i).
(114) The number of sets of tapped delay signals {s(ni)}.1 to {s(ni)}.w corresponds to the number of clusters 250.1 to 250.w.
(115) The filter coefficients c.sub.i(n) are stored in a coefficient memory storage 270, to which the computational blocks 260 have access in order to read a respective filter coefficient c.sub.i(n) from a respective memory location thereof and to write an updated filter coefficient c.sub.i(n) back to that memory location.
(116) The allocation of each computational block 260 to a respective one of the clusters 250.1 to 250.w and the operation of the computational blocks 260 are under the control of a cluster controller block 320. The cluster controller block 320 is configured to turn on/off the computational blocks 260 individually and/or cluster-wise. The cluster controller block 320 is further arranged to configure the computational blocks 260 to enable access to the required filter coefficient c.sub.i(n) corresponding to the tapped delay signal s(ni) supplied thereto by the symbol routing logic 300.
(117) The routing of the tapped delay signals s(ni) is under the control of a routing controller block 310, which configures the symbol routing logic 300 accordingly. The routing controller block 310 is configured to allocate each tapped delay signal s(ni) to one of the sets of tapped delay signals {s(ni)}.1 to {s(ni)}.w. The routing controller block 310 configures the symbol routing logic 300 to route each set of tapped delay signals {s(ni)}.1 to {s(ni)}.w to each respective one of the clusters 250.1 to 250.w. Each set of tapped delay signals {s(ni)}.1 to {s(ni)}.w is assigned to one of the clusters 250.1 to 250.w. Each cluster 250.1 to 250.w receives the tapped delay signals s(ni) of one set of tapped delay signals {s(ni)}.1 to {s(ni)}.w.
(118) The routing controller block 310 and the cluster controller block 320 receive information from a monitoring block 200, which has access to the coefficients memory 270 and which is arranged to monitor the development of the filter coefficients c.sub.i(n). The monitoring block 200 is enabled to supply information relating to the development of the filter coefficients c.sub.i(n) to the routing controller block 310 and the cluster controller block 320, which are arranged to dynamically operate the exemplified adaptive filter based on the received information.
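One plausible steady-state test that the monitoring block 200 could apply is to compare each coefficient against its value one monitoring period earlier and declare convergence when no coefficient moved by more than a small bound. This is a sketch under assumptions: the threshold `eps` and the snapshot-based bookkeeping are illustrative, not prescribed by the specification.

```python
import numpy as np

def is_steady(c_now, c_prev, eps):
    """Steady-state check for a set of filter coefficients.

    c_now  : coefficients at the current monitoring instant
    c_prev : snapshot taken one monitoring period earlier
    eps    : assumed convergence threshold

    Returns True when no coefficient changed by more than eps
    over the monitoring period.
    """
    delta = np.abs(np.asarray(c_now) - np.asarray(c_prev))
    return bool(np.max(delta) < eps)
```

The controllers can then use the boolean result per cluster, e.g. to lower the update rate or power down computational blocks whose coefficients no longer move.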
(119) The operation of the adaptive filter with controllable computational resource sharing according to an embodiment of the present application will be further explained with reference to
(120) As exemplified in the filter coefficient plot of
(121) The controllable computational resource sharing makes it possible to take the above considerations into account in the operation of the adaptive filter while meeting performance requirements at a reduced power consumption.
(122) The symbol routing logic 300 makes it possible to partition the total number of L tapped delay signals s(ni) generated on each sampling cycle by the tapped-delay-line into w signal sets of tapped delay signals {s(ni)}.1 to {s(ni)}.w. Each signal set may comprise a different number of tapped delay signals s(ni). The total number of L tapped delay signals s(ni) is, for instance, partitioned into the five sets 400.1 to 400.5, each comprising a different number of successive tapped delay signals {s(ni)}, where i=i.sub.1, . . . , i.sub.2, where i.sub.1 and i.sub.2 are integers, i.sub.1<i.sub.2, 0<i.sub.1, i.sub.2<L-1, and L is the order of the adaptive filter.
(123) The total number of L tapped delay signals s(ni) may be partitioned into sets based on the monitored, assumed or expected contribution levels to the output signal y(n). The total number of L tapped delay signals s(ni) may be partitioned into sets based on the monitored, assumed or expected value amounts of the associated filter coefficients c.sub.i(n). Initially, a partitioning of the tapped delay signals s(ni) to signal sets may be predefined; for instance, the tapped delay signals s(ni) may be evenly assigned to signal sets having substantially the same number of tapped delay signals s(ni), e.g. when the initial values of the filter coefficients are set to zero. When initially starting the filter coefficient adjusting with initial non-zero values, the allocation of the tapped delay signals s(ni) to different signal sets may be based on the initial non-zero values, which may be considered to be indicative of levels of contribution or significance levels of the respective tapped delay signal s(ni). During operation of the adaptive filter, the total number of L tapped delay signals s(ni) may be repartitioned, for instance, in response to the monitored value amounts of the filter coefficients c.sub.i(n).
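The magnitude-based repartitioning described above can be sketched as follows. The normalization step and the `thresholds` vector (ascending bounds defining N+1 value subranges) are illustrative assumptions; the specification leaves the concrete threshold scheme open.

```python
import numpy as np

def partition_taps(c, thresholds):
    """Group tap indices into signal sets by coefficient magnitude.

    c          : current filter coefficients c_i(n)
    thresholds : ascending magnitude bounds defining N+1 value
                 subranges; taps whose normalized |c_i| falls in
                 the same subrange join the same signal set.

    Returns a dict mapping subrange index -> list of tap indices.
    """
    mags = np.abs(np.asarray(c, dtype=float))
    if mags.max() > 0:
        mags = mags / mags.max()           # normalize for comparability
    set_ids = np.searchsorted(thresholds, mags)
    return {int(j): np.flatnonzero(set_ids == j).tolist()
            for j in np.unique(set_ids)}
```

Note that, as in the figure discussion below, a signal set produced this way may consist of several non-contiguous runs of successive taps, since taps with similar coefficient magnitudes need not be adjacent.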
(124) As illustratively shown in
(125) Each of the signal sets is associated with one of the clusters, herein five clusters according to the five signal sets. For instance, the cluster 1 is associated with the third signal set 400.3. Computational blocks are allocated to each one of the five clusters. The numbers of computational blocks allocated to the clusters may differ. However, as understood from the above discussion, the crucial factor, which defines the computational performance of a cluster, is given by the individual sharing factor k.sub.i, wherein i=1 to w and w is the number of clusters. The sharing factor k.sub.i defines the ratio between the number of tapped delay signals and filter coefficients c.sub.i(n), respectively, assigned to a cluster i and the number of computational blocks allocated to the cluster i. The sharing factors k.sub.i of the different clusters may differ from each other.
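As a worked illustration of the ratio just defined (the function name and the example figures are hypothetical):

```python
def sharing_factors(taps_per_cluster, blocks_per_cluster):
    """Per-cluster sharing factor k_j = M_j / C_j: the number of
    tapped delay signals (and filter coefficients) assigned to
    cluster j divided by the number of computational blocks
    allocated to cluster j."""
    return [m / c for m, c in zip(taps_per_cluster, blocks_per_cluster)]
```

For instance, a 56-tap filter split as M = (32, 16, 8) over C = (4, 4, 8) computational blocks yields k = (8, 4, 1): the last cluster adjusts each of its coefficients on every cycle, while the first cluster adjusts each of its coefficients only on every eighth cycle.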
(126) The allocation of tapped delay signals s(ni) may be performed based on one or more threshold levels applied to the amount values of the filter coefficients c.sub.i(n) or the initial values of the filter coefficients c.sub.i(n). The allocation of the tapped delay signals to different sets may be based on normalized values of the filter coefficients c.sub.i(n). Normalized values of the filter coefficients c.sub.i(n) may improve the comparability. The allocation of the tapped delay signals s(ni) to the five signal sets exemplified in
(127) In an example of the present application, clusters, to which signal sets with less dominant filter coefficients c.sub.i(n) are assigned, may be operated with a higher sharing factor than clusters, to which signal sets with more dominant filter coefficients c.sub.i(n) are assigned.
(128) The cluster controller block 320 is arranged to allocate the computational blocks to the clusters. Initially, the computational blocks may be allocated to clusters according to an initial allocation scheme; for instance, the computational blocks may be evenly allocated to clusters comprising substantially the same number of computational blocks. During operation of the adaptive filter, the allocation of the computational blocks may be adapted for instance in response to the monitored contribution levels and/or the state of convergence of the filter coefficients c.sub.i(n).
(129) As further illustratively shown in
(130) The tapped delay signals are allocated to one of the N+1 signal sets (corresponding to the number N+1 of value subranges) based on the allocation of the values of the respective filter coefficient c.sub.i(n) to the one of the N+1 value subranges. Accordingly, each signal set may comprise one, two or more subsets of successive tapped delay signals s(ni). Herein, the signal sets 400.1 and 400.2 each comprise two continuous subsets of tapped delay signals s(ni) and the signal set 400.4 comprises one subset of successive tapped delay signals s(ni). Each of the three signal sets is assigned to one of three clusters.
(131) The number of computational blocks allocated to each of the three clusters may be further selected based on the normalized values of the filter coefficients c.sub.i(n) in the respective signal set. In case the normalized values of the filter coefficients c.sub.i(n) of a signal set are low in comparison to the other ones, a low number of computational blocks is allocated to the respective cluster, which means that the filter coefficients c.sub.i(n) of the signal set with low values are adjusted using a high sharing factor k. In case the normalized values of the filter coefficients c.sub.i(n) of a signal set are high in comparison to the other ones, a high number of computational blocks is allocated to the respective cluster, which means that the filter coefficients c.sub.i(n) of the signal set with high values are adjusted using a low sharing factor k. In case the normalized values of the filter coefficients c.sub.i(n) of a signal set are medium in comparison to the other ones, a medium number of computational blocks is allocated to the respective cluster, which means that the filter coefficients c.sub.i(n) of the signal set with medium values are adjusted using a medium sharing factor k.
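One way to realize the low/medium/high allocation described above is to distribute the computational blocks roughly in proportion to the normalized coefficient values of the signal sets, with at least one block per cluster. This is a sketch under those assumptions; the specification does not prescribe a proportional rule.

```python
import numpy as np

def allocate_blocks(set_weights, total_blocks):
    """Distribute computational blocks over clusters.

    set_weights  : normalized coefficient values per signal set
                   (higher weight -> more blocks -> lower sharing
                   factor k)
    total_blocks : number of implemented computational blocks,
                   assumed to be at least the number of clusters

    Returns an integer allocation summing to total_blocks.
    """
    w = np.asarray(set_weights, dtype=float)
    w = w / w.sum()
    n = len(w)
    alloc = np.ones(n, dtype=int)                  # at least one block each
    alloc += np.floor(w * (total_blocks - n)).astype(int)
    while alloc.sum() < total_blocks:              # hand out rounding leftovers
        alloc[np.argmax(w / alloc)] += 1
    return alloc
```

For example, weights (0.6, 0.3, 0.1) over 10 blocks give the allocation (6, 3, 1), so the dominant set is served with the lowest sharing factor.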
(132) Referring back to
(133) Further referring to
(134) For instance, in case the filter coefficients c.sub.i(n), which are assigned to one cluster for the adjusting procedure, have reached the steady state, the computational blocks of the cluster can be disabled at least temporarily to reduce the power consumption. In particular, the computational blocks of the cluster may be disabled for a predefined off-time interval T.sub.off, after which the disabled computational blocks are put into operation again.
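The off-time behavior can be sketched as a simple gating rule. The cycle-based bookkeeping below is an assumption for illustration; T.sub.off could equally be a wall-clock interval, and the function name is hypothetical.

```python
def cluster_enabled(steady, cycles_since_steady, t_off):
    """Power gating for one cluster of computational blocks.

    steady              : True once the cluster's coefficients
                          reached the steady state
    cycles_since_steady : cycles elapsed since steady state was
                          detected
    t_off               : predefined off-time interval T_off

    Once steady state is reached, the cluster stays disabled for
    t_off cycles and is then put into operation again.
    """
    if not steady:
        return True                      # still converging: keep running
    return cycles_since_steady >= t_off  # wake up after the off-time interval
```

Re-enabling after T.sub.off lets the cluster verify that its coefficients are still converged, e.g. after a change in the channel or echo path.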
(135) From the above description, it is well understood that the suggested design of the adaptive filter with configurable computational resource sharing makes it possible to flexibly and dynamically assign computational power to a configurable subset of tapped delay signals s(ni) and filter coefficients c.sub.i(n), respectively. Thereby, the available computational power of the computational blocks employed for performing the adjusting procedure according to an adaptive convergence algorithm is efficiently usable while the overall number of implemented computational blocks can be reduced to an economic number.
(136) Although not shown in
(137) Those of skill in the art would understand that information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals, bits, symbols, and chips that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles, or any combination thereof.
(138) Those of skill would further appreciate that the various illustrative logical blocks, modules, circuits, and algorithm steps described in connection with the disclosure herein may be implemented as electronic hardware, computer software, or combinations of both. To illustrate clearly this interchangeability of hardware and software, various illustrative components, blocks, modules, circuits, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
(139) The various illustrative logical blocks, modules, and circuits described in connection with the disclosure herein may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
(140) The steps of a method or algorithm described in connection with the disclosure herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an ASIC. The ASIC may reside in a user terminal. In the alternative, the processor and the storage medium may reside as discrete components in a user terminal.
(141) In one or more exemplary designs, the functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media includes both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage media may be any available media that can be accessed by a general purpose or special purpose computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code means in the form of instructions or data structures and that can be accessed by a general-purpose or special-purpose computer, or a general-purpose or special-purpose processor. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
(142) The previous description of the disclosure is provided to enable any person skilled in the art to make or use the disclosure. Various modifications to the disclosure will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other variations without departing from the spirit or scope of the disclosure. Thus, the disclosure is not intended to be limited to the examples and designs described herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
LIST OF REFERENCE SIGNS
(143) 100: adaptive filter; 110: delay element Z.sup.1; 110.1: delay element Z.sup.1; 110.5: delay element Z.sup.1; 110.L1: delay element Z.sup.1; 120: computational block/LMS computational block; 120.0: computational block; 120.1: computational block; 120.3: computational block; 120.5: computational block; 120.L1: computational block; 125: coefficient-adjusting module; 130: multiplier; 130.0: multiplier; 130.L1: multiplier; 140: adder; 140.2: adder; 140.L: adder; 200: monitoring block; 210: offset calculation block; 230: cluster controller; 250: cluster/cluster of computational blocks; 250.1: cluster/cluster of computational blocks; 250.2: cluster/cluster of computational blocks; 250.j: cluster/cluster of computational blocks; 250.w: cluster/cluster of computational blocks; 260: computational block; 260.1: computational block; 260.2: computational block; 260.j: computational block; 260.w: computational block; 260.1.1: computational block (of cluster 250.1); 260.1.C1: computational block (of cluster 250.1); 260.2.1: computational block (of cluster 250.2); 260.2.C2: computational block (of cluster 250.2); 260.w.1 computational block (of cluster 250.w); 260.w.Cw computational block (of cluster 250.w); 270: memory storage/filter coefficients memory; 300: routing logic/symbol routing logic; 310: routing controller/routing controller block; 320: cluster controller/cluster controller block; 400: signal set/set of tapped delay signals; 400.1: signal set/set of tapped delay signals; 400.2: signal set/set of tapped delay signals; 400.3: signal set/set of tapped delay signals; 400.4: signal set/set of tapped delay signals; 400.5: signal set/set of tapped delay signals.