ITERATIVE LEARNING CONTROL METHOD FOR MULTI-PARTICLE VEHICLE PLATOON DRIVING SYSTEM

20230078812 · 2023-03-16

Abstract

The present invention discloses an iterative learning control (ILC) method for a multi-particle vehicle platoon driving system, and relates to the field of ILC. The method includes: firstly, discretizing a multi-particle train dynamic equation using a finite difference method to obtain a partial recurrence equation, and transforming the partial recurrence equation into a spatially interconnected system model; secondly, transforming the spatially interconnected system model into an equivalent one-dimensional dynamic model using a lifting technology and, in order to compensate the input delay, designing an ILC law based on a state observer; and thirdly, transforming the controlled object into an equivalent discrete repetitive process according to the ILC law, and converting the controller design problem into a linear matrix inequality based on stability analysis of the repetitive process. The method is simple and easy to implement, accounts for the structure uncertainty of the system, and has good control performance and robustness.

Claims

1. An ILC method for a multi-particle vehicle platoon driving system, comprising:

step 1: establishing a spatially interconnected system model of the multi-particle vehicle platoon driving system; wherein the dynamic equation of the multi-particle vehicle platoon driving system is described as

$$m\ddot x_i(\tilde t)=k\big(x_{i+1}(\tilde t)-x_i(\tilde t)\big)-k\big(x_i(\tilde t)-x_{i-1}(\tilde t)\big)+u_i(\tilde t-\tau),\qquad y_i(\tilde t)=\dot x_i(\tilde t)\tag{1}$$

wherein $x_i(\tilde t)$ represents a position of the $i$th train, $u_i(\tilde t)$ is the control input and represents the traction of the train, $\tau$ is a time-delay constant caused by signal transmission, $y_i(\tilde t)$ is the measurement output and represents the speed of the train, $m$ denotes the mass of each train and has a variation range $\pm\varpi_m$, and $k$ is a spring coefficient and has a variation range $\pm\varpi_k$;

a sampling time $T$ is selected, and the equation (1) is approximately discretized using a finite difference method, that is,

$$\big(\dot x_i(\tilde t)\big)_{t,s}=\frac{x_1(t+1,s)-x_1(t,s)}{T},\qquad \big(\ddot x_i(\tilde t)\big)_{t,s}=\frac{x_2(t+1,s)-x_2(t,s)}{T}$$

wherein $t$ and $s$ are the discrete time and the train serial number respectively, and substitution of the above formula into the equation (1) gives a partial recurrence equation:

$$\begin{aligned}
x_1(t+1,s)&=x_1(t,s)+Tx_2(t,s)\\
x_2(t+1,s)&=-\frac{2kT}{m}x_1(t,s)+x_2(t,s)+\frac{kT}{m}x_1(t,s+1)+\frac{kT}{m}x_1(t,s-1)+\frac{T}{m}u(t-\tau,s)\\
y(t,s)&=x_2(t,s)
\end{aligned}\tag{2}$$

assuming that the information transferred between the trains is their position information, that is, the interconnected variables are set as $w_+(t,s)=w_-(t,s)=x_1(t,s)$, $v_+(t,s)=x_1(t,s-1)$ and $v_-(t,s)=x_1(t,s+1)$, and the output variable is set as $y(t,s)=x_2(t,s)$, the equation (2) is transformed into an uncertain spatially interconnected system model:

$$\begin{bmatrix}x(t+1,s)\\ w(t,s)\\ q(t,s)\\ y(t,s)\end{bmatrix}=\begin{bmatrix}A_{11}^s&A_{12}^s&B_{11}^s&B_{12}^s\\ A_{21}^s&0&0&0\\ C_{11}^s&C_{12}^s&D_{11}^s&D_{12}^s\\ C_{21}^s&0&0&0\end{bmatrix}\begin{bmatrix}x(t,s)\\ v(t,s)\\ p(t,s)\\ u(t-\tau,s)\end{bmatrix}\tag{3}$$

wherein

$$A_{11}^s=\begin{bmatrix}1&T\\ -\tfrac{2kT}{m}&1\end{bmatrix},\quad A_{12}^s=\begin{bmatrix}0&0\\ \tfrac{kT}{m}&\tfrac{kT}{m}\end{bmatrix},\quad A_{21}^s=\begin{bmatrix}1&0\\ 1&0\end{bmatrix},\quad B_{11}^s=\begin{bmatrix}0&0\\ \tfrac{T}{m}&-\tfrac{T}{m}\end{bmatrix},\quad B_{12}^s=\begin{bmatrix}0\\ \tfrac{T}{m}\end{bmatrix}$$

$$C_{11}^s=\begin{bmatrix}-2\varpi_k&0\\ -\tfrac{2\varpi_m k}{m}&0\end{bmatrix},\quad C_{12}^s=\begin{bmatrix}\varpi_k&\varpi_k\\ \tfrac{\varpi_m k}{m}&\tfrac{\varpi_m k}{m}\end{bmatrix},\quad C_{21}^s=\begin{bmatrix}0&1\end{bmatrix},\quad D_{11}^s=\begin{bmatrix}0&0\\ \tfrac{\varpi_m}{m}&-\tfrac{\varpi_m}{m}\end{bmatrix},\quad D_{12}^s=\begin{bmatrix}0\\ \tfrac{\varpi_m}{m}\end{bmatrix}$$

in the above formula, $x(t,s)\in\mathbb R^n$, $u(t,s)\in\mathbb R^q$ and $y(t,s)\in\mathbb R^m$ represent the state, input and output variables of the $s$th subsystem respectively; $\tau$ ($0<\tau<\alpha$) denotes the time-delay constant; $w(t,s)=[w_+^T(t,s)\;\,w_-^T(t,s)]^T$ and $v(t,s)=[v_+^T(t,s)\;\,v_-^T(t,s)]^T$ represent the interconnection between adjacent subsystems, and satisfy

$$w_+(s)=v_+(s+1),\qquad w_-(s)=v_-(s-1)\tag{4}$$

the boundary conditions thereof are set as $v_+(1)=w_-(1)=0$ and $v_-(n)=w_+(n)=0$, where $n$ is the number of the subsystems; $p(t,s)$ denotes a pseudo-input channel of the structure uncertainty, $q(t,s)$ denotes a pseudo-output channel of the structure uncertainty, and

$$p(t,s)=\theta_s q(t,s)\tag{5}$$

wherein the uncertainty block is $\theta_s=\mathrm{diag}\{\delta_1 I_{r_1},\ldots,\delta_f I_{r_f}\}$, $\delta_i\in\mathbb R$, $|\delta_i|\le 1$, $i=1,\ldots,f$; $\delta_i$ describes the variation of a dynamic parameter of the system, and $I_{r_i}$ is a unit matrix with dimension $r_i$;

step 2: transforming the spatially interconnected system model; wherein the model (3) is transformed into an equivalent one-dimensional dynamic model using a lifting technology, and the lifting vectors are defined as follows:

$$\begin{aligned}
X(t)&=[x(t,1)^T,\,x(t,2)^T,\ldots,x(t,n)^T]^T, & V(t)&=[v(t,1)^T,\,v(t,2)^T,\ldots,v(t,n)^T]^T\\
W(t)&=[w(t,1)^T,\ldots,w(t,n)^T]^T, & P(t)&=[p(t,1)^T,\ldots,p(t,n)^T]^T\\
Q(t)&=[q(t,1)^T,\ldots,q(t,n)^T]^T, & U(t-\tau)&=[u(t-\tau,1)^T,\ldots,u(t-\tau,n)^T]^T\\
Y(t)&=[y(t,1)^T,\ldots,y(t,n)^T]^T
\end{aligned}$$

then, the whole uncertain spatially interconnected system can be equivalently described by the following model:

$$\begin{bmatrix}X(t+1)\\ W(t)\\ Q(t)\\ Y(t)\end{bmatrix}=\begin{bmatrix}A_{11}&A_{12}&B_{11}&B_{12}\\ A_{21}&0&0&0\\ C_{11}&C_{12}&D_{11}&D_{12}\\ C_{21}&0&0&0\end{bmatrix}\begin{bmatrix}X(t)\\ V(t)\\ P(t)\\ U(t-\tau)\end{bmatrix}\tag{6}$$

wherein $A_{11}=\mathrm{diag}\{A_{11}^1,\ldots,A_{11}^n\}$, $A_{12}=\mathrm{diag}\{A_{12}^1,\ldots,A_{12}^n\}$, $A_{21}=\mathrm{diag}\{A_{21}^1,\ldots,A_{21}^n\}$, $B_{11}=\mathrm{diag}\{B_{11}^1,\ldots,B_{11}^n\}$, $B_{12}=\mathrm{diag}\{B_{12}^1,\ldots,B_{12}^n\}$, $C_{11}=\mathrm{diag}\{C_{11}^1,\ldots,C_{11}^n\}$, $C_{12}=\mathrm{diag}\{C_{12}^1,\ldots,C_{12}^n\}$, $C_{21}=\mathrm{diag}\{C_{21}^1,\ldots,C_{21}^n\}$, $D_{11}=\mathrm{diag}\{D_{11}^1,\ldots,D_{11}^n\}$ and $D_{12}=\mathrm{diag}\{D_{12}^1,\ldots,D_{12}^n\}$;

the model (6) still contains the interconnected variables and therefore needs to be further simplified; based on the interconnection characteristic (4) and its boundary conditions, an equation relationship between the interconnected variables is obtained:
$$W(t)=\eta V(t)\tag{7}$$

wherein $\eta$ is a permutation matrix unrelated to the time $t$; the formula (7) is substituted into (6) to obtain

$$V(t)=\eta^{-1}A_{21}X(t)\tag{8}$$

and then, the formula (8) is substituted into (6) to eliminate the interconnected variables $W(t)$ and $V(t)$, so as to obtain the following equivalent uncertain model:

$$\begin{bmatrix}X(t+1)\\ Q(t)\\ Y(t)\end{bmatrix}=\begin{bmatrix}\bar A_{11}&B_{11}&B_{12}\\ \bar C_{11}&D_{11}&D_{12}\\ C_{21}&0&0\end{bmatrix}\begin{bmatrix}X(t)\\ P(t)\\ U(t-\tau)\end{bmatrix}\tag{9}$$

wherein $\bar A_{11}=A_{11}+A_{12}\eta^{-1}A_{21}$ and $\bar C_{11}=C_{11}+C_{12}\eta^{-1}A_{21}$; according to the formula (5), the following formula is obtained:
$$P(t)=\theta Q(t)\tag{10}$$

wherein the uncertainty block is $\theta=\mathrm{diag}\{\theta_1,\ldots,\theta_n\}$, $\theta_i^T\theta_i\le I$, $i=1,\ldots,n$; the formula (10) is substituted into (9) to obtain

$$Q(t)=(I-D_{11}\theta)^{-1}\big(\bar C_{11}X(t)+D_{12}U(t-\tau)\big)\tag{11}$$

then, the formula (11) is substituted into (9) to eliminate the uncertainty variables $P(t)$ and $Q(t)$, so as to obtain a general-form state space model:
$$X(t+1)=(A+\Delta A)X(t)+(B+\Delta B)U(t-\tau),\qquad Y(t)=CX(t)\tag{12}$$

wherein $A=\bar A_{11}$, $B=B_{12}$, $C=C_{21}$, $\Delta A=B_{11}\theta(I-D_{11}\theta)^{-1}\bar C_{11}$ and $\Delta B=B_{11}\theta(I-D_{11}\theta)^{-1}D_{12}$;

step 3: designing an ILC law based on a state observer; wherein the state space model (12) is described in an ILC structure form:

$$X_{k+1}(t+1)=(A+\Delta A)X_{k+1}(t)+(B+\Delta B)U_{k+1}(t-\tau),\qquad Y_{k+1}(t)=CX_{k+1}(t)\tag{13}$$

wherein $k+1$ represents the current operation trial of the system, $t\in[0,\alpha]$ denotes the limited working cycle of each trial, and the input-delay constant satisfies the condition $\tau<\alpha$; then, the ILC law is expressed as

$$U_{k+1}(t)=U_k(t)+r_{k+1}(t)\tag{14}$$

that is, the current control signal $U_{k+1}(t)$ is equal to the control signal $U_k(t)$ of the previous trial plus the update term $r_{k+1}(t)$, which is calculated from previous error information; the system tracking error in the $(k+1)$th trial is

$$e_{k+1}(t)=Y_r(t)-Y_{k+1}(t)\tag{15}$$

wherein $Y_r(t)$ is a desired output trajectory; considering the delay in the output response, the tracking error is re-described as

$$e_{k+1}(t)=Y_r(t-\tau)-Y_{k+1}(t)\tag{16}$$

a state error vector is introduced:

$$\bar X_{k+1}(t+1)=X_{k+1}(t)-X_k(t)\tag{17}$$

assuming that $Y_r(0)=Y_k(0)=CX_k(0)$ and $X_k(0)=0$, that is, the system is returned to the same initial state in each trial, then

$$\begin{aligned}
\bar X_{k+1}(t+1)&=X_{k+1}(t)-X_k(t)\\
&=(A+\Delta A)\big[X_{k+1}(t-1)-X_k(t-1)\big]+(B+\Delta B)\big[U_{k+1}(t-1-\tau)-U_k(t-1-\tau)\big]\\
&=(A+\Delta A)\bar X_{k+1}(t)+(B+\Delta B)r_{k+1}(t-1-\tau)
\end{aligned}\tag{18}$$

and

$$\begin{aligned}
e_{k+1}(t)&=Y_r(t-\tau)-Y_{k+1}(t)\\
&=e_k(t)-C\big[X_{k+1}(t)-X_k(t)\big]\\
&=-C(A+\Delta A)\bar X_{k+1}(t)-C(B+\Delta B)r_{k+1}(t-1-\tau)+e_k(t)
\end{aligned}\tag{19}$$

in order to compensate the input delay, the following state observer is constructed based on the output information of the current trial:
$$\tilde X_{k+1}(t+1)=A\tilde X_{k+1}(t)+Br_{k+1}(t-1)+L\big[\bar Y_{k+1}(t)-C\tilde X_{k+1}(t-\tau)\big]\tag{20}$$

wherein $\bar Y_{k+1}(t)=Y_{k+1}(t-1)-Y_k(t-1)=C\bar X_{k+1}(t)$, and $\tilde X_{k+1}(t)$ is a $\tau$-step-ahead prediction of the state $\bar X_{k+1}(t)$; that is, $\tilde X_{k+1}(t)$ is an estimation value of $\bar X_{k+1}(t+\tau)$; and $L$ is the observer gain to be designed; an observation error is defined as

$$\tilde e_{k+1}(t)=\bar X_{k+1}(t)-\tilde X_{k+1}(t-\tau)\tag{21}$$

it is assumed that the update term in the ILC law (14) is

$$r_{k+1}(t)=K_1\tilde X_{k+1}(t+1)+K_2e_k(t+\tau)+K_3\big(e_k(t+1+\tau)-e_k(t+\tau)\big)\tag{22}$$

wherein $K_1$, $K_2$ and $K_3$ are learning gains to be designed; the update term is formed by state feedback information and PD-type previous tracking error information, and when the learning gain $K_2$ is equal to $K_3$, the formula (22) simplifies to P-type ILC; (22) is substituted into (20) to obtain

$$\tilde X_{k+1}(t+1-\tau)=(A+BK_1)\tilde X_{k+1}(t-\tau)+LC\tilde e_{k+1}(t-\tau)+B(K_2-K_3)e_k(t-1)+BK_3e_k(t)\tag{23}$$

and

$$\begin{aligned}
\tilde e_{k+1}(t+1)&=\bar X_{k+1}(t+1)-\tilde X_{k+1}(t+1-\tau)\\
&=(\Delta A+\Delta BK_1)\tilde X_{k+1}(t-\tau)+(A+\Delta A)\tilde e_{k+1}(t)-LC\tilde e_{k+1}(t-\tau)\\
&\quad+\Delta B(K_2-K_3)e_k(t-1)+\Delta BK_3e_k(t)
\end{aligned}\tag{24}$$

letting $\zeta_{k+1}(t)=[\tilde X_{k+1}^T(t-\tau)\;\;\tilde e_{k+1}^T(t)]^T$ and $K=K_2-K_3$, the following linear discrete repetitive process model with time delay is obtained:

$$\begin{aligned}
\zeta_{k+1}(t+1)&=\hat A\zeta_{k+1}(t)+\hat A_\tau\zeta_{k+1}(t-\tau)+\hat B_1e_k(t-1)+\hat B_0e_k(t)\\
e_{k+1}(t)&=\hat C\zeta_{k+1}(t)+\hat D_1e_k(t-1)+\hat D_0e_k(t)
\end{aligned}\tag{25}$$

wherein

$$\hat A=\begin{bmatrix}A+BK_1&0\\ \Delta A+\Delta BK_1&A+\Delta A\end{bmatrix},\quad \hat A_\tau=\begin{bmatrix}0&LC\\ 0&-LC\end{bmatrix},\quad \hat B_1=\begin{bmatrix}BK\\ \Delta BK\end{bmatrix},\quad \hat B_0=\begin{bmatrix}BK_3\\ \Delta BK_3\end{bmatrix}$$

$$\hat C=\begin{bmatrix}-C(A+\Delta A)-C(B+\Delta B)K_1&\;-C(A+\Delta A)\end{bmatrix},\quad \hat D_1=-C(B+\Delta B)K,\quad \hat D_0=I-C(B+\Delta B)K_3$$

step 4: performing system stability analysis and learning gain solving on the linear discrete repetitive process model; wherein a Lyapunov function is selected as

$$V(k,t)=V_1(t,k)+V_2(k,t)\tag{26}$$

$$V_1(t,k)=\zeta_{k+1}^T(t)S\zeta_{k+1}(t)+\sum_{i=1}^{\tau}\zeta_{k+1}^T(t-i)Q\zeta_{k+1}(t-i)$$

$$V_2(k,t)=e_k^T(t-1)P_1e_k(t-1)+e_k^T(t)(P_2-P_1)e_k(t)$$

wherein $S=\mathrm{diag}\{S_1,S_2\}>0$, $Q=\mathrm{diag}\{Q_1,Q_2\}>0$ and $P_2>P_1>0$; $V_1(t,k)$ represents the energy change along a trial, and $V_2(k,t)$ represents the energy change between the trials; the increments of the subfunctions are

$$\begin{aligned}
\Delta V_1(t,k)&=\zeta_{k+1}^T(t+1)S\zeta_{k+1}(t+1)-\zeta_{k+1}^T(t)(S-Q)\zeta_{k+1}(t)-\zeta_{k+1}^T(t-\tau)Q\zeta_{k+1}(t-\tau)\\
&=H_{k+1}^T(t)\Phi_1^TS\Phi_1H_{k+1}(t)-\zeta_{k+1}^T(t)(S-Q)\zeta_{k+1}(t)-\zeta_{k+1}^T(t-\tau)Q\zeta_{k+1}(t-\tau)
\end{aligned}\tag{27}$$

$$\begin{aligned}
\Delta V_2(k,t)&=e_{k+1}^T(t)P_2e_{k+1}(t)-e_k^T(t-1)P_1e_k(t-1)-e_k^T(t)(P_2-P_1)e_k(t)\\
&=H_{k+1}^T(t)\Phi_2^TP_2\Phi_2H_{k+1}(t)-e_k^T(t-1)P_1e_k(t-1)-e_k^T(t)(P_2-P_1)e_k(t)
\end{aligned}\tag{28}$$

wherein

$$H_{k+1}(t)=\begin{bmatrix}\zeta_{k+1}(t)\\ \zeta_{k+1}(t-\tau)\\ e_k(t-1)\\ e_k(t)\end{bmatrix},\quad \Phi_1=\begin{bmatrix}\hat A&\hat A_\tau&\hat B_1&\hat B_0\end{bmatrix},\quad \Phi_2=\begin{bmatrix}\hat C&0&\hat D_1&\hat D_0\end{bmatrix}$$

the total increment of the function is

$$\Delta V(k,t)=\Delta V_1(t,k)+\Delta V_2(k,t)=H_{k+1}^T(t)\,\Pi\,H_{k+1}(t)\tag{29}$$

wherein

$$\Pi=\begin{bmatrix}\hat A&\hat A_\tau&\hat B_1&\hat B_0\\ \hat C&0&\hat D_1&\hat D_0\end{bmatrix}^T\begin{bmatrix}S&0\\ *&P_2\end{bmatrix}\begin{bmatrix}\hat A&\hat A_\tau&\hat B_1&\hat B_0\\ \hat C&0&\hat D_1&\hat D_0\end{bmatrix}-\begin{bmatrix}S-Q&0&0&0\\ *&Q&0&0\\ *&*&P_1&0\\ *&*&*&P_2-P_1\end{bmatrix}$$

if $\Delta V(k,t)<0$ is satisfied for all $k$ and $t$, the model (25) is stable along the trials, which has the equivalent condition $\Pi<0$; the Schur complement lemma is applied to $\Pi<0$, the left and right sides of the inequality are multiplied by $\mathrm{diag}\{I,I,S^{-1},S^{-1},P_2^{-1},P_2^{-1}\}$, and the variable substitutions $S_1^{-1}=W_1$, $S_2^{-1}=W_2$, $S_1^{-1}Q_1S_1^{-1}=X_1$, $S_2^{-1}Q_2S_2^{-1}=X_2$, $P_2^{-1}P_1P_2^{-1}=Z_1$, $P_2^{-1}=Z_2$, $K_1S_1^{-1}=K_1W_1=R_1$, $LCS_2^{-1}=LCW_2=R$, $KP_2^{-1}=KZ_2=R_2$ and $K_3P_2^{-1}=K_3Z_2=R_3$ are performed, so as to obtain the following conclusion: for the nominal linear discrete repetitive process model with time delay described in the formula (25), if there exist matrices $W=\mathrm{diag}\{W_1,W_2\}>0$, $X=\mathrm{diag}\{X_1,X_2\}>0$, $Z_1>0$, $Z_2>0$ and $R$, $R_1$, $R_2$, $R_3$ satisfying the following linear matrix inequality:

$$\Xi_1=\begin{bmatrix}
-W_1&0&0&AW_1+BR_1&0&0&R&BR_2&BR_3\\
*&-W_2&0&0&AW_2&0&-R&0&0\\
*&*&-Z_2&-C(AW_1+BR_1)&-CAW_2&0&0&-CBR_2&Z_2-CBR_3\\
*&*&*&-W_1+X_1&0&0&0&0&0\\
*&*&*&*&-W_2+X_2&0&0&0&0\\
*&*&*&*&*&-X_1&0&0&0\\
*&*&*&*&*&*&-X_2&0&0\\
*&*&*&*&*&*&*&-Z_1&0\\
*&*&*&*&*&*&*&*&-Z_2+Z_1
\end{bmatrix}<0\tag{30}$$

the model (25) is stable along the trials, and the learning gains of the update term (22) and the gain of the state observer (20) are:
$$K_1=R_1W_1^{-1},\qquad K_3=R_3Z_2^{-1},\qquad K_2=R_2Z_2^{-1}+K_3,\qquad LC=RW_2^{-1}\tag{31}$$

wherein the gains are recovered from the variable substitutions above, the observer gain $L$ being obtained from $LC=RW_2^{-1}$;

when the structure uncertainty of the system is considered, the Schur complement lemma is applied to $\Pi<0$ in the same way, the left and right sides of the inequality are multiplied by $\mathrm{diag}\{I,I,S^{-1},S^{-1},P_2^{-1},P_2^{-1}\}$, and the same variable substitutions are performed, so as to obtain

$$\Xi_1+M\Theta N+N^T\Theta^TM^T<0\tag{32}$$

wherein

$$M=\begin{bmatrix}0&B_{11}^T&-B_{11}^TC^T&0&0&0&0&0&0\end{bmatrix}^T,\qquad N=\begin{bmatrix}0&0&0&\bar C_{11}W_1+D_{12}R_1&\bar C_{11}W_2&0&0&D_{12}R_2&D_{12}R_3\end{bmatrix}$$

$$\Theta=\theta(I-D_{11}\theta)^{-1},\qquad \theta^T\theta\le I$$

the formula (32) is equivalent to

$$\Xi_1+\begin{bmatrix}\varepsilon^{-1}N^T&\varepsilon M\end{bmatrix}\begin{bmatrix}I&-D_{11}\\ *&I\end{bmatrix}^{-1}\begin{bmatrix}\varepsilon^{-1}N\\ \varepsilon M^T\end{bmatrix}<0\tag{33}$$

wherein $\varepsilon>0$; according to the Schur complement lemma, the formula (33) is described as

$$\begin{bmatrix}\Xi_1&\varepsilon^{-1}N^T&\varepsilon M\\ *&-I&D_{11}\\ *&*&-I\end{bmatrix}<0$$

the left and right sides of the above formula are multiplied by $\mathrm{diag}\{I,\varepsilon I,\varepsilon I\}$ and $\varepsilon^2$ is replaced by $\varepsilon$, so as to obtain the following conclusion: for the uncertain linear discrete repetitive process model with time delay described in the formula (25), if there exist matrices $W=\mathrm{diag}\{W_1,W_2\}>0$, $X=\mathrm{diag}\{X_1,X_2\}>0$, $Z_1>0$, $Z_2>0$ and $R$, $R_1$, $R_2$, $R_3$ satisfying the following linear matrix inequality:

$$\begin{bmatrix}\Xi_1&\Xi_2\\ *&\Xi_3\end{bmatrix}<0\tag{34}$$

wherein

$$\Xi_2=\begin{bmatrix}0&0&0&\bar C_{11}W_1+D_{12}R_1&\bar C_{11}W_2&0&0&D_{12}R_2&D_{12}R_3\\ 0&\varepsilon B_{11}^T&-\varepsilon B_{11}^TC^T&0&0&0&0&0&0\end{bmatrix}^T,\qquad \Xi_3=\begin{bmatrix}-\varepsilon I&\varepsilon D_{11}\\ *&-\varepsilon I\end{bmatrix}$$

the model (25) is robustly stable along the trials, and the learning gains of the update term (22) and the gain of the state observer (20) are given by the formula (31);

step 5: giving specific parameters of the multi-particle vehicle platoon driving system, determining the learning gains of the ILC law and the corresponding observer gain, applying the control signal of the current trial to the ILC state space model to obtain the output of the current trial, and then repeatedly adjusting the ILC law to allow the output of each train of the vehicle platoon driving system to asymptotically reach the desired speed trajectory.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0079] FIG. 1 is a structural diagram of a multi-particle vehicle platoon driving system in the present application;

[0080] FIG. 2 is a structural diagram of a spatially interconnected system model in the present application;

[0081] FIG. 3 is an output curve of a first train particle under a nominal situation in the present application;

[0082] FIG. 4 is an output curve of a second train particle under the nominal situation in the present application;

[0083] FIG. 5 is an output curve of a third train particle under the nominal situation in the present application;

[0084] FIG. 6 is an RMS (Root-Mean-Square) contrast curve of a nominal system in the present application;

[0085] FIG. 7 is an output curve of the first train particle under an uncertain situation in the present application;

[0086] FIG. 8 is an output curve of the second train particle under the uncertain situation in the present application;

[0087] FIG. 9 is an output curve of the third train particle under the uncertain situation in the present application; and

[0088] FIG. 10 is an RMS contrast curve of an uncertain system in the present application.

DESCRIPTION OF EMBODIMENTS

[0089] An embodiment of the present invention is further described below with reference to the drawings.

[0090] FIG. 1 is a structural diagram of a multi-particle vehicle platoon driving system.

[0091] For the dynamic equation (1) of the system, the time-delay constant is τ=4; the mass of each train is m=2 [kg], with a variation range ω.sub.m=0.2 [kg]; the spring coefficient is k=2 [N/m], with a variation range ω.sub.k=0.2 [N/m]; and the sampling time is T=0.05 [s].

[0092] FIG. 2 is a structural diagram of a spatially interconnected system model, and all parameter matrices of the model (3) are

[00017] $$A_{11}^s=\begin{bmatrix}1&0.05\\ -0.1&1\end{bmatrix},\quad A_{12}^s=\begin{bmatrix}0&0\\ 0.05&0.05\end{bmatrix},\quad A_{21}^s=\begin{bmatrix}1&0\\ 1&0\end{bmatrix},\quad B_{11}^s=\begin{bmatrix}0&0\\ 0.025&-0.025\end{bmatrix},\quad B_{12}^s=\begin{bmatrix}0\\ 0.025\end{bmatrix}$$

$$C_{11}^s=\begin{bmatrix}-0.4&0\\ -0.4&0\end{bmatrix},\quad C_{12}^s=\begin{bmatrix}0.2&0.2\\ 0.2&0.2\end{bmatrix},\quad C_{21}^s=\begin{bmatrix}0&1\end{bmatrix},\quad D_{11}^s=\begin{bmatrix}0&0\\ 0.1&-0.1\end{bmatrix},\quad D_{12}^s=\begin{bmatrix}0\\ 0.1\end{bmatrix}$$
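As a cross-check of the lifting step (formulas (6)–(9)), the lifted nominal model can be assembled from these subsystem matrices and compared with a direct simulation of the recurrence (2). The explicit selector matrix below stands in for the composed map η⁻¹A₂₁ of formula (8); it is an implementation choice, not notation from the patent:

```python
import numpy as np

T, m, k, n = 0.05, 2.0, 2.0, 3
A11 = np.array([[1.0, T], [-2*k*T/m, 1.0]])
A12 = np.array([[0.0, 0.0], [k*T/m, k*T/m]])
B12 = np.array([[0.0], [T/m]])
C21 = np.array([[0.0, 1.0]])

# Selector mapping the lifted state X(t) to the interconnection input V(t):
# v_+(t,s) = x1(t,s-1), v_-(t,s) = x1(t,s+1), zero at the chain boundaries.
Sel = np.zeros((2*n, 2*n))
for s in range(n):
    if s > 0:
        Sel[2*s, 2*(s - 1)] = 1.0       # v_+(s) <- x1(s-1)
    if s < n - 1:
        Sel[2*s + 1, 2*(s + 1)] = 1.0   # v_-(s) <- x1(s+1)

Abar = np.kron(np.eye(n), A11) + np.kron(np.eye(n), A12) @ Sel  # Abar_11 of (9)
B = np.kron(np.eye(n), B12)
C = np.kron(np.eye(n), C21)

rng = np.random.default_rng(0)
U = rng.normal(size=(10, n))

# Lifted simulation (nominal case: uncertainty channel P(t) = 0; the input
# delay only shifts U in time, so it is omitted for this comparison).
X = np.zeros(2*n)
Y_lift = []
for t in range(10):
    Y_lift.append(C @ X)
    X = Abar @ X + B @ U[t]

# Direct simulation of the recurrence (2).
x1, x2 = np.zeros(n), np.zeros(n)
Y_dir = []
for t in range(10):
    Y_dir.append(x2.copy())
    left = np.concatenate(([0.0], x1[:-1]))
    right = np.concatenate((x1[1:], [0.0]))
    x1, x2 = x1 + T*x2, (-2*k*T/m)*x1 + x2 + (k*T/m)*(left + right) + (T/m)*U[t]

print(np.allclose(Y_lift, Y_dir))  # True: the lifted model reproduces (2)
```

The agreement confirms that eliminating the interconnection variables via the selector is equivalent to simulating each train with its neighbors' positions fed in directly.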

[0093] Considering a multi-particle vehicle platoon driving system with three interconnected trains, and supposing that the initial condition of the system is x.sub.0(0,1)=x.sub.0(0,2)=x.sub.0(0,3)=0 and that the limited working cycle of each trial is 20 s, the speed reference trajectory of each train is

[00018] $$y_r(t,1)=\begin{cases}0.6t,&0\le t\le 5\\ 3,&5<t\le 20\end{cases}\qquad y_r(t,2)=\begin{cases}0.3t,&0\le t\le 10\\ 3,&10<t\le 20\end{cases}\qquad y_r(t,3)=\begin{cases}0.2t,&0\le t\le 15\\ 3,&15<t\le 20\end{cases}$$

and a reference trajectory signal is given by a waveform generator.
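The piecewise references above can be written as a small helper function (a sketch; times are in seconds and speeds in m/s, units the patent leaves implicit):

```python
def y_ref(t, s):
    """Speed reference of paragraph [0093]: train s = 1, 2, 3 ramps up to
    3 m/s at 5, 10, or 15 s respectively, then holds that speed to 20 s."""
    ramp_end = {1: 5.0, 2: 10.0, 3: 15.0}[s]
    return 3.0 * t / ramp_end if t <= ramp_end else 3.0
```

For example, `y_ref(2.0, 1)` evaluates the 0.6t ramp of the first train at t = 2 s.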

[0094] The linear matrix inequality (30) is solved to obtain the following PD-type ILC learning gains and observer gain of the nominal system:

[00019] $$K_1=\begin{bmatrix}0.403&-40.08&-1.952&0.003&0.036&0.002\\ -1.951&0&3.944&-39.969&-1.993&-0.002\\ 0.036&0.002&-1.993&0&1.549&-40.019\end{bmatrix}$$

$$K_2=\begin{bmatrix}24.578&0.029&-0.002\\ 0.029&25.746&0.001\\ -0.002&0.003&24.692\end{bmatrix},\qquad K_3=\begin{bmatrix}24.538&0.03&-0.002\\ 0.03&25.706&0.002\\ -0.002&0.003&24.652\end{bmatrix}$$

$$L=\begin{bmatrix}-0.009&0.024&-0.006&0.002&-0.006&0.007\\ -0.006&0.002&-0.01&0.029&-0.006&0.002\\ -0.006&0.007&-0.006&0.002&-0.009&0.024\end{bmatrix}^T$$

[0095] P-type ILC learning gain and corresponding observer gain are

[00020] $$K_1=\begin{bmatrix}0.504&-40.1&-1.995&0.008&-0.002&-0.001\\ -1.995&0.012&1.788&-40.119&-2.006&-0.012\\ -0.002&-0.001&-2.006&0.007&0.371&-40.108\end{bmatrix}$$

$$K_3=\begin{bmatrix}25.284&0.006&-0.026\\ 0.006&25.238&0.006\\ -0.026&0.006&25.268\end{bmatrix},\qquad L=\begin{bmatrix}-0.01&0.025&-0.006&0.003&-0.006&0.007\\ -0.006&0.003&-0.01&0.029&-0.005&0.003\\ -0.008&0.008&-0.008&0.003&-0.01&0.029\end{bmatrix}^T$$
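Gain matrices of this shape (K₁ ∈ R³ˣ⁶, K₂, K₃ ∈ R³ˣ³) enter the update term (22). A minimal sketch of how they would be applied each step follows; the clipping of error indices at the horizon end is an assumption for illustration, not a boundary rule stated in the patent:

```python
import numpy as np

def update_term(K1, K2, K3, Xtilde_next, e_prev, t, tau):
    """Update term r_{k+1}(t) of formula (22): observer-state feedback plus
    PD-type learning on the previous trial's error e_prev (array over the
    horizon). Indices past the horizon end are clipped (an assumption)."""
    N = len(e_prev)
    e0 = e_prev[min(t + tau, N - 1)]      # e_k(t + tau)
    e1 = e_prev[min(t + 1 + tau, N - 1)]  # e_k(t + 1 + tau)
    return K1 @ Xtilde_next + K2 @ e0 + K3 @ (e1 - e0)
```

With K₂ equal to K₃ the e₀ terms cancel and the law reduces to the P-type form r = K₁X̃ + K₃e_k(t+1+τ), matching the simplification noted after formula (22).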

[0096] The above iterative learning controller is implemented on an STM32F103RCT6 chip. The input signal of the chip is collected by a BENTLY 74712 speed sensor, passed through a conditioning circuit, and stored in the STM32 chip, where it is used to form the iterative learning update law; the signal calculated by the CPU (Central Processing Unit) is used as the control signal U.sub.k+1(t) for the current trial. The control signal acts on a DM3622 stepping motor through a D/A (Digital/Analog) conversion circuit and updates the speeds of the train particles until the given speed reference trajectory is reached.

[0097] FIG. 3 is an output curve of a first train particle under a nominal situation, FIG. 4 is an output curve of a second train particle under the nominal situation, and FIG. 5 is an output curve of a third train particle under the nominal situation. In order to further evaluate the tracking performance of the system, an RMS (Root-Mean-Square) error performance index is introduced:

[00021] $$\mathrm{RMS}=\sum_{s=1}^{3}\sqrt{\frac{1}{400}\sum_{t=1}^{400}e_k^2(t,s)}$$
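In code, this index sums the per-train root-mean-square errors over the 400 samples of one trial (the square root reflects the "root-mean-square" reading of the garbled printed formula):

```python
import numpy as np

def rms_index(e):
    """RMS index of [0097]: e has shape (400, 3) -- 20 s horizon at T = 0.05 s
    by 3 trains; the per-train RMS errors are summed over s = 1..3."""
    return float(sum(np.sqrt(np.mean(e[:, s] ** 2)) for s in range(e.shape[1])))
```

For a constant unit error on all trains the index evaluates to 3.0, one per train.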

[0098] FIG. 6 is an RMS contrast curve of the nominal system. It is observed that, in the presence of the input-delay constant, the state of the system τ steps ahead, as estimated by the state observer, is used as feedback to advance the output response of the system and thereby compensate the input delay. As the iterations increase, the control signal is updated constantly, and the output of each train gradually reaches the desired speed trajectory, which demonstrates the effectiveness of the method according to the present invention. Furthermore, compared with P-type ILC, PD-type ILC uses more tracking-error information, and its RMS error converges faster along the trials, achieving better tracking performance.

[0099] The linear matrix inequality (34) is solved to obtain the PD-type ILC learning gains and observer gain of the uncertain system:

[00022] $$K_1=\begin{bmatrix}1.845&-27.653&-1.793&-0.173&-0.052&-0.129\\ -1.792&-0.173&3.585&-27.61&-1.792&-0.173\\ -0.052&-0.129&-1.793&-0.172&1.845&-27.653\end{bmatrix}$$

$$K_2=\begin{bmatrix}16.409&0.594&-0.092\\ 0.594&15.722&0.594\\ -0.092&0.594&16.409\end{bmatrix},\qquad K_3=\begin{bmatrix}12.84&0.574&-0.118\\ 0.574&12.148&0.574\\ -0.118&0.574&12.84\end{bmatrix}$$

$$L=\begin{bmatrix}-0.001&0.01&0.001&-0.008&-0.003&0.002\\ 0.001&-0.008&-0.005&0.02&0.001&-0.008\\ -0.003&0.002&0.001&-0.008&-0.001&0.01\end{bmatrix}^T$$

[0100] Similarly, P-type ILC learning gain and corresponding observer gain are

[00023] $$K_1=\begin{bmatrix}1.851&-27.954&-1.799&-0.231&-0.052&-0.139\\ -1.799&-0.231&3.598&-27.861&-1.799&-0.231\\ -0.052&-0.138&-1.799&-0.231&1.852&-27.953\end{bmatrix}$$

$$K_3=\begin{bmatrix}14.111&0.69&-0.142\\ 0.69&13.279&0.69\\ -0.142&0.69&14.111\end{bmatrix},\qquad L=\begin{bmatrix}-0.001&0.008&0.001&-0.008&-0.002&0.002\\ 0.001&-0.008&-0.004&0.018&0.001&-0.008\\ -0.002&0.002&0.001&-0.008&-0.001&0.008\end{bmatrix}^T$$

[0101] FIG. 7 is an output curve of the first train particle under an uncertain situation, FIG. 8 is an output curve of the second train particle under the uncertain situation, FIG. 9 is an output curve of the third train particle under the uncertain situation, and FIG. 10 is an RMS contrast curve of the uncertain system.

[0102] It is observed that, in the simultaneous presence of the time-delay constant and the structure uncertainty, a future state of the system is estimated by the state observer to form feedback acting on the system, so that the system output responds promptly after the time delay τ, improving the control process. As the iterations increase, the output of each train asymptotically reaches the desired speed trajectory, and the tracking error converges along the trials, which shows the effectiveness of the method according to the present invention and its robustness against the structure uncertainty of the system. In addition, only about 7 trials are required for the PD-type ILC to achieve accurate tracking; compared with the P-type ILC, its convergence time is shorter, its convergence speed is higher, and its tracking performance is better.

[0103] The above description is only a preferred embodiment of the present application, and the present invention is not limited to this embodiment. It is to be understood that other improvements and variations directly derived or suggested to those skilled in the art without departing from the spirit and idea of the present invention should be considered to fall within the protection scope of the present invention.