Co-DMPC-based chassis multi-agent system (MAS) cooperative control method for autonomous vehicles, controller, and storage medium

12509115 · 2025-12-30

Abstract

The present disclosure provides a cooperative distributed model predictive control (Co-DMPC)-based chassis multi-agent system (MAS) cooperative control method for autonomous vehicles, a controller, and a storage medium. A distributed state-space equation with state coupling and control input coupling characteristics is established. Meanings and transformation methods of predicted trajectories, assumed trajectories, and optimal trajectories of the states and control inputs are designed, providing a communication basis for information exchange between the agents. In order to coordinate the global performance indexes of a vehicle, a local agent optimization problem considering cost coupling is established, and the influence of the cooperative relationship on the control effect is quantitatively analyzed through adaptive weight coefficients. A method of performing a plurality of iterations within a unit sampling time is adopted, and iteration errors are utilized to enable the controller to achieve a balance between solution accuracy and efficiency.

Claims

1. A cooperative distributed model predictive control (Co-DMPC)-based chassis multi-agent system (MAS) cooperative control method for autonomous vehicles, comprising:

S1: building a two-degree-of-freedom vehicle model with four-wheel steering and direct yaw moment control (DYC);

S2: establishing a distributed state-space equation, wherein a centralized state-space equation of a vehicle is divided into two agents, namely, a steering agent and a DYC agent, according to different control variables; as for the steering agent, the distributed state-space equation has control variables of a front wheel steering angle $\delta_f$ and a rear wheel steering angle $\delta_r$ and state variables of a lateral velocity $v_y$ and a vehicle lateral displacement $Y$; as for the DYC agent, the distributed state-space equation has a control variable of a direct yaw moment $M_z$ and tracked state variables of a yaw rate $\gamma$ and a yaw angle $\varphi$;

S3: building a cooperative control framework, comprising: an input layer, wherein the two-degree-of-freedom vehicle model computes a reference yaw rate $\gamma_{ref}$ and a reference lateral velocity $v_{yref}$ according to a deviation between a current state of the vehicle and a target path; a cooperative control layer, wherein a centralized chassis control system is decomposed into a distributed control system consisting of a plurality of controllers, to achieve tracking of expected targets; through the distributed state-space equation established in the S2, a Co-DMPC algorithm is employed to select a cost function considering global performance; an iterative update method is used within a unit sampling time, enabling the agents to independently compute local optimizations and exchange computation results multiple times, so that an entire chassis multi-agent system obtains a global optimal solution; a moment distribution layer, wherein driving/braking torques of four wheels are optimally distributed using, as constraints, a total driving force computed by a proportional-integral-derivative (PID) speed controller and a direct yaw moment computed by the cooperative control layer; and an actuation layer, wherein the control variables are applied to the four wheels through a steering actuator and a hub motor to achieve accurate control of the vehicle;

S4: establishing an information transmission protocol, wherein a same prediction time domain $N_p$ and a same control time domain $N_c$ are used in all the agents; as for an agent $i$, in a control time domain $[t, t+N_c]$, a control variable at a time point $t+k$ is predicted at a time point $t$ as $u_i(k|t)$, and following three sets of control trajectories are defined: $u_i^p(k|t)$: predicted control trajectory; $u_i^*(k|t)$: optimal control trajectory; $u_i^a(k|t)$: assumed control trajectory; wherein $u_i^p(k|t)$ represents a control variable at a future time point predicted based on the current state and a control variable at a previous time point; $u_i^*(k|t)$ represents an optimal solution obtained after solving local optimization problems; $u_i^a(k|t)$ represents an assumed control variable transmitted by the agent $i$ to a neighbor agent and is obtained by shifting a predicted control variable by one sampling period; in a prediction time domain $[t, t+N_p]$, a state variable at the time point $t+k$ is predicted at the time point $t$ as $x_i(k|t)$, and following three sets of state trajectories are defined: $x_i^p(k|t)$: predicted state trajectory; $x_i^*(k|t)$: optimal state trajectory; $x_i^a(k|t)$: assumed state trajectory; the agent $i$ uses, at the time point $t$, a predicted state at the future time point and an assumed state $x_j^a(k|t)$ and an assumed control $u_j^a(k|t)$ from the neighbor agent to solve a distributed optimization problem; when $k=0$, $x_i^p(0|t)=x_i(t)$ and $x_j^p(0|t)=x_j(t)$, wherein $x_i(t)$ and $x_j(t)$ are directly measured current states of the vehicle; after obtaining an optimal state $x_i^*(k|t)$ and an optimal control $u_i^*(k|t)$, a first element $u_i^*(0|t)$ of the control trajectory is applied to actual control; a single-step recursion is performed on the optimal state trajectory and the optimal control trajectory to obtain assumed trajectories as follows:

$$u_i^a(k-1|t+1)=u_i^*(k|t),\quad k\in[1,N_c-1]$$
$$x_i^a(k-1|t+1)=x_i^*(k|t),\quad k\in[1,N_p]\quad(1)$$

wherein a length of the control trajectory obtained by the shift is $N_c-1$; to complete a last term, $u_i^a(N_c-1|t+1)=0$; and a state of the vehicle not under control at a next time point is $x_i^a(N_p|t+1)=A_{ii}x_i^a(N_p-1|t+1)=A_{ii}x_i^*(N_p|t)$; and

S5: controlling the four wheels through the steering actuator and the hub motor using the control variables.

2. The Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 1, wherein the two-degree-of-freedom vehicle model in the S1 is:

$$\begin{cases}\dot v_y=-\gamma v_x-\dfrac{C_f}{m_s}\left(\dfrac{v_y+l_f\gamma}{v_x}-\delta_f\right)-\dfrac{C_r}{m_s}\left(\dfrac{v_y-l_r\gamma}{v_x}-\delta_r\right)\\[4pt]\dot\gamma=-\dfrac{l_f C_f}{I_z}\left(\dfrac{v_y+l_f\gamma}{v_x}-\delta_f\right)+\dfrac{l_r C_r}{I_z}\left(\dfrac{v_y-l_r\gamma}{v_x}-\delta_r\right)+\dfrac{M_z}{I_z}\end{cases}\quad(2)$$

wherein $v_x$ is a longitudinal velocity, $v_y$ is a lateral velocity, $\gamma$ is a yaw rate, $m_s$ is vehicle mass, $C_f$ and $C_r$ are front and rear wheel cornering stiffnesses respectively, $l_f$ and $l_r$ are distances from front and rear wheel centers to a center of mass respectively, $\delta_f$ and $\delta_r$ are front and rear wheel steering angles respectively, $I_z$ is a moment of inertia of the vehicle rotating around a z axis, and $M_z$ is a direct yaw moment; and the two-degree-of-freedom vehicle model is combined with path tracking to obtain following dynamic equations:

$$\begin{cases}\dot v_y=-\dfrac{C_f+C_r}{m_s v_x}v_y+\left(\dfrac{C_r l_r-C_f l_f}{m_s v_x}-v_x\right)\gamma+\dfrac{C_f}{m_s}\delta_f+\dfrac{C_r}{m_s}\delta_r\\[4pt]\dot\gamma=\dfrac{C_r l_r-C_f l_f}{I_z v_x}v_y-\dfrac{C_f l_f^2+C_r l_r^2}{I_z v_x}\gamma+\dfrac{l_f C_f}{I_z}\delta_f-\dfrac{l_r C_r}{I_z}\delta_r+\dfrac{M_z}{I_z}\\[4pt]\dot Y=v_y\cos\varphi+(v_x\cos\varphi-v_y\sin\varphi)\varphi\\[4pt]\dot\varphi=\gamma\end{cases}\quad(3)$$

wherein $Y$ is a lateral displacement of the vehicle and $\varphi$ is a yaw angle.

3. The Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 2, wherein the distributed state-space equation of the steering agent in the S2 is:
$$\dot x_1(t)=A_{11}x_1(t)+B_{11}u_1(t)+A_{12}x_2(t)+B_{12}u_2(t)\quad(4)$$

wherein

$$x_1=\begin{bmatrix}v_y\\ Y\end{bmatrix},\quad A_{11}=\begin{bmatrix}-\dfrac{C_f+C_r}{m_s v_x}&0\\ \cos\varphi&0\end{bmatrix},\quad u_1=\begin{bmatrix}\delta_f\\ \delta_r\end{bmatrix},\quad B_{11}=\begin{bmatrix}\dfrac{C_f}{m_s}&\dfrac{C_r}{m_s}\\ 0&0\end{bmatrix}$$
$$x_2=\begin{bmatrix}\gamma\\ \varphi\end{bmatrix},\quad A_{12}=\begin{bmatrix}\dfrac{C_r l_r-C_f l_f}{m_s v_x}-v_x&0\\ 0&v_x\cos\varphi-v_y\sin\varphi\end{bmatrix},\quad u_2=M_z,\quad B_{12}=\begin{bmatrix}0\\ 0\end{bmatrix}\quad(5)$$

4. The Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 3, wherein the distributed state-space equation of the DYC agent in the S2 is:
$$\dot x_2(t)=A_{22}x_2(t)+B_{22}u_2(t)+A_{21}x_1(t)+B_{21}u_1(t)\quad(6)$$

wherein

$$A_{22}=\begin{bmatrix}-\dfrac{C_f l_f^2+C_r l_r^2}{I_z v_x}&0\\ 1&0\end{bmatrix},\quad B_{22}=\begin{bmatrix}\dfrac{1}{I_z}\\ 0\end{bmatrix},\quad A_{21}=\begin{bmatrix}\dfrac{C_r l_r-C_f l_f}{I_z v_x}&0\\ 0&0\end{bmatrix},\quad B_{21}=\begin{bmatrix}\dfrac{l_f C_f}{I_z}&-\dfrac{l_r C_r}{I_z}\\ 0&0\end{bmatrix}\quad(7)$$

5. The Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 4, wherein the Co-DMPC algorithm in the cooperative control layer comprises:

S6: establishing a local agent state equation, comprising:

S6.1 establishing a multi-agent local continuous linear state equation:

$$\dot x_i(t)=A_{ii}x_i(t)+B_{ii}u_i(t)+\sum_{j=1,j\neq i}^{m}\left(A_{ij}x_j(t)+B_{ij}u_j(t)\right)\quad(8)$$

wherein $m$ is a number of the agents;

S6.2 discretizing the above formula into:

$$x_i(k+1|t)=\tilde A_{ii}x_i(k|t)+\tilde B_{ii}u_i(k|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j(k|t)+\tilde B_{ij}u_j(k|t)\right)\quad(9)$$

wherein

$$\tilde A_{ii}=I+TA_{ii},\quad\tilde B_{ii}=TB_{ii},\quad\tilde A_{ij}=TA_{ij},\quad\tilde B_{ij}=TB_{ij}\quad(10)$$

and $T$ represents the sampling period;

S6.3 obtaining by recursion $x_i^p$ in a time domain $[1,N_p]$:

$$\begin{aligned}
x_i^p(1|t)&=\tilde A_{ii}x_i(t)+\tilde B_{ii}u_i^p(0|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j(t)+\tilde B_{ij}u_j^a(0|t)\right)\\
x_i^p(2|t)&=\tilde A_{ii}x_i^p(1|t)+\tilde B_{ii}u_i^p(1|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j^a(1|t)+\tilde B_{ij}u_j^a(1|t)\right)\\
&=\tilde A_{ii}^2x_i(t)+\tilde A_{ii}\tilde B_{ii}u_i^p(0|t)+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}\left(\tilde A_{ij}x_j(t)+\tilde B_{ij}u_j^a(0|t)\right)+\tilde B_{ii}u_i^p(1|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j^a(1|t)+\tilde B_{ij}u_j^a(1|t)\right)\\
&\;\;\vdots\\
x_i^p(N_p|t)&=\tilde A_{ii}^{N_p}x_i(t)+\tilde A_{ii}^{N_p-1}\tilde B_{ii}u_i^p(0|t)+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-1}\left(\tilde A_{ij}x_j(t)+\tilde B_{ij}u_j^a(0|t)\right)+\tilde A_{ii}^{N_p-2}\tilde B_{ii}u_i^p(1|t)\\
&\quad+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-2}\left(\tilde A_{ij}x_j^a(1|t)+\tilde B_{ij}u_j^a(1|t)\right)+\cdots+\tilde A_{ii}^{N_p-N_c}\tilde B_{ii}u_i^p(N_c-1|t)\\
&\quad+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-N_c}\left(\tilde A_{ij}x_j^a(N_c-1|t)+\tilde B_{ij}u_j^a(N_c-1|t)\right)+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-N_c-1}\tilde A_{ij}x_j^a(N_c|t)+\cdots\\
&\quad+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}\tilde A_{ij}x_j^a(N_p-2|t)+\sum_{j=1,j\neq i}^{m}\tilde A_{ij}x_j^a(N_p-1|t)
\end{aligned}\quad(11)$$

and S6.4 expressing the recursion in a following augmented matrix form:

$$X_i=F_{ii}x_i(t)+G_{ii}U_i+\sum_{j=1,j\neq i}^{m}\left(F_{ij}X_j+G_{ij}U_j\right)\quad(12)$$

wherein

$$X_i=\begin{bmatrix}x_i^p(1|t)\\ x_i^p(2|t)\\ \vdots\\ x_i^p(N_p|t)\end{bmatrix},\quad U_i=\begin{bmatrix}u_i^p(0|t)\\ u_i^p(1|t)\\ \vdots\\ u_i^p(N_c-1|t)\end{bmatrix},\quad X_j=\begin{bmatrix}x_j(t)\\ x_j^a(1|t)\\ \vdots\\ x_j^a(N_p-1|t)\end{bmatrix},\quad U_j=\begin{bmatrix}u_j^a(0|t)\\ u_j^a(1|t)\\ \vdots\\ u_j^a(N_c-1|t)\end{bmatrix}\quad(13)$$

$$F_{ii}=\begin{bmatrix}\tilde A_{ii}\\ \tilde A_{ii}^2\\ \vdots\\ \tilde A_{ii}^{N_p}\end{bmatrix},\quad G_{ii}=\begin{bmatrix}\tilde B_{ii}&0&\cdots&0\\ \tilde A_{ii}\tilde B_{ii}&\tilde B_{ii}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \tilde A_{ii}^{N_p-1}\tilde B_{ii}&\tilde A_{ii}^{N_p-2}\tilde B_{ii}&\cdots&\tilde A_{ii}^{N_p-N_c}\tilde B_{ii}\end{bmatrix}\quad(14)$$

$$F_{ij}=\begin{bmatrix}\tilde A_{ij}&0&\cdots&0\\ \tilde A_{ii}\tilde A_{ij}&\tilde A_{ij}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \tilde A_{ii}^{N_p-1}\tilde A_{ij}&\tilde A_{ii}^{N_p-2}\tilde A_{ij}&\cdots&\tilde A_{ij}\end{bmatrix},\quad G_{ij}=\begin{bmatrix}\tilde B_{ij}&0&\cdots&0\\ \tilde A_{ii}\tilde B_{ij}&\tilde B_{ij}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \tilde A_{ii}^{N_p-1}\tilde B_{ij}&\tilde A_{ii}^{N_p-2}\tilde B_{ij}&\cdots&\tilde A_{ii}^{N_p-N_c}\tilde B_{ij}\end{bmatrix}\quad(15)$$

6. The Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 5, wherein, in the cooperative control layer, the Co-DMPC algorithm further comprises:

S7: solving the local optimization, comprising:

S7.1 expressing the local optimization problem considering global performance indexes as:

$$\min_{U_i(t)}J_i(x_i^p,u_i^p,x_j^a,u_j^a)=\alpha_iJ_{(i)}(x_i^p,u_i^p,x_j^a,u_j^a)+\sum_{j=1,j\neq i}^{m}\alpha_jJ_{(j)}(x_j^p,u_j^p,x_i^a,u_i^a)$$
$$=\alpha_i\left\|X_i-W_i\right\|^2_{\tilde Q_i}+\alpha_i\left\|U_i\right\|^2_{\tilde R_i}+\sum_{j=1,j\neq i}^{m}\left[\alpha_j\left\|X_j-W_j\right\|^2_{\tilde Q_j}+\alpha_j\left\|U_j\right\|^2_{\tilde R_j}\right]\quad(16)$$

wherein the equation (16) is subject to following constraints:

$$x_i^p(k+1|t)=\tilde A_{ii}x_i^p(k|t)+\tilde B_{ii}u_i^p(k|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j^a(k|t)+\tilde B_{ij}u_j^a(k|t)\right)\quad(17)$$
$$x_i^p(0|t)=x_i(t)\quad(18)$$
$$x_j^p(0|t)=x_j(t)\quad(19)$$
$$u_i^p(k|t)\in\mathbb{U}_i\quad(20)$$

wherein $\mathbb{U}_i$ is a set of constraints for control input; $\alpha_i$ and $\alpha_j$ are coupling coefficients of cost functions between different agents, and $\sum_{i=1}^{m}\alpha_i=1$; $w_i(t+k)$ is a reference state value of the agent $i$, and $W_i$ is a reference state sequence composed of $w_i(t+k)$; $Q_i$ and $R_i$ are a state weight coefficient and a control weight coefficient respectively, and $\tilde Q_i=\mathrm{diag}(Q_i,Q_i,\ldots,Q_i)_{N_p\times N_p}$, $\tilde R_i=\mathrm{diag}(R_i,R_i,\ldots,R_i)_{N_c\times N_c}$, wherein $Q_i$ is a penalty for deviation between an actual trajectory and an expected trajectory, reflecting desire of the vehicle to adhere to the expected trajectory; $R_i$ is a penalty for the control input, reflecting desire of the vehicle to travel with lower control energy; the constraint (17) is a dynamic constraint for the vehicle, the constraints (18) and (19) require an initial state of the vehicle in the prediction time domain to be equal to an actual state of the vehicle, and the constraint (20) is an input constraint for the agents, preventing an optimization result from exceeding a range of actuator motion;

S7.2 transforming the optimization problem into a quadratic programming problem to improve computing efficiency:

$$J_i(x_i^p,u_i^p,x_j^a,u_j^a)=\alpha_i\left\|F_{ii}x_i(t)+G_{ii}U_i+\sum_{j=1,j\neq i}^{m}\left(F_{ij}X_j+G_{ij}U_j\right)-W_i\right\|^2_{\tilde Q_i}+\alpha_i\left\|U_i\right\|^2_{\tilde R_i}$$
$$+\sum_{j=1,j\neq i}^{m}\alpha_j\left\|F_{jj}x_j(t)+G_{jj}U_j+\sum_{k=1,k\neq j}^{m}\left(F_{jk}X_k+G_{jk}U_k\right)-W_j\right\|^2_{\tilde Q_j}+\sum_{j=1,j\neq i}^{m}\alpha_j\left\|U_j\right\|^2_{\tilde R_j}$$
$$=U_i^{T}H_iU_i+2\left(E_i+D_i\right)U_i+\mathrm{const}\quad(21)$$

wherein const is a constant term not affecting the optimization result, and

$$H_i=\alpha_iG_{ii}^{T}\tilde Q_iG_{ii}+\alpha_i\tilde R_i+\sum_{j=1,j\neq i}^{m}\alpha_jG_{ji}^{T}\tilde Q_jG_{ji}$$
$$E_i=\sum_{j=1}^{m}\alpha_j\left[x_j^{T}(t)F_{jj}^{T}-W_j^{T}\right]\tilde Q_jG_{ji}$$
$$D_i=\sum_{j=1}^{m}\alpha_j\left[\sum_{k=1,k\neq i}^{m}U_k^{T}G_{jk}^{T}+\sum_{k=1,k\neq j}^{m}X_k^{T}F_{jk}^{T}\right]\tilde Q_jG_{ji}\quad(22)$$

and S7.3 obtaining a local optimal solution by a local agent after completing a single optimization computation.

7. The Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 6, wherein, in the cooperative control layer, the Co-DMPC algorithm further comprises: adopting a cyclic iterative update method to enable the agents to exchange, multiple times within the unit sampling time, an optimal control sequence and an optimal state sequence obtained after the local optimization, so as to achieve the global optimal solution, wherein two termination conditions are designed for the iterative update as follows: recording an error between two consecutive iterative optimization results as $Err_i=\|U_i^{n}-U_i^{n-1}\|$, wherein $U_i^{n}$ represents the optimization result of the agent $i$ in an $n$-th iteration; when the error $Err_i$ is less than a threshold $\varepsilon_i$, wherein $i=1$ indicates an iteration error of the steering agent and $i=2$ indicates an iteration error of the DYC agent, the controller optimization result converges to the global optimal solution and the iteration is terminated; and if the iteration error fails to converge to $\varepsilon_i$, a maximum number of iterations $p_{max}$ is set, and when a number of iterations is greater than $p_{max}$, the computation is terminated.
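The cyclic iterative update of claim 7 can be sketched as the loop below. Here `solve_local` is a placeholder standing in for an agent's local quadratic program of the kind in claim 6, and `eps` and `p_max` play the roles of the threshold $\varepsilon_i$ and the iteration cap $p_{max}$; none of the numeric choices come from the disclosure.

```python
import numpy as np

def iterate_agents(solve_local, U_init, eps=1e-4, p_max=20):
    """Cyclic iterative update within one sampling time:
    each agent re-solves its local problem against the neighbors' latest
    sequences; stop when every agent's change ||U_i^n - U_i^(n-1)|| falls
    below eps, or after p_max iterations (the two termination conditions).
    solve_local(i, U) is a placeholder for the local QP of agent i."""
    U = [u.copy() for u in U_init]
    for n in range(p_max):
        U_prev = [u.copy() for u in U]
        for i in range(len(U)):
            U[i] = solve_local(i, U)      # exchange results, re-optimize
        errs = [np.linalg.norm(U[i] - U_prev[i]) for i in range(len(U))]
        if all(e < eps for e in errs):    # Err_i < eps for every agent
            break
    return U, n + 1
```

Any contractive local update makes the loop settle within a few iterations; the cap `p_max` bounds the computation when the error fails to converge.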

8. The Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 7, wherein implementation of the moment distribution layer comprises: establishing a total driving force equation of the vehicle according to a PID algorithm:

$$F_x(t)=K_pe(t)+K_i\int_0^{t}e(\tau)\,d\tau+K_d\frac{de(t)}{dt}\quad(23)$$

wherein $K_p$ represents a proportional coefficient, $K_i$ represents an integral coefficient, $K_d$ represents a derivative coefficient, and $e(t)=v_x-v_{xref}$; considering a following optimization problem:

$$\min_{F_{xij},\,ij=fl,fr,rl,rr}J=\frac{F_{xfl}^2+F_{yfl}^2}{(F_{zfl})^2}+\frac{F_{xfr}^2+F_{yfr}^2}{(F_{zfr})^2}+\frac{F_{xrl}^2+F_{yrl}^2}{(F_{zrl})^2}+\frac{F_{xrr}^2+F_{yrr}^2}{(F_{zrr})^2}\quad(24)$$

wherein $J$ denotes the tire force distribution optimization objective; $F_{xij}$ represents longitudinal forces of a left front wheel, a right front wheel, a left rear wheel, and a right rear wheel; $F_{zij}$ represents vertical forces of the four wheels; $F_{yij}$ represents lateral forces of the four tires; $ij=fl,fr,rl,rr$ represents the left front wheel, the right front wheel, the left rear wheel, and the right rear wheel respectively; since the steering agent has already computed the steering angles of the four wheels, the lateral tire force $F_{yij}$ is a constant and has no effect on solution of the above optimization problem; the formula (24) is simplified into:

$$\min_{F_{xij},\,ij=fl,fr,rl,rr}J=\frac{F_{xfl}^2}{(F_{zfl})^2}+\frac{F_{xfr}^2}{(F_{zfr})^2}+\frac{F_{xrl}^2}{(F_{zrl})^2}+\frac{F_{xrr}^2}{(F_{zrr})^2}\quad(25)$$

the above formula is subject to following constraints:

$$\frac{F_{xfl}}{m_s}+\frac{F_{xfr}}{m_s}+\frac{F_{xrl}}{m_s}+\frac{F_{xrr}}{m_s}=F_x\quad(26)$$
$$\frac{d}{2}\left(F_{xfr}+F_{xrr}\right)-\frac{d}{2}\left(F_{xfl}+F_{xrl}\right)=M_z\quad(27)$$
$$\frac{T_{min}}{R_w}\le F_{xij}\le\frac{T_{max}}{R_w},\quad ij=fl,fr,rl,rr\quad(28)$$

wherein $R_w$ represents a wheel radius and $d$ represents a track width; the constraint (26) requires the total driving force distributed to the four wheels to meet the requirement of the PID speed controller, the constraint (27) requires a yaw moment formed by the longitudinal tire forces to satisfy the computation result of the DYC agent, and the constraint (28) requires a magnitude of a driving force to be within the range of the actuator motion; $T_{min}$ is a minimum driving torque, that is, a braking torque, $T_{max}$ is a maximum driving torque, and $T_{max}=-T_{min}$; the formula (25) is solved to obtain the driving/braking torques of the four wheels.
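Ignoring the box constraint (28), the distribution problem of equations (24)-(27) is an equality-constrained quadratic program and admits a closed-form solution via its KKT system. The sketch below illustrates this; the PID output is treated as a plain total-force demand, and the wheel loads, track width `d`, force demand, and yaw moment are assumed illustrative values, not values from the disclosure.

```python
import numpy as np

# Assumed illustrative values
d = 1.6                                              # track width [m]
F_z = np.array([4000.0, 4000.0, 3500.0, 3500.0])     # loads: fl, fr, rl, rr [N]
F_total, M_z = 2000.0, 500.0                         # force demand [N], yaw moment [N*m]

# Quadratic cost of equation (25): minimize sum (F_xij / F_zij)^2
W = np.diag(1.0 / F_z ** 2)

# Equality constraints: total longitudinal force, and yaw moment from the
# left/right force difference (wheel order: fl, fr, rl, rr)
A = np.array([[1.0, 1.0, 1.0, 1.0],
              [-d / 2, d / 2, -d / 2, d / 2]])
b = np.array([F_total, M_z])

# Closed-form KKT solve of the equality-constrained QP:
# [2W  A^T] [F      ]   [0]
# [A    0 ] [lambda ] = [b]
kkt = np.block([[2 * W, A.T], [A, np.zeros((2, 2))]])
sol = np.linalg.solve(kkt, np.concatenate([np.zeros(4), b]))
F_wheel = sol[:4]        # optimal longitudinal tire forces per wheel
```

Multiplying `F_wheel` by the wheel radius $R_w$ would then give the per-wheel driving/braking torques, to be clipped against the torque limits of constraint (28).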

9. An autonomous vehicle controller, configured to execute the Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 1.

10. A non-transitory storage medium, configured to store program codes of the Co-DMPC-based chassis MAS cooperative control method for the autonomous vehicles according to claim 1.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a diagram of a two-degree-of-freedom vehicle model with four-wheel steering and direct yaw moment.

(2) FIG. 2 is a diagram of an MAS control architecture.

(3) FIG. 3 is a diagram of an information exchange and transformation architecture.

DETAILED DESCRIPTION OF THE EMBODIMENTS

(4) To facilitate the understanding of persons skilled in the art, the present disclosure is further described below with reference to embodiments and the accompanying drawings. The contents mentioned in the implementation are not intended to limit the present disclosure.

(5) S1: A two-degree-of-freedom vehicle model with four-wheel steering+DYC is built. The role of a suspension is ignored in the modeling process. This model assumes that the vehicle body only moves in a plane parallel to the ground, the steering angles of the left and right wheels on an axle are the same, and the longitudinal velocity of the vehicle is constant. The model is shown in FIG. 1. The simplified model is expressed as follows:

(6)

$$\begin{cases}\dot v_y=-\gamma v_x-\dfrac{C_f}{m_s}\left(\dfrac{v_y+l_f\gamma}{v_x}-\delta_f\right)-\dfrac{C_r}{m_s}\left(\dfrac{v_y-l_r\gamma}{v_x}-\delta_r\right)\\[4pt]\dot\gamma=-\dfrac{l_f C_f}{I_z}\left(\dfrac{v_y+l_f\gamma}{v_x}-\delta_f\right)+\dfrac{l_r C_r}{I_z}\left(\dfrac{v_y-l_r\gamma}{v_x}-\delta_r\right)+\dfrac{M_z}{I_z}\end{cases}\quad(30)$$

where $v_x$ is a longitudinal velocity, $v_y$ is a lateral velocity, $\gamma$ is a yaw rate, $m_s$ is vehicle mass, $C_f$ and $C_r$ are front and rear wheel cornering stiffnesses respectively, $l_f$ and $l_r$ are distances from front and rear wheel centers to a center of mass respectively, $\delta_f$ and $\delta_r$ are front and rear wheel steering angles respectively, $I_z$ is a moment of inertia of the vehicle rotating around a z axis, and $M_z$ is a direct yaw moment.
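As a numerical illustration of the two-degree-of-freedom model (30), the sketch below integrates one forward-Euler step of the lateral and yaw dynamics. All parameter values are assumed for illustration only and are not taken from the disclosure.

```python
# Illustrative vehicle parameters (assumed values, not from the disclosure)
m_s, I_z = 1500.0, 2500.0        # mass [kg], yaw inertia [kg*m^2]
C_f, C_r = 80000.0, 80000.0      # cornering stiffnesses [N/rad]
l_f, l_r = 1.2, 1.4              # CG-to-axle distances [m]
v_x = 20.0                       # constant longitudinal speed [m/s]

def two_dof_derivatives(v_y, gamma, delta_f, delta_r, M_z):
    """Right-hand side of the 2-DOF model (30): front and rear slip
    angles drive the lateral velocity and yaw rate dynamics."""
    alpha_f = (v_y + l_f * gamma) / v_x - delta_f   # front slip angle
    alpha_r = (v_y - l_r * gamma) / v_x - delta_r   # rear slip angle
    v_y_dot = -gamma * v_x - (C_f / m_s) * alpha_f - (C_r / m_s) * alpha_r
    gamma_dot = (-(l_f * C_f / I_z) * alpha_f
                 + (l_r * C_r / I_z) * alpha_r + M_z / I_z)
    return v_y_dot, gamma_dot

# One forward-Euler step from rest with a small front steering input
T = 0.01
v_y, gamma = 0.0, 0.0
dv, dg = two_dof_derivatives(v_y, gamma, delta_f=0.02, delta_r=0.0, M_z=0.0)
v_y, gamma = v_y + T * dv, gamma + T * dg
```

A positive front steering angle produces, as expected, positive lateral velocity and yaw rate after the first step.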

(7) The two-degree-of-freedom vehicle model is combined with a path tracking model to obtain the following dynamic equations under the centralized control strategy:

(8)

$$\begin{cases}\dot v_y=-\dfrac{C_f+C_r}{m_s v_x}v_y+\left(\dfrac{C_r l_r-C_f l_f}{m_s v_x}-v_x\right)\gamma+\dfrac{C_f}{m_s}\delta_f+\dfrac{C_r}{m_s}\delta_r\\[4pt]\dot\gamma=\dfrac{C_r l_r-C_f l_f}{I_z v_x}v_y-\dfrac{C_f l_f^2+C_r l_r^2}{I_z v_x}\gamma+\dfrac{l_f C_f}{I_z}\delta_f-\dfrac{l_r C_r}{I_z}\delta_r+\dfrac{M_z}{I_z}\\[4pt]\dot Y=v_y\cos\varphi+(v_x\cos\varphi-v_y\sin\varphi)\varphi\\[4pt]\dot\varphi=\gamma\end{cases}\quad(31)$$

where $Y$ is a lateral displacement of the vehicle and $\varphi$ is a yaw angle.

(9) S2: A distributed state-space equation is established. A centralized state-space equation of the vehicle is divided into two agents, namely, a steering agent and a DYC agent, according to different control variables. As for the steering agent, the control variables are $\delta_f$ and $\delta_r$, and the state variables are $v_y$ and $Y$. Its state-space equation is:

$$\dot x_1(t)=A_{11}x_1(t)+B_{11}u_1(t)+A_{12}x_2(t)+B_{12}u_2(t)\quad(32)$$

where

(10)

$$x_1=\begin{bmatrix}v_y\\ Y\end{bmatrix},\quad A_{11}=\begin{bmatrix}-\dfrac{C_f+C_r}{m_s v_x}&0\\ \cos\varphi&0\end{bmatrix},\quad u_1=\begin{bmatrix}\delta_f\\ \delta_r\end{bmatrix},\quad B_{11}=\begin{bmatrix}\dfrac{C_f}{m_s}&\dfrac{C_r}{m_s}\\ 0&0\end{bmatrix}$$
$$x_2=\begin{bmatrix}\gamma\\ \varphi\end{bmatrix},\quad A_{12}=\begin{bmatrix}\dfrac{C_r l_r-C_f l_f}{m_s v_x}-v_x&0\\ 0&v_x\cos\varphi-v_y\sin\varphi\end{bmatrix},\quad u_2=M_z,\quad B_{12}=\begin{bmatrix}0\\ 0\end{bmatrix}\quad(33)$$

(11) As for the DYC agent, the control variable is $M_z$ and the tracked state variables are $\gamma$ and $\varphi$. Its state-space equation is:

$$\dot x_2(t)=A_{22}x_2(t)+B_{22}u_2(t)+A_{21}x_1(t)+B_{21}u_1(t)\quad(34)$$

where

(12)

$$A_{22}=\begin{bmatrix}-\dfrac{C_f l_f^2+C_r l_r^2}{I_z v_x}&0\\ 1&0\end{bmatrix},\quad B_{22}=\begin{bmatrix}\dfrac{1}{I_z}\\ 0\end{bmatrix},\quad A_{21}=\begin{bmatrix}\dfrac{C_r l_r-C_f l_f}{I_z v_x}&0\\ 0&0\end{bmatrix},\quad B_{21}=\begin{bmatrix}\dfrac{l_f C_f}{I_z}&-\dfrac{l_r C_r}{I_z}\\ 0&0\end{bmatrix}\quad(35)$$
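The partitioned matrices of equations (33) and (35) can be checked numerically: assembled into blocks, they should form one consistent centralized model for the stacked state $[v_y, Y, \gamma, \varphi]$. The sketch below does this with assumed parameter values and a small-angle operating point ($\varphi \approx 0$, $v_y \approx 0$) for the trigonometric entries.

```python
import numpy as np

# Assumed parameters (illustrative, not from the disclosure)
m_s, I_z, C_f, C_r, l_f, l_r, v_x = 1500.0, 2500.0, 8e4, 8e4, 1.2, 1.4, 20.0
phi, v_y = 0.0, 0.0   # operating point for the trigonometric entries

A11 = np.array([[-(C_f + C_r) / (m_s * v_x), 0.0],
                [np.cos(phi),                0.0]])
B11 = np.array([[C_f / m_s, C_r / m_s],
                [0.0,       0.0]])
A12 = np.array([[(C_r * l_r - C_f * l_f) / (m_s * v_x) - v_x, 0.0],
                [0.0, v_x * np.cos(phi) - v_y * np.sin(phi)]])
B12 = np.array([[0.0], [0.0]])

A22 = np.array([[-(C_f * l_f**2 + C_r * l_r**2) / (I_z * v_x), 0.0],
                [1.0,                                          0.0]])
B22 = np.array([[1.0 / I_z], [0.0]])
A21 = np.array([[(C_r * l_r - C_f * l_f) / (I_z * v_x), 0.0],
                [0.0,                                   0.0]])
B21 = np.array([[l_f * C_f / I_z, -l_r * C_r / I_z],
                [0.0,             0.0]])

# Reassemble the centralized model for x = [v_y, Y, gamma, phi],
# u = [delta_f, delta_r, M_z]
A = np.block([[A11, A12], [A21, A22]])
B = np.block([[B11, B12], [B21, B22]])
```

The last row of `A` recovers the kinematic relation $\dot\varphi = \gamma$, confirming the block partition is consistent.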

(13) S3: A cooperative control framework as shown in FIG. 2 is built. It includes four layers: (1) In an input layer, the vehicle model computes a reference yaw rate $\gamma_{ref}$ and a reference lateral velocity $v_{yref}$ according to the deviation between the current state of the vehicle and a target path. (2) In a cooperative control layer, the conventional centralized chassis control method is decomposed into a distributed control system consisting of a plurality of controllers, to achieve tracking of expected targets. Through the coupled state-space equation established in the S2, a Co-DMPC algorithm is employed to select a cost function considering global performance. An iterative update method is used in a unit sampling time, enabling each agent to independently compute its own optimization problem and exchange computation results multiple times. By reasonably setting the iteration termination conditions, the entire system can eventually obtain the global optimal solution. (3) In a moment distribution layer, the driving/braking torques of the four wheels are optimally distributed based on a total driving force computed by a PID speed controller and a direct yaw moment computed by the upper layer as constraints. (4) In an actuation layer, the control variables are applied to the four wheels through a steering actuator and a hub motor to achieve accurate control of the vehicle.

(14) S4: An information transmission protocol is established. A same prediction time domain N.sub.p and a same control time domain N.sub.c are used in all the agents. As for an agent i, in a control time domain [t, t+N.sub.c], the control variable at a time point t+k is predicted at a time point t as u.sub.i(k|t). The following three sets of control trajectories are defined: (1) u.sub.i.sup.p(k|t): predicted control trajectory; (2) u.sub.i*(k|t): optimal control trajectory; (3) u.sub.i.sup.a(k|t): assumed control trajectory.

(15) u.sub.i.sup.p(k|t) represents a control variable at a future time point predicted by the controller based on the current state and the control variable at a previous time point. u.sub.i*(k|t) represents an optimal solution obtained by the controller after solving the local optimization problems. u.sub.i.sup.a(k|t) represents an assumed control variable transmitted by the agent i to the neighbor agent and is obtained by shifting the predicted control variable u.sub.i.sup.p(k|t) by one sampling period T; that is, the element at prediction step k of the trajectory computed at the time point t becomes the element at step k-1 of the trajectory used at the time point t+1.

(16) In a prediction time domain [t, t+N.sub.p], the state variable at a time point t+k is predicted at a time point t as x.sub.i(k|t). The following three sets of state trajectories are defined and their meanings are similar to those of the control trajectories: (1) x.sub.i.sup.p(k|t): predicted state trajectory; (2) x.sub.i*(k|t): optimal state trajectory; (3) x.sub.i.sup.a(k|t): assumed state trajectory.

(17) The relationships between the predicted trajectories, the optimal trajectories, and the assumed trajectories are shown in FIG. 3.

(18) The agent $i$ uses, at the time point $t$, the predicted state $x_i^p(k|t)$ at the future time point and the assumed state $x_j^a(k|t)$ and the assumed control $u_j^a(k|t)$ from the neighbor agent $j$ to solve the distributed optimization problem. When $k=0$, $x_i^p(0|t)=x_i(t)$ and $x_j^p(0|t)=x_j(t)$, where $x_i(t)$ and $x_j(t)$ are directly measured current states of the vehicle. After obtaining the optimal state $x_i^*(k|t)$ and the optimal control $u_i^*(k|t)$, the first element $u_i^*(0|t)$ of the control trajectory is applied to actual control. A single-step recursion is performed on the optimal state trajectory and the optimal control trajectory to obtain the assumed trajectories. That is:

$$u_i^a(k-1|t+1)=u_i^*(k|t),\quad k\in[1,N_c-1]$$
$$x_i^a(k-1|t+1)=x_i^*(k|t),\quad k\in[1,N_p]\quad(36)$$

(19) It should be noted that the length of the control trajectory obtained by this shift is $N_c-1$; to complete the last term, $u_i^a(N_c-1|t+1)=0$ is set. The state of the vehicle not under control at the next time point is $x_i^a(N_p|t+1)=A_{ii}x_i^a(N_p-1|t+1)=A_{ii}x_i^*(N_p|t)$, where $A_{ii}$ is a state matrix of the agent $i$.
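The single-step recursion of equation (36), including the zero-padded last control and the uncontrolled terminal state, can be sketched as follows. Trajectories are stored as arrays indexed from step 0 (the state array includes the step-0 entry), and the numeric example uses an assumed scalar system.

```python
import numpy as np

def make_assumed(u_opt, x_opt, A_ii):
    """Build assumed trajectories for time t+1 from the optimal ones at
    time t (equation (36)): shift by one step, pad the control with a
    zero, and propagate the last state through the uncontrolled dynamics
    x^a(N_p|t+1) = A_ii x*(N_p|t)."""
    u_assumed = np.concatenate([u_opt[1:], np.zeros((1,) + u_opt.shape[1:])])
    x_tail = A_ii @ x_opt[-1]
    x_assumed = np.concatenate([x_opt[1:], x_tail[None, :]])
    return u_assumed, x_assumed

# Assumed scalar example: N_c = 3 controls, states x*(0..3 | t)
A_ii = np.array([[0.9]])
u_opt = np.array([[1.0], [2.0], [3.0]])
x_opt = np.array([[0.0], [1.0], [2.0], [3.0]])
u_a, x_a = make_assumed(u_opt, x_opt, A_ii)
```

The shifted control keeps its length by appending the zero term, and the state trajectory gains its terminal entry from the free response of `A_ii`.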

(20) S5: A local agent state equation is derived. According to the characteristics of MAS state coupling, a local continuous linear state equation can be expressed as:

(21)

$$\dot x_i(t)=A_{ii}x_i(t)+B_{ii}u_i(t)+\sum_{j=1,j\neq i}^{m}\left(A_{ij}x_j(t)+B_{ij}u_j(t)\right)\quad(37)$$

where $m$ is the number of agents. The above formula is discretized into:

(22)

$$x_i(k+1|t)=\tilde A_{ii}x_i(k|t)+\tilde B_{ii}u_i(k|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j(k|t)+\tilde B_{ij}u_j(k|t)\right)\quad(38)$$

where

$$\tilde A_{ii}=I+TA_{ii},\quad\tilde B_{ii}=TB_{ii},\quad\tilde A_{ij}=TA_{ij},\quad\tilde B_{ij}=TB_{ij}\quad(39)$$

$x_i^p$ in the time domain $[1,N_p]$ can be obtained by recursion as follows:

(23)

$$\begin{aligned}
x_i^p(1|t)&=\tilde A_{ii}x_i(t)+\tilde B_{ii}u_i^p(0|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j(t)+\tilde B_{ij}u_j^a(0|t)\right)\\
x_i^p(2|t)&=\tilde A_{ii}x_i^p(1|t)+\tilde B_{ii}u_i^p(1|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j^a(1|t)+\tilde B_{ij}u_j^a(1|t)\right)\\
&=\tilde A_{ii}^2x_i(t)+\tilde A_{ii}\tilde B_{ii}u_i^p(0|t)+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}\left(\tilde A_{ij}x_j(t)+\tilde B_{ij}u_j^a(0|t)\right)+\tilde B_{ii}u_i^p(1|t)+\sum_{j=1,j\neq i}^{m}\left(\tilde A_{ij}x_j^a(1|t)+\tilde B_{ij}u_j^a(1|t)\right)\\
&\;\;\vdots\\
x_i^p(N_p|t)&=\tilde A_{ii}^{N_p}x_i(t)+\tilde A_{ii}^{N_p-1}\tilde B_{ii}u_i^p(0|t)+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-1}\left(\tilde A_{ij}x_j(t)+\tilde B_{ij}u_j^a(0|t)\right)+\tilde A_{ii}^{N_p-2}\tilde B_{ii}u_i^p(1|t)\\
&\quad+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-2}\left(\tilde A_{ij}x_j^a(1|t)+\tilde B_{ij}u_j^a(1|t)\right)+\cdots+\tilde A_{ii}^{N_p-N_c}\tilde B_{ii}u_i^p(N_c-1|t)\\
&\quad+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-N_c}\left(\tilde A_{ij}x_j^a(N_c-1|t)+\tilde B_{ij}u_j^a(N_c-1|t)\right)+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}^{N_p-N_c-1}\tilde A_{ij}x_j^a(N_c|t)+\cdots\\
&\quad+\sum_{j=1,j\neq i}^{m}\tilde A_{ii}\tilde A_{ij}x_j^a(N_p-2|t)+\sum_{j=1,j\neq i}^{m}\tilde A_{ij}x_j^a(N_p-1|t)
\end{aligned}\quad(40)$$

(24) The above recursion is expressed in the following augmented matrix:

(25)

$$X_i=F_{ii}x_i(t)+G_{ii}U_i+\sum_{j=1,j\neq i}^{m}\left(F_{ij}X_j+G_{ij}U_j\right)\quad(41)$$

where

$$X_i=\begin{bmatrix}x_i^p(1|t)\\ x_i^p(2|t)\\ \vdots\\ x_i^p(N_p|t)\end{bmatrix},\quad U_i=\begin{bmatrix}u_i^p(0|t)\\ u_i^p(1|t)\\ \vdots\\ u_i^p(N_c-1|t)\end{bmatrix},\quad X_j=\begin{bmatrix}x_j(t)\\ x_j^a(1|t)\\ \vdots\\ x_j^a(N_p-1|t)\end{bmatrix},\quad U_j=\begin{bmatrix}u_j^a(0|t)\\ u_j^a(1|t)\\ \vdots\\ u_j^a(N_c-1|t)\end{bmatrix}\quad(42)$$

(26)

$$F_{ii}=\begin{bmatrix}\tilde A_{ii}\\ \tilde A_{ii}^2\\ \vdots\\ \tilde A_{ii}^{N_p}\end{bmatrix},\quad G_{ii}=\begin{bmatrix}\tilde B_{ii}&0&\cdots&0\\ \tilde A_{ii}\tilde B_{ii}&\tilde B_{ii}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \tilde A_{ii}^{N_p-1}\tilde B_{ii}&\tilde A_{ii}^{N_p-2}\tilde B_{ii}&\cdots&\tilde A_{ii}^{N_p-N_c}\tilde B_{ii}\end{bmatrix}\quad(43)$$

$$F_{ij}=\begin{bmatrix}\tilde A_{ij}&0&\cdots&0\\ \tilde A_{ii}\tilde A_{ij}&\tilde A_{ij}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \tilde A_{ii}^{N_p-1}\tilde A_{ij}&\tilde A_{ii}^{N_p-2}\tilde A_{ij}&\cdots&\tilde A_{ij}\end{bmatrix},\quad G_{ij}=\begin{bmatrix}\tilde B_{ij}&0&\cdots&0\\ \tilde A_{ii}\tilde B_{ij}&\tilde B_{ij}&\cdots&0\\ \vdots&\vdots&\ddots&\vdots\\ \tilde A_{ii}^{N_p-1}\tilde B_{ij}&\tilde A_{ii}^{N_p-2}\tilde B_{ij}&\cdots&\tilde A_{ii}^{N_p-N_c}\tilde B_{ij}\end{bmatrix}\quad(44)$$
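A minimal sketch of the augmented prediction matrices of equations (41)-(44), restricted to the decoupled part $X_i = F_{ii}x_i(t) + G_{ii}U_i$ (the coupling blocks $F_{ij}$, $G_{ij}$ follow the same lower-triangular pattern). The example system and horizons are assumptions chosen only for illustration.

```python
import numpy as np

def prediction_matrices(A, B, Np, Nc):
    """Build F_ii and G_ii so that the stacked prediction is
    X = F x0 + G U, with control applied for the first Nc steps and
    zero input afterwards, matching the recursion of equation (40)."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k) for k in range(1, Np + 1)])
    G = np.zeros((Np * n, Nc * m))
    for row in range(Np):
        for col in range(min(row + 1, Nc)):
            G[row*n:(row+1)*n, col*m:(col+1)*m] = (
                np.linalg.matrix_power(A, row - col) @ B)
    return F, G

# Assumed example: 2-state system, Np = 4 prediction steps, Nc = 2 controls
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
F_ii, G_ii = prediction_matrices(A, B, Np=4, Nc=2)
```

Stacking the step-by-step simulation of the same system reproduces `F_ii @ x0 + G_ii @ U` exactly, which is a convenient sanity check on the block pattern.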

(27) S6: The local optimization problem is solved. Without considering the global performance indexes, that is, in the non-cooperative distributed case, the local optimization problem for the agent i can be designed as follows:

(28)

$$
\min_{U_i(t)} J_{(i)}(x_i^p,u_i^p,x_j^a,u_j^a)
=\sum_{k=1}^{N_p}\bigl\lVert x_i(k\mid t)-w_i(t+k)\bigr\rVert_{Q_i}^{2}
+\sum_{k=1}^{N_c}\bigl\lVert u_i(k-1\mid t)\bigr\rVert_{R_i}^{2}
=\bigl\lVert X_i-W_i\bigr\rVert_{\tilde Q_i}^{2}+\bigl\lVert U_i\bigr\rVert_{\tilde R_i}^{2}
\tag{45}
$$

(29) where $J_{(i)}(x_i^p, u_i^p, x_j^a, u_j^a)$ represents a cost function that considers only local control performance. The equation is subject to the following constraints:

(30)

$$
x_i^p(k+1\mid t)=\tilde A_{ii}x_i^p(k\mid t)+\tilde B_{ii}u_i^p(k\mid t)+\sum_{j=1,j\neq i}^{m}\bigl(\tilde A_{ij}x_j^a(k\mid t)+\tilde B_{ij}u_j^a(k\mid t)\bigr)
\tag{46}
$$

$$
x_i^p(0\mid t)=x_i(t)
\tag{47}
$$

$$
x_j^p(0\mid t)=x_j(t)
\tag{48}
$$

$$
u_i^p(k\mid t)\in\mathbb{U}_{ii}
\tag{49}
$$

where $\mathbb{U}_{ii}$ is the set of admissible control inputs, $w_i(t+k)$ is a reference state value of the agent i, and $W_i$ is the reference state sequence composed of $w_i(t+k)$. $Q_i$ and $R_i$ are a state weight coefficient and a control weight coefficient respectively, with $\tilde Q_i=\mathrm{diag}(Q_i,Q_i,\dots,Q_i)_{N_p\times N_p}$ and $\tilde R_i=\mathrm{diag}(R_i,R_i,\dots,R_i)_{N_c\times N_c}$. Specifically, $Q_i$ penalizes the deviation between the actual trajectory and the expected trajectory, reflecting the vehicle's desire to adhere to the expected trajectory; $R_i$ penalizes the control input, reflecting the desire to travel with lower control energy. Constraint (46) is the dynamic constraint of the vehicle. Constraints (47) and (48) require the initial state of the vehicle in the prediction horizon to equal the actual state of the vehicle. Constraint (49) is the input constraint of the agent, preventing the optimization result from exceeding the range of the actuator motion.
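The block-diagonal weighting in formula (45) can be illustrated with a short sketch (the function name and test values are assumptions, not from the disclosure):

```python
import numpy as np

def local_cost(X, W, U, Q, R, Np, Nc):
    """Evaluate formula (45): ||X - W||^2_Qtilde + ||U||^2_Rtilde, with
    Qtilde = diag(Q, ..., Q) (Np blocks) and Rtilde = diag(R, ..., R)
    (Nc blocks), built here via Kronecker products."""
    Qt = np.kron(np.eye(Np), Q)
    Rt = np.kron(np.eye(Nc), R)
    e = X - W
    return float(e @ Qt @ e + U @ Rt @ U)
```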

(31) The difference between the Co-DMPC algorithm and the non-cooperative DMPC algorithm lies mainly in whether the global performance indexes are considered. Each agent depends strongly on the states or control inputs of its neighbor agents, so Co-DMPC requires strong communication capabilities. In addition, when the optimization problem is formulated, cost coupling exists between an agent applying the Co-DMPC algorithm and its neighbor agents. Therefore, the optimization problem considering the global performance indexes can be expressed as:

(32)

$$
\min_{U_i(t)} J_i(x_i^p,u_i^p,x_j^a,u_j^a)
=\alpha_i J_{(i)}(x_i^p,u_i^p,x_j^a,u_j^a)
+\sum_{j=1,j\neq i}^{m}\alpha_j J_{(j)}(x_j^p,u_j^p,x_i^a,u_i^a)
=\alpha_i\lVert X_i-W_i\rVert_{\tilde Q_i}^{2}+\alpha_i\lVert U_i\rVert_{\tilde R_i}^{2}
+\sum_{j=1,j\neq i}^{m}\Bigl[\alpha_j\lVert X_j-W_j\rVert_{\tilde Q_j}^{2}+\alpha_j\lVert U_j\rVert_{\tilde R_j}^{2}\Bigr]
\tag{50}
$$

where $J_i(x_i^p, u_i^p, x_j^a, u_j^a)$ represents a cost function considering global performance, $\alpha_i$ and $\alpha_j$ are the coupling coefficients of the cost functions between different agents, and

(33)

$$
\sum_{i=1}^{m}\alpha_i=1.
$$

The constraints are the same as those in formulas (46) to (49). In addition, in order to improve the computing efficiency of the controller, the optimization problem is transformed into a quadratic programming problem:

(34)

$$
\begin{aligned}
J_i(x_i^p,u_i^p,x_j^a,u_j^a)
&=\alpha_i\Bigl\lVert F_{ii}x_i(k)+G_{ii}U_i+\sum_{j=1,j\neq i}^{m}\bigl(F_{ij}X_j+G_{ij}U_j\bigr)-W_i\Bigr\rVert_{\tilde Q_i}^{2}
+\alpha_i\lVert U_i\rVert_{\tilde R_i}^{2}\\
&\quad+\sum_{j=1,j\neq i}^{m}\alpha_j\Bigl\lVert F_{jj}x_j(k)+G_{jj}U_j+\sum_{k=1,k\neq j}^{m}\bigl(F_{jk}X_k+G_{jk}U_k\bigr)-W_j\Bigr\rVert_{\tilde Q_j}^{2}
+\sum_{j=1,j\neq i}^{m}\alpha_j\lVert U_j\rVert_{\tilde R_j}^{2}\\
&=U_i^{T}\Theta_iU_i+2\bigl(\Phi_i+\Psi_i\bigr)U_i+\mathrm{const}
\end{aligned}
\tag{51}
$$

where const is a constant term not affecting the optimization result, and

(35)

$$
\begin{aligned}
\Theta_i&=\alpha_iG_{ii}^{T}\tilde Q_iG_{ii}+\alpha_i\tilde R_i+\sum_{j=1,j\neq i}^{m}\alpha_jG_{ji}^{T}\tilde Q_jG_{ji}\\
\Phi_i&=\sum_{j=1}^{m}\alpha_j\bigl[x_j^{T}(k)F_{jj}^{T}-W_j^{T}\bigr]\tilde Q_jG_{ji}\\
\Psi_i&=\sum_{j=1}^{m}\alpha_j\Bigl[\sum_{k=1,k\neq i}^{m}U_k^{T}G_{jk}^{T}+\sum_{k=1,k\neq j}^{m}X_k^{T}F_{jk}^{T}\Bigr]\tilde Q_jG_{ji}
\end{aligned}
\tag{52}
$$
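Before the input constraint (49) is imposed, formula (51) is an unconstrained quadratic in $U_i$, so its minimizer has the closed form $U_i^{*}=-\Theta_i^{-1}(\Phi_i+\Psi_i)^{T}$. A brief sketch (illustrative; not the disclosed constrained solver, and the function name is an assumption):

```python
import numpy as np

def qp_minimizer(Theta, lin):
    """Minimize U^T Theta U + 2 * lin @ U + const (formula (51)) without
    input constraints; lin plays the role of (Phi_i + Psi_i).
    Setting the gradient 2*Theta@U + 2*lin to zero gives U = -Theta^{-1} lin."""
    return -np.linalg.solve(Theta, lin)
```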

(36) After completing a single optimization computation, each local agent obtains only a local optimal solution and cannot achieve accurate control of the vehicle. In order to reduce the optimization error, the present disclosure adopts a cyclic iterative update method in which the optimized control and state sequences are exchanged multiple times within a single control cycle, thereby approaching a global optimal solution. At the same time, it should be noted that the iterative update method increases the computational burden of the controller and the occupancy rate of the communication network. Therefore, setting the iteration termination conditions reasonably is an important factor in ensuring the real-time performance of the controller.

(37) The present disclosure designs two termination conditions in response to the above requirements. Firstly, the error between two consecutive iterative optimization results is recorded as $\mathrm{Err}_i=\lVert U_i^{n}-U_i^{n-1}\rVert$, where $U_i^{n}$ represents the optimization result of the agent i in the n-th iteration. When the iteration error $\mathrm{Err}_i$ is less than a threshold $\varepsilon_i$ (i=1 indicates the iteration error of the steering agent and i=2 indicates the iteration error of the DYC agent), the controller optimization result has converged to a global optimal solution; the iteration is then terminated and the next step is carried out. Secondly, if the controller iteration error fails to converge below $\varepsilon_i$, a maximum number of iterations $p_{\max}$ is set to prevent the controller from performing invalid computation.
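The iterative exchange within one sampling time can be sketched as follows (illustrative; `solve_local` stands in for an agent's local QP solve and is an assumption, as are the test values):

```python
import numpy as np

def iterate_agents(solve_local, U_init, eps, p_max):
    """Cyclic iterative update: each agent re-solves its local problem
    against the other agents' latest (assumed) sequences until every
    iteration error Err_i = ||U_i^n - U_i^(n-1)|| drops below eps, or the
    maximum number of iterations p_max is reached."""
    U = [u.copy() for u in U_init]
    used = 0
    for _ in range(p_max):
        used += 1
        U_prev = [u.copy() for u in U]
        for i in range(len(U)):
            U[i] = solve_local(i, U)      # local optimization of agent i
        if max(np.linalg.norm(U[i] - U_prev[i]) for i in range(len(U))) < eps:
            break                          # converged: terminate iteration
    return U, used
```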

(38) S7: The direct yaw moment is distributed. After computing the direct yaw moment of the vehicle, the DYC agent needs to distribute it reasonably to the four wheels according to the state of the vehicle, thereby avoiding tire force saturation. The distributed torques must also meet the requirements of driving the vehicle forward and tracking the target speed.

(39) Firstly, a total driving force equation of the vehicle is established according to a PID algorithm:

(40)

$$
F_x(t)=K_pe(t)+K_i\int_0^{t}e(\tau)\,d\tau+K_d\frac{de(t)}{dt}
\tag{53}
$$

where $K_p$ represents a proportional coefficient, $K_i$ represents an integral coefficient, $K_d$ represents a derivative coefficient, and $e(t)=v_x-v_{x\mathrm{ref}}$.
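A discrete-time sketch of the PID speed controller in formula (53) (the class name, gains, and sample time are illustrative assumptions):

```python
class SpeedPID:
    """Discrete form of formula (53): F_x = Kp*e + Ki*integral(e) + Kd*de/dt,
    with e = v_x - v_xref as defined in the text."""
    def __init__(self, Kp, Ki, Kd, dt):
        self.Kp, self.Ki, self.Kd, self.dt = Kp, Ki, Kd, dt
        self.integ = 0.0    # running integral of the error
        self.e_prev = 0.0   # previous error for the derivative term

    def step(self, vx, vx_ref):
        e = vx - vx_ref
        self.integ += e * self.dt
        de = (e - self.e_prev) / self.dt
        self.e_prev = e
        return self.Kp * e + self.Ki * self.integ + self.Kd * de
```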

(41) Next, the following optimization problem is considered:

(42)

$$
\min_{F_{xij},\,ij=fl,fr,rl,rr} J
=\frac{F_{xfl}^{2}+F_{yfl}^{2}}{(\mu F_{zfl})^{2}}
+\frac{F_{xfr}^{2}+F_{yfr}^{2}}{(\mu F_{zfr})^{2}}
+\frac{F_{xrl}^{2}+F_{yrl}^{2}}{(\mu F_{zrl})^{2}}
+\frac{F_{xrr}^{2}+F_{yrr}^{2}}{(\mu F_{zrr})^{2}}
\tag{54}
$$

where $J$ denotes the tire force distribution optimization objective; $F_{xij}$, $F_{yij}$, and $F_{zij}$ represent the longitudinal, lateral, and vertical forces of the four tires; $ij=fl,fr,rl,rr$ denotes the left front wheel, the right front wheel, the left rear wheel, and the right rear wheel respectively; and $\mu$ is an adhesion coefficient. Since the steering agent has already computed the steering angles of the four wheels, the lateral tire force $F_{yij}$ is a constant and has no effect on the solution of the above optimization problem. Therefore, formula (54) can be expressed as:

(43)

$$
\min_{F_{xij},\,ij=fl,fr,rl,rr} J
=\frac{F_{xfl}^{2}}{(\mu F_{zfl})^{2}}
+\frac{F_{xfr}^{2}}{(\mu F_{zfr})^{2}}
+\frac{F_{xrl}^{2}}{(\mu F_{zrl})^{2}}
+\frac{F_{xrr}^{2}}{(\mu F_{zrr})^{2}}
\tag{55}
$$

(44) The above formula is subject to the following constraints:

(45)

$$
F_{xfl}+F_{xfr}+F_{xrl}+F_{xrr}=F_x
\tag{56}
$$

$$
\frac{d}{2}\bigl(F_{xfr}+F_{xrr}\bigr)-\frac{d}{2}\bigl(F_{xfl}+F_{xrl}\bigr)=M_z
\tag{57}
$$

$$
\frac{T_{\min}}{R_w}\le F_{xij}\le\frac{T_{\max}}{R_w},\quad ij=fl,fr,rl,rr
\tag{58}
$$

where $R_w$ represents a wheel radius, $d$ represents a track width, and $F_x$ represents a total longitudinal force. Constraint (56) requires the total driving force distributed to the four wheels to meet the requirement of the speed controller; constraint (57) requires the yaw moment formed by the longitudinal tire forces to satisfy the computation result of the DYC agent; and constraint (58) requires the magnitude of the driving force to stay within the range of the actuator motion, where $T_{\min}$ is a minimum driving torque (that is, a braking torque), $T_{\max}$ is a maximum driving torque, and $T_{\max}=-T_{\min}$. The fmincon command in MATLAB is used to solve formula (55) to obtain the driving/braking torques of the four wheels.
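The text solves formula (55) with MATLAB's fmincon. As an illustrative Python sketch under simplifying assumptions (the torque bounds (58) are ignored, so only the equality constraints (56) and (57) are enforced), the weighted least-squares problem admits a closed-form solution via Lagrange multipliers; the function name and test values are assumptions:

```python
import numpy as np

def distribute_forces(Fx_total, Mz, Fz, mu, d):
    """Minimize sum(Fxij^2 / (mu*Fzij)^2) subject to
    sum(Fxij) = Fx (formula (56)) and
    (d/2)*((F_fr + F_rr) - (F_fl + F_rl)) = Mz (formula (57)).
    Fz lists the vertical loads in the order [fl, fr, rl, rr].
    Closed form: F = W^-1 A^T (A W^-1 A^T)^-1 b, with W = diag(1/(mu*Fz)^2).
    """
    Winv = np.diag((mu * np.asarray(Fz, float)) ** 2)  # inverse of the weights
    A = np.array([[1.0, 1.0, 1.0, 1.0],
                  [-d / 2, d / 2, -d / 2, d / 2]])
    b = np.array([Fx_total, Mz])
    lam = np.linalg.solve(A @ Winv @ A.T, b)           # Lagrange multipliers
    return Winv @ A.T @ lam
```

Multiplying each resulting force by the wheel radius $R_w$ then yields the corresponding wheel torque; this sketch approximates the fmincon result only when no torque bound of (58) is active.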

(46) S8: The computation results of the steering agent and the DYC agent are sent to the actuator.

(47) Based on the above control method, in the embodiments of the present disclosure:

(48) An autonomous vehicle controller configured to execute the control method is provided.

(49) A storage medium configured to store program codes of the control method is provided.

(50) The above descriptions are merely practical embodiments of the present disclosure, and are not intended to limit the protection scope of the present disclosure. Any equivalent implementations or modifications made without departing from the technologies of the present disclosure shall fall within the protection scope of the present disclosure.