Adaptive control mechanism for uncertain systems with actuator dynamics
11079737 · 2021-08-03
Inventors
- Benjamin C. Gruenwald (Evansville, IN, US)
- Tansel Yucelen (Tampa, FL)
- Kadriye Merve Dogan (Tampa, FL, US)
- Jonathan A. Muse (Beavercreek, OH, US)
Abstract
The present disclosure describes an actuator system comprising an actuator unit that is configured to be positioned next to a structure; and an adaptive controller unit that is configured to receive a command input for the actuator unit and output an actuator command based on a reference model of a physical system that includes the actuator unit and the structure, wherein the actuator command does not alter trajectories of the reference model. In various embodiments, the uncertain dynamical system of the physical system is augmented with the actuator dynamics to provide improved stability.
Claims
1. An actuator system for a vehicle, comprising: an actuator unit comprising an actuator positioned next to a structure, where the actuator unit receives and reacts to an actuator command for the actuator; and an adaptive controller comprising computing circuitry that, in response to a command input for the actuator, produces the actuator command based on a reference model of a vehicle that includes the actuator unit and the structure, wherein the computing circuitry outputs the actuator command to the actuator unit which reacts to the actuator command, wherein the reference model is given by: ż_r(t) = F_r(Ŵ(t)) z_r(t) + B_r0 c(t), where F_r(Ŵ(t)) = [A, B; −M(K + Ŵ^T(t)N), −M], B_r0 = [B_r^T, 0_{m×n_c}^T]^T, z_r(t) = [x_r^T(t), v_r^T(t)]^T, x_r(t) is a reference state vector, v_r(t) is a reference actuator output, c(t) is the command input, and the actuator command is u(t) = −K x(t) − Ŵ^T(t) N x(t), where x(t) is a system state vector and Ŵ(t) satisfies: Ŵ̇(t) = γ Proj_m[Ŵ(t), σ(·) P G], where γ is a learning rate, σ(·) = [N x(t) e_x^T(t), N x(t) e_v^T(t)], e_x(t) = x(t) − x_r(t), e_v(t) = v(t) − v_r(t), P is a solution of a matrix inequality, G = [B^T, 0_{m×m}]^T, and v(t) is the actuator output.
2. The system of claim 1, wherein the vehicle comprises an uncertain dynamical system.
3. The system of claim 2, wherein the reference model incorporates adaptive feedback from the actuator unit to stabilize the uncertain dynamical system.
4. The system of claim 1, wherein the vehicle is a hypersonic vehicle.
5. The system of claim 1, wherein the vehicle is an aircraft.
6. A method for adaptive control of an actuator unit comprising: positioning the actuator unit comprising an actuator next to a physical structure of a vehicle; determining a reference model of the vehicle that includes the actuator unit, wherein the reference model depicts a closed-loop dynamical system performance of the vehicle, wherein the reference model is given by: ż_r(t) = F_r(Ŵ(t)) z_r(t) + B_r0 c(t), where F_r(Ŵ(t)) = [A, B; −M(K + Ŵ^T(t)N), −M], B_r0 = [B_r^T, 0_{m×n_c}^T]^T, z_r(t) = [x_r^T(t), v_r^T(t)]^T, x_r(t) is a reference state vector, v_r(t) is a reference actuator output, and c(t) is a command input; and producing an actuator command u(t) = −K x(t) − Ŵ^T(t) N x(t), where x(t) is a system state vector and Ŵ(t) satisfies: Ŵ̇(t) = γ Proj_m[Ŵ(t), σ(·) P G], where γ is a learning rate, σ(·) = [N x(t) e_x^T(t), N x(t) e_v^T(t)], e_x(t) = x(t) − x_r(t), e_v(t) = v(t) − v_r(t), P is a solution of a matrix inequality, G = [B^T, 0_{m×m}]^T, and v(t) is the actuator output.
7. The method of claim 6, wherein the vehicle comprises an uncertain dynamical system.
8. The method of claim 7, further comprising incorporating adaptive feedback from the actuator unit within the reference model to stabilize the uncertain dynamical system.
9. The method of claim 6, wherein the vehicle is a hypersonic vehicle.
10. The method of claim 6, wherein the vehicle is an aircraft.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Many aspects of the present disclosure can be better understood with reference to the following drawings. The components in the drawings are not necessarily to scale, emphasis instead being placed upon clearly illustrating the principles of the present disclosure. Moreover, in the drawings, like reference numerals designate corresponding parts throughout the several views.
DETAILED DESCRIPTION
(7) In accordance with the present disclosure, we present new adaptive control systems and methods for uncertain dynamical systems with actuator dynamics. Specifically, for both the stabilization and command following cases, we analyze the stability of an adaptive control system architecture using tools and methods from linear matrix inequalities and Lyapunov theory. Next, we theoretically show that the architecture, in accordance with an exemplary embodiment, does not significantly alter the trajectories of the given reference model as the hedging approach does; this is practically important because it can lead to better closed-loop dynamical system performance. Finally, we illustrate the performance of the architecture, in accordance with an exemplary embodiment, through a numerical example on a hypersonic vehicle model and compare our results with the hedging approach.
(8) As discussed, for pseudo-control hedging, a given reference model captures a desired closed-loop dynamical system performance, and based on the model, the hedging approach alters the trajectories of this model to enable adaptive controllers to be designed such that their stability is not affected by the presence of actuator dynamics. While the hedging approach was introduced to the literature around the 2000s, recent papers show generalizations of this approach as well as sufficient stability conditions predicated on linear matrix inequalities (LMIs). In particular, considering the actuator dynamics of interest, when the resulting LMIs are feasible, stability of the closed-loop dynamical system is guaranteed. In the present disclosure, a new adaptive control architecture for uncertain dynamical systems with actuator dynamics is presented.
(9) The notation used in this paper is fairly standard. For self-containedness, note that R denotes the set of real numbers, R^n denotes the set of n×1 real column vectors, R^{n×m} denotes the set of n×m real matrices, R_+ (respectively, R̄_+) denotes the set of positive (respectively, nonnegative) real numbers, S_+^{n×n} (respectively, S̄_+^{n×n}) denotes the set of n×n symmetric positive-definite (respectively, nonnegative-definite) real matrices, D^{n×n} denotes the set of n×n real matrices with diagonal scalar entries, and "≜" denotes equality by definition. In addition, we write (·)^T for the transpose operator, (·)^{−1} for the inverse operator, det(·) for the determinant operator, tr(·) for the trace operator, and ∥·∥_2 for the Euclidean norm.
(10) Mathematical Preliminaries
(11) In this section, we present a concise overview of the standard model reference adaptive control problem in the absence of actuator dynamics. Consider the uncertain dynamical system given by

ẋ_p(t) = A_p x_p(t) + B_p[u(t) + W^T x_p(t)], x_p(0) = x_p0, (1)

where x_p(t) ∈ R^{n_p} is the state vector and u(t) ∈ R^m is a control input. In addition, A_p ∈ R^{n_p×n_p} is a known system matrix, B_p ∈ R^{n_p×m} is a known control input matrix, and W ∈ R^{n_p×m} is an unknown weight matrix. In Equation (1), it is implicitly assumed that the pair (A_p, B_p) is controllable. To address command following, let c(t) ∈ R^{n_c} be a given bounded command and x_c(t) ∈ R^{n_c} be the integrator state vector satisfying

ẋ_c(t) = E_p x_p(t) − c(t), x_c(0) = x_c0, (2)

Here, E_p ∈ R^{n_c×n_p} selects the subset of x_p(t) that is required to follow c(t). Equations (1) and (2) can now be augmented as

ẋ(t) = Ax(t) + B[u(t) + W^T x_p(t)] + B_r c(t), x(0) = x_0, (3)

where x(t) ≜ [x_p^T(t), x_c^T(t)]^T ∈ R^n, n = n_p + n_c, is the augmented state vector,

(12) A ≜ [A_p, 0_{n_p×n_c}; E_p, 0_{n_c×n_c}] ∈ R^{n×n}, (4) B ≜ [B_p^T, 0_{n_c×m}^T]^T ∈ R^{n×m}, (5) and B_r ≜ [0_{n_p×n_c}^T, −I_{n_c×n_c}]^T ∈ R^{n×n_c}. (6)
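To make the augmentation of Equations (3)-(6) concrete, the following Python sketch (illustrative only, not part of the claimed subject matter; the helper name `augment` is hypothetical) assembles the augmented matrices from a plant pair (A_p, B_p) and an output-selection matrix E_p:

```python
import numpy as np

def augment(Ap, Bp, Ep):
    """Assemble the augmented matrices A, B, B_r of Equations (3)-(6)
    from the plant pair (Ap, Bp) and the output-selection matrix Ep."""
    n_p, m = Bp.shape
    n_c = Ep.shape[0]
    A = np.block([[Ap, np.zeros((n_p, n_c))],
                  [Ep, np.zeros((n_c, n_c))]])            # Equation (4)
    B = np.vstack([Bp, np.zeros((n_c, m))])               # Equation (5)
    Br = np.vstack([np.zeros((n_p, n_c)), -np.eye(n_c)])  # Equation (6)
    return A, B, Br
```

Any controllable pair (A_p, B_p) can be used; the integrator rows simply stack E_p under the plant dynamics.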
(13) Next, consider the reference model capturing a desired (i.e., ideal) closed-loop dynamical system performance given by

ẋ_r(t) = A_r x_r(t) + B_r c(t), x_r(0) = x_r0, (7)

where x_r(t) ∈ R^n is the reference state vector and A_r ∈ R^{n×n} is the Hurwitz reference model matrix. The classical objective of the model reference adaptive control problem is to construct an adaptive feedback control law such that the state vector x(t) closely follows the reference state vector x_r(t) in the presence of system uncertainties captured by the term "W^T x_p(t)" in Equation (1). For addressing this problem when the actuator dynamics is not present, consider the feedback control law given by
u(t) = u_n(t) + u_a(t), (8)

where u_n(t) and u_a(t) are the nominal and the adaptive feedback control laws, respectively.

(14) In Equation (8), let the nominal feedback control law be given by

u_n(t) = −Kx(t), K ∈ R^{m×n}, (9)

such that A_r ≜ A − BK holds. Using Equations (8) and (9) in Equation (3) yields

ẋ(t) = A_r x(t) + B_r c(t) + B[u_a(t) + W^T x_p(t)], (10)

Motivated from the structure of Equation (10), the adaptive feedback control law in Equation (8) is now given by

u_a(t) = −Ŵ^T(t) x_p(t), (11)

where Ŵ(t) ∈ R^{n_p×m} is the estimate of the unknown weight matrix W satisfying the weight update law

Ŵ̇(t) = γ Proj_m[Ŵ(t), x_p(t) e^T(t) P B], Ŵ(0) = Ŵ_0, (12)

with γ ∈ R_+ being the learning rate, e(t) ≜ x(t) − x_r(t) being the system error state vector, and P ∈ S_+^{n×n} being the solution of the Lyapunov equation given by

0 = A_r^T P + P A_r + R, (13)

with R ∈ S_+^{n×n}. In Equation (12), the projection operator is used; its definition is given next.
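The Lyapunov equation (13) can be solved numerically; a minimal sketch with SciPy follows, using an illustrative Hurwitz A_r rather than any matrix from this disclosure:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Illustrative Hurwitz reference model matrix (not from this disclosure).
Ar = np.array([[0.0, 1.0],
               [-2.0, -3.0]])
R = np.eye(2)  # any R in S_+^{n x n}

# Equation (13): 0 = Ar^T P + P Ar + R, i.e., Ar^T P + P Ar = -R.
P = solve_continuous_lyapunov(Ar.T, -R)
```

Because A_r is Hurwitz and R is positive definite, the resulting P is symmetric positive definite, as required by the update law (12).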
(15) Definition 1 (Projection Operator).
(16) Let Ω = {θ ∈ R^n : (θ_i^min ≤ θ_i ≤ θ_i^max)_{i=1,2,…,n}} be a convex hypercube in R^n, where (θ_i^min, θ_i^max) represent the minimum and maximum bounds for the i-th component of the n-dimensional parameter vector θ. In addition, let Ω_ϵ = {θ ∈ R^n : (θ_i^min + ϵ ≤ θ_i ≤ θ_i^max − ϵ)_{i=1,2,…,n}} be a second hypercube for a sufficiently small positive constant ϵ, where Ω_ϵ ⊂ Ω. With y ∈ R^n, the projection operator Proj : R^n × R^n → R^n is then defined component-wise by

(17) Proj(θ, y)_i ≜ ((θ_i^max − θ_i)/ϵ) y_i if θ_i > θ_i^max − ϵ and y_i > 0; ((θ_i − θ_i^min)/ϵ) y_i if θ_i < θ_i^min + ϵ and y_i < 0; and y_i otherwise.
(18) In the light of Definition 1 (Projection Operator), it follows that (θ − θ*)^T(Proj(θ, y) − y) ≤ 0 for θ* ∈ Ω_ϵ. Note that this definition can also be generalized to matrices as Proj_m(Θ, Y) = (Proj(col_1(Θ), col_1(Y)), …, Proj(col_m(Θ), col_m(Y))), where Θ ∈ R^{n×m}, Y ∈ R^{n×m}, and col_i(·) denotes the i-th column operator. In particular, for a given Θ* ∈ R^{n×m}, it follows from (θ − θ*)^T(Proj(θ, y) − y) ≤ 0 that tr[(Θ − Θ*)^T(Proj_m(Θ, Y) − Y)] = Σ_{i=1}^m [col_i(Θ − Θ*)^T(Proj(col_i(Θ), col_i(Y)) − col_i(Y))] ≤ 0. Now, with regard to Equation (12), the projection bounds are defined such that |[Ŵ(t)]_{ij}| ≤ Ŵ_max,i+(j−1)n_p for i = 1, …, n_p and j = 1, …, m, where Ŵ_max,k ∈ R_+ denote symmetric element-wise projection bounds. Furthermore, the following remarks are immediate.
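A minimal numerical sketch of the component-wise projection operator of Definition 1 follows, assuming symmetric bounds θ_i^min = −θ_i^max; the function names `proj` and `proj_m` are hypothetical:

```python
import numpy as np

def proj(theta, y, theta_max, eps=0.1):
    """Component-wise projection operator of Definition 1 over the
    symmetric hypercube |theta_i| <= theta_max (theta_i^min = -theta_i^max).
    Scales y_i down inside the eps-boundary layer so theta stays in the set."""
    theta = np.asarray(theta, float)
    y = np.asarray(y, float)
    out = y.copy()
    upper = (theta > theta_max - eps) & (y > 0)   # heading out the upper face
    lower = (theta < -theta_max + eps) & (y < 0)  # heading out the lower face
    out[upper] = (theta_max - theta)[upper] / eps * y[upper]
    out[lower] = (theta + theta_max)[lower] / eps * y[lower]
    return out

def proj_m(Theta, Y, theta_max, eps=0.1):
    """Column-wise generalization Proj_m used in the update laws."""
    return np.column_stack([proj(Theta[:, j], Y[:, j], theta_max, eps)
                            for j in range(Theta.shape[1])])
```

In the interior of the hypercube the operator returns y unchanged; within the ϵ-layer it linearly attenuates the outward component, which yields the inequality (θ − θ*)^T(Proj(θ, y) − y) ≤ 0 used above.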
(19) Remark 1 (Adaptive Command Following in the Absence of Actuator Dynamics).
(20) Using Equation (11) in Equation (10) along with Equation (7), the system error dynamics can be written as

ė(t) = A_r e(t) − B W̃^T(t) x_p(t), e(0) = e_0, (15)

where W̃(t) ≜ Ŵ(t) − W ∈ R^{n_p×m} is the weight error. Consider the Lyapunov function candidate given by V(e, W̃) = e^T P e + γ^{−1} tr W̃^T W̃. Specifically, from the time derivative of this Lyapunov function, i.e., V̇(e(t), W̃(t)) ≤ −e^T(t) R e(t) ≤ 0, one can conclude the boundedness of the pair (e(t), W̃(t)) as well as lim_{t→∞} V̇(e(t), W̃(t)) = 0, where the latter results from Barbalat's lemma.
(21) Remark 2 (Adaptive Stabilization in the Absence of Actuator Dynamics).
(22) If the main objective is to stabilize the uncertain dynamical system given by Equation (1), the above command following formulation can be simplified since Equation (2) is not necessarily needed in this case. In particular, the feedback control law becomes u(t) = −K_p x_p(t) − Ŵ^T(t) x_p(t), where K_p is designed such that A_p − B_p K_p is Hurwitz and Ŵ(t) ∈ R^{n_p×m} satisfies the weight update law Ŵ̇(t) = γ_p Proj_m[Ŵ(t), x_p(t) x_p^T(t) P_p B_p], where γ_p ∈ R_+ is the learning gain and P_p ∈ S_+^{n_p×n_p} is the solution of the Lyapunov equation 0 = (A_p − B_p K_p)^T P_p + P_p (A_p − B_p K_p) + R_p with R_p ∈ S_+^{n_p×n_p}. In this case, stability can be shown using the Lyapunov function V(x_p, W̃) = x_p^T P_p x_p + γ_p^{−1} tr W̃^T W̃ and following the identical steps described in the above remark.
(23) Remark 1 (Adaptive Command Following in the Absence of Actuator Dynamics) and Remark 2 (Adaptive Stabilization in the Absence of Actuator Dynamics) no longer hold for adaptive control of uncertain dynamical systems with actuator dynamics. Thus, the next section presents a new model reference adaptive control architecture for stabilization and command following cases in the presence of actuator dynamics.
(24) An Exemplary Proposed Approach.
(25) Based on the mathematical preliminaries covered in the previous section, we first introduce the actuator dynamics problem. For this purpose, consider the uncertain dynamical system given by
ẋ_p(t) = A_p x_p(t) + B_p[v(t) + W^T x_p(t)], x_p(0) = x_p0, (16)

where v(t) ∈ R^m is the actuator output of the actuator dynamics G_A (available for feedback as in the hedging approach) satisfying

v̇(t) = −M(v(t) − u(t)), v(0) = v_0, (17)

(26) Here, M ∈ D^{m×m} has diagonal entries λ_i > 0, i = 1, …, m, which represent the actuator bandwidth of each control channel. The remainder of this section is divided into two subsections devoted to the adaptive stabilization and command following cases.
(27) Adaptive Stabilization with Actuator Dynamics.
(28) In this section, we design an adaptive feedback control law to stabilize the uncertain dynamical system (Equation 16) subject to the actuator dynamics (Equation 17). For this purpose, let the feedback control law be given by
u(t) = −K_p x_p(t) − Ŵ^T(t) x_p(t), (18)

where K_p ∈ R^{m×n_p} is designed such that A_p − B_p K_p is Hurwitz and Ŵ(t) ∈ R^{n_p×m} satisfies the weight update law

Ŵ̇(t) = γ_p Proj_m[Ŵ(t), σ_p(·) P G], Ŵ(0) = Ŵ_p0, (19)

where γ_p ∈ R_+ is the learning rate, σ_p(·) = [x_p(t) x_p^T(t), x_p(t) v^T(t)] ∈ R^{n_p×(n_p+m)}, P ∈ S_+^{(n_p+m)×(n_p+m)} is a solution of a matrix inequality for which further details are given below, and G = [B_p^T, 0_{m×m}]^T ∈ R^{(n_p+m)×m}. In addition, the projection bounds are defined such that

|[Ŵ(t)]_{ij}| ≤ Ŵ_max,i+(j−1)n_p, (20)

for i = 1, …, n_p and j = 1, …, m. It then follows that Equation (17) can be rewritten using Equation (18) as

v̇(t) = −M v(t) − M(K_p + Ŵ^T(t)) x_p(t). (21)
(29) Consider now, by adding and subtracting B_p Ŵ^T(t) x_p(t), an equivalent form of Equation (16) given by

ẋ_p(t) = (A_p + B_p Ŵ^T(t)) x_p(t) + B_p v(t) − B_p W̃^T(t) x_p(t), (22)

where W̃(t) ≜ Ŵ(t) − W. It follows that Equations (21) and (22) can be written compactly as

(30) ż_p(t) = A(Ŵ(t), M, ϵ) z_p(t) − G W̃^T(t) x_p(t), (23)

where z_p(t) ≜ [x_p^T(t), v^T(t)]^T and A(Ŵ(t), M, ϵ) is the resulting closed-loop system matrix, parameterized by the weight estimate Ŵ(t), the actuator bandwidth matrix M, and a design parameter ϵ ∈ R_+. Various results are now presented.
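The block structure implied by Equations (21) and (22) can be assembled as follows; this sketch omits the design parameter ϵ, and the helper name is hypothetical:

```python
import numpy as np

def stabilization_matrix(Ap, Bp, Kp, What, M):
    """Block matrix of the compact stabilization dynamics implied by
    Equations (21) and (22), acting on z_p = [x_p; v]. The design
    parameter epsilon of the quadratic stability condition is omitted;
    this only assembles the block structure."""
    return np.block([
        [Ap + Bp @ What.T, Bp],        # x_p-dynamics, Equation (22)
        [-M @ (Kp + What.T), -M],      # v-dynamics, Equation (21)
    ])
```

For a fast enough actuator, the assembled matrix is Hurwitz even with an unstable plant and a nonzero weight estimate, which is exactly the trade the stability analysis below quantifies.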
(31) Theorem 1 (Adaptive Stabilization with Actuator Dynamics).
(32) Consider the uncertain dynamical system given by Equation (16), the actuator dynamics given by Equation (17), the feedback control law given by Equation (18), and the update law given by Equation (19). In addition, let the closed-loop system matrix

(33) A(Ŵ(t), M, ϵ) (25)

be quadratically stable with ϵ ∈ R_+ being a design parameter. Then, the solution (z_p(t), W̃(t)) of the closed-loop dynamical system is bounded, lim_{t→∞} x_p(t) = 0, and lim_{t→∞} v(t) = 0.
(34) Remark 3 (LMI Analysis for Stabilization).
(35) In Theorem 1 (Adaptive Stabilization with Actuator Dynamics), we assume that Equation (25) is quadratically stable. Using LMIs, we can satisfy the quadratic stability of Equation (25) for given projection bounds Ŵ_max for the elements of Ŵ(t), the bandwidths of the actuator dynamics M, and the design parameter ϵ. To elucidate this important point, let

(36) Ŵ_l ∈ R^{n_p×m}, l ∈ {1, …, 2^{mn_p}},

be the corners of the hypercube constructed from all the permutations of the element-wise projection bounds ±Ŵ_max, and let

(37) A_l ≜ A(Ŵ_l, M, ϵ), l ∈ {1, …, 2^{mn_p}},

be the corresponding corner matrices. For a given M, it can then be shown that

A_l^T P + P A_l < 0, P > 0, l = 1, …, 2^{mn_p}, (28)

implies that A^T(Ŵ(t), M, ϵ)P + PA(Ŵ(t), M, ϵ) < 0; thus, one can solve the LMI given by Equation (28) to calculate P, which is then used in the weight update law (Equation 19).
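A lightweight numerical stand-in for the corner condition of Equation (28) is sketched below: it enumerates every corner weight matrix and tests the Lyapunov inequality with one candidate P obtained from the zero-weight case. A semidefinite-programming solver (e.g., CVXPY) is the rigorous way to solve the LMI; this check is only sufficient when it passes with this particular P, and the helper names are hypothetical:

```python
import itertools
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def corners(w_max, n_p, m):
    """All 2^(m*n_p) corner weight matrices with entries +/- w_max."""
    for signs in itertools.product([-1.0, 1.0], repeat=n_p * m):
        yield w_max * np.array(signs).reshape(n_p, m)

def corner_condition_holds(make_A, w_max, n_p, m):
    """Numerical stand-in for the corner LMIs of Equation (28): pick a
    candidate P from the zero-weight Lyapunov equation, then check
    A_l^T P + P A_l < 0 at every corner. Passing certifies quadratic
    stability only for this particular P; a semidefinite solver on
    Equation (28) is the rigorous route."""
    A0 = make_A(np.zeros((n_p, m)))
    P = solve_continuous_lyapunov(A0.T, -np.eye(A0.shape[0]))
    for Wl in corners(w_max, n_p, m):
        Al = make_A(Wl)
        if np.max(np.linalg.eigvalsh(Al.T @ P + P @ Al)) >= 0:
            return False
    return True
```

Because the closed-loop matrix is affine in the weight estimate, checking the 2^{mn_p} corners with a common P covers the whole hypercube, which is what makes the corner formulation attractive.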
(38) Alternatively, if one is interested in finding the minimum value of M such that A^T(Ŵ(t), M, ϵ)P + PA(Ŵ(t), M, ϵ) < 0 holds, we note the following. Let

(39) S ≜ P^{−1} and Ā(Ŵ(t), M, ϵ) ≜ A(Ŵ(t), M, ϵ)S.

Since ξ^T[A^T(Ŵ(t), M, ϵ)P + PA(Ŵ(t), M, ϵ)]ξ < 0 for all ξ ≠ 0, det(S) ≠ 0, and ξ^T[A^T(Ŵ(t), M, ϵ)P + PA(Ŵ(t), M, ϵ)]ξ = ξ^T S^{−1}[Ā^T(Ŵ(t), M, ϵ) + Ā(Ŵ(t), M, ϵ)]S^{−1}ξ < 0, one can equivalently show

(40) Ā^T(Ŵ(t), M, ϵ) + Ā(Ŵ(t), M, ϵ) < 0,

and

(41) with Ā_l ≜ Ā(Ŵ_l, M, ϵ), l ∈ {1, …, 2^{mn_p}},

the matrix inequality

Ā_l^T + Ā_l < 0, S > 0, (33)

implies that Ā^T(Ŵ(t), M, ϵ) + Ā(Ŵ(t), M, ϵ) < 0. We can then cast Equation (33) as a convex optimization problem: minimize M subject to Equation (33), such that the minimum actuator bandwidth M can be computed for given projection bounds Ŵ_max and the user-defined parameter ϵ. For the resulting minimum M, the solution P is then used in the weight update law of Equation (19) as P = S^{−1}.
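As a simple one-dimensional alternative sketch (not the disclosed LMI-based convex program), one can bisect a scalar bandwidth λ against any closed-loop stability test, assuming stability is monotone in λ; the helper name is hypothetical:

```python
def min_bandwidth(stable_for, lam_lo=0.01, lam_hi=100.0, tol=1e-3):
    """Bisection for the smallest scalar actuator bandwidth for which
    stable_for(lam) reports closed-loop stability. Assumes stability is
    monotone in lam on [lam_lo, lam_hi] and that stable_for(lam_hi) holds.
    A one-dimensional illustration, not the LMI-based convex program."""
    assert stable_for(lam_hi), "upper bracket must be stabilizing"
    while lam_hi - lam_lo > tol:
        mid = 0.5 * (lam_lo + lam_hi)
        if stable_for(mid):
            lam_hi = mid   # keep the smallest stabilizing bandwidth seen
        else:
            lam_lo = mid
    return lam_hi
```

Any stability test can be plugged in for `stable_for`, e.g., a corner-wise Lyapunov check over the projection hypercube; the threshold used in the verification below is arbitrary.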
(42) For the case of stabilization, it is seen that by augmenting the uncertain dynamical system with the actuator dynamics, a convenient quadratic stability condition can be used to prove stability and allow for computation of minimum actuator bandwidth values. In the next section, we design a reference model motivated from this quadratically stable matrix structure to achieve improved command following performance.
(43) Adaptive Command Following with Actuator Dynamics: Beyond Pseudo-Control Hedging.
(44) Building upon the results of the previous section, for adaptive command following, we incorporate the integrator dynamics given by Equation (2) with the uncertain dynamical system subject to actuator dynamics given by Equation (16) such that the augmented dynamics are given by
ẋ(t) = Ax(t) + B_r c(t) + B[v(t) + W^T x_p(t)], x(0) = x_0, (34)

where x(t) ≜ [x_p^T(t), x_c^T(t)]^T ∈ R^n, n = n_p + n_c, is the augmented state vector, and A, B, and B_r are given by Equations (4)-(6), respectively.
(45) Motivated by the results of the previous section, we consider the reference model using an actuator model as

(46) ẋ_r(t) = A x_r(t) + B v_r(t) + B_r c(t), v̇_r(t) = −M(v_r(t) + (K + Ŵ^T(t)N) x_r(t)), (35)

where N ≜ [I_{n_p×n_p}, 0_{n_p×n_c}] ∈ R^{n_p×n}, K ∈ R^{m×n} is designed such that A_r = A − BK is Hurwitz, and v_r(t) ∈ R^m is the reference actuator output. Defining z_r(t) ≜ [x_r^T(t), v_r^T(t)]^T, B_r0 ≜ [B_r^T, 0_{m×n_c}^T]^T, and

(47) F_r(Ŵ(t)) ≜ [A, B; −M(K + Ŵ^T(t)N), −M], (36)

we can write Equation (35) compactly as

ż_r(t) = F_r(Ŵ(t)) z_r(t) + B_r0 c(t). (37)
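The reference model matrix F_r(Ŵ(t)) of Equations (35)-(37) and one integration step can be sketched as follows (hypothetical helper names; forward Euler used purely for illustration):

```python
import numpy as np

def reference_matrix(A, B, K, What, M, N):
    """F_r(What) of Equations (35)-(37), acting on z_r = [x_r; v_r];
    the actuator model and the adaptive weights appear inside the
    reference model itself."""
    return np.block([
        [A, B],
        [-M @ (K + What.T @ N), -M],
    ])

def reference_step(zr, c, A, B, K, What, M, N, Br0, dt):
    """One forward-Euler step of Equation (37)."""
    return zr + dt * (reference_matrix(A, B, K, What, M, N) @ zr + Br0 @ c)
```

Note the design choice this encodes: unlike a hedged reference model, no error-dependent term enters here, so the reference trajectory depends only on the command and the current weight estimate.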
(48) To achieve tracking of the considered reference model Equation (37), let the feedback control law be given by
u(t) = −Kx(t) − Ŵ^T(t) N x(t), (38)

where Ŵ(t) satisfies the weight update law

Ŵ̇(t) = γ Proj_m[Ŵ(t), σ(·) P G], Ŵ(0) = Ŵ_0, (39)

with γ ∈ R_+ being the learning rate, σ(·) = [N x(t) e_x^T(t), N x(t) e_v^T(t)] ∈ R^{n_p×(n+m)}, e_x(t) ≜ x(t) − x_r(t) being the system error state vector, e_v(t) ≜ v(t) − v_r(t) being the actuator output error, P ∈ S_+^{(n+m)×(n+m)} being a solution of a matrix inequality for which further details are given below, and G = [B^T, 0_{m×m}]^T ∈ R^{(n+m)×m}. In addition, the projection bounds are defined such that

|[Ŵ(t)]_{ij}| ≤ Ŵ_max,i+(j−1)n_p, (40)

for i = 1, …, n_p and j = 1, …, m.
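One Euler step of the update law (39) can be sketched as follows; a minimal symmetric-bound projection is inlined as a stand-in for the full Proj_m of Definition 1, and the helper name is hypothetical:

```python
import numpy as np

def weight_update_step(What, x, ex, ev, P, G, N, gamma, w_max, dt, eps=0.1):
    """One forward-Euler step of the update law (39):
    dWhat/dt = gamma * Proj_m[What, sigma(.) P G], with
    sigma(.) = [N x e_x^T, N x e_v^T]. A minimal symmetric-bound
    component-wise projection is inlined as a stand-in for the full
    Proj_m of Definition 1."""
    sigma = np.hstack([np.outer(N @ x, ex), np.outer(N @ x, ev)])
    Y = gamma * sigma @ P @ G                  # unconstrained direction
    up = (What > w_max - eps) & (Y > 0)        # near the upper bound
    lo = (What < -w_max + eps) & (Y < 0)       # near the lower bound
    Y[up] *= (w_max - What)[up] / eps
    Y[lo] *= (What + w_max)[lo] / eps
    return What + dt * Y
```

The regressor σ(·) uses both the state error e_x and the actuator output error e_v, which is what couples the weight adaptation to the actuator dynamics.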
(49) Now, using Equation (38) in Equation (17) and adding and subtracting B Ŵ^T(t) N x(t) to Equation (34), we can write the augmented uncertain system dynamics and actuator dynamics compactly as

ż(t) = F_r(Ŵ(t)) z(t) + B_r0 c(t) − G W̃^T(t) N x(t), (41)

where z(t) ≜ [x^T(t), v^T(t)]^T. Defining the augmented error e_z(t) ≜ z(t) − z_r(t), the error dynamics follow from Equation (41) and Equation (37) as

ė_z(t) = F_r(Ŵ(t)) e_z(t) − G W̃^T(t) N x(t), e_z(0) = e_z0, (42)

where W̃(t) ≜ Ŵ(t) − W ∈ R^{n_p×m} is the weight error.
(50) Theorem 2 (Adaptive Command Following with Actuator Dynamics).
(51) Consider the uncertain dynamical system given by Equation (16), the integrator dynamics given by Equation (2), the actuator dynamics given by Equation (17), the reference model given by Equation (35), the feedback control law given by Equation (38), and the update law given by Equation (39). In addition, let the reference model system matrix

(52) F_r(Ŵ(t)) (43)

be quadratically stable with ϵ ∈ R_+ being a design parameter. Then, the solution (e_z(t), W̃(t)) of the closed-loop dynamical system is bounded, lim_{t→∞} e_x(t) = 0, and lim_{t→∞} e_v(t) = 0.
(53) Remark 4 (LMI Analysis for Command Following).
(54) Similar to the preceding Remark 3 (LMI Analysis for Stabilization), we can satisfy the quadratic stability of Equation (43) using LMIs. In this case, let

(55) F_l ≜ F_r(Ŵ_l), l ∈ {1, …, 2^{mn_p}},

be the corners of the hypercube constructed from all the permutations of the element-wise projection bounds ±Ŵ_max. The set of LMIs

F_l^T P + P F_l < 0, P > 0, l = 1, …, 2^{mn_p}, (45)

implies that F_r^T(Ŵ(t)) P + P F_r(Ŵ(t)) < 0; thus, one can solve the LMI given by Equation (45) to calculate P, which is then used in the weight update law for Equation (39).
(56) Once again, as in the previous Remark 3 (LMI Analysis for Stabilization), one can alternatively find the minimum value of M such that the quadratic stability condition of Equation (43) holds. For this purpose, let

(57) S_c ≜ P^{−1}, F̄_r(Ŵ(t)) ≜ F_r(Ŵ(t)) S_c, and F̄_l ≜ F_l S_c, l ∈ {1, …, 2^{mn_p}}.

Since ξ^T[F_r^T(Ŵ(t))P + PF_r(Ŵ(t))]ξ < 0 for all ξ ≠ 0, det(S_c) ≠ 0, and ξ^T[F_r^T(Ŵ(t))P + PF_r(Ŵ(t))]ξ = ξ^T S_c^{−1}[F̄_r^T(Ŵ(t)) + F̄_r(Ŵ(t))]S_c^{−1}ξ < 0, one can equivalently show that the matrix inequality

F̄_l^T + F̄_l < 0, S_c > 0, l = 1, …, 2^{mn_p},

implies that F̄_r^T(Ŵ(t)) + F̄_r(Ŵ(t)) < 0, such that a convex optimization problem can be set up to compute the minimum actuator bandwidth M for given projection bounds Ŵ_max and the user-defined parameter ϵ. For the resulting minimum M, the solution P is used in the proposed weight update law of Equation (39) as P = S_c^{−1}.
(60) Remark 5 (Comparison with Hedging Approach).
(61) The hedging framework used in conventional systems alters a given reference model trajectory with a hedging term, allowing for "correct" adaptation in the presence of actuator dynamics. Specifically, the reference model is given by

ẋ_r(t) = [Ideal Reference Model] + [Hedging Term] = [A_r x_r(t) + B_r c(t)] + [B(v(t) − u(t))], x_r(0) = x_r0. (51)

This is then written compactly with the actuator dynamics v̇(t) = −M(v(t) − u(t)) and the feedback control law u(t) = −Kx(t) − Ŵ^T(t) N x(t) as

(62) ż_r(t) = F_r(Ŵ(t)) z_r(t) + B_r0 c(t) + ϕ(e(t)), (52)

where ϕ(e(t)) collects the terms, dependent on the system error e(t), that the hedging term introduces into the reference model.

(63) Due to this hedging term in the reference model (Equation 51), we use LMIs to show F_r(Ŵ(t)) is quadratically stable, and therefore, the reference model state x_r(t) is bounded.

(64) The benefit of an exemplary architecture of this disclosure is that the designed reference model as given by Equation (35) does not contain the additional ϕ(e(t)) term as in Equation (52). Thus, the exemplary architecture does not significantly alter the trajectories of the given reference model as compared with the hedging approach.
(65) In the next section, we provide an illustrative example using a hypersonic vehicle model and make comparisons between the exemplary control design and reference model with a hedged reference model.
An Illustrative Numerical Example
(66) To elucidate an exemplary approach to the actuator dynamics problem in accordance with the present disclosure, we provide the following application to a hypersonic vehicle. Consider the uncertain hypersonic vehicle longitudinal dynamics given by
(67)
with zero initial conditions and the state vector defined as x(t) = [α(t), q(t), θ(t)]^T, where α(t) denotes the angle-of-attack, q(t) denotes the pitch rate, and θ(t) denotes the pitch angle. The uncertainty is considered to be W=[−100.01]^T such that it dominantly affects the stability derivative C_m, and the actuator output v(t) is given by the actuator dynamics

v̇(t) = −λ_e(v(t) − δ_e(t)), (54)

where δ_e(t) denotes the elevator deflection and λ_e is the actuator bandwidth, which is scalar since we are considering a single-input control channel. For this example, we design the proposed controller using the short-period approximation such that x_p(t) = [α(t), q(t)]^T with the respective system matrices
(68)
(69) LQR (linear-quadratic regulator) theory is used to design the nominal controller for both the proposed control design and the hedging-based control design, with E_p = [1, 0] such that a desired angle-of-attack command is followed. The controller gain matrix K is obtained using the highlighted augmented formulation, along with the weighting matrices Q = diag[2000, 25, 400000] to penalize the states and R = 12.5 to penalize the control input, resulting in the gain matrix

K = [−135.9, −37.7, −178.9], (57)

which has a desirable 60.4-degree phase margin and a crossover frequency of 6.75 Hz. In addition, the same learning gain matrix Γ = diag[1000, 10] is used for both controllers. For the hedging-based control design, the solution to A_r^T P + P A_r + R_1 = 0, where A_r ≜ A − BK, is calculated with R_1 = diag[1000, 1000, 2000]. In an exemplary controller in accordance with the present disclosure, we use the solution P from an LMI analysis.
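The nominal LQR design route described above can be reproduced in form with SciPy's continuous algebraic Riccati equation solver; the model matrices below are an illustrative double integrator, not the vehicle model of this example:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(A, B, Q, R):
    """LQR state-feedback gain K = R^-1 B^T P with P solving the
    continuous algebraic Riccati equation, the design route used for
    the nominal gain of Equation (57)."""
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# Illustrative double integrator (hypothetical, not the vehicle model).
A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = np.array([[0.0],
              [1.0]])
K = lqr_gain(A, B, np.diag([2000.0, 25.0]), np.array([[12.5]]))
```

The resulting closed-loop matrix A − BK is Hurwitz by construction, mirroring the role of A_r = A − BK in the reference model design.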
(70) Using the LMI analysis in the previous Remark 4 (LMI Analysis for Command Following), the minimum actuator bandwidth for the elevator control surface is calculated to be λ_e = 7.96.
(71) In accordance with the present disclosure, embodiments of a new adaptive controller architecture for the stabilization and command following of uncertain dynamical systems with actuator dynamics are provided. To go beyond the (pseudo-control) hedging approach, it was shown that an exemplary architecture does not significantly alter the trajectories of a given reference model for stable adaptation such that it can achieve better performance as compared to the hedging approach. An application to a hypersonic vehicle model elucidated this result.
(72) Referring now to
(73) It should be emphasized that the above-described embodiments of the present disclosure are merely possible examples of implementations, merely set forth for a clear understanding of the principles of the present disclosure. Many variations and modifications may be made to the above-described embodiment(s) without departing substantially from the spirit and principles of the present disclosure. All such modifications and variations are intended to be included herein within the scope of this disclosure and protected by the following claims.