Many control systems are inherently infinite-dimensional when they are described by partial differential equations (PDEs). Currently there is renewed interest in the control of these kinds of systems, especially in flexible aerospace structures, smart electric power grids, and quantum control [2, 12, 18]. New general results in the control theory of PDEs can be found in [11, 19, 20], and a very different approach to adaptive control of specifically parabolic PDEs can be seen in [21]. In this paper, we consider how to make a linear infinite-dimensional system track the output of a finite-dimensional reference model in a robust fashion in the presence of persistent disturbances.
In our previous work [3, 4, 5, 6], we accomplished direct model reference adaptive control and disturbance rejection with very low order adaptive gain laws for multi-input multi-output (MIMO) finite-dimensional systems. When systems are subjected to an unknown internal delay, they are also infinite-dimensional in nature. Direct adaptive control theory can be modified to handle this time-delay situation in infinite-dimensional spaces [7]. However, that approach does not handle the situation when PDEs describe the open-loop system.
This paper provides a foundation for the topic of direct adaptive control on infinite-dimensional spaces. It considers the effect of infinite-dimensionality on the adaptive control approach of [3, 4, 5, 6]. We prove here a robust stability theorem for infinite-dimensional spaces and use this new result to show that the adaptively controlled system is robustly globally asymptotically stable. In order to accommodate robust behavior, we must give up the idea of all errors converging to zero and replace it with convergence to a prescribed neighborhood of zero whose radius is determined by the magnitude of the unmodeled disturbance.
We want to apply this robust theory to linear PDEs governed by self-adjoint operators with compact resolvent, such as linear diffusion systems. Along the way we will also see some of the new technical difficulties encountered in infinite-dimensional direct adaptive control and find out that the devil really is in the details.
Ⅱ. ADAPTIVE ROBUST TRACKING WITH DISTURBANCE REJECTION
Let $X$ be an infinite-dimensional separable Hilbert space with inner product $(x,y)$ and corresponding norm $\left\| x \right\|\equiv \sqrt {(x,x)} $. Also let $A$ be a closed linear operator with domain $D(A)$ dense in $X$. Consider the linear infinite-dimensional plant with persistent disturbances:
$ \begin{align} \label{eq1} \left\{ {\begin{array}{l} \frac{\partial x(t)}{\partial t}=Ax(t)+Bu(t)+\Gamma u_D (t)+v,\\ \qquad x(0)\equiv x_0 \in D(A),\\ Bu\equiv \sum\limits_{i=1}^m {b_i u_i },\\ y(t)=Cx(t),\quad y_i \equiv (c_i ,x(t)),\quad i=1,\cdots,m,\\ \end{array}} \right. \end{align} $ | (1) |
where $x\in D(A)$ is the plant state, $b_i \in D(A)$ are actuator influence functions, $c_i \in D(A)$ are sensor influence functions, $u,y\in R^m$ are the control input and plant output, respectively, and $u_D $ is a disturbance with known basis functions $\phi _D $. We assume $v$ is a bounded but unknown disturbance such that $\left\| v \right\|\le M_v <\infty $.
In order to accomplish a degree of disturbance rejection in a direct adaptive scheme, we will make use of the definition of persistent disturbances given in [17].
Definition 1. A disturbance vector $u_D \in R^q$ is said to be persistent if it satisfies the disturbance generator equations:
$ \begin{align} \label{eq2} \left\{ {\begin{array}{l} u_D (t)=\theta z_D (t) \\ \dot {z}_D (t)=Fz_D (t) \\ \end{array}} \right.\mbox{ or }\left\{ {\begin{array}{l} u_D (t)=\theta z_D (t) \\ z_D (t)=L\phi _D (t) \\ \end{array}} \right., \end{align} $ | (2) |
where $F$ is a marginally stable matrix and $\phi _D (t)$ is a vector of known functions forming a basis for all such possible disturbances. This is known as ``a disturbance with known waveform but unknown amplitudes''.
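As an illustrative aside (not part of the development above), the disturbance generator (2) is easy to realize numerically. The Python sketch below, with hypothetical amplitudes and frequency, builds $(\theta, F, \phi_D)$ for the two most common cases: a step of unknown amplitude, and a sinusoid of known frequency but unknown amplitude and phase.

```python
import numpy as np

def step_disturbance(amplitude):
    # u_D = theta * z_D with z_D' = F z_D, F = 0: a constant (step) disturbance
    theta = np.array([[amplitude]])            # unknown amplitude
    F = np.array([[0.0]])                      # marginally stable (single eigenvalue at 0)
    phi_D = lambda t: np.array([1.0])          # known basis function
    return theta, F, phi_D

def sinusoid_disturbance(a, b, omega):
    # u_D = a*sin(omega*t) + b*cos(omega*t): known waveform (omega), unknown amplitudes a, b
    theta = np.array([[a, b]])
    F = np.array([[0.0, omega], [-omega, 0.0]])    # marginally stable rotation generator
    phi_D = lambda t: np.array([np.sin(omega * t), np.cos(omega * t)])
    return theta, F, phi_D

# here z_D(t) = L*phi_D(t) with L = I, so u_D(t) = theta @ phi_D(t)
theta, F, phi_D = sinusoid_disturbance(a=0.7, b=-0.2, omega=2.0)
print(theta @ phi_D(1.5))      # value of the persistent disturbance at t = 1.5
```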
The objective of control in this paper is to cause the output $y(t)$ of the plant to robustly asymptotically track the output $y_m \left( t \right)$ of a linear finite-dimensional reference model given by
$ \begin{align} \label{eq3} \left\{ {\begin{array}{l} \dot {x}_m =A_m x_m +B_m u_m,\\ y_m =C_m x_m,\quad x_m (0)=x_0^m,\\ \end{array}} \right. \end{align} $ | (3) |
where reference model state $x_m (t)$ is an $N_{m}$-dimensional vector with reference model output $y_m (t)$ having the same dimension as plant output $y(t)$. In general, the plant and reference models need not have the same dimension. The excitation of the reference model is accomplished via the vector $u_m (t)$, which is generated by
$ \begin{align} \label{eq4} \dot {u}_m =F_m u_m,\quad u_m (0)=u_0^m. \end{align} $ | (4) |
The reference model parameters $\left( {A_m ,B_m ,C_m ,F_m } \right)$ will be assumed completely known. The meaning of robust asymptotic tracking is as follows.
We define the output error vector as
$ \begin{align} \label{eq5} e_y \equiv y-y_m \mathrel{\mathop{\kern0pt\longrightarrow}\limits_{t\to \infty }} N(0), \end{align} $ | (5) |
where $N(0)$ is a predetermined neighborhood of the zero vector.
The control objective will be accomplished by a direct adaptive control law in the form of
$ \begin{align} \label{eq6} u=G_m x_m +G_u u_m +G_e e_y +G_D \phi _D. \end{align} $ | (6) |
The direct adaptive controller will have adaptive gains given by
$ \begin{align} \label{eq7} \left\{ {\begin{array}{llllll} \dot {G}_u =-e_y u_m^\ast \gamma _u,\quad \gamma _u >0 ,\\ \dot {G}_m =-e_y x_m^\ast \gamma _m,\quad \gamma _m >0,\\ \dot {G}_e =-e_y e_y^\ast \gamma _e,\quad \gamma _e >0,\\ \dot {G}_D =-e_y \phi _D^\ast \gamma _D,\quad \gamma _D >0. \\ \end{array}} \right. \end{align} $ | (7) |
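For intuition only, the following sketch implements the control law (6) and the gain laws (7) by forward Euler on a hypothetical two-mode plant surrogate; the plant matrices, reference model data, adaptation rates, and step size are illustrative choices (not taken from this paper), and the disturbance channel is omitted for brevity.

```python
import numpy as np

dt, T = 1e-3, 10.0
# hypothetical two-mode plant surrogate: x' = A x + B u, y = C x (one unstable mode)
A = np.array([[0.5, 0.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 1.0]])
# scalar reference model (3)-(4): x_m' = -x_m + u_m, y_m = x_m, u_m' = 0
Am, Bm, Cm, Fm = -1.0, 1.0, 1.0, 0.0
xm, um = 0.0, 1.0
phi_D = 1.0                                   # step-disturbance basis function

x = np.zeros((2, 1))
Gm = Gu = Ge = GD = 0.0                       # adaptive gains (scalars since m = 1)
gam_m = gam_u = gam_e = gam_D = 10.0          # adaptation rates gamma_* > 0

for _ in range(int(T / dt)):
    y, ym = (C @ x).item(), Cm * xm
    ey = y - ym
    u = Gm * xm + Gu * um + Ge * ey + GD * phi_D     # control law (6)
    Gu -= ey * um * gam_u * dt                       # gain laws (7), forward Euler
    Gm -= ey * xm * gam_m * dt
    Ge -= ey * ey * gam_e * dt
    GD -= ey * phi_D * gam_D * dt
    x += dt * (A @ x + B * u)                        # plant step
    xm += dt * (Am * xm + Bm * um)                   # reference model step
    um += dt * (Fm * um)

print("final output error e_y =", (C @ x).item() - Cm * xm)
```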
We define the ideal trajectories for (1) as
$ \begin{align} \label{eq8} \left\{ {\begin{array}{llll} x_\ast =S_{11}^\ast x_m +S_{12}^\ast u_m +S_{13}^\ast z_D =S_1 z,\\ u_\ast =S_{21}^\ast x_m +S_{22}^\ast u_m +S_{23}^\ast z_D =S_2 z,\\ \end{array}} \right. \end{align} $ | (8) |
with $z\equiv [{x_m }\quad {u_m }\quad {z_D }]^{\rm T}\in {\bf R}^L$, where the ideal trajectory $x_\ast \left( t \right)$ is generated by the ideal control $u_\ast \left( t \right)$ from
$ \begin{align} \left\{ {\begin{array}{lll} \frac{\partial x_{\ast}}{\partial t}=Ax_\ast +Bu_\ast +\Gamma u_D,\\ y_\ast =Cx_\ast =y_m .\\ \end{array}} \right. \end{align} $ | (9) |
If such ideal trajectories exist, they will be linear combinations of the reference model state, disturbance state, and reference model input, as in (8), and they will produce exact output tracking in a plant free of the unknown disturbance $v$, as in (9).
By substituting (8) into (9) and using (3) and (4), we obtain the linear model matching conditions, i.e.,
$ \begin{align} \label{eq10} AS_{11}^\ast +BS_{21}^\ast =S_{11}^\ast A_m, \end{align} $ | (10) |
$ \begin{align} \label{eq11} AS_{12}^\ast +BS_{22}^\ast =S_{12}^\ast F_m +S_{11}^\ast B_m, \end{align} $ | (11) |
$ \begin{align} \label{eq12} CS_{11}^\ast =C_m, \end{align} $ | (12) |
$ \begin{align} \label{eq13} CS_{12}^\ast =0, \end{align} $ | (13) |
$ \begin{align} \label{eq14} AS_{13}^\ast +BS_{23}^\ast +\Gamma \theta =S_{13}^\ast F, \end{align} $ | (14) |
$ \begin{align} CS_{13}^\ast =0. \end{align} $ | (15) |
The model matching conditions (10)-(15) are necessary and sufficient for the existence of ideal trajectories of the form (8), and they can be rewritten compactly as
$ \begin{align} \label{eq15} \left\{ {\begin{array}{l} AS_1 +BS_2 =S_1 L_m +H_1,\\ CS_1 =H_2,\\ \end{array}} \right. \end{align} $ | (16) |
where $S_1 \equiv \left[{{\begin{array}{*{20}c} {S_{11}^\ast } & {S_{12}^\ast } & {S_{13}^\ast } \\ \end{array} }} \right]:R^L\to D(A)\subset X$, $S_2 \equiv \left[ {{\begin{array}{*{20}c} {S_{21}^\ast } & {S_{22}^\ast } & {S_{23}^\ast } \\ \end{array} }} \right]:R^L\to R^m$, $L_m \equiv \left[{{\begin{array}{*{20}c} {A_m } & {B_m } & 0 \\ 0 & {F_m } & 0 \\ 0 & 0 & F \\ \end{array} }} \right],$ and $\left\{ {\begin{array}{l} H_1 \equiv \left[{{\begin{array}{*{20}c} 0 & 0 & {-\Gamma \theta } \\ \end{array} }} \right] \\[2mm] H_2 \equiv \left[{{\begin{array}{*{20}c} {C_m } & 0 & 0 \\ \end{array} }} \right] \\ \end{array}} \right..$ Because $S_1$ and $S_2$ are both of finite rank, they are bounded linear operators on their respective domains.
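For a finite-dimensional (e.g., modal-truncation) surrogate of (1), the matching conditions (16) are a linear system in $(S_1,S_2)$ and can be solved directly. The sketch below stacks them with Kronecker products and solves in the least-squares sense; all matrices are hypothetical placeholders chosen only to make the computation concrete.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, L = 6, 1, 3                           # truncated plant order, SISO, dim of z = [x_m; u_m; z_D]

A = np.diag(np.linspace(1.0, -9.0, n))      # stand-in truncated plant matrix
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
Lm = np.diag([-1.0, 0.0, 0.0])              # block-diag of (A_m, F_m, F): scalar blocks here
H1 = np.zeros((n, L)); H1[:, -1] = -rng.standard_normal(n)   # last column plays the role of -Gamma*theta
H2 = np.hstack([np.eye(m), np.zeros((m, L - m))])            # [C_m 0 0] with C_m = I_m

In, IL = np.eye(n), np.eye(L)
# vec(A S1 - S1 Lm + B S2) = vec(H1) and vec(C S1) = vec(H2), with column-major vec
top = np.hstack([np.kron(IL, A) - np.kron(Lm.T, In), np.kron(IL, B)])
bot = np.hstack([np.kron(IL, C), np.zeros((m * L, m * L))])
rhs = np.concatenate([H1.flatten('F'), H2.flatten('F')])
sol, *_ = np.linalg.lstsq(np.vstack([top, bot]), rhs, rcond=None)

S1 = sol[:n * L].reshape((n, L), order='F')
S2 = sol[n * L:].reshape((m, L), order='F')
print(np.allclose(A @ S1 + B @ S2, S1 @ Lm + H1), np.allclose(C @ S1, H2))
```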
Ⅳ. IDEAL TRAJECTORY EXISTENCE AND UNIQUENESS: NORMAL FORM
To determine the conditions for existence and uniqueness of the ideal trajectories, we need two lemmas.
Lemma 1. If $CB$ is nonsingular, then $P_1 \equiv B(CB)^{-1}C$ is a (non-orthogonal) bounded projection onto the range of $B$, $R(B)$, along the null space of $C$, $N(C)$, with the complementary bounded projection $P_2 \equiv I-P_1 $, and $X=R(B)\oplus N(C)$ as well as $D(A)=R(B)\oplus [N(C)\cap D(A)]$.
Proof. Consider
$ \begin{align*} & P_1^2 =(B(CB)^{-1}C)(B(CB)^{-1}C) =\\ &\quad B(CB)^{-1}C\equiv P_1. \end{align*} $ |
Hence $P_1 $ is a projection.
Clearly, $R(P_1 )\subseteq R(B)$, and any $z=Bu\in R(B)$ satisfies
$ \begin{align*} & P_1 z=(B(CB)^{-1}C)Bu =\\ &\quad Bu=z\in R(P_1). \end{align*} $ |
Therefore, $R(P_1 )=R(B)$.
Also $N(P_1 )=N(C)$: clearly $N(C)\subseteq N(P_1 )$, and $z\in N(P_1 )$ implies $P_1 z=0$, so that $CP_1 z=CB(CB)^{-1}Cz=Cz=0$, i.e., $N(P_1 )\subseteq N(C)$. So $P_1 $ is a projection onto $R(B)$ along $N(C)$. But $P_1^\ast \ne P_1 $ in general, so it is not an orthogonal projection. We have $X=R(P_1 )\oplus N(P_1 )$; hence $X=R(B)\oplus N(C).$
Since $b_i \in D(A)$, we have $R(B)\subset D(A)$. Consequently, $D(A)=( {R(B)\cap D(A)} )\oplus ( {N(C)\cap D(A)} )=R(B)\oplus ( N(C)\cap D(A) )$. The projection $P_1 $ is bounded since $B$ and $C$ are bounded and its range is finite-dimensional, and the complementary projection $P_2 $ is bounded because $\left\| {P_2 } \right\|\le 1+\left\| {P_1 } \right\|<\infty .$
This completes the proof of Lemma 1.
Now for the above pair of projections $(P_1,P_2)$, we have
$ \left\{ {\begin{array}{l} \displaystyle\frac{\partial P_1 x}{\partial t}=P_1 \displaystyle\frac{\partial x}{\partial t}=(\underbrace {P_1 AP_1 }_{A_{11} })P_1 x+(\underbrace {P_1 AP_2 }_{A_{12} })P_2 x+(\underbrace {P_1 B}_B)u,\\ \displaystyle\frac{\partial P_2 x}{\partial t}=P_2 \displaystyle\frac{\partial x}{\partial t}=(\underbrace {P_2 AP_1 }_{A_{21} })P_1 x+(\underbrace {P_2 AP_2 }_{A_{22} })P_2 x+(\underbrace {P_2 B}_{=0})u,\\ y=(\underbrace {CP_1 }_C)P_1 x+(\underbrace {CP_2 }_{=0})P_2 x,\\ \end{array}} \right. $ |
which implies that
$ \left\{ {\begin{array}{l} \dfrac{\partial P_1 x}{\partial t}=A_{11} P_1 x+A_{12} P_2 x+Bu, \\[2mm] \dfrac{\partial P_2 x}{\partial t}=A_{21} P_1 x+A_{22} P_2 x, \\[2mm] y=CP_1 x=Cx,\\ \end{array}} \right. $ |
because
$ \begin{align*} &y=Cx=C(B(CB)^{-1}C)x=CP_1 x,\\ & P_1 x=B(CB)^{-1}Cx=B(CB)^{-1}y,\\ &CP_2 =C-CB(CB)^{-1}C=0,\\ &P_2 B=B-B(CB)^{-1}CB=0. \end{align*} $ |
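As a quick numerical sanity check of Lemma 1 and the identities above (purely illustrative, with random full-rank data standing in for $B$ and $C$), one can verify $P_1^2=P_1$, $P_1B=B$, $P_2B=0$, and $CP_2=0$, and see that $P_1$ is in general not orthogonal:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 2
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))          # CB is nonsingular for generic data

P1 = B @ np.linalg.solve(C @ B, C)       # P1 = B (CB)^{-1} C
P2 = np.eye(n) - P1

print(np.allclose(P1 @ P1, P1))          # P1 is a projection
print(np.allclose(P1 @ B, B))            # P1 acts as the identity on R(B)
print(np.allclose(P2 @ B, 0), np.allclose(C @ P2, 0))   # P2 B = 0 and C P2 = 0
print(np.allclose(P1, P1.T))             # generally False: P1 is not an orthogonal projection
```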
Lemma 2. If $CB$ is nonsingular, then there exists an invertible, bounded linear operator $W\equiv \left[ {{\begin{array}{*{20}c} C \\ {W_2 P_2 } \\ \end{array} }} \right]:X\to \tilde {X}\equiv R^m\times l_2 $ such that $\bar {B}\equiv WB=\left[{{\begin{array}{*{20}c} {CB} \\ 0 \\ \end{array} }} \right]$ and $\bar {C}\equiv CW^{-1}=\left[ {{\begin{array}{*{20}c} {I_m } & 0 \\ \end{array} }} \right]$ and $\bar {A}\equiv WAW^{-1}$.
This coordinate transformation can be used to put (1) into the normal form, i.e.,
$ \begin{align} \label{eq16} \left\{ {\begin{array}{l} \dot {y}=\bar {A}_{11} y+\bar {A}_{12} z_2 +CBu,\\ \displaystyle\frac{\partial z_2}{\partial t}=\bar {A}_{21} y+\bar {A}_{22} z_2,\\ \end{array}} \right. \end{align} $ | (17) |
where subsystem $(\bar {A}_{22} ,\bar {A}_{12} ,\bar {A}_{21} )$ is called the zero dynamics of (1), and
$ \begin{align*} \left\{\begin{array}{*{20}l} \bar {A}_{11} \equiv CA_{11} B(CB)^{-1}=CAB(CB)^{-1},\\ \bar {A}_{12} \equiv CAW_2^\ast,\\ \bar {A}_{21} \equiv W_2 A_{21} B(CB)^{-1},\\ \bar {A}_{22} \equiv W_2 A_{22} W_2^\ast, \end{array}\right. \end{align*} $ |
and $W_2 :X\to l_2 \mbox{ such that }W_2 x\equiv \left[ {{\begin{array}{*{20}c} {(\theta _1 ,P_2 x)}\\ {(\theta _2 ,P_2 x)}\\ {(\theta _3 ,P_2 x)}\\ \vdots\\ \end{array} }} \right]$ is an isometry from $N(C)\mbox{ into }l_2 $.
Proof. Since $X$ is separable, we can choose an orthonormal basis $\left\{ {\theta _k } \right\}_{k=1}^\infty $ of $N(C)$, i.e., $N(C)=\overline {\rm sp} \left\{ {\theta _k } \right\}_{k=1}^\infty $.
Define $W_2 :X\to l_2 $ by $W_2 x\equiv \left[ {{\begin{array}{*{20}c} {(\theta _1 ,P_2 x)} \\ {(\theta _2 ,P_2 x)} \\ {(\theta _3 ,P_2 x)} \\ {\vdots} \\ \end{array} }} \right]$.
Note that $\left\| {W_2 x} \right\|^2=\sum\nolimits_{k=1}^\infty {\left| {(\theta _k ,P_2 x)} \right|^2} =\left\| {P_2 x} \right\|^2<\infty $, which implies $W_2 x\in l_2 $. So $W_2 $ is a bounded linear operator and an isometry of $N(C)$ into $l_2 $. Consequently, $W_2 W_2^\ast =I$ on $l_2 $ and $W_2^\ast W_2 =P_2 $, with the retraction $z_2 =W_2 P_2 x\in l_2 $. Also $W_2^\ast z_2 =W_2^\ast (W_2 P_2 x)=P_2 x$. Now, using $x=P_1 x+P_2 x$ from Lemma 1, we have
$ \begin{align*} & \dot {y}=CP_1 \dot {x} =\\ &\quad CP_1 A(P_1 x+P_2 x)+CP_1 Bu =\\ &\quad C(B(CB)^{-1}C)AB(CB)^{-1}y+\\ &\quad C(B(CB)^{-1}C)A(W_2^\ast z_2 )+C(B(CB)^{-1}C)Bu =\\ &\quad \bar {A}_{11} y+\bar {A}_{12} z_2 +CBu, \end{align*} $ |
and
$ \begin{align*} \dot {z}_2 &=W_2 P_2 \dot {x} =\\ &W_2 P_2 [A(P_1 x+P_2 x)+Bu] =\\ &W_2 P_2 A(B(CB)^{-1}y+W_2^\ast z_2 )+\underbrace {W_2 P_2 B}_{=0}u =\\ &W_2 (I-B(CB)^{-1}C)AB(CB)^{-1}y+\\ &W_2 (I-B(CB)^{-1}C)AW_2^\ast z_2 =\\ &\bar {A}_{21} y+\bar {A}_{22} z_2. \end{align*} $ |
This yields the normal form (17).
Choose $W\equiv \left[{{\begin{array}{*{20}c} C \\ {W_2 P_2 } \\ \end{array} }} \right]$, which is a bounded linear operator. Then $W$ has a bounded inverse, given explicitly by $W^{-1}\equiv \left[ {{\begin{array}{*{20}c} {B(CB)^{-1}} & {W_2^\ast } \\ \end{array} }} \right]$. This gives
$ \begin{align*} &WW^{-1}=\left[{{\begin{array}{*{20}c} {CB(CB)^{-1}} & {CW_2^\ast } \\ {W_2 P_2 B(CB)^{-1}} & {W_2 P_2 W_2^\ast }\\ \end{array} }} \right] =\\ &\quad\left[{{\begin{array}{*{20}c} {I_m } & 0\\ 0 & {W_2 W_2^\ast } \\ \end{array} }} \right]=\left[{{\begin{array}{*{20}c} {I_m } & 0 \\ 0 & I \\ \end{array} }} \right]=I, \end{align*} $ |
because $R(W_2^\ast )\subseteq N(C)$.
Furthermore, $W^{-1}W=P_1 +W_2^\ast W_2 P_2 =P_1 +P_2 =I,$ because $W_2^\ast W_2 =P_2 $. Also direct calculation yields
$ \begin{align*} \left\{ \!\!\!{\begin{array}{lllll} &\!\!\!\!\bar {B}\equiv WB=\left[{{\begin{array}{*{20}lllll} {CB} \\ {W_2 P_2 B} \\ \end{array} }} \right]=\left[{{\begin{array}{*{20}llll} {CB} \\ 0 \\ \end{array} }} \right],\\[2mm] & \!\!\!\!\bar {C}\equiv CW^{-1}=\left[ {{\begin{array}{*{20}lllll} {CB(CB)^{-1}} & {CW_2^\ast } \\ \end{array} }} \right]=\left[{{\begin{array}{*{20}lll} {I_m } & 0 \\ \end{array} }} \right] ,\\[2mm] &\!\!\!\! \bar {A}\equiv WAW^{-1}=\left[{{\begin{array}{*{20}lll} {CAB(CB)^{-1}} & {CAW_2^\ast }\\ {W_2 P_2 AB(CB)^{-1}} & {W_2 P_2 AP_2 W_2^\ast } \\ \end{array} }} \right].\\ \end{array}} \right.\end{align*} $ |
This completes the proof of Lemma 2.
Now we can prove the following theorem about the existence and uniqueness of ideal trajectories.
Theorem 1. Assume $CB$ is nonsingular. Then $\sigma (L_m )=\sigma (A_m )\cup \sigma (F_m )\cup \sigma (F)\subset \rho (\bar {A}_{22} )$, where $\rho (\bar {A}_{22} )\equiv \{\lambda \in C\mbox{ such that }(\lambda I-\bar {A}_{22} )^{-1}:l_2 \to l_2 \mbox{ is a bounded linear operator}\}$, if and only if there exist unique bounded linear operator solutions $(S_1 ,S_2 )$ satisfying the matching conditions (16).
Note that this condition can also be written as $\sigma (L_m )\cap \sigma (\bar {A}_{22} )=\emptyset,\mbox{ where }\sigma (\bar {A}_{22} )\equiv [\rho (\bar {A}_{22} )]^{\rm c}$.
Proof. Define $\bar {S}_1 \equiv WS_1 =\left[ {{\begin{array}{*{20}c} {\bar {S}_a } \\ {\bar {S}_b } \\ \end{array} }} \right]$ and $\bar {H}_1 \equiv WH_1 =\left[ {{\begin{array}{*{20}c} {\bar {H}_a } \\ {\bar {H}_b } \\ \end{array} }} \right]$. From (16), we obtain
$ \left\{ {\begin{array}{l} \bar {A}\bar {S}_1 +\bar {B}S_2 =\bar {S}_1 L_m +\bar {H}_1,\\ \bar {C}\bar {S}_1 =H_2,\\ \end{array}} \right. $ |
where $(\bar {A},\bar {B},\bar {C})$ is the normal form (17). From this we obtain
$ \left\{ {\begin{array}{l} \bar {S}_a =H_2,\\[2mm] S_2 =(CB)^{-1}[H_2 L_m +\bar {H}_a-(\bar {A}_{11} H_2 +\bar {A}_{12} \bar {S}_b )],\\[2mm] \bar {A}_{22} \bar {S}_b-\bar {S}_b L_m =\bar {H}_b-\bar {A}_{21} H_2. \\ \end{array}} \right. $ |
We can rewrite the last equation as
$ \begin{align*} (\lambda I-\bar {A}_{22} )\bar {S}_b-\bar {S}_b (\lambda I-L_m )=\bar {A}_{21} H_2-\bar {H}_b \equiv \bar {H} \end{align*} $ |
for all complex $\lambda $. Now assume that $L_m $ is simple and therefore provides a basis of eigenvectors $\left\{ {\phi _k } \right\}_{k=1}^L \mbox{ for }R^L$. This is not essential but will make this part of the proof easier to understand. The proof can be done with generalized eigenvectors and the Jordan form. So we have
$ \begin{align*} &(\lambda _k I-\bar {A}_{22} )\bar {S}_b \phi _k-\bar {S}_b \underbrace {(\lambda _k I-L_m )\phi _k }_{=0}=\\ &\quad(\bar {A}_{21} H_2-\bar {H}_b )\phi _k \equiv \bar {H}\phi _k, \end{align*} $ |
which implies that
$ \begin{align*} \bar {S}_b \phi _k =(\lambda _k I-\bar {A}_{22} )^{-1}\bar {H}\phi _k, \end{align*} $ |
because $\lambda_{k} \in \sigma (L_m )\subset \rho ({\bar{A}}_{22} )$. Thus we have
$ \begin{align*} \bar {S}_b z=\sum\limits_{k=1}^L {\alpha _k (\lambda _k I-\bar {A}_{22} )^{-1}\bar {H}\phi _k },\quad \forall z=\sum\limits_{k=1}^L {\alpha _k \phi _k } \in R^L. \end{align*} $ |
Since $\lambda_{k} \in \sigma (L_m )\subset \rho (\bar {A}_{22} )$, all $(\lambda _k I-\bar {A}_{22} )^{-1}$ are bounded operators.
Also $\bar {H}\equiv \bar {A}_{21} H_2-\bar {H}_b $ is a bounded operator on $R^L$. Therefore $\bar {S}_b $ is a bounded linear operator, and this leads to $S_1 $ being bounded linear as well. For the converse, suppose $\sigma (L_m )\cap \sigma (\bar {A}_{22} )\ne \emptyset$ and let $\lambda _{\ast } \in \sigma (L_m )\cap \sigma (\bar {A}_{22} )$. Then there exists $\phi _\ast \ne 0$ such that
$ \begin{align*} &(\lambda _\ast I-\bar {A}_{22} )\bar {S}_b \phi _\ast-\bar {S}_b \underbrace {(\lambda _\ast I-L_m )\phi _\ast }_{=0}=\\ &\quad(\lambda _\ast I-\bar {A}_{22} )\bar {S}_b \phi _\ast =\bar {H}\phi _\ast. \end{align*} $ |
In this case, three things can happen when $\lambda _{\ast } \in \sigma (\bar {A}_{22} )$:
1) $(\lambda _\ast I-\bar {A}_{22} )$ can fail to be one-to-one, so multiple solutions $\bar {S}_b $ will exist;
2) $R(\lambda _\ast I-\bar {A}_{22} )$ can fail to be all of $l_2$, so no solution $\bar {S}_b $ may exist;
3) $(\lambda _\ast I-\bar {A}_{22} )^{-1}$ can fail to be a bounded operator, so the solution $\bar {S}_b $ may be unbounded.
In each case, we lose the existence of a unique bounded operator solution $S_1 $.
The proof of Theorem 1 is complete.
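The constructive step of the proof, building $\bar S_b$ one eigenvector of $L_m$ at a time through the resolvent of $\bar A_{22}$, can be mimicked on a finite truncation of the zero dynamics. The following sketch uses hypothetical diagonal data with $\sigma(L_m)\cap\sigma(\bar A_{22})=\emptyset$ and checks the resulting Sylvester-type equation.

```python
import numpy as np

rng = np.random.default_rng(2)
nz, L = 10, 3                                  # truncated zero-dynamics dimension, dim of z
A22 = np.diag(-np.arange(1.0, nz + 1.0))       # stable zero dynamics (hypothetical)
Lm = np.diag([-0.5, 0.0, 2.0])                 # simple, eigenvalues disjoint from sigma(A22)
Hbar = rng.standard_normal((nz, L))            # stands for A21 H2 - Hb

# S_b phi_k = (lambda_k I - A22)^{-1} Hbar phi_k for each eigenpair of Lm
lam, Phi = np.linalg.eig(Lm)
cols = [np.linalg.solve(lam[k] * np.eye(nz) - A22, Hbar @ Phi[:, k]) for k in range(L)]
Sb = np.column_stack(cols) @ np.linalg.inv(Phi)     # back to the standard basis of R^L

# verify the Sylvester-type equation A22 Sb - Sb Lm = Hb - A21 H2 = -Hbar
print(np.allclose(A22 @ Sb - Sb @ Lm, -Hbar))
```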
It is possible to relate the point spectrum $\sigma _p (\bar {A}_{22} )\equiv \left\{ {\lambda \mbox{ such that }\lambda I-\bar {A}_{22} \mbox{ is not one-to-one}} \right\}$ to the set $Z$ of transmission (or blocking) zeros of $(A,B,C)$. As in the finite-dimensional case [13], we can see that
$ \begin{align*} & Z\equiv \Big\{ \lambda \mbox{ such that }V(\lambda )\equiv \left[{{\begin{array}{*{20}c} {\lambda I-A}& B\\ C& 0 \\ \end{array} }} \right]: D(A)\times R^m \to\\ &~ X\times R^m~{\rm linear~operator~is~not~one{\mbox -}to{\mbox-}one}\Big\}.\end{align*} $ |
Lemma 3. $Z=\sigma _p (\bar {A}_{22} )$, where $\sigma _p (\bar {A}_{22} )\equiv \{ \lambda \mbox{ such that }\lambda I-\bar {A}_{22} \mbox{ is not one-to-one}\}$ is the point spectrum of $\bar {A}_{22} $.
So the transmission zeros of the infinite-dimensional open-loop plant $(A,B,C)$ are the eigenvalues of its zero dynamics $(\bar {A}_{22} ,\bar {A}_{12} ,\bar {A}_{21} ).$
Proof. From
$ \begin{align*}& \bar {V}(\lambda )=\left[ {{\begin{array}{*{20}c} {\lambda I-\bar {A}} & {\bar {B}} \\ {\bar {C}} & 0 \\ \end{array} }} \right]=\\ &\quad \left[{{\begin{array}{*{20}c} W & 0 \\ 0 & I\\ \end{array} }} \right]\underbrace {\left[{{\begin{array}{*{20}c} {\lambda I-A} & B\\ C & 0\\ \end{array} }} \right]}_{V(\lambda )}\left[{{\begin{array}{*{20}c} {W^{-1}} & 0 \\ 0 & I \\ \end{array} }} \right], \end{align*} $ |
we see that $\left[{{\begin{array}{*{20}c} {\lambda I-\bar {A}} & {\bar {B}} \\ {\bar {C}} & 0\\ \end{array} }} \right]$ is not one-to-one if and only if $\left[ {{\begin{array}{*{20}c} {\lambda I-A} & B \\ C& 0\\ \end{array} }} \right]$ is not one-to-one. But, using the normal form from Lemma 2, we have
$ \begin{align*} &\bar {V}(\lambda )\equiv \left[{{\begin{array}{*{20}c} {\lambda I-\bar {A}} & {\bar {B}} \\ {\bar {C}}& 0\\ \end{array} }} \right]=\\ &\quad\left[{{\begin{array}{*{20}c} {\lambda I-\bar {A}_{11} } & {-\bar {A}_{12} }& {CB} \\ {-\bar {A}_{21} } & {\lambda I-\bar {A}_{22} }& 0 \\ {I_m } & 0 & 0 \\ \end{array} }} \right]. \end{align*} $ |
And therefore $\bar {V}(\lambda )h=\bar {V}(\lambda )[{h_1 }\quad {h_2 } \quad {h_3 }]^{\rm T}=0$ if and only if $h_1 =0$, $h_3 =(CB)^{-1}\bar {A}_{12} h_2 $, and $(\lambda I-\bar {A}_{22} )h_2 =0$. So $h\ne 0$ if and only if $h_2 \ne 0$. Therefore $\left[ {{\begin{array}{*{20}c} {\lambda I-\bar {A}}& {\bar {B}} \\ {\bar {C}} & 0 \\ \end{array} }} \right]$ is not one-to-one if and only if $\lambda \in \sigma _p (\bar {A}_{22} )$.
This completes the proof of Lemma 3.
Using Lemma 3 and Theorem 1, we have the following internal model principle.
Corollary 1. Assume $CB$ is nonsingular and $\sigma (\bar {A}_{22} )=\sigma _p (\bar {A}_{22} )=\sigma _p (P_2 AP_2 )$, where $\bar {A}_{22} \equiv W_2 P_2 AP_2 W_2^\ast $. Then there exist unique bounded linear operator solutions $(S_1 ,S_2 )$ satisfying the matching conditions (16) if and only if $\sigma (L_m )\cap Z=[\sigma (A_m )\cup \sigma (F_m )\cup \sigma (F)]\cap Z=\emptyset $, i.e., no eigenvalues of $(A_m ,F_m ,F)$ can be zeros of $(A,B,C)$. Note that
$ \begin{align*} &\lambda I-\bar {A}_{22} \mbox{ is not one-to-one}\Leftrightarrow\\ &\exists x\ne 0\ni P_2 x\ne 0~\&~ z_2 =W_2 P_2 x\ne 0~\&~(\lambda I-\bar {A}_{22} )z_2 =\\ &\quad 0 \Leftrightarrow \exists x\ne 0 \ni P_2 x\ne 0~\&~0=(\lambda I-\bar {A}_{22} )W_2 P_2 x=\\ &\qquad (\lambda \underbrace {W_2 W_2^\ast }_I-W_2 P_2 AP_2 W_2^\ast )W_2 P_2 x = \\ &\qquad [W_2 (\lambda I-P_2 AP_2 )W_2^\ast]W_2 P_2 x \Leftrightarrow\\ &\qquad W_2 (\lambda I-P_2 AP_2 )W_2^\ast \mbox{ is not one-to-one on }N(C). \end{align*} $ |
But $W_2$ is an isometry on $N(C)$, so $\sigma _p (\bar {A}_{22} )=\sigma _p (P_2 AP_2 )$.
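Lemma 3 is also easy to check numerically on a finite-dimensional surrogate: build the zero-dynamics matrix from the projections of Lemma 1 and confirm that $V(\lambda)$ becomes singular at each of its eigenvalues. The random data below are hypothetical and only illustrate the statement.

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(3)
n, m = 7, 1
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))              # CB nonsingular for generic data

P1 = B @ np.linalg.solve(C @ B, C)           # non-orthogonal projections of Lemma 1
P2 = np.eye(n) - P1
W2 = null_space(C).T                         # rows form an orthonormal basis of N(C)
A22 = W2 @ P2 @ A @ P2 @ W2.T                # zero-dynamics matrix on N(C)

# Lemma 3: every eigenvalue of A22 is a transmission zero, i.e., V(lambda) loses injectivity
for lam in np.linalg.eigvals(A22):
    V = np.block([[lam * np.eye(n) - A, B], [C, np.zeros((m, m))]])
    print(lam, np.linalg.svd(V, compute_uv=False)[-1])    # smallest singular value ~ 0
```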
Ⅴ. STABILITY OF THE ERROR SYSTEM: ALMOST STRICT DISSIPATIVITY
The error system can be found from (1) and (9) by first defining $e\equiv x-x_\ast $ and $\Delta u\equiv u-u_\ast $. Then we have
$ \begin{align} \left\{ {\begin{array}{l} \displaystyle\frac{\partial e}{\partial t}=Ae+B\Delta u+v ,\\ e_y \equiv y-y_m =y-y_\ast =Ce. \\ \end{array}} \right. \end{align} $ | (18) |
Now consider the definition of strict dissipativity for infinite-dimensional systems and the general form of this adaptive error system to prove stability. The main theorem of this section will be utilized later to assess the convergence and stability of the adaptive controller with disturbance rejection for linear diffusion systems.
Noting that there can be some ambiguity in the literature with the definition of strictly dissipative systems, we modify the suggestion of Wen in [8] for finite-dimensional systems and expand it to include infinite-dimensional systems.
Definition 2. The triple $(A_c ,B,C)$ is said to be strictly dissipative (SD) if $A_c $ is a densely defined, closed operator on $D(A_c )\subseteq X$, where $X$ is a complex Hilbert space with inner product $(x,y)$ and corresponding norm $\left\| x \right\|\equiv \sqrt {(x,x)} $, and generates a $C_0 $ semigroup of bounded operators $U(t)$, and $(B,C)$ are bounded finite-rank input/output operators, with $B:R^m\to X$ and $C:X\to R^m$. In addition, there exist symmetric positive bounded operators $P$ and $Q$ on $X$ such that $0<p_{\min } \left\| e \right\|^2\le (Pe,e)\le p_{\max } \left\| e \right\|^2$ and $0<q_{\min } \left\| e \right\|^2\le (Qe,e)\le q_{\max } \left\| e \right\|^2$, i.e., $P$ and $Q$ are bounded and coercive, and
$ \begin{align} \label{eq18} \left\{ {\begin{array}{l} {\rm Re}(PA_c e,e)\equiv \frac{1}{2}\left[{(PA_c e,e)+\overline {(PA_c e,e)} } \right]=\\ \qquad \frac{1}{2}\left[{(PA_c e,e)+(e,PA_c e)} \right] =\\ \qquad-(Qe,e)\le-q_{\min } \left\| e \right\|^2,\quad e\in D(A_c ),\\ PB=C^\ast,\\ \end{array}} \right. \end{align} $ | (19) |
where $C^\ast $ is the adjoint of the operator $C$.
We also say that $(A,B,C)$ is almost strictly dissipative (ASD) when there exists a $G_\ast \in R^{m\times m}$ such that $(A_c ,B,C)$ is SD with $A_c \equiv A+BG_\ast C$. Note that if $P=I$ in (19), by the Lumer-Phillips theorem (see [10], p. 405), we would have
$ \left\| {U_c (t)} \right\|\le {\rm e}^{-\sigma t},\quad t\ge 0;\quad \sigma \equiv q_{\min } >0. $ |
Henceforth, we will make the following set of assumptions.
Hypothesis 1. Assume the following:
1) There exists a gain $G_e^\ast $ such that the triple $(A_c \equiv A+BG_e^\ast C,B,C)$ is SD, i.e., $(A,B,C)$ is ASD;
2) $A$ is a densely defined, closed operator on $D(A)\subseteq X$ and generates a $C_0 $ semigroup of bounded operators $U(t)$;
3) $\phi _D $ is bounded.
From (8), we have $u_\ast =S_{21}^\ast x_m +S_{22}^\ast u_m +S_{23}^\ast z_D $, and using (6) and (7), we obtain
$ \begin{align*} & \Delta u\equiv u-u_\ast =\notag \\ &\quad(G_m x_m +G_u u_m +G_e e_y +G_D \phi _D )-\notag \\ &\quad(S_{21}^\ast x_m +S_{22}^\ast u_m +S_{23}^\ast \underbrace {z_D }_{L\phi _D }) =\notag \\ &\quad G_e^\ast e_y +\Delta G_e e_y +\notag \\ &\quad\left[\begin{array}{*{20}c} {\Delta G_m } & {\Delta G_u } & {\Delta G_D } \\ \end{array} \right]\left[{{\begin{array}{*{20}c} {x_m } \\ {u_m } \\ {\phi _D } \\ \end{array} }} \right]=\notag \\ &\quad G_e^\ast e_y +\Delta G\eta, \end{align*} $ |
where $\Delta G\!\equiv\! G-G_\ast,G\!\equiv\! \left[\! {{\begin{array}{*{20}c} {G_e } & {G_m } & {G_u } & {G_D } \\ \end{array} }}\! \right],~G_\ast\! \equiv\! \left[\! {{\begin{array}{*{20}c} {G_e^\ast } & {S_{21}^\ast } & {S_{22}^\ast } & {S_{23}^\ast L} \\ \end{array} }}\! \right],$ and $ \eta \!\equiv\! \left[\! {{\begin{array}{*{20}c} {e_y } \!& {x_m } \!& {u_m } \!& {\phi _D } \\ \end{array} }} \!\right]^{\rm T}.$ From (1), (6), (7), (18), and (19), the error system becomes
$ \begin{align} \label{eq19} \left\{ \begin{array}{l} \frac{\partial e}{\partial t}=(\underbrace {A+BG_e^\ast C}_{A_c })e+B\Delta G\eta +v=\\ \quad A_c e+B\rho +v,\quad e\in D(A),\quad \rho \equiv \Delta G\eta,\\ e_y =Ce,\\ \Delta \dot {G}=\dot {G}-\dot {G}_\ast =\dot {G}=-e_y \eta ^\ast \gamma ,\\[2mm] \gamma \equiv \left[\begin{array}{*{20}c} {\gamma _e } & 0 & 0 & 0 \\ 0 & {\gamma _m } & 0 & 0 \\ 0 & 0 & {\gamma _u } & 0 \\ 0 & 0 & 0 & {\gamma _D } \\ \end{array} \right]>0. \\ \end{array} \right. \end{align} $ | (20) |
Since $B$ and $C$ are finite-rank operators, so is $BG_e^\ast C$. Therefore, $A_c \equiv A+BG_e^\ast C$ with $D(A_c )=D(A)$ generates a $C_0 $ semigroup $U_c (t)$ because $A$ does (see [17], Theorem 2.1, p. 497). Furthermore, by Theorem 8.10 (p. 157) in [1], $x(t)$ remains in $D(A)$ and is differentiable there for all $t\ge 0$. This is because $F(t)\equiv B\rho =B\Delta G\eta $ is continuously differentiable in $D(A).$
We see that (20) is the feedback interconnection of an infinite-dimensional linear subsystem with $e\in D(A)\subseteq X$ and a finite-dimensional subsystem with $\Delta G\in {\bf R}^{m\times m}$. This can be written in the following form using $w\equiv \left[{{\begin{array}{*{20}c} e\\ {\Delta G} \\ \end{array} }} \right]\in D\equiv D(A)\times R^{m\times m}\subseteq \bar {X}\equiv X\times R^{m \times m}$:
$ \begin{align} \label{eq20} \left\{ {\begin{array}{l} \frac{\partial w}{\partial t}=w_t =f(t,w)\equiv \left[{{\begin{array}{*{20}c} {A_c e+B\rho (t)+v} \\ {-e_y \eta ^\ast \gamma } \\ \end{array} }} \right],\\[2mm] w(t_0 )=w_0 \in D\mbox{ dense in }\bar {X}\equiv X\times R^{m\times m} .\\ \end{array}} \right. \end{align} $ | (21) |
The inner product on $\bar {X}\equiv X\times R^{m\times m}$ can be defined as
$ \begin{align*} \begin{array}{c} (w_1 ,w_2 )\equiv \left( {\left[{{\begin{array}{*{20}c} {x_1 } \\ {\Delta G_1 } \\ \end{array} }} \right],\left[{{\begin{array}{*{20}c} {x_2 } \\ {\Delta G_2 } \\ \end{array} }} \right]} \right)\equiv \\ \qquad \quad (x_1 ,x_2 )+\mbox{tr}(\Delta G_2 \Delta G_1 ^\ast ), \end{array} \end{align*} $ |
which will make it a Hilbert space as well.
The following robust stabilization theorem shows that convergence to a neighborhood with radius determined by the supremum norm of $v$ is possible for a specific type of adaptive error system. In the following, we denote $\left\| M \right\|_2 \equiv \sqrt {\mbox{tr}(M\gamma ^{-1}M^{\rm T})} $ as the trace norm of a matrix $M$, where $\gamma >0$.
Theorem 2 (Robust stabilization). Consider the coupled system of differential equations
$ \begin{align} \left\{ \begin{array}{l} \dot{e}=A_c e+B\underbrace {\left( {G(t)-G^\ast } \right)}_{\Delta G} z+v,\\ e_y =Ce,\\ \dot{G}(t)=-e_y z^{\rm T}\gamma-aG(t),\\ \end{array} \right. \end{align} $ | (22) |
where $e,v\in D(A_c )$, $z\in R^m$, and $\left[ {{\begin{array}{*{20}c} e \\ G \\ \end{array} }} \right]\in \bar {X}\equiv X\times R^{m\times m}$ is a Hilbert space with inner product $\left( {\left[{{\begin{array}{*{20}c} {e_1 } \\ {G_1 } \\ \end{array} }} \right],\left[{{\begin{array}{*{20}c} {e_2 } \\ {G_2 } \\ \end{array} }} \right]} \right)\equiv (e_1 ,e_2 )+\mbox{tr}\left( {G_1 \gamma ^{-1}G_2^{\rm T} } \right)$ and norm $\left\| {\left[ {{\begin{array}{*{20}c} e \\ G \\ \end{array} }} \right]} \right\|\equiv \left( {\left\| e \right\|^2+\mbox{tr}(G\gamma ^{-1}G^{\rm T})} \right)^{\frac{1}{2}},$ $G(t)$ is the $m\times m$ adaptive gain matrix, and $\gamma $ is any positive definite constant matrix of appropriate dimension. Assume the following:
1) $(A,B,C)$ is ASD with $A_c \equiv A+BG_\ast C$;
2) There exists $M_G >0$ such that $\sqrt {\mbox{tr}(G^\ast G^{\ast {\rm T}})} \le M_G $;
3) There exists $M_v >0$ such that $\mathop {\sup }\nolimits_{t\ge 0} \left\| {v (t)} \right\|\le M_v <\infty $;
4) There exists $a >0$ such that $a\le \frac{q_{\min } }{p_{\max } }$, where $q_{\min } ,p_{\max } $ are defined in Definition 2;
5) The positive definite matrix $\gamma $ satisfies $\mbox{tr}(\gamma ^{-1})\le \left( {\frac{M_v }{aM_G }} \right)^2$.
Then the gain matrix $G(t)$ is bounded, and the state $e(t)$ exponentially, with rate ${\rm e}^{-at}$, approaches the ball of radius
$ \begin{align} R_\ast \equiv \frac{\left( {1+\sqrt {p_{\max } } } \right) }{a\sqrt {p_{\min } } }M_v.\notag \end{align} $ |
The proof of Theorem 2 is in the Appendix.
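As a purely numerical illustration of Theorem 2 (not part of the proof), the sketch below simulates a scalar instance of (22) with $P=I$; the plant, gains, and disturbance are hypothetical values chosen so that assumptions 1)-5) hold, and the tail of the trajectory is compared with the radius $R_\ast$.

```python
import numpy as np

# scalar instance of (22) with P = I; numbers chosen to satisfy assumptions 1)-5)
A, Bp, Cp = 1.0, 1.0, 1.0            # unstable open-loop plant
G_star = -3.0                        # A_c = A + B G* C = -2, so q_min = 2, p_min = p_max = 1
a, gam, M_v, M_G = 0.5, 12.0, 0.5, 3.0   # a <= q_min/p_max and tr(gam^-1) <= (M_v/(a*M_G))^2
R_star = (1.0 + 1.0) / (a * 1.0) * M_v   # ball radius from Theorem 2

dt, T = 1e-4, 40.0
e, G = 5.0, 0.0
tail = []
for k in range(int(T / dt)):
    t = k * dt
    v = M_v * np.sin(t)                              # bounded but otherwise unknown disturbance
    ey = Cp * e
    z = ey                                           # a simple choice for the signal z
    de = (A + Bp * G_star * Cp) * e + Bp * (G - G_star) * z + v
    dG = -ey * z * gam - a * G                       # leakage-modified gain law from (22)
    e, G = e + dt * de, G + dt * dG
    if t > T / 2:
        tail.append(abs(e))

print("R_* =", R_star, " max |e| over the tail =", max(tail))   # the tail lies inside the ball
```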
Now we can prove the robust stability and convergence of the direct adaptive controller (6) and (7) in closed loop with the linear infinite-dimensional plant (1) and (2).
Theorem 3. Under Hypothesis 1, we have robust state and output tracking of the reference model, i.e., $\left[ {{\begin{array}{*{20}c} e \\ {\Delta G} \\ \end{array} }} \right]\mathrel{\mathop{\kern0pt\longrightarrow}\limits_{t\to \infty }} N(0,R_\ast )$, and since $C$ is a bounded linear operator, we have $e_y =y-y_m =Ce\mathrel{\mathop{\kern0pt\longrightarrow}\limits_{t\to \infty }} N(0,R_\ast )$ with bounded adaptive gains $G\equiv \left[ {{\begin{array}{*{20}c} {G_e } & {G_m } & {G_u } & {G_D } \\ \end{array} }} \right]=G_\ast +\Delta G.$
Proof. This follows directly from the application of Theorem 2 to the error system (20), which is of the form (22).
Note that uniform continuity is not needed since Barbalat's lemma [15] is not invoked here.
Ⅵ. APPLICATION: ADAPTIVE CONTROL OF UNSTABLE DIFFUSION EQUATIONS DESCRIBED BY SELF-ADJOINT OPERATORS WITH COMPACT RESOLVENT
We will apply the above direct adaptive controller to the following single-input/single-output Cauchy problem:
$ \left\{ {\begin{array}{l} \frac{\partial x}{\partial t}=Ax+b(u+u_D )+v,~x(0)\equiv x_0 \in D(A),\\ y=(c,x),~\mbox{ with }b,c\in D(A). \\ \end{array}} \right. $ |
And the reference model will be
$ \left\{ {\begin{array}{l} \dot {x}_m =A_m x_m +B_m u_m =-x_m +u_m ,\\ y_m =C_m x_m =x_m,\\ \dot {u}_m =F_m u_m =0. \\ \end{array}} \right. $ |
For this application, we will assume the disturbances are step functions. Note that the disturbance functions can be built from any basis as long as $\phi_D$ is bounded; in particular, sinusoidal disturbances are often applicable. So we have $\phi_D \equiv 1$ and $\left\{ {\begin{array}{l} u_D =(1)z_D \\ \dot {z}_D =(0)z_D \\ \end{array}} \right.$, which implies $F=0$ and $\theta _D =1$.
Let $u=G_e e_y +G_D $ with $\left\{ {\begin{array}{lllll} \dot {G}_e =-e_y e_y^\ast \gamma _e \\ \dot {G}_D =-e_y \gamma _D \\ \end{array}} \right.$.
We will assume that $A$ is closed and densely defined, but is also a self-adjoint operator with compact resolvent. This means $A$ has discrete real spectrum $\lambda _1 \ge \lambda _2 \ge\cdots \to -\infty $ and $\left\{ {\varphi _k } \right\}_{k=1}^\infty $ is an orthonormal sequence of eigenfunctions (see [9], Theorem 6.29, p. 187). Assume $\lambda _k \ne 0,~\forall k=1,2,\cdots$. Only a finite number of the eigenvalues may be unstable (or positive); so we will say that $\lambda _1 \ge \lambda _2 \ge \cdots\ge \lambda _N \ge-\sigma \ge \lambda _{N+1} \to-\infty$, where $\sigma >0$ is the desired stability margin.
Define the orthogonal projection operators as $x=P_N x+P_R x$ with
$ \begin{align*} P_N \equiv \sum\limits_{k=1}^N {\underbrace {(x,\varphi _k )}_{x_k }\varphi _k } ,\qquad P_R \equiv \sum\limits_{k=N+1}^\infty {\underbrace {(x,\varphi _k )}_{x_k }\varphi _k,} \end{align*} $ |
where $P_N :X\to S_N \equiv {\rm sp}\left\{ {\varphi _1 ,\cdots,\varphi _N } \right\},P_R :X\to S_N^\bot $.
Let the sensor and actuator influence functions be the same and entirely in $S_N $, that is, $c\equiv b=\sum\nolimits_{k=1}^N {\underbrace {(b,\varphi_k )}_{b_k }\varphi_k } $ with all $b_k \ne 0$, and choose $G_\ast \equiv-g_\ast <0$. Then $A_c =A-g_\ast b^\ast b$ remains self-adjoint with discrete spectrum, and we have
$ \begin{align*} \left\{ {\begin{array}{l} A_c P_N x=\sum\limits_{k=1}^N {\lambda _k x_k \varphi _k }-g_\ast (b,P_N x)b,\\ A_c P_R x=\sum\limits_{k=N+1}^\infty {\lambda _k x_k \varphi _k },\\ \end{array}} \right.\end{align*} $ |
because $P_N b=b=c$. So ${\rm Re}(PA_c x,x)={\rm Re}(PA_c P_N x,x)+{\rm Re}(PA_c P_R x,x)$, and in Definition 2 we will use $P=I$ and obtain the following results from [17]:
1) ${\rm Re}(A_c P_N x,x)=\sum\nolimits_{k=1}^N {\lambda _k x_k^2 }-g_\ast (\sum\nolimits_{k=1}^N {b_k x_k } )^2=\underline{x}_N^{\rm T} (\underbrace {\bar {A}_N-g_\ast \underline{b}_N \underline{b}_N^{\rm T} }_{\bar {A}_N^c })\underline{x}_N $, where $\bar {A}_N \equiv {\rm diag}[\lambda _k],~\underline{b}_N \equiv \left[{b_1 ,\cdots,b_N } \right]^{\rm T},~\underline{x}_N \equiv [x_1 ,\cdots,x_N]^{\rm T}$, and $(\bar {A}_N ,\underline{b}_N ,\underline{b}_N^{\rm T})$ is a finite-dimensional system that is controllable and observable if and only if all $b_k \ne 0$.
2) $(\bar {A}_N ,\underline{b}_N ,\underline{b}_N^{\rm T})$ is almost strictly positive real which is equivalent to $\underline{b}_N^{\rm T} \underline{b}_N >0$ and all zeros of the open-loop transfer function being stable (see [17]).
3) We have $\underline{b}_N^{\rm T} \underline{b}_N =\sum\nolimits_{k=1}^N {b_k^2 } =\left\| b \right\|^2>0$, and all zeros of the open-loop system are stable when
$ \begin{align*} &\bar {H}_N \equiv \left[{{\begin{array}{*{20}c} {\bar {A}_N-\lambda I} & {\underline{b}_N } \\ {\underline{b}_N^{\rm T} } & 0 \\ \end{array} }} \right]=\\ &\qquad \left[{{\begin{array}{*{20}c} {\lambda _1-\lambda } & 0 & {\cdots} & 0 & {b_1 } \\ 0 & {\lambda _2-\lambda } & {\cdots} & 0 & {b_2 } \\ {\vdots} & & {\ddots} & & {\vdots} \\ 0 & 0 & {\cdots} & {\lambda _N-\lambda } & {b_N } \\ {b_1 } & {b_2 } & {\cdots} & {b_N } & 0 \\ \end{array} }} \right] \end{align*} $ |
is nonsingular for all ${\rm Re}(\lambda) \ge 0$ (see [14], p. 286). So $(\bar {A}_N ,\underline{b}_N ,\underline{b}_N^{\rm T} )$ is almost strictly positive real (ASPR) if and only if
$ \begin{array}{l} \det ({{\bar H}_N}) = \left( {\prod\limits_{k = 1}^N {({\lambda _k} - \lambda )} } \right)\sum\limits_{k = 1}^N {\frac{{( - b_k^2)}}{{{\lambda _k} - \lambda }}} = \\ \;\;\;\;\;\;\;\;\;\;\; - \sum\limits_{k = 1}^N {b_k^2\prod\limits_{l = 1,l \ne k}^N {({\lambda _l} - \lambda )} } \ne {\rm{0}}, \end{array} $ | (23) |
for all ${\rm Re}(\lambda) \ge 0$ with $\lambda \ne \lambda _{k}$, because in this application all eigenvalues are distinct and nonzero.
4) There exists $G_\ast \equiv-g_\ast $ such that $(A_c \equiv A-g_\ast cc^\ast ,B\equiv b,C\equiv c^\ast )$ is SD and
$ \begin{align} \label{eq22} {\rm Re}(A_c x,x)\le-\sigma \left\| x \right\|^2,\quad \forall x\in D(A). \end{align} $ | (24) |
As long as (23) is satisfied, we can apply Theorem 3, and we have robust state tracking, $x\mathrel{\mathop{\kern0pt\longrightarrow}\limits_{t\to \infty }} x_\ast $, and robust reference model tracking, $y\mathrel{\mathop{\kern0pt\longrightarrow}\limits_{t\to \infty }} y_m $, with bounded adaptive gains $G\equiv \left[ {{\begin{array}{*{20}c} {G_m } & {G_u } & {G_e } & {G_D } \\ \end{array} }} \right]$, in the presence of persistent disturbances, via the direct adaptive controller.
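Item 4) above can be made concrete on the truncated modal system: one simply increases the scalar output-feedback gain until $\bar A_N^c=\bar A_N-g_\ast\underline b_N\underline b_N^{\rm T}$ is negative definite with the desired margin. The sketch below uses hypothetical eigenvalues and a small margin; note that the margin reachable this way is limited by the magnitude of the slowest zero of $(\bar A_N,\underline b_N,\underline b_N^{\rm T})$.

```python
import numpy as np

lam = np.array([5.0, -10.0, -30.0])            # hypothetical truncated eigenvalues (one unstable)
bN = np.full(3, 1.0 / np.sqrt(3))              # actuator/sensor modal coefficients, all nonzero
sigma = 1.0                                    # desired margin (must be below the magnitude of the
                                               # slowest zero, about 1.5 for these numbers)
A_N = np.diag(lam)
g = 0.0
while np.linalg.eigvalsh(A_N - g * np.outer(bN, bN)).max() > -sigma:
    g += 1.0                                   # coarse sweep; the eigenvalues decrease monotonically in g

print("g* =", g)
print("closed-loop modal eigenvalues:", np.linalg.eigvalsh(A_N - g * np.outer(bN, bN)))
```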
Example. An unstable heat equation
Let $Ax\equiv \frac{\partial ^2x}{\partial z^2}+\beta \pi ^2x$ on $D(A)\equiv \left\{ {x~{\rm such~that }~x\in {C}^{2}[0,1]~{\rm and }~x(t,0)=x(t,1)=0} \right\},$ which implies that $x(t,z)=\sum\nolimits_{k=1}^\infty {{\rm e}^{\lambda _k t}(x(0,z),\varphi _k (z))} \varphi _k (z)$ with $\lambda _k \equiv (\beta -k^2)\pi ^2$ and $\varphi _k \equiv \sqrt 2 \sin (k\pi z)$.
This is a heat equation with an internal source. When $\beta \equiv 2$ and $b\equiv \frac{1}{\sqrt 3 }(\varphi _1 +\varphi _2 +\varphi _3 )\in S_3 \equiv {\rm sp}\left\{ {\varphi _1 ,\varphi _2 ,\varphi _3 } \right\}\subset D(A),$ we obtain
$ \begin{align*} &A_N =\left[{{\begin{array}{*{20}c} {\beta-1} & 0 & 0 \\ 0 & {\beta-4} & 0 \\ 0 & 0 & {\beta-9} \\ \end{array} }} \right]\pi ^2,\\ &b_N =\frac{1}{\sqrt 3 } [1\quad 1\quad 1]^{\rm T}=c_N^{\rm T}.\end{align*} $ |
This system satisfies (23): it has poles at $+\pi ^2,~-2\pi ^2,~-7\pi ^2$ and zeros at $-5\pi^2\approx -49.35$ and $-\pi^2/3\approx-3.29$, so it is minimum phase with $c_N b_N =1>0$ and therefore ASPR. Consequently, the direct adaptive controller (6) and (7) will produce output tracking $e_y \equiv y-y_m \mathrel{\mathop{\kern0pt\longrightarrow}\limits_{t\to \infty }} 0$ with bounded adaptive gains in the presence of step disturbances.
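The pole and zero values quoted in the example are easy to reproduce numerically. Following Lemma 3, the sketch below computes the zeros as the eigenvalues of the zero-dynamics matrix obtained by compressing $A_N$ onto $N(c_N)$ (here $P_1=b_Nb_N^{\rm T}$ is an orthogonal projection because $c_N=b_N^{\rm T}$ and $\|b_N\|=1$); only the setup is from the example, and the computation itself is an illustrative check.

```python
import numpy as np
from scipy.linalg import null_space

beta = 2.0
lam = np.array([(beta - k**2) * np.pi**2 for k in (1, 2, 3)])    # pi^2, -2*pi^2, -7*pi^2
A_N = np.diag(lam)
b_N = np.full((3, 1), 1.0 / np.sqrt(3))
c_N = b_N.T

print("c_N b_N =", (c_N @ b_N).item())        # = 1 > 0
print("poles  :", lam)                        # [  9.87, -19.74, -69.09 ]

# zeros = eigenvalues of the zero dynamics (Lemma 3): compress A_N onto N(c_N) = span{b_N}^perp
V = null_space(c_N)                           # orthonormal basis of N(c_N), shape (3, 2)
zeros = np.linalg.eigvalsh(V.T @ A_N @ V)
print("zeros  :", zeros)                      # [ -49.35, -3.29 ] = [ -5*pi^2, -pi^2/3 ]
```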
Ⅶ. PERTURBATION RESULTS
The previous results depend upon $b=P_N b\in S_N $. However, it is possible to allow $b\equiv P_N b+\varepsilon P_R b\in D(A),~\varepsilon \ge 0$. Define $x_N \equiv P_N x$ and $x_R \equiv P_R x$, which implies that
$ \begin{align*} & {\rm Re}(A(\varepsilon )_c x,x)=\\ &\quad {\rm Re}\left( {\left[{{\begin{array}{*{20}c} {A_N^c } & {\varepsilon A_{12} } \\ {\varepsilon A_{21} } & {A_R +\varepsilon A_{22} } \\ \end{array} }} \right]\left[{{\begin{array}{*{20}c} {x_N } \\ {x_R } \\ \end{array} }} \right],\left[{{\begin{array}{*{20}c} {x_N } \\ {x_R } \\ \end{array} }} \right]} \right) =\\ &\quad {\rm Re}\left( {\left[{{\begin{array}{*{20}c} {A_N^c } & 0 \\ 0 & {A_R } \\ \end{array} }} \right]\left[{{\begin{array}{*{20}c} {x_N } \\ {x_R } \\ \end{array} }} \right],\left[{{\begin{array}{*{20}c} {x_N } \\ {x_R } \\ \end{array} }} \right]} \right)+\varepsilon \underbrace{{\rm Re}(\Delta Ax,x)}_{\le \left| {(\Delta Ax,x)} \right|}\le \\ &\quad-\sigma \underbrace {(\left\| {x_N } \right\|^2+\left\| {x_R } \right\|^{2})}_{\left\| x \right\|^2}+\varepsilon \left\| {\Delta A} \right\|\left\| x \right\|^2=\\ &\quad-(\sigma-\varepsilon \left\| {\Delta A} \right\|)\left\| x \right\|^{2}. \end{align*} $ |
And this proves
$ \begin{align*} {\rm Re}(A(\varepsilon )_c x,x)\le-(\underbrace {\sigma -\varepsilon \left\| {\Delta A} \right\|}_{\gamma >0})\left\| x \right\|^2, \end{align*} $ |
for all $0\le \varepsilon <\frac{\sigma }{\left\| {\Delta A} \right\|}$. Hence $(A(\varepsilon )_c ,B,C)$ is SD and we can apply Theorem 2 again. Therefore, for small $\varepsilon >0$, all previous results remain true, and $b$ need not be entirely confined to $S_N $.
Ⅷ. CONCLUSIONS
In Theorem 2, we prove a robust stabilization result for linear dynamic systems on infinite-dimensional Hilbert spaces under the hypothesis of almost strict dissipativity for infinite-dimensional systems. This idea is an extension of the concept of $m$-accretivity for infinite-dimensional systems (see [9], pp. 278-280). In Theorem 3, we show that adaptive model tracking is possible with a very simple direct adaptive controller that knows very little specific information about the system it is controlling. This controller can also mitigate persistent disturbances. There is no use of Barbalat's lemma, which requires certain signals to be uniformly continuous. However, we do not get something for nothing; we must relax the idea that all signals will converge to 0 and replace it with the idea that they will be attracted exponentially to a prescribed neighborhood whose size depends on the norm of the completely unknown disturbance. In order to make such an infinite-dimensional system track a finite-dimensional reference model, we use the idea of ideal trajectories, and in Theorem 1 we show the conditions for existence and uniqueness of these ideal trajectories without requiring any deep knowledge of the infinite-dimensional plant.
We apply these results to general infinite-dimensional linear systems described by self-adjoint operators with compact resolvent, in particular unstable diffusion problems, using a single actuator, a single sensor, and direct adaptive output feedback. Such systems are shown to be able to robustly track the outputs of a finite-dimensional reference model in the presence of persistent disturbances.
These results do not require deep knowledge of specific properties or parameters of the system to accomplish model tracking, and they do not require that the disturbances enter through the same channels as the control. Finally, it is possible to substantially expand the results in Theorem 2 to nonlinear infinite-dimensional systems, but we have elected here to take a small (baby) step forward and show the possibilities of adaptive control for infinite-dimensional systems.
APPENDIX
Proof of Theorem 2. From (21) and a corollary of Pazy (see [1], Corollary 2.5, p. 107), we have a well-posed system in (22), where $A_c $ is a closed operator, densely defined on $D(A_c )\subseteq X$, and generates a $C_0 $ semigroup on $X$, and all trajectories starting in $D(A_c )$ will remain there. Hence we can differentiate signals in $D(A_c )$.
Consider the positive definite function
$ \begin{align} V\equiv \frac{1}{2}(Pe,e)+\frac{1}{2}{\rm tr}\left[{\Delta G\gamma ^{-1}\Delta G^{\rm T}} \right],\quad \end{align} $ | (A1) |
where $\Delta G(t)\equiv G(t)-G^\ast $ and $P$ satisfies (19).
Taking the time derivative of (A1) (this can be done $\forall e\in D(A_c ))$ and substituting (22) into the result yields
$ \begin{align*} &\dot{V}=\frac{1}{2}[(PA_c e,e)+(e,PA_c e)]+(PBw,e)+\notag\\ &\quad {\rm tr}\left[{\Delta \dot {G}\gamma ^{-1}\Delta G^{\rm T}} \right]+(Pe,v),~w\equiv \Delta Gz. \end{align*} $ |
Invoking the equalities in Definition 2 of strict dissipativity, using $x^{\rm T}y={\rm tr}[yx^{\rm T}]$, and substituting (22) into the last expression, we get
$ \begin{align*} \left. {\begin{array}{lllll} &\dot {V}={\rm Re}(PA_c e,e)+\left\langle {e_y ,w} \right\rangle-a\cdot {\rm tr}\left[ {G\gamma ^{-1}\Delta G^{\rm T}} \right]-\\ &\qquad \underbrace {\mbox{tr}(e_y z^{\rm T}\Delta G^{\rm T})}_{\left\langle {e_y ,w} \right\rangle }+(Pe,v)\le \\ &\qquad-q_{\min } \left\| e \right\|^2-\!a\!\cdot\!\mbox{tr}\left[{(\Delta G+G^\ast )\gamma ^{-1}\Delta G^{\rm T}} \right]\!+\!(Pe,v)\!\le \\ &\qquad-\Big( q_{\min} \left\| e \right\|^2+a\cdot \mbox{tr}\left[{\Delta G\gamma ^{-1}\Delta G^{\rm T}} \right] \Big)+\\ &\qquad a\cdot \left| {{\rm tr}\left[{G^\ast \gamma ^{-1}\Delta G^{\rm T}} \right]} \right|+\left| {(Pe,v)} \right| \le \\ &\qquad-\left( {\frac{2q_{\min } }{p_{\max } }\cdot \frac{1}{2}(Pe,e)+2a\cdot \frac{1}{2}\mbox{tr}\left[{\Delta G\gamma ^{-1}\Delta G^{\rm T}} \right]} \right)+\\ &\qquad a\cdot \left| {\mbox{tr}\left[{G^\ast \gamma ^{-1}\Delta G^{\rm T}} \right]} \right|+\left| {(Pe,v)} \right|\le \\ &\qquad-2aV+a\cdot \left| {\mbox{tr }\left[{G^\ast \gamma ^{-1}\Delta G^{\rm T}} \right]} \right|+\left| {(Pe,v)} \right| \\ \end{array}} \right. \end{align*} $ |
with $\left\langle {e_y ,w} \right\rangle \equiv e_y^\ast w$. Now, using the Cauchy-Schwarz inequality
$ \left| {\mbox{tr}\left[{G^\ast \gamma ^{-1}\Delta G^{\rm T}} \right]} \right|\le \left\| {G^\ast } \right\|_2 \left\| {\Delta G} \right\|_2 \mbox{ } $ |
and
$ \mbox{ }\left| {(Pe,v)} \right|\le \left\| {P^{\frac{1}{2}}v } \right\|\mbox{ }\left\| {P^{\frac{1}{2}}e} \right\|=\sqrt {(Pv ,v)}\cdot \sqrt {(Pe,e)}. $ |
We have
$ \begin{array}{l} \dot {V}+2aV\le a\cdot \left\| {G^\ast } \right\|_2 \left\| {\Delta G} \right\|_2 +\sqrt {p_{\max } } \left\| v\right\|\sqrt {(Pe,e)} \le\\ \quad a\cdot \left\| {G^\ast } \right\|_2 \left\| {\Delta G} \right\|_2 +(\sqrt {p_{\max } } M_v)\sqrt {(Pe,e)} \le\\ \quad (a\left\| {G^\ast } \right\|_2 +\sqrt {p_{\max } } M_v )\sqrt 2 \underbrace {[\frac{1}{2}(Pe,e)+\frac{1}{2}\left\| {\Delta G} \right\|_2^2 ]^{\frac{1}{2}}}_{V^{\frac{1}{2}}} \\ \end{array}. $ |
Therefore,
$ \frac{\dot {V}+2aV}{V^{\frac{1}{2}}}\le \sqrt 2 (a\left\| {G^\ast } \right\|_2 +\sqrt {p_{\max } } M_v). $ |
Now,using the identity $\mbox{tr}\left[{ABC} \right]=\mbox{tr}\left[{CAB} \right]$,we have
$ \begin{array}{l} \left\| {G^\ast } \right\|_2 \equiv \left[{\mbox{tr}\left( {G^\ast \gamma ^{-1}(G^\ast )^{\rm T}} \right)} \right]^{\frac{1}{2}}=\left[ {\mbox{tr}\left( {(G^\ast )^{\rm T}G^\ast \gamma ^{-1}} \right)} \right]^{\frac{1}{2}} \le \\ \quad\left[{\left( {\mbox{tr}\left( {(G^\ast )^{\rm T}G^\ast (G^\ast )^{\rm T}G^\ast } \right)} \right)^{^{\frac{1}{2}}}\left( {\mbox{tr}(\gamma ^{-1}\gamma ^{-1})} \right)^{\frac{1}{2}}} \right]^{\frac{1}{2}} \le\\ \quad\left[{\mbox{tr}\left( {G^\ast (G^\ast )^{\rm T}} \right)} \right]^{\frac{1}{2}}\left[{\mbox{tr}(\gamma ^{-1})} \right]^{\frac{1}{2}}\le \\ \quad \frac{M_v }{aM_G }\cdot M_G =\frac{M_v }{a},\\ \end{array} $ |
which implies that
$ \begin{align} \frac{\dot {V}+2aV}{V^{\frac{1}{2}}}\le \sqrt 2 \left( {1+\sqrt {p_{\max } } } \right)M_v . \quad \end{align} $ | (A2) |
We have
$ \begin{array}{l} \frac{\rm d}{{\rm d}t}(2{\rm e}^{at}V^{\frac{1}{2}})={\rm e}^{at}\frac{\dot {V}+2aV}{V^{\frac{1}{2}}} \le \sqrt 2 {\rm e}^{at}\left( {1+\sqrt {p_{\max } } } \right)M_v,\\ \end{array} $ |
Integrating this expression from $0$ to $t$, we obtain
$ {\rm e}^{at}V(t)^{1/2}-V(0)^{1/2}\le \frac{\left( {1+\sqrt {p_{\max } } } \right) M_v }{\sqrt 2\, a}\left( {{\rm e}^{at}-1} \right). $ |
Therefore,
$ \begin{align} V(t)^{1/2}\le V(0)^{1/2}{\rm e}^{-at}+\frac{\left( {1+\sqrt {p_{\max } } } \right) M_v}{\sqrt 2\, a}\left( {1-{\rm e}^{-at}}\right). \end{align} $ | (A3) |
The function $V(t)$ is a norm function of the state $e(t)$ and the gain matrix $G(t)$. So, since $V(t)^{1/2}$ is bounded for all $t$, both $e(t)$ and $G(t)$ are bounded. We also obtain the following inequality:
$ \sqrt {\frac{p_{\min }}{2}} \left\| {e(t)} \right\|\le V(t)^{\frac{1}{2}}. $ |
Substituting this into (A3) gives us an exponential bound on the state $e(t)$, i.e.,
$ \begin{align} \left\| {e(t)} \right\|\le \sqrt {\frac{2}{p_{\min } }}\, V(0)^{\frac{1}{2}}{\rm e}^{-at}+\frac{\left( {1+\sqrt {p_{\max } } } \right) M_v}{a\sqrt {p_{\min } } }\left( {1-{\rm e}^{-at}}\right). \end{align} $ | (A4) |
Taking the limit superior of (A4), we have
$ \begin{align}\overline {\mathop {\lim }\limits_{t \to \infty } } \left\| {e(t)} \right\|\le \frac{\left( {1+\sqrt {p_{\max } } } \right) }{a\sqrt {p_{\min } } }M_v\equiv R_\ast . \end{align} $ | (A5) |
And the proof is complete.
[1] | Pazy A. Semigroups of Linear Operators and Applications to Partial Differential Equations. New York: Springer, 1983. |
[2] | D'Alessandro D. Introduction to Quantum Control and Dynamics. London: Chapman & Hall, 2008. |
[3] | Balas M, Erwin R S, Fuentes R. Adaptive control of persistent disturbances for aerospace structures. In: Proceedings of the AIAA Guidance, Navigation and Control Conference. Denver, 2000. |
[4] | Fuentes R J, Balas M J. Direct adaptive rejection of persistent disturbances. Journal of Mathematical Analysis and Applications, 2000, 251(1): 28-39 |
[5] | Fuentes R, Balas M. Disturbance accommodation for a class of tracking control systems. In: Proceedings of the AIAA Guidance, Navigation and Control Conference. Denver, Colorado, 2000. |
[6] | Fuentes R J, Balas M J. Robust model reference adaptive control with disturbance rejection. In: Proceedings of the American Control Conference. Anchorage, AK: IEEE, 2002.4003-4008 |
[7] | Balas M, Gajendar S, Robertson L. Adaptive tracking control of linear systems with unknown delays and persistent disturbances (or Who You Callin' Retarded?). In: Proceedings of the AIAA Guidance, Navigation and Control Conference. Chicago, IL, 2009. |
[8] | Wen J T. Time domain and frequency domain conditions for strict positive realness. IEEE Transactions on Automatic Control, 1988, 33(10): 988-992 |
[9] | Kato T. Perturbation Theory for Linear Operators. New York: Springer, 1980. |
[10] | Renardy M, Rogers R. An Introduction to Partial Differential Equations. New York: Springer, 1993. |
[11] | Curtain R, Pritchard A. Functional Analysis in Modern Applied Mathematics. London: Academic Press, 1977. |
[12] | Balas M J. Trends in large space structure control theory: fondest hopes, wildest dreams. IEEE Transactions on Automatic Control, 1982, 27(3): 522-535 |
[13] | Balas M, Fuentes R. A non-orthogonal projection approach to characterization of almost positive real systems with an application to adaptive control. In: Proceedings of the American Control Conference. Boston, MA, USA: IEEE, 2004. 1911-1916 |
[14] | Antsaklis P, Michel A. A Linear Systems Primer. Boston: Birkhauser, 2007. |
[15] | Popov V M. Hyperstability of Control Systems. Berlin: Springer, 1973. |
[16] | Kailath T. Linear Systems. New York: Prentice-Hall, 1998. 448-449 |
[17] | Balas M, Frost S. Distributed parameter direct adaptive control using a new version of the Barbalat-Lyapunov stability result in Hilbert space. In: Proceedings of AIAA Guidance, Navigation and Control Conference. Boston, MA: AIAA, 2013. |
[18] | Kothari D, Nagrath I. Modern Power System Analysis. New York: McGraw-Hill, 2003. |
[19] | Cannarsa P, Coron J M, Alabau-Boussouira F, Brockett R, Glass O, Le Rousseau J, Zuazua E. Control of Partial Differential Equations: Cetraro, Italy 2010, Editors: Piermarco Cannarsa, Jean-Michel Coron (Lecture Notes in Mathematics/C.I.M.E. Foundation Subseries). Berlin Heidelberg: Springer, 2012. |
[20] | Troltzsch F. Optimal Control of Partial Differential Equations. Providence, RI: American Mathematical Society, 2010. |
[21] | Smyshlyaev A, Krstic M. Adaptive Control of Parabolic PDEs. Princeton: Princeton University Press, 2010. |