Table: Values of the design parameters.
| \lambda | \gamma_{1} | \gamma_{2} | \gamma_{3} | \mu_{1} | \mu_{2} | \varepsilon |
| 1 | 0.8 | 0.14 | 0.06 | 1 | 0.001 | 0.01 |
IEEE/CAA Journal of Automatica Sinica
Citation: D. Y. Meng and J. Y. Zhang, "Robust Optimization-Based Iterative Learning Control for Nonlinear Systems With Nonrepetitive Uncertainties," IEEE/CAA J. Autom. Sinica, vol. 8, no. 5, pp. 1001-1014, May 2021. doi: 10.1109/JAS.2021.1003973
INTELLIGENT control has generated considerable research interest in both the theory and applications of linear/nonlinear systems, where learning-based design methods are of particular note (see, e.g., [1]–[5]). As a class of effective intelligent control methods focused on achieving perfect tracking for systems that execute tasks repetitively, iterative learning control (ILC) has been regarded as one of the most practically important learning-based design methods in many application fields; see, e.g., [6] for multi-axis robots, [7] for micro aerial vehicles, [8] for linear motor positioning systems, [9] for high-speed trains, and [10] for Chylla-Haase reactors. Readers are referred to the surveys of, e.g., [11]–[13] for detailed discussions of the characteristics and applications of ILC. In particular, ILC is among the best-known and most applicable data-driven control methods [14]–[18], alternatively called model-free control methods [19]–[21], where accurate models are generally not needed for either algorithm design or convergence analysis. Typically, this class of data-driven ILC methods is developed from an optimization problem posed directly for nonlinear systems, regardless of unknown nonlinearities.
In addition to its tight relation to optimization-based design, data-driven ILC employs a dynamical linearization approach to overcome unknown nonlinearities and an adaptive approach to estimate the linearization parameters [15]–[21]. This yields a class of optimization-based adaptive ILC methods capable of accommodating unknown dynamics in both nonlinear systems and their dynamical linearization models. For convergence analysis, optimization-based adaptive ILC adopts a contraction mapping (CM)-based approach that is usually implemented via eigenvalue analysis. Although widely applied in ILC, the eigenvalue-based CM approach requires ILC processes to have iteration-independent parameters from the perspective of standard linear system theory [22], [23].
It is worth emphasizing that for nonlinear plants, dynamical linearization inevitably leads to iteration-dependent model parameters [15]–[21]. This renders the eigenvalue-based CM approach no longer effective for the convergence analysis of optimization-based adaptive ILC. Another issue left to settle for optimization-based adaptive ILC is robustness against iteration-dependent uncertainties, which are considered practically important in ILC [24]–[31]. In fact, the robustness issue has not been well studied for optimization-based adaptive ILC (see, e.g., [15]–[21]), mainly because iteration-dependent uncertainties may pose challenging difficulties for ILC convergence in the presence of the nonrepetitiveness created by iteration-dependent model parameters. To accommodate the effects arising from nonrepetitiveness, new design and analysis approaches for ILC usually need to be explored; see, e.g., [18] for an extended state observer-based design approach and [24], [28] for a double-dynamics analysis (DDA) approach. Despite these new approaches, eigenvalue analysis is still leveraged in [18], and only linear systems are considered in [24], [28].
In this paper, we explore the robustness problem of optimization-based adaptive ILC subject to unknown time-varying nonlinearities and nonrepetitive uncertainties due to iteration-dependent initial shifts and disturbances. The main contributions of our design and analysis results are as follows.
1) We propose a new optimization-based design method for adaptive ILC. This new design method makes it feasible to directly apply the CM-based analysis approach of ILC to develop the boundedness of estimated parameters that are used in our adaptive updating law for the estimation of unknown time-varying nonlinearities.
2) We introduce a new robust convergence analysis method for optimization-based adaptive ILC by implementing a DDA approach and resorting to properties of substochastic matrices. This makes it possible not only to accomplish the robust convergence analysis of optimization-based adaptive ILC, but also to guarantee the boundedness of all system trajectories.
3) Our design methods and analysis results for optimization-based adaptive ILC work effectively regardless of the presence of nonrepetitive uncertainties. This particularly helps to overcome the drawbacks of the methods and results for optimization-based adaptive ILC established via the eigenvalue-based CM approach in, e.g., [16], [17].
In addition, we demonstrate the robust performance of the proposed optimization-based adaptive ILC with two simulation examples, despite initial shifts and disturbances that vary with respect to both iteration and time.
The rest of our paper is organized as follows. Section II formulates a robust tracking problem for optimization-based ILC. Section III proposes an optimization-based adaptive ILC scheme and derives the main design and analysis results. Simulation tests and concluding remarks are presented in Sections IV and V, respectively.
Notations: Let
Let
\begin{split} &y_{k}(t+1) = f\left(y_{k}(t),\ldots,y_{k}(t-l),u_{k}(t),\ldots,u_{k}(t-n),t\right)+w_{k}(t)\\ &\text{with}\;\;y_{k}(i) = \begin{cases} 0, & i < 0\\ y_{0}+\delta_{k}, & i = 0 \end{cases}\;\;\text{and}\;\;u_{k}(i) = 0,\;\;i < 0 \end{split} | (1) |
where
f:\underbrace{\mathbb{R}\times\mathbb{R}\times\cdots\times\mathbb{R}}_{l+n+3}\to\mathbb{R}. |
In the following analysis, we will write this nonlinear function as
Problem Statement: Let
\lim\limits_{k\to\infty}e_{k}(t+1) = 0,\quad\forall t\in\mathbb{Z}_{T-1} | (2) |
by solving an optimization problem with the following index over
J\left(u_k(t)\right) = \left[\sum\limits_{i = 1}^{m}\gamma_i e_{k-i+1}(t+1)\right]^{2}+\lambda\left[\Delta u_k(t)\right]^{2} | (3) |
where
\mathop {\lim \sup }\limits_{k\to\infty}\left|e_{k}(t+1)\right|\leq\beta_{e_{\sup}}(t),\quad\forall t\in\mathbb{Z}_{T-1} | (4) |
where
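As a concrete illustration of the index (3), the following sketch evaluates J for one time step with m = 3; the weights follow the tabulated design parameters (λ = 1, γ₁ = 0.8, γ₂ = 0.14, γ₃ = 0.06), while the error and input-change values are hypothetical.

```python
import numpy as np

# Evaluate the optimization index (3) at one time step (m = 3).
# Weights follow the parameter table; errors and du are hypothetical.
lam = 1.0
gammas = np.array([0.8, 0.14, 0.06])        # gamma_1, gamma_2, gamma_3
errors = np.array([0.5, 0.4, 0.3])          # e_{k-i+1}(t+1) for i = 1, 2, 3
du = 0.2                                    # Delta u_k(t)

J = (gammas @ errors) ** 2 + lam * du ** 2  # the index (3)
```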
To accomplish the aforementioned robust tracking tasks for the nonlinear system (1), we need two basic assumptions: the continuous differentiability of the unknown nonlinear function and the boundedness of the nonrepetitive uncertainties.
Assumption 1: Let
\begin{split} &\left|\frac{\partial f}{\partial x_{i}}\left(x_{1},x_{2},\ldots,x_{l+n+2},t\right)\right| \leq\beta_{\overline{f}} \\ & \forall x_{i}\in\mathbb{R},\;i = 1,2,\ldots,l+n+2,\quad\forall t\in\mathbb{Z}_{T-1}\end{split} | (5) |
and without loss of generality, assume that
\begin{split} &\frac{\partial f}{\partial x_{l+2}}\left(x_{1},x_{2},\ldots,x_{l+n+2},t\right)\geq\beta_{\underline {f}} \\ &\forall x_{i}\in\mathbb{R},\;\;i = 1,2,\ldots,l+n+2,\quad\forall t\in\mathbb{Z}_{T-1}. \end{split} | (6) |
Assumption 2: Let
\begin{split} &\left|w_{k}(t)\right| \;\leq\beta_{w}(t),\;\;\;\forall k\;\in\mathbb{Z}_+,\forall t\in\mathbb{Z}_{T-1} \\ &\left|\delta_{k}\right| \;\leq\beta_{\delta},\;\;\;\forall k\;\in\mathbb{Z}_+ \end{split} | (7) |
for some finite bounds
Remark 1: The considered nonlinear system (1) includes, as a particular case, the class of deterministic autoregressive moving average models, which typically arise in applications of ILC (see, e.g., an injection molding process [32]). Moreover, (1) can represent the input-output relation of a class of affine nonlinear systems in the state-space description form of
\begin{array}{l} x_{k}(t+1) = h(x_{k}(t),t)+b(t)u_{k}(t)+w_{k}^{x}(t)\\ y_{k}(t) = c(t)x_{k}(t)+w_{k}^{y}(t) \end{array} |
for some state
Remark 2: Since the nonlinear time-varying dynamics of (1) are unknown, Assumption 1 is a commonly used condition that provides basic guarantees for both dynamical linearization of unknown nonlinearities and optimization-based design of adaptive ILC (see, e.g., [15]–[21]). In particular, it follows from (5) and (6) in Assumption 1 that
\begin{split} \frac{ \partial f}{ \partial x_{l+2}}&\left(x_{1},x_{2},\ldots,x_{l+n+2},t\right)\in\left[\beta_{\underline{f}},\beta_{\overline{f}}\right]\\ &\forall x_{i}\in\mathbb{R},\;\;i = 1,2,\ldots,l+n+2,\quad\forall t\in\mathbb{Z}_{T-1}. \end{split} | (8) |
Remark 3: It is worth noting that Assumption 2 is a commonly considered ILC condition on the class of nonrepetitive uncertainties since it is generally acceptable in many practical applications (see, e.g., [24], [28]). A further condition of particular note concerns the convergence of the nonrepetitive uncertainties, such that
\lim\limits_{k\to\infty}\left[w_{k}(t)-w_{k-1}(t)\right] = 0,\quad\forall t\in\mathbb{Z}_{T-1},\quad \lim\limits_{k\to\infty}\left(\delta_{k}-\delta_{k-1}\right) = 0 | (9) |
which may be considered an additional requirement on Assumption 2 for accomplishing the perfect tracking task (2) despite the presence of nonrepetitive uncertainties. The trivial case most often considered for (9) is the absence of nonrepetitive uncertainties in ILC, namely, where (9) collapses into
w_{k}(t)\equiv w(t),\quad\forall k\in\mathbb{Z}_+,\forall t\in\mathbb{Z}_{T-1},\quad \delta_{k}\equiv\delta,\quad\forall k\in\mathbb{Z}_+ | (10) |
for some iteration-independent quantities
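To make condition (9) concrete, here is a hypothetical disturbance sequence that is bounded as in (7) yet nonrepetitive, with iteration-to-iteration differences vanishing as (9) requires; the specific form (a geometrically decaying iteration-dependent term) is our illustrative choice, not the paper's.

```python
import numpy as np

# A hypothetical disturbance w_k(t): a repetitive part plus an
# iteration-dependent part that decays geometrically, so that
# w_k(t) - w_{k-1}(t) -> 0 as k -> infinity (condition (9)).
T = 10
w_base = np.sin(np.arange(T))                # repetitive part w(t)

def w(k, t):
    return w_base[t] + 0.5 ** k * np.cos(t)  # nonrepetitive term -> 0

# maximal iteration-to-iteration difference over the horizon
diffs = [max(abs(w(k, t) - w(k - 1, t)) for t in range(T))
         for k in range(1, 40)]
```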
We design optimization-based adaptive ILC to overcome the effect of unknown nonlinear time-varying dynamics on seeking output tracking objectives for (1), regardless of the presence of nonrepetitive uncertainties. Towards this end, we establish the following lemma to derive an extended dynamical linearization for the unknown nonlinear time-varying dynamics of (1).
Lemma 1: If Assumption 1 is satisfied for the nonlinear system (1), then an extended dynamical linearization can be given for (1) as
\begin{split} \left[ {\begin{array}{*{20}{c}} {{y_i}(1)}\\ {{y_i}(2)}\\ \vdots \\ {{y_i}(T)} \end{array}} \right] &- \left[ {\begin{array}{*{20}{c}} {{y_j}(1)}\\ {{y_j}(2)}\\ \vdots \\ {{y_j}(T)} \end{array}} \right] = {\Theta _{i,j}}\left( {\left[ {\begin{array}{*{20}{c}} {{u_i}(0)}\\ {{u_i}(1)}\\ \vdots \\ {{u_i}(T - 1)} \end{array}} \right] - \left[ {\begin{array}{*{20}{c}} {{u_j}(0)}\\ {{u_j}(1)}\\ \vdots \\ {{u_j}(T - 1)} \end{array}} \right]} \right)\;\;\;\\ &+ {\Upsilon _{i,j}}\left( {\left[ {\begin{array}{*{20}{c}} {{w_i}(0)}\\ {{w_i}(1)}\\ \vdots \\ {{w_i}(T - 1)} \end{array}} \right] - \left[ {\begin{array}{*{20}{c}} {{w_j}(0)}\\ {{w_j}(1)}\\ \vdots \\ {{w_j}(T - 1)} \end{array}} \right]} \right)\;\;\;\\ &+ \left[ {\begin{array}{*{20}{c}} {{\vartheta _{i,j,0}}}\\ {{\vartheta _{i,j,1}}}\\ \vdots \\ {{\vartheta _{i,j,T - 1}}} \end{array}} \right]\left( {{\delta _i} - {\delta _j}} \right),\;\;\;\;\forall i,j \in {{\mathbb{Z}}_ + }\\[-30pt] \end{split} | (11) |
where for any
\begin{split} &\Theta_{i,j} = \left[ {\begin{array}{*{20}{c}} \theta_{i,j,0}(0) & 0 &\cdots & 0 \\ \theta_{i,j,1}(0) & \theta_{i,j,1}(1) & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0\\ \theta_{i,j,T-1}(0) &\cdots & \theta_{i,j,T-1}(T-2) & \theta_{i,j,T-1}(T-1) \end{array}} \right]\\ &\Upsilon_{i,j} = \left[ {\begin{array}{*{20}{c}} 1 & 0 &\cdots & 0 \\ \upsilon_{i,j,1}(0) & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0\\ \upsilon_{i,j,T-1}(0) & \cdots &\upsilon_{i,j,T-1}(T-2) & 1 \end{array}} \right] \end{split} |
and
\begin{split} &\left|\theta_{i,j,t}(\xi)\right| \leq\beta_{\theta},\quad\forall\xi\in\mathbb{Z}_{t},\forall t\in\mathbb{Z}_{T-1},\forall i,j\in\mathbb{Z}_+ \\ &\left|\upsilon_{i,j,t}(\xi)\right| \leq\beta_{\theta}, \quad\forall\xi\in\mathbb{Z}_{t-1},\forall t\in\mathbb{Z}_{T-1},\forall i,j\in\mathbb{Z}_+ \\ &\left|\vartheta_{i,j,t}\right| \leq\beta_{\theta}, \quad\forall t\in\mathbb{Z}_{T-1},\forall i,j\in\mathbb{Z}_+ \end{split} | (12) |
where, more precisely,
\theta_{i,j,t}(t) \in\left[\beta_{\underline{f}},\beta_{\overline{f}}\right],\quad\forall t\in\mathbb{Z}_{T-1},\forall i,j\in\mathbb{Z}_+. | (13) |
Proof: With Assumption 1, we leverage the differential mean value theorem and can then develop this lemma by applying the differentiation rules for composite functions. For clarity, we provide the proof details in the Appendix, which can also be found at
Of specific interest is the application of Lemma 1 to disclose the input-output relation between two successive iterations of the nonlinear system (1). Namely, by taking
\begin{split} \Delta y_{k}(t+1) =\;& \sum\limits_{i = 0}^{t}\theta_{k,k-1,t}(i)\Delta u_{k}(i)\\ &+\Delta w_{k}(t)+\sum\limits_{i = 0}^{t-1}\upsilon_{k,k-1,t}(i)\Delta w_{k}(i) +\vartheta_{k,k-1,t}\Delta\delta_{k}\\ \triangleq\;&\Delta\overrightarrow{u_{k}}^{{T}}(t)\overrightarrow{\theta_{k,k-1,t}}(t) +\varphi_{k}(t),\quad\forall t\in\mathbb{Z}_{T-1},\forall k\in\mathbb{Z} \end{split} | (14) |
where
\begin{split} &\overrightarrow{u_{k}}(t) = \left[u_{k}(0),u_{k}(1),\ldots,u_{k}(t)\right]^{{T}}\\ &\overrightarrow{\theta_{k,k-1,t}}(t) = \left[\theta_{k,k-1,t}(0),\theta_{k,k-1,t}(1),\ldots,\theta_{k,k-1,t}(t)\right]^{{T}}\\ &\varphi_{k}(t) = \Delta w_{k}(t)+\sum\limits_{i = 0}^{t-1}\upsilon_{k,k-1,t}(i)\Delta w_{k}(i)+\vartheta_{k,k-1,t}\Delta\delta_{k}. \end{split} |
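The linearization in (14) rests on the mean value theorem: each secant slope of f with respect to an input argument lies between the derivative bounds of Assumption 1. A minimal numerical check on a hypothetical scalar map (our choice, not the paper's plant):

```python
import numpy as np

# For f(u) = u + 0.3*sin(u), f'(u) lies in [0.7, 1.3], so every secant
# slope theta = (f(u1) - f(u2))/(u1 - u2) must lie in the same interval,
# mirroring the bounds (12)-(13) on the linearization parameters.
f = lambda u: u + 0.3 * np.sin(u)
rng = np.random.default_rng(2)
for _ in range(1000):
    u1, u2 = rng.uniform(-10, 10, 2)
    if abs(u1 - u2) < 1e-8:
        continue
    theta = (f(u1) - f(u2)) / (u1 - u2)
    assert 0.7 - 1e-9 <= theta <= 1.3 + 1e-9
```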
It is obvious from (14) that the linearization parameters describe the dynamic evolution of ILC along the iteration axis. However, Lemma 1 shows that these parameters are unknown. In addition, the nonrepetitive uncertainties play an important role in the dynamic evolution of (14). These facts make it hard to find the optimal solution to (3). Towards this end, we select a bounded initial input
\begin{split} u_{k}(t) =\;& u_{k-1}(t) -\frac{ \gamma_{1}^{2}{\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)} \sum\limits_{i = 0}^{t-1}{\hat\theta}_{k,k-1,t}(i)\big[u_{k}(i) \\ &-u_{k-1}(i)\big]+\frac{ \gamma_{1}{\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)} \Bigg[\gamma_{1}e_{k-1}(t+1) \\ &+\sum\limits_{i = 2}^{m}\gamma_{i}e_{k-i+1}(t+1)\Bigg],\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1}. \end{split} | (15) |
To implement (15), we need to determine
\overrightarrow{\hat{\theta}_{k,k-1,t}}(t) = \left[\hat{\theta}_{k,k-1,t}(0),\hat{\theta}_{k,k-1,t}(1),\ldots,\hat{\theta}_{k,k-1,t}(t)\right]^{{T}} |
and then we leverage an optimization index as
\begin{split} H\left(\overrightarrow{\hat{\theta}_{k,k-1,t}}(t)\right) =\;& \left[\Delta y_{k-1}(t+1)-\Delta\overrightarrow{u_{k-1}}^{{T}}(t)\overrightarrow{\hat{\theta}_{k,k-1,t}}(t)\right]^{2}\\ &+\mu_{1}\left\|\overrightarrow{\hat{\theta}_{k,k-1,t}}(t)-\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\right\|_{2}^{2}\\ &+\mu_{2}\left\|\overrightarrow{\hat{\theta}_{k,k-1,t}}(t)\right\|_{2}^{2},\quad\forall t\in\mathbb{Z}_{T-1},\forall k\geq2 \end{split} | (16) |
where
S1) Initialization: Select a bounded estimation
{\hat\theta}_{1,0,t}(t)\geq\varepsilon,\quad\forall t\in\mathbb{Z}_{T-1}. | (17) |
S2) Update: Implement an updating law about the parameter estimation, such that
\begin{split} {\hat\theta}_{k,k-1,t}(i) =\;& \frac{ \mu_{1}}{ \mu_{1}+\mu_{2}}{\hat\theta}_{k-1,k-2,t}(i)\\ &+\frac{ \Delta u_{k-1}(i)}{ \mu_{1}+\mu_{2}+\sum\limits_{j = 0}^{t}\Delta u_{k-1}^2(j)}\bigg[\Delta y_{k-1}(t+1)\\ &-\frac{ \mu_{1}}{ \mu_{1}+\mu_{2}}\sum\limits_{j = 0}^{t}{\hat\theta}_{k-1,k-2,t}(j)\Delta u_{k-1}(j)\bigg]\\ &\forall k\geq2,\;\;\forall t\in\mathbb{Z}_{T-1},\;\;\forall i\in\mathbb{Z}_{t} \end{split} | (18) |
where, if
{\hat\theta}_{k,k-1,t}(t) = {\hat\theta}_{1,0,t}(t),\quad\forall k\geq2,\forall t\in\mathbb{Z}_{T-1}. | (19) |
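The adaptive scheme S1)-S2) can be sketched as follows, in the vectorized form (24); the function name and data layout are ours, and the reset trigger (the leading estimate falling below ε) is an assumption of this sketch rather than a statement of the paper's exact condition.

```python
import numpy as np

def update_theta_hat(theta_prev, du_prev, dy_prev, mu1=1.0, mu2=0.001,
                     eps=0.01, theta_init_last=None):
    """One pass of the adaptive updating law (18), written as in (24).

    theta_prev : estimate vector from iteration k-1 (entries i = 0..t)
    du_prev    : Delta u_{k-1}(0..t)
    dy_prev    : scalar Delta y_{k-1}(t+1)
    """
    r = mu1 / (mu1 + mu2)
    denom = mu1 + mu2 + du_prev @ du_prev
    theta = r * theta_prev + (du_prev / denom) * (dy_prev - r * (du_prev @ theta_prev))
    # Reset in the spirit of (19): restore the initial leading estimate if the
    # current one drops below eps (this trigger is an assumption of the sketch).
    if theta_init_last is not None and theta[-1] < eps:
        theta[-1] = theta_init_last
    return theta
```

With a zero input change, the update contracts the previous estimate by the factor μ₁/(μ₁+μ₂), which is exactly the contraction exploited later in Theorem 1.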
It is worth highlighting that the effect of nonrepetitive uncertainties is not directly reflected in the above optimization-based design of adaptive ILC. Despite this fact, it will be shown that our optimization-based adaptive ILC is robust with respect to nonrepetitive uncertainties.
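For illustration, the input updating law (15) can be computed causally in t within each iteration, since u_k(i) for i < t is already available when u_k(t) is formed. A minimal sketch (the function name and data layout are ours; the default weights follow the tabulated design parameters):

```python
import numpy as np

def ilc_input_update(u_prev, errors, theta_hat, lam=1.0, gammas=(0.8, 0.14, 0.06)):
    """Sketch of the updating law (15), computed for t = 0..T-1 in order.

    u_prev    : array, u_{k-1}(0..T-1)
    errors    : errors[j][t] = e_{k-1-j}(t+1) for j = 0..m-2
    theta_hat : theta_hat[t][i] = estimate of theta_{k,k-1,t}(i), i = 0..t
    """
    g = list(gammas)                   # gamma_1, ..., gamma_m
    T = len(u_prev)
    u = np.zeros(T)
    for t in range(T):
        th_t = theta_hat[t][t]
        denom = lam + (g[0] * th_t) ** 2
        # sum_{i<t} theta_hat(i) [u_k(i) - u_{k-1}(i)]; u_k(i) already computed
        cross = sum(theta_hat[t][i] * (u[i] - u_prev[i]) for i in range(t))
        # gamma_1 e_{k-1}(t+1) + sum_{i=2}^m gamma_i e_{k-i+1}(t+1)
        err_mix = g[0] * errors[0][t] + sum(g[i] * errors[i - 1][t]
                                            for i in range(1, len(g)))
        u[t] = u_prev[t] - (g[0] ** 2 * th_t / denom) * cross \
               + (g[0] * th_t / denom) * err_mix
    return u
```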
Next, the robust convergence analysis of optimization-based adaptive ILC for the nonlinear system (1) is explored. Towards this end, the dynamics of the tracking error are considered, and by combining (15) with (14), it can be verified that
\begin{split} e_{k}(t+1) =\;& e_{k-1}(t+1)-\Delta y_{k}(t+1)\\ =\;& e_{k-1}(t+1)-\sum\limits_{i = 0}^{t}\theta_{k,k-1,t}(i)\Delta u_k(i)\\ &-\Delta w_{k}(t)-\sum\limits_{i = 0}^{t-1}\upsilon_{k,k-1,t}(i)\Delta w_{k}(i) -\vartheta_{k,k-1,t}\Delta\delta_{k}\\ =\;& \left[1-\frac{ \left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right)\theta_{k,k-1,t}(t){\hat\theta}_{k,k-1,t}(t)} { \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\right]e_{k-1}(t+1)\\ &-\sum\limits_{i = 3}^{m}\frac{ \gamma_{1}\gamma_{i}\theta_{k,k-1,t}(t){\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}e_{k-i+1}(t+1)\\ &+\kappa_{k}(t),\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1}\\[-15pt] \end{split} | (20) |
where
\begin{split} \kappa_{k}(t) =\;& \frac{ \gamma_{1}^{2}\theta_{k,k-1,t}(t){\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)} \sum\limits_{i = 0}^{t-1}{\hat\theta}_{k,k-1,t}(i)\Delta u_{k}(i) \\ &-\sum\limits_{i = 0}^{t-1}\theta_{k,k-1,t}(i)\Delta u_{k}(i) -\Delta w_{k}(t) \\ &-\sum\limits_{i = 0}^{t-1}\upsilon_{k,k-1,t}(i)\Delta w_{k}(i) -\vartheta_{k,k-1,t}\Delta\delta_{k}. \end{split} | (21) |
From (20), it is clear that the evolution process of the tracking error is iteration-dependent owing to its explicit dependence on
To proceed with the ILC convergence analysis based on (20), we explore the boundedness property of the uncertain parameter
Theorem 1: For the nonlinear system (1) with Assumptions 1 and 2, let the updating law (15) for the input and the adaptive updating schemes S1) and S2) for the parameter estimation be applied. If
\left|{\hat\theta}_{k,k-1,t}(i)\right|\leq\beta_{{\hat\theta}},\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1},\forall i\in\mathbb{Z}_{t} | (22) |
where
{\hat\theta}_{k,k-1,t}(t)\in\left[\varepsilon,\beta_{{\hat\theta}}\right],\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1}. | (23) |
Proof: From (18), we can equivalently derive
\begin{split} \overrightarrow{\hat{\theta}_{k,k-1,t}}(t) =\;& \frac{ \mu_{1}}{ \mu_{1}+\mu_{2}}\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\\ &+\frac{ 1}{ \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{{u}_{k-1}}(t)\right\|_{2}^{2}}\bigg[\Delta y_{k-1}(t+1)\\ &-\frac{ \mu_1}{ \mu_{1}+\mu_{2}}\Delta\overrightarrow{{u}_{k-1}}^{{T}}(t)\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\bigg] \Delta\overrightarrow{{u}_{k-1}}(t). \end{split} | (24) |
If we define a symmetric matrix as
\begin{array}{l} Q\left(\Delta\overrightarrow{u_{k-1}}(t)\right) = I-\dfrac{ \Delta\overrightarrow{u_{k-1}}(t)\Delta\overrightarrow{u_{k-1}}^{{T}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}} \end{array} | (25) |
then
\begin{split} \overrightarrow{\hat{\theta}_{k,k-1,t}}(t) =\;& \frac{ \mu_{1}}{ \mu_1+\mu_2}\Bigg[\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\\ &-\frac{ \Delta\overrightarrow{u_{k-1}}^{{T}}(t)\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\Delta\overrightarrow{u_{k-1}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\Bigg]\\ &+\frac{ \Delta y_{k-1}(t+1)\Delta\overrightarrow{u_{k-1}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\\ = \;&\frac{ \mu_{1}}{ \mu_1+\mu_2}\Bigg[\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\\ &-\frac{ \Delta\overrightarrow{u_{k-1}}(t)\Delta\overrightarrow{u_{k-1}}^{{T}}(t)\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\Bigg] \end{split} |
\begin{split} &+\frac{ \Delta y_{k-1}(t+1)\Delta\overrightarrow{u_{k-1}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\\ =\;& \frac{ \mu_{1}}{ \mu_1+\mu_2}\left[I-\frac{ \Delta\overrightarrow{u_{k-1}}(t)\Delta\overrightarrow{u_{k-1}}^{{T}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\right]\\ &\times\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)+\frac{ \Delta y_{k-1}(t+1)\Delta\overrightarrow{u_{k-1}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\\ =\;& \left[\frac{ \mu_{1}}{ \mu_1+\mu_2}Q\left(\Delta\overrightarrow{u_{k-1}}(t)\right)\right]\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\\ &+\frac{ \Delta y_{k-1}(t+1)\Delta\overrightarrow{u_{k-1}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}. \end{split} | (26) |
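A useful observation behind the subsequent bounds is that the symmetric matrix Q(Δu) in (25) has eigenvalue 1 on the orthogonal complement of Δu and eigenvalue (μ₁+μ₂)/(μ₁+μ₂+‖Δu‖²) along Δu, hence spectral norm at most one. A quick numerical check with hypothetical vectors:

```python
import numpy as np

# Spectral-norm check for Q(du) = I - du du^T/(mu1 + mu2 + ||du||^2) of (25):
# symmetric, eigenvalues in [(mu1+mu2)/(mu1+mu2+||du||^2), 1], so ||Q||_2 <= 1.
mu1, mu2 = 1.0, 0.001
rng = np.random.default_rng(1)
for _ in range(100):
    du = rng.normal(size=5)
    Q = np.eye(5) - np.outer(du, du) / (mu1 + mu2 + du @ du)
    assert np.linalg.norm(Q, 2) <= 1.0 + 1e-12
```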
By combining (7) and (12) with (14), we can validate
\begin{split} \left|\Delta y_{k-1}(t+1)\right| \leq\;&\left|\Delta\overrightarrow{u_{k-1}}^{{T}}(t)\overrightarrow{\theta_{k-1,k-2,t}}(t)\right| +\left|\varphi_{k-1}(t)\right|\\ \leq\;&\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}\sqrt{\sum\limits_{i = 0}^{t}\theta_{k-1,k-2,t}^2(i)} +\left|\varphi_{k-1}(t)\right|\\ \leq\;&\sqrt{t+1}\beta_{\theta}\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}\\ &+2\beta_{w}(t)+2\sum\limits_{i = 0}^{t-1}\beta_{\theta}\beta_{w}(i)+2\beta_{\theta}\beta_{\delta}\\ \leq\;&\sqrt{T}\beta_{\theta}\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}\\ & +2\beta_{w}+2T\beta_{\theta}\beta_{w}+2\beta_{\theta}\beta_{\delta} \end{split} | (27) |
where
\begin{split} &\left\|\frac{ \Delta y_{k-1}(t+1)\Delta\overrightarrow{u_{k-1}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\right\|_{2} \\ &\qquad\leq\frac{ \sqrt{T}\beta_{\theta}\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}} \\ & \quad\qquad+\frac{ (2\beta_{w}+2T\beta_{\theta}\beta_{w}+2\beta_{\theta}\beta_{\delta})\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}} \\ &\qquad\leq\sqrt{T}\beta_{\theta}+\frac{ \beta_{w}+T\beta_{\theta}\beta_{w}+\beta_{\theta}\beta_{\delta}}{ \sqrt{\mu_{1}+\mu_{2}}}. \end{split} | (28) |
Due to
\begin{split} \left\|\overrightarrow{\hat{\theta}_{k,k-1,t}}(t)\right\|_{2} \leq\;&\left\|\frac{ \mu_{1}}{ \mu_1+\mu_2}Q\left(\Delta\overrightarrow{u_{k-1}}(t)\right)\right\|_{2} \left\|\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\right\|_{2}\\ &+\left\|\frac{ \Delta y_{k-1}(t+1)\Delta\overrightarrow{u_{k-1}}(t)} { \mu_{1}+\mu_{2}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}}\right\|_{2}\\ \leq\;&\frac{ \mu_{1}}{ \mu_{1}+\mu_{2}}\left\|\overrightarrow{\hat{\theta}_{k-1,k-2,t}}(t)\right\|_{2}\\ &+\sqrt{T}\beta_{\theta}+\frac{ \beta_{w}+T\beta_{\theta}\beta_{w}+\beta_{\theta}\beta_{\delta}}{ \sqrt{\mu_{1}+\mu_{2}}}. \end{split} | (29) |
A straightforward consequence of (29) is
\begin{split} \left\|\overrightarrow{\hat{\theta}_{k,k-1,t}}(t)\right\|_{2} \leq\;&\left(\frac{ \mu_{1}}{ \mu_{1}+\mu_{2}}\right)^{k-1}\left\|\overrightarrow{\hat{\theta}_{1,0,t}}(t)\right\|_{2} +\sum\limits_{i = 0}^{k-2}\left(\frac{ \mu_{1}}{ \mu_{1}+\mu_{2}}\right)^{i}\\ & \times\left(\sqrt{T}\beta_{\theta}+\frac{ \beta_{w}+T\beta_{\theta}\beta_{w}+\beta_{\theta}\beta_{\delta}}{ \sqrt{\mu_{1}+\mu_{2}}}\right). \end{split} | (30) |
Since
\begin{split} &\left\|\overrightarrow{\hat{\theta}_{k,k-1,t}}(t)\right\|_{2} \leq\left\|\overrightarrow{\hat{\theta}_{1,0,t}}(t)\right\|_{2}\\ &\qquad+\frac{ \mu_{1}+\mu_{2}}{ \mu_{2}} \left(\sqrt{T}\beta_{\theta}+\frac{ \beta_{w}+T\beta_{\theta}\beta_{w}+\beta_{\theta}\beta_{\delta}}{ \sqrt{\mu_{1}+\mu_{2}}}\right)\\ &\quad\leq\beta_{\hat{\theta}},\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1} \end{split} | (31) |
where
\begin{array}{l} \beta_{\hat{\theta}} = \max\limits_{t\in\mathbb{Z}_{T-1}}\left\|\overrightarrow{\hat{\theta}_{1,0,t}}(t)\right\|_{2} +\dfrac{ \mu_{1}+\mu_{2}}{ \mu_{2}} \left(\sqrt{T}\beta_{\theta}+\dfrac{ \beta_{w}+T\beta_{\theta}\beta_{w}+\beta_{\theta}\beta_{\delta}}{ \sqrt{\mu_{1}+\mu_{2}}}\right). \end{array} |
Owing to
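The contraction argument in (29)-(31) is a standard geometric-series bound. The following sketch, with μ₁ = 1 and μ₂ = 0.001 as tabulated and a hypothetical constant c standing in for the bracketed term of (30), verifies numerically that the recursion stays below the bound of (31):

```python
# Numeric check of the geometric-series bound behind (30) and (31):
# x_k = r x_{k-1} + c with r = mu1/(mu1 + mu2) < 1 stays below
# x_1 + c (mu1 + mu2)/mu2, since sum_i r^i <= 1/(1 - r) = (mu1 + mu2)/mu2.
mu1, mu2 = 1.0, 0.001              # weights from the parameter table
r = mu1 / (mu1 + mu2)
c = 0.5                            # hypothetical stand-in for the constant in (30)
x = 2.0                            # hypothetical initial norm ||theta_1||
bound = x + c * (mu1 + mu2) / mu2  # the bound of (31)
for _ in range(20000):
    x = r * x + c
    assert x <= bound + 1e-9
```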
Theorem 1 discloses that the boundedness of the estimated parameters depends only on the nonnegative weighting factors
Remark 4: A similar CM-based idea has been applied to establish the boundedness of estimated parameters for optimization-based adaptive ILC in, e.g., [16], [17]. By contrast,
Q\left(\Delta\overrightarrow{u_{k-1}}(t)\right) = I-\frac{ \varsigma\Delta\overrightarrow{u_{k-1}}(t)\Delta\overrightarrow{u_{k-1}}^{{T}}(t)} { \mu_{1}+\left\|\Delta\overrightarrow{u_{k-1}}(t)\right\|_{2}^{2}} |
it can only be obtained that
With Theorem 1, we proceed to develop robust convergence of ILC by achieving the boundedness of the system trajectories and the convergence of the tracking error, which is established in the following theorem by leveraging a DDA approach.
Theorem 2: Consider the nonlinear system (1) satisfying Assumptions 1 and 2, and let the updating law (15) for the input and the adaptive updating schemes S1) and S2) for the parameter estimation be applied. If
\gamma_{1}+\gamma_{2}>\sum\limits_{i = 3}^{m}\gamma_{i},\; \; \lambda>\left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right)\beta_{\overline{f}}\,\beta_{{\hat\theta}},\; \; \mu_{1}>0,\; \; \mu_{2}>0 | (32) |
then the following results for boundedness and convergence of the optimization-based adaptive ILC hold.
1) The boundedness can be guaranteed for both input and output trajectories such that
\left\{ \begin{array}{l} \left|u_{k}(t)\right|\leq\beta_{u},\quad\forall k\in\mathbb{Z}_+,\forall t\in\mathbb{Z}_{T-1}\\ \left|y_{k}(t)\right|\leq\beta_{y},\quad\forall k\in\mathbb{Z}_+,\forall t\in\mathbb{Z}_{T} \end{array} \right. | (33) |
for some finite bounds
2) The robust tracking objective (4) of ILC can be realized. Further, the perfect tracking objective (2) of ILC can be achieved (with its limit being approached exponentially fast), provided that (9) is ensured (with an exponentially fast speed).
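The parameter-selection conditions (32) are easy to check programmatically before running the scheme; in the sketch below, the bounds playing the roles of β_f̄ and β_θ̂ are plant-dependent quantities that we replace with hypothetical unit stand-ins.

```python
def check_conditions(gammas, lam, mu1, mu2, beta_f_bar, beta_theta_hat):
    """Check the selection conditions (32); beta_f_bar and beta_theta_hat
    are plant-dependent bounds (hypothetical stand-ins here)."""
    g = list(gammas)
    c1 = g[0] + g[1] > sum(g[2:])                     # gamma_1 + gamma_2 > sum_{i>=3} gamma_i
    c2 = lam > (g[0] ** 2 + g[0] * g[1]) * beta_f_bar * beta_theta_hat
    return c1 and c2 and mu1 > 0 and mu2 > 0

# With the tabulated values lambda = 1, gamma = (0.8, 0.14, 0.06),
# mu1 = 1, mu2 = 0.001, and unit stand-in bounds:
ok = check_conditions((0.8, 0.14, 0.06), 1.0, 1.0, 0.001, 1.0, 1.0)
```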
Remark 5: In view of Theorem 2, our approach to optimization-based adaptive ILC is not only applicable for accommodating unknown nonlinear time-varying dynamics but also effective in overcoming the ill effects of nonrepetitive uncertainties. This benefits from our new design of optimization-based adaptive ILC, together with the use of a DDA approach in its convergence analysis. Further, it is worth emphasizing that robustness of data-driven ILC is generally difficult to establish in the presence of nonrepetitive uncertainties; see, e.g., [15]–[21]. By contrast, Theorem 2 successfully establishes the robustness of data-driven ILC in spite of nonrepetitive uncertainties arising from disturbances and initial shifts.
Remark 6: For the optimization-based adaptive ILC of nonlinear systems, the parameters
\begin{split} u_{k}(t) =\;& u_{k-1}(t) +\frac{ \gamma_{1}^{2}{\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)} \Bigg\{e_{k-1}(t+1)\\ &-\sum\limits_{i = 0}^{t-1}{\hat\theta}_{k,k-1,t}(i)\left[u_{k}(i)-u_{k-1}(i)\right]\Bigg\} \end{split} |
and accordingly, the selection conditions become
To prove Theorem 2, we give some helpful properties for the robust convergence of the tracking error and the boundedness of the input, where we resort to properties of substochastic and nonnegative matrices. For clarity of our analysis, we first introduce some basic facts about nonnegative matrices.
Lemma 2: For any matrices
1)
2)
Further, if
3)
4)
Proof: Step 1: See the result (8.1.9) of [34, Chapter 8, p. 491].
Step 2: Thanks to
\left\|A\right\|_{\infty} = \max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}\left|a_{ij}\right| = \big\|\left|A\right|\big\|_{\infty}. |
Step 3: Due to
\begin{split} &\left\|A\right\|_{\infty} = \max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}\left|a_{ij}\right| = \max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}a_{ij}\\ &\left\|B\right\|_{\infty} = \max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}\left|b_{ij}\right| = \max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}b_{ij} \end{split} |
and then with
\sum\limits_{j = 1}^{n}a_{ij} \leq\sum\limits_{j = 1}^{n}b_{ij} \leq\max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}b_{ij} ,\quad\forall i = 1,2,\ldots,n |
which further yields
\max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}a_{ij} \leq\max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}b_{ij}. |
From the above facts,
Step 4: Owing to
\left\|A\mathbf{1}_{n}\right\|_{\infty} = \max\limits_{1\leq i\leq n}\sum\limits_{j = 1}^{n}a_{ij} |
from which
Based on Lemma 2, we revisit (20) and develop two helpful convergence results for the tracking error in the lemma below.
Lemma 3: Consider the iterative process (20) for the tracking error over any
1) If
2) If, moreover,
Proof: Let us denote
\begin{split} &\overrightarrow{e_{k}}(t+1) = \left[e_{k}(t+1),e_{k-1}(t+1),\ldots,e_{k-m+2}(t+1)\right]^{{T}}\in\mathbb{R}^{m-1}\\ &\overrightarrow{\kappa_{k}}(t) = \left[\kappa_{k}(t),0,\ldots,0\right]^{{T}}\in\mathbb{R}^{m-1}\\ &p_{1,k}(t) = 1-\frac{ \left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right)\theta_{k,k-1,t}(t){\hat\theta}_{k,k-1,t}(t)} { \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\\ &p_{i,k}(t) = -\frac{ \gamma_{1}\gamma_{i+1}\theta_{k,k-1,t}(t){\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)},\quad i = 2,3,\ldots,m-1 \end{split} | (34) |
and then we can rewrite (20) as
\overrightarrow{e_{k}}(t+1) = P_{k}(t)\overrightarrow{e_{k-1}}(t+1)+\overrightarrow{\kappa_{k}}(t),\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1} | (35) |
where
{P_k}(t) = \left[ {\begin{array}{*{20}{c}} {{p_{1,k}}(t)}&{{p_{2,k}}(t)}& \cdots & \cdots &{{p_{m - 1,k}}(t)}\\ 1&0& \cdots & \cdots &0\\ 0& \ddots & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & \ddots & \vdots \\ 0& \cdots &0&1&0 \end{array}} \right]. | (36) |
With (32), we can combine (13) and (23) to deduce
\begin{split} \sum\limits_{i = 1}^{m-1}\left|p_{i,k}(t)\right| =\;&\left |1-\frac{ \left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right)\theta_{k,k-1,t}(t){\hat\theta}_{k,k-1,t}(t)} { \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\right| \\ &+\sum\limits_{i = 3}^{m}\left|\frac{ \gamma_{1}\gamma_{i}\theta_{k,k-1,t}(t){\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\right| \\ \leq\;&1-\frac{ \gamma_{1}\left(\gamma_{1}+\gamma_{2}-\sum\limits_{i = 3}^{m}\gamma_{i}\right)\beta_{\underline{f}}\varepsilon} { \lambda+\gamma_{1}^{2}\beta_{{\hat\theta}}^{2}} \\ \triangleq\;&\zeta<1,\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1}. \end{split} | (37) |
To proceed, we resort to the nonnegative matrix
\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}\left|P_{j}(t)\right|\mathbf{1}_{m-1}\leq\mathbf{1}_{m-1},\quad\forall s\in\mathbb{Z}_+,\forall t\in\mathbb{Z}_{T-1} |
which, together with (37) describing a condition that is strictly less than one, further leads to (see also the proof of (71) given at
\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}\left|P_{j}(t)\right|\mathbf{1}_{m-1}\leq\zeta\mathbf{1}_{m-1},\quad\forall s\in\mathbb{Z}_+,\forall t\in\mathbb{Z}_{T-1}. | (38) |
Since property 1) of Lemma 2 leads to
\left|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}P_{j}(t)\right| \leq\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}\left|P_{j}(t)\right| |
we leverage this fact and incorporate properties 3) and 4) of Lemma 2 to deduce
\begin{split} \left\|\;\left|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}P_{j}(t)\right|\;\right\|_{\infty} \leq\;&\left\|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}\left|P_{j}(t)\right|\right\|_{\infty}\\ =\;& \left\|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}\left|P_{j}(t)\right|\mathbf{1}_{m-1}\right\|_{\infty} \end{split} |
which, together with using the property 2) of Lemma 2, further results in
\begin{split} \left\|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}P_{j}(t)\right\|_{\infty} =\;& \left\|\;\left|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}P_{j}(t)\right|\;\right\|_{\infty}\\ \leq\;&\left\|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}\left|P_{j}(t)\right|\mathbf{1}_{m-1}\right\|_{\infty}. \end{split} |
As a consequence of this property, and again using property 3) of Lemma 2, we can exploit (38) to obtain
\begin{split} \left\|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}P_{j}(t)\right\|_{\infty} \leq\;&\left\|\prod\limits_{j = (m-1)s+1}^{(m-1)s+m-1}\left|P_{j}(t)\right|\mathbf{1}_{m-1}\right\|_{\infty} \\ \leq\;&\left\|\zeta\mathbf{1}_{m-1}\right\|_{\infty} \\ =\;&\zeta \\ <\;&1,\quad\forall s\in\mathbb{Z}_+,\forall t\in\mathbb{Z}_{T-1}. \end{split} | (39) |
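The matrix facts invoked in (37)-(39) can be spot-checked numerically. The sketch below uses hypothetical iteration-dependent matrices and an assumed contraction level of $0.9$ playing the role of $\zeta$; it verifies the entrywise bound $|AB|\leq|A||B|$ of property 1) of Lemma 2 and confirms that a product of matrices whose absolute row sums are at most $\zeta$ keeps an infinity norm strictly below one, mirroring (39):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4        # so each P_j is (m - 1) x (m - 1), matching (37)-(39)
zeta = 0.9   # assumed contraction level playing the role of zeta in (37)

def random_P():
    # Hypothetical iteration-dependent matrix whose absolute row sums
    # equal zeta, i.e., |P_j(t)| 1 <= zeta 1 as in (38).
    P = rng.uniform(-1, 1, size=(m - 1, m - 1))
    return zeta * P / np.abs(P).sum(axis=1, keepdims=True)

P_list = [random_P() for _ in range(m - 1)]

# Property 1) of Lemma 2 (entrywise absolute values): |AB| <= |A||B|.
prod = np.linalg.multi_dot(P_list)
abs_prod = np.linalg.multi_dot([np.abs(P) for P in P_list])
assert np.all(np.abs(prod) <= abs_prod + 1e-12)

# Counterpart of (39): the infinity norm equals the largest absolute
# row sum, so the product of the P_j stays strictly below one.
print(np.linalg.norm(prod, ord=np.inf))
```

The printed norm is at most $\zeta^{m-1}<\zeta$, which is exactly the mechanism by which the product of substochastic-type matrices contracts in (39).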
For the iterative process (35) under the condition (39), we can exploit the idea of [24, Lemma 2] to establish that:
1) There can be determined some finite bounds for $\sup_{k\in\mathbb{Z}_+}\left|e_{k}(t+1)\right|$ and $\limsup_{k\to\infty}\left|e_{k}(t+1)\right|$ whenever $\kappa_{k}(t)$ is bounded for all $k\in\mathbb{Z}_+$.
2) The tracking error satisfies $e_{k}(t+1)\to0$ (exponentially fast) as $k\to\infty$ whenever $\kappa_{k}(t)\to0$ (exponentially fast) as $k\to\infty$.
Based on these two facts, we notice the definitions of the error dynamics in (35) and can conclude the results 1) and 2) of Lemma 3.
Next, we explore the boundedness of the system trajectories, for which the input dynamics along the iteration axis are used. To this end, we redescribe (15) as
\begin{split} u_{k}(t) =\;& u_{k-1}(t)+\frac{ \gamma_{1}{\hat\theta}_{k,k-1,t}(t)\sum\nolimits_{i = 3}^{m}\gamma_{i}e_{k-i+1}(t+1)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\\ & -\frac{ \gamma_{1}^{2}{\hat\theta}_{k,k-1,t}(t)\sum\nolimits_{i = 0}^{t-1}{\hat\theta}_{k,k-1,t}(i)\Delta u_k(i)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\\ &+\frac{ \left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right){\hat\theta}_{k,k-1,t}(t)\left[y_d(t+1)-y_{k-1}(t+1)\right]} { \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}. \end{split} | (40) |
By considering the initial iteration (i.e., $k = 0$), we can describe the output $y_{k-1}(t+1)$ as
\begin{split} y_{k-1}(t+1) =\;& \theta_{k-1,0,t}(t)u_{k-1}(t)+y_0(t+1)\\ &+\sum\limits_{i = 0}^{t-1}\theta_{k-1,0,t}(i)u_{k-1}(i)-\sum\limits_{i = 0}^{t}\theta_{k-1,0,t}(i)u_0(i)\\ &+\left[w_{k-1}(t)-w_{0}(t)\right] +\sum\limits_{i = 0}^{t-1}\upsilon_{k-1,0,t}(i)\big[w_{k-1}(i)\\ &-w_{0}(i)\big] +\vartheta_{k-1,0,t}\left(\delta_{k-1}-\delta_{0}\right). \end{split} | (41) |
As a consequence of substituting (41) into (40), the dynamics of the input can be formulated as
\begin{split} u_{k}(t) =\;& \left[1-\frac{ \left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right){\hat\theta}_{k,k-1,t}(t)\theta_{k-1,0,t}(t)} { \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\right]u_{k-1}(t) \\ &+\psi_k(t),\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1} \end{split} | (42) |
where
\begin{split} \psi_{k}(t) =\;& \frac{ \gamma_{1}{\hat\theta}_{k,k-1,t}(t)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)} \Bigg\{\left(\gamma_{1}+\gamma_{2}\right)\sum\limits_{i = 0}^{t}\theta_{k-1,0,t}(i)u_{0}(i) -\big(\gamma_{1}\\ &+\gamma_{2}\big)\sum\limits_{i = 0}^{t-1}\theta_{k-1,0,t}(i)u_{k-1}(i) -\gamma_{1}\sum\limits_{i = 0}^{t-1}{\hat\theta}_{k,k-1,t}(i)\Delta u_{k}(i)\\ &+\left(\gamma_{1}+\gamma_{2}\right)e_{0}(t+1)+\sum\limits_{i = 3}^{m}\gamma_{i}e_{k-i+1}(t+1)\\ &-\left(\gamma_{1}+\gamma_{2}\right)\left[w_{k-1}(t)-w_{0}(t)\right]\\ &-\left(\gamma_{1}+\gamma_{2}\right)\sum\limits_{i = 0}^{t-1}\upsilon_{k-1,0,t}(i)\left[w_{k-1}(i)-w_{0}(i)\right]\\ &-\left(\gamma_{1}+\gamma_{2}\right)\vartheta_{k-1,0,t}\left(\delta_{k-1}-\delta_{0}\right)\Bigg\}.\\[-15pt] \end{split} | (43) |
Now, with (42), we present a boundedness result of the input in the following lemma.
Lemma 4: For the iterative process (42) of the input over any $t\in\mathbb{Z}_{T-1}$, if $\sup_{k\in\mathbb{Z}_+}\left|\psi_{k}(t)\right|$ is bounded, then the input is bounded in the sense of $\sup_{k\in\mathbb{Z}_+}\left|u_{k}(t)\right|<\infty$.
Proof: Based on (32), the application of (13) and (23) results in
\begin{split} &\left|1-\frac{ \left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right){\hat\theta}_{k,k-1,t}(t)\theta_{k-1,0,t}(t)} { \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,t}(t)}\right|\\ &\qquad\leq 1-\frac{ \left(\gamma_{1}^{2}+\gamma_{1}\gamma_{2}\right)\beta_{\underline{f}}\varepsilon}{ \lambda+\gamma_{1}^{2}\beta_{{\hat\theta}}^{2}} \triangleq\phi<1,\quad\forall k\in\mathbb{Z},\forall t\in\mathbb{Z}_{T-1}. \end{split} | (44) |
By considering (44) for (42), we can develop this lemma based on the result i) in [24, Lemma 2].
Based on Lemmas 3 and 4, we present the proof of Theorem 2 by resorting to a DDA approach to ILC, instead of applying the eigenvalue-based analysis approach.
Proof of Theorem 2: This proof is obtained by induction over the time step $t$.
Step 1: Initialization results for $t = 0$. From (35), we have
\kappa_{k}(0) = -\Delta w_{k}(0)-\vartheta_{k,k-1,0}\Delta\delta_{k},\quad\forall k\in\mathbb{Z} | (45) |
which is guaranteed to be bounded under (7) and (12). Consequently, the results of Lemma 3 lead to
\sup\limits_{k\in\mathbb{Z}_+}\left|e_{k}(1)\right|\leq\beta_{e}(0) \; \; {\rm{and}}\; \; \mathop {\lim \sup }\limits_{k\to\infty}\left|e_{k}(1)\right|\leq\beta_{e_{\sup}}(0) | (46) |
for some finite bounds $\beta_{e}(0)>0$ and $\beta_{e_{\sup}}(0)>0$. In addition, specializing (43) to $t = 0$ gives
\begin{split} \psi_{k}(0) =\;&\frac{ \gamma_{1}{\hat\theta}_{k,k-1,0}(0)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,0}(0)} \Bigg\{\left(\gamma_{1}+\gamma_{2}\right)\theta_{k-1,0,0}(0)u_{0}(0)\\ &+\left(\gamma_{1}+\gamma_{2}\right)e_{0}(1)+\sum\limits_{i = 3}^{m}\gamma_{i}e_{k-i+1}(1)\\ &-\left(\gamma_{1}+\gamma_{2}\right)\left[w_{k-1}(0)-w_{0}(0)\right]\\ &-\left(\gamma_{1}+\gamma_{2}\right)\vartheta_{k-1,0,0}\left(\delta_{k-1}-\delta_{0}\right)\Bigg\} \end{split} |
which, together with (7), (12), (23), and (46), ensures
\begin{split} \left|\psi_{k}(0)\right| \leq\;&\frac{ \gamma_{1}\beta_{{\hat\theta}}}{ \lambda+\gamma_{1}^{2}\varepsilon^{2}} \Bigg\{\left(\gamma_{1}+\gamma_{2}\right)\big[\beta_{\theta}\left|u_{0}(0)\right|+2\beta_{w}(0) \\ &+2\beta_{\theta}\beta_{\delta}\big]+\sum\limits_{i = 1}^{m}\gamma_{i}\beta_{e}(0)\Bigg\} \\ \triangleq\;&\beta_{\psi}(0). \end{split} | (47) |
With (47) and by considering Lemma 4 for the input process (42) at the initial time step $t = 0$, we can conclude the boundedness of the input in the sense of $\sup_{k\in\mathbb{Z}_+}\left|u_{k}(0)\right|\leq\beta_{u}(0)$ for some finite bound $\beta_{u}(0)$.
Step 2: Let the boundedness results hold for all the time steps before some $N\in\mathbb{Z}_{T-1}$ with $N\geq1$, i.e., let there exist finite bounds such that $\sup_{k\in\mathbb{Z}_+}\left|u_{k}(t)\right|\leq\beta_{u}(t)$ and $\sup_{k\in\mathbb{Z}_+}\left|e_{k}(t+1)\right|\leq\beta_{e}(t)$ hold for all $t\in\{0,1,\ldots,N-1\}$. We proceed to verify the same boundedness results for the time step $t = N$.
In view of Lemma 1 and Theorem 1, we notice (21), employ the hypothesis presented for the time instants $t\in\{0,1,\ldots,N-1\}$, and derive
\begin{split} \left|\kappa_{k}(N)\right| =\;& \Bigg|\frac{ \gamma_{1}^{2}\theta_{k,k-1,N}(N){\hat\theta}_{k,k-1,N}(N)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,N}(N)} \sum\limits_{i = 0}^{N-1}{\hat\theta}_{k,k-1,N}(i)\Delta u_{k}(i)\\ &-\sum\limits_{i = 0}^{N-1}\theta_{k,k-1,N}(i)\Delta u_{k}(i)-\Delta w_{k}(N)\\ &-\sum\limits_{i = 0}^{N-1}\upsilon_{k,k-1,N}(i)\Delta w_{k}(i) -\vartheta_{k,k-1,N}\Delta\delta_{k}\Bigg|\\ \leq\;&2\beta_{\theta} \left(1+\frac{ \gamma_{1}^{2}\beta_{{\hat\theta}}^{2}}{ \lambda+\gamma_{1}^{2}\varepsilon^{2}}\right) \sum\limits_{i = 0}^{N-1}\beta_{u}(i) +2\beta_{w}(N)\\ &+2\beta_{\theta}\sum\limits_{i = 0}^{N-1}\beta_{w}(i) +2\beta_{\theta}\beta_{\delta}\\ \triangleq\;&\beta_{\kappa}(N),\quad\forall k\in\mathbb{Z}. \end{split} | (48) |
With (48) and based on the result 1) of Lemma 3, we can get
\sup\limits_{k\in\mathbb{Z}_+}\left|e_{k}(N+1)\right|\leq\beta_{e}(N),\; \; \mathop {\lim \sup }\limits_{k\to\infty}\left|e_{k}(N+1)\right|\leq\beta_{e_{{\rm{sup}}}}(N) | (49) |
where $\beta_{e}(N)$ and $\beta_{e_{\sup}}(N)$ are some finite bounds. In addition, specializing (43) to $t = N$ gives
\begin{split} \psi_{k}(N) =\;& \frac{ \gamma_{1}{\hat\theta}_{k,k-1,N}(N)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,N}(N)} \Bigg[\left(\gamma_{1}+\gamma_{2}\right)\sum\limits_{i = 0}^{N}\theta_{k-1,0,N}(i)u_{0}(i) -\big(\gamma_{1}\\ &+\gamma_{2}\big)\sum\limits_{i = 0}^{N-1}\theta_{k-1,0,N}(i)u_{k-1}(i) -\gamma_{1}\sum\limits_{i = 0}^{N-1}{\hat\theta}_{k,k-1,N}(i)\Delta u_{k}(i)\\ &+\left(\gamma_{1}+\gamma_{2}\right)e_{0}(N+1)+\sum\limits_{i = 3}^{m}\gamma_{i}e_{k-i+1}(N+1)\\ &-\left(\gamma_{1}+\gamma_{2}\right)\left[w_{k-1}(N)-w_{0}(N)\right]\\ &-\left(\gamma_{1}+\gamma_{2}\right)\sum\limits_{i = 0}^{N-1}\upsilon_{k-1,0,N}(i)\left[w_{k-1}(i)-w_{0}(i)\right]\\ & -\left(\gamma_{1}+\gamma_{2}\right)\vartheta_{k-1,0,N}\left(\delta_{k-1}-\delta_{0}\right)\Bigg] \end{split} |
for which we insert (49), together with the boundedness results of Lemma 1, Theorem 1, Assumption 2, and the hypothesis made in this step, to obtain
\begin{split} \left|\psi_{k}(N)\right| \leq\;&\frac{ \gamma_{1}\left|{\hat\theta}_{k,k-1,N}(N)\right|}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,N}(N)} \Bigg[\left(\gamma_{1}+\gamma_{2}\right)\sum\limits_{i = 0}^{N}\left|\theta_{k-1,0,N}(i)\right|\left|u_{0}(i)\right|\\ &+\big(\gamma_{1}+\gamma_{2}\big)\sum\limits_{i = 0}^{N-1}\left|\theta_{k-1,0,N}(i)\right|\left|u_{k-1}(i)\right|\\ &+\gamma_{1}\sum\limits_{i = 0}^{N-1}\left|{\hat\theta}_{k,k-1,N}(i)\right|\left|\Delta u_{k}(i)\right|\\ &+\left(\gamma_{1}+\gamma_{2}\right)\left|e_{0}(N+1)\right|+\sum\limits_{i = 3}^{m}\gamma_{i}\left|e_{k-i+1}(N+1)\right|\\ &+\left(\gamma_{1}+\gamma_{2}\right)\left[\left|w_{k-1}(N)\right|+\left|w_{0}(N)\right|\right]\\ &+\left(\gamma_{1}+\gamma_{2}\right)\sum\limits_{i = 0}^{N-1}\left|\upsilon_{k-1,0,N}(i)\right|\left[\left|w_{k-1}(i)\right|+\left|w_{0}(i)\right|\right]\\ &+\left(\gamma_{1}+\gamma_{2}\right)\left|\vartheta_{k-1,0,N}\right|\left(\left|\delta_{k-1}\right|+\left|\delta_{0}\right|\right)\Bigg]\\ \leq\;&\frac{ \gamma_{1}\beta_{{\hat\theta}}}{ \lambda+\gamma_{1}^{2}\varepsilon^{2}} \Biggr\{\left(\gamma_{1}+\gamma_{2}\right)\beta_{\theta}\sum\limits_{i = 0}^{N}\left|u_{0}(i)\right|\\ &+\left(\gamma_{1}\beta_{\theta}+\gamma_{2}\beta_{\theta}+2\gamma_{1}\beta_{{\hat\theta}}\right) \sum\limits_{i = 0}^{N-1}\beta_{u}(i) +\sum\limits_{i = 1}^{m}\gamma_{i}\beta_{e}(N)\\ &+2\left(\gamma_{1}+\gamma_{2}\right) \left[\beta_{w}(N)+\beta_{\theta}\sum\limits_{i = 0}^{N-1}\beta_{w}(i)+\beta_{\theta}\beta_{\delta}\right] \Biggr\}\\ \triangleq&\beta_{\psi}(N).\\[-10pt] \end{split} | (50) |
Based on the application of Lemma 4 to the input process (42) for the time step $t = N$, we can obtain the boundedness of the input in the sense of $\sup_{k\in\mathbb{Z}_+}\left|u_{k}(N)\right|\leq\beta_{u}(N)$ for some finite bound $\beta_{u}(N)$.
From the analysis of Steps 1 and 2, we perform induction and can easily deduce the boundedness result 1) and the robust tracking result (4) of Theorem 2.
Step 3: If (9) additionally holds (with an exponentially fast speed), then for $t = 0$, it follows from (15) that
\Delta u_{k}(0) = \frac{ \gamma_{1}{\hat\theta}_{k,k-1,0}(0)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,0}(0)} \left[\gamma_{1}e_{k-1}(1)+\sum\limits_{i = 2}^{m}\gamma_{i}e_{k-i+1}(1)\right]
we further get $\Delta u_{k}(0)\to0$ (exponentially fast) as $k\to\infty$. By proceeding with the induction over the time steps and noting that
\begin{split} \left|\kappa_{k}(N)\right| \leq\;&\beta_{\theta} \left(1+\frac{ \gamma_{1}^{2}\beta_{{\hat\theta}}^{2}}{ \lambda+\gamma_{1}^{2}\varepsilon^{2}}\right) \sum\limits_{i = 0}^{N-1}\left|\Delta u_{k}(i)\right| \\ &+\left|\Delta w_{k}(N)\right| +\beta_{\theta}\sum\limits_{i = 0}^{N-1}\left|\Delta w_{k}(i)\right| +\beta_{\theta}\left|\Delta\delta_{k}\right| \\ \to\;&0\; {\rm{(exponentially\; fast)}},\quad{\rm{as}}\; \;k\to\infty \end{split} | (51) |
we can apply the result 2) of Lemma 3 to derive $e_{k}(N+1)\to0$ (exponentially fast) as $k\to\infty$. Further, noticing that
\begin{split} \left|\Delta u_{k}(N)\right| =\;& \left|-\frac{ \gamma_{1}^{2}{\hat\theta}_{k,k-1,N}(N)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,N}(N)} \sum\limits_{i = 0}^{N-1}{\hat\theta}_{k,k-1,N}(i)\Delta u_{k}(i)\right.\\ &+\frac{ \gamma_{1}{\hat\theta}_{k,k-1,N}(N)}{ \lambda+\gamma_{1}^{2}{\hat\theta}^{2}_{k,k-1,N}(N)} \Bigg[\gamma_{1}e_{k-1}(N+1)\\ &\left.+\sum\limits_{i = 2}^{m}\gamma_{i}e_{k-i+1}(N+1)\Bigg]\right|\\ \leq\;&\frac{ \gamma_{1}^{2}\beta_{{\hat\theta}}^{2}}{ \lambda+\gamma_{1}^{2}\varepsilon^{2}} \sum\limits_{i = 0}^{N-1}\left|\Delta u_{k}(i)\right| +\frac{ \gamma_{1}\beta_{{\hat\theta}}}{ \lambda+\gamma_{1}^{2}\varepsilon^{2}} \Bigg[\gamma_{1}\left|e_{k-1}(N+1)\right|\\ &+\sum\limits_{i = 2}^{m}\gamma_{i}\left|e_{k-i+1}(N+1)\right|\Bigg] \end{split} |
further yields $\Delta u_{k}(N)\to0$ (exponentially fast) as $k\to\infty$, which completes the induction and thus the proof of Theorem 2.
To illustrate the effectiveness of the proposed optimization-based adaptive ILC, we perform simulation tests by considering a numerical example and an injection molding process.
Example 1: Consider the nonlinear system (1) with nonlinear dynamics described in a specific form of
\begin{split} &f(y_{k}(t),y_{k}(t-1),u_{k}(t),u_{k}(t-1),t) = \sin\left(y_{k}(t)\right)\\ &\quad+\cos\left(y_{k}(t-1)\right)+\frac{ t+1}{ t+2}u_{k}(t)+\cos\left(y_{k}(t)\right)\sin\left(u_{k}(t-1)\right) \end{split} |
and with nonrepetitive uncertainties given by
w_k(t) = 0.01\chi_{w}(k,t),\; \; \; y_k(0) = 1.5+0.01\chi_{y}(k) |
where $\chi_{w}(k,t)$ and $\chi_{y}(k)$ are iteration-dependent random variables generating the nonrepetitive uncertainties. The desired reference trajectory is given by
y_{d}(t) = 5\sin\left(\frac{ 2\pi t}{ 50}\right)+\frac{ t(50-t)}{ 375},\quad\forall t\in\mathbb{Z}_{50}. |
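To make the tracking task of Example 1 concrete in code, the sketch below is not the full updating law (15) with the adaptive schemes S1) and S2); it is a minimal, noise-free stand-in that replaces them with a simplified P-type update using an assumed constant gain ($\gamma = 0.8$, matching $\gamma_{1}$ in Table I), a shortened horizon $T = 10$ to keep the run brief, and the fixed initial state $y_k(0) = 1.5$:

```python
import numpy as np

T = 10           # shortened horizon for this sketch (Example 1 uses T = 50)
gamma = 0.8      # assumed constant gain, matching gamma_1 in Table I
K = 500          # number of ILC iterations

def f(y1, y0, u1, u0, t):
    # Nonlinear dynamics of Example 1.
    return (np.sin(y1) + np.cos(y0)
            + (t + 1) / (t + 2) * u1 + np.cos(y1) * np.sin(u0))

t_grid = np.arange(T + 1)
y_d = 5 * np.sin(2 * np.pi * t_grid / 50) + t_grid * (50 - t_grid) / 375

def run_iteration(u):
    # Noise-free rollout with the fixed initial state y_k(0) = 1.5 and
    # zero pre-initial values (assumptions of this sketch).
    y = np.zeros(T + 1)
    y[0] = 1.5
    for t in range(T):
        y[t + 1] = f(y[t], y[t - 1] if t >= 1 else 0.0,
                     u[t], u[t - 1] if t >= 1 else 0.0, t)
    return y

u = np.zeros(T)
errors = []
for k in range(K):
    e = y_d[1:] - run_iteration(u)[1:]
    errors.append(np.max(np.abs(e)))
    u = u + gamma * e    # simplified P-type update along the iteration axis

print(errors[0], errors[-1])
```

Since $\partial f/\partial u_{k}(t) = (t+1)/(t+2)\in[1/2,1)$, the simplified gain satisfies $|1-\gamma\,\partial f/\partial u_{k}(t)|<1$ uniformly in $t$, which is why even this stripped-down update drives the tracking error toward zero over iterations.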
To implement the updating law (15) and the adaptive updating schemes S1) and S2), we employ the parameters shown in Table I, together with suitably selected initial values of the algorithm.
TABLE I
| $\lambda$ | $\gamma_{1}$ | $\gamma_{2}$ | $\gamma_{3}$ | $\mu_{1}$ | $\mu_{2}$ | $\varepsilon$ |
| 1 | 0.8 | 0.14 | 0.06 | 1 | 0.001 | 0.01 |
In Fig. 1, we describe the curve of the iteration evolution of the input, which illustrates the boundedness of the system trajectories. In Fig. 2, the evolution of the tracking error versus iteration is depicted to show the robust convergence performance of our optimization-based adaptive ILC. In Fig. 3, the output learned after the final iteration is plotted together with the desired reference trajectory, which demonstrates the high-precision tracking performance.
Example 2: Consider an injection molding process, which concerns the dynamics between the nozzle pressure and the hydraulic control valve opening, described by (see also [32])
\begin{split} y_{k}(t+1) =\;& 1.607y_{k}(t)-0.6086y_{k}(t-1)\\ &+1.239u_{k}(t)-0.9282u_{k}(t-1)+w_{k}(t) \end{split} |
where the iteration-dependent initial states and disturbances are given by
y_{k}(0) = 10+\overline{\chi}_{y}(k),\quad w_{k}(t) = \overline{\chi}_{w}(k,t) |
for some nonrepetitive uncertainties $\overline{\chi}_{y}(k)$ and $\overline{\chi}_{w}(k,t)$ that vary with respect to the iteration axis.
To implement the robust tracking task, the desired reference trajectory is given by
{y_d}(t) = \begin{cases} 150, & 0 \le t \le 50\\ 300, & 51 \le t \le 100 \end{cases} |
and our optimization-based adaptive ILC, which is comprised of the updating law (15) and the adaptive updating schemes S1) and S2), is applied by adopting the same parameters as shown in Table I and the same initial settings as used in Example 1. It can be validated that the needed robust convergence conditions of ILC in Theorems 1 and 2 are satisfied.
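Because the Example 2 plant is given explicitly, its learning transient can also be sketched in a few lines. As in the Example 1 sketch, the code below is not the adaptive law (15) with S1) and S2); it is a simplified, noise-free P-type stand-in with an assumed gain $\gamma = 0.8$ and zero pre-initial values:

```python
import numpy as np

T = 100          # time horizon of Example 2, t = 0, ..., T
gamma = 0.8      # assumed learning gain for this simplified sketch
K = 200          # number of ILC iterations

# Step reference of Example 2: 150 on [0, 50] and 300 on [51, 100].
t_grid = np.arange(T + 1)
y_d = np.where(t_grid <= 50, 150.0, 300.0)

def run_iteration(u):
    # Noise-free rollout of the injection molding model with y_k(0) = 10
    # and zero pre-initial values (assumptions of this sketch).
    y = np.zeros(T + 1)
    y[0] = 10.0
    for t in range(T):
        y_prev = y[t - 1] if t >= 1 else 0.0
        u_prev = u[t - 1] if t >= 1 else 0.0
        y[t + 1] = (1.607 * y[t] - 0.6086 * y_prev
                    + 1.239 * u[t] - 0.9282 * u_prev)
    return y

u = np.zeros(T)
errors = []
for k in range(K):
    e = y_d[1:] - run_iteration(u)[1:]
    errors.append(np.max(np.abs(e)))
    u = u + gamma * e    # simplified P-type update along the iteration axis

print(errors[0], errors[-1])
```

The leading Markov parameter of the plant is $1.239$, so the iteration-wise contraction factor of this simplified update is $|1-0.8\times1.239|\approx0.009$; the error can grow transiently along the long horizon before this contraction dominates, after which it collapses to a negligible level.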
Similarly to Figs. 1–3, Figs. 4–6 are depicted to demonstrate the system performance of the injection molding process when operating with the use of our optimization-based adaptive ILC. The input evolution versus iteration is depicted in Fig. 4, which illustrates the boundedness of system trajectories. In Fig. 5, the robust convergence performance is illustrated for the optimization-based adaptive ILC by describing the evolution of the tracking error versus iteration. The high-precision tracking performance is demonstrated in Fig. 6, which depicts the output learned with our optimization-based adaptive ILC after the 100th iteration, as well as the desired reference trajectory. It can be clearly seen that the illustrations of Figs. 4–6 coincide with our robust optimization-based adaptive ILC results of nonlinear systems.
Discussions: The simulation tests performed in Examples 1 and 2 validate the robustness and effectiveness of our presented optimization-based adaptive ILC for nonlinear systems in spite of unknown nonlinearities and nonrepetitive uncertainties. Owing to the limited use of model information, they also demonstrate that our optimization-based adaptive ILC results may provide a feasible way toward the design and analysis of data-driven control methods. In particular, the illustrations of Figs. 3 and 6 disclose that our design method of optimization-based adaptive ILC works effectively for accomplishing the high-precision tracking tasks of nonlinear systems subject to nonrepetitive uncertainties, especially in comparison with the methods proposed in, e.g., [16], [17].
In this paper, robust convergence problems for the optimization-based adaptive ILC of nonlinear time-varying systems subject to iteration-dependent initial shifts and disturbances have been discussed. A new design method has been introduced to bridge the gap between optimization-based and CM-based approaches for ILC. By incorporating properties of substochastic matrices, a DDA approach integrated with CM-based analyses has been explored to establish robust convergence results of ILC, which avoids performing the eigenvalue analysis to gain convergence of iterative processes subject to iteration-dependent parameters. These advantages make our design and analysis methods for the optimization-based adaptive ILC effective and robust in spite of nonrepetitive uncertainties. In addition, the simulation tests implemented through a numerical example and an injection molding process have demonstrated the validity of our robust optimization-based adaptive ILC results.
Proof: An inductive analysis on the time step $t$ is performed as follows.
Step 1: Let $t = 0$, for which we can describe the system (1) as
\begin{split} y_{k}(1) =\;& f\left(y_k(0),0,\ldots,0,u_{k}(0),0,\ldots,0\right)+w_k(0)\\ \triangleq\;& \overline{g}^{0}\left(y_k(0),u_{k}(0),w_k(0)\right) \end{split} |
based on which we have
\begin{split} &\frac{ \partial \overline{g}\,^{0}}{ \partial y_k(0)} = \left.\frac{ \partial f}{ \partial x_{1}}\right|_{\left(y_k(0),0,\ldots,0,u_{k}(0),0,\ldots,0\right)}\\ &\frac{ \partial \overline{g}\,^{0}}{ \partial u_{k}(0)} = \left.\frac{\partial f}{\partial x_{l+2}}\right|_{\left(y_k(0),0,\ldots,0,u_{k}(0),0,\ldots,0\right)}\\ &\frac{ \partial \overline{g}\,^{0}}{ \partial w_k(0)} = 1. \end{split} |
By employing (5) and (6), we can further derive
\begin{array}{l} \left|\dfrac{ \partial \overline{g}\,^{0}}{ \partial y_k(0)}\right| \leq\beta_{\overline{f}}\triangleq\beta_{\theta}(0),\quad \dfrac{ \partial \overline{g}\,^{0}}{ \partial u_k(0)}\in\left[\beta_{\underline{f}},\beta_{\overline{f}}\right],\quad \dfrac{ \partial \overline{g}\,^{0}}{ \partial w_k(0)} = 1. \end{array} |
Step 2: Let us consider any $N\in\mathbb{Z}_{T-1}$ with $N\geq1$, and hypothesize that for all the time steps $t\in\{0,1,\ldots,N-1\}$, it holds
\begin{split} &\left|\frac{ \partial \overline{g}\,^{t}}{ \partial y_k(0)}\right| \leq\beta_{\theta}(t), \frac{ \partial \overline{g}\,^{t}}{ \partial u_{k}(t)} \in\left[\beta_{\underline{f}},\beta_{\overline{f}}\right], \frac{ \partial \overline{g}\,^{t}}{ \partial w_{k}(t)} = 1\\ &\left|\frac{ \partial \overline{g}\,^{t}}{ \partial u_{k}(0)}\right| \leq\beta_{\theta}(t),\ldots, \left|\frac{ \partial \overline{g}\,^{t}}{ \partial u_{k}(t-1)}\right|\leq\beta_{\theta}(t) \\ &\left|\frac{ \partial \overline{g}\,^{t}}{ \partial w_{k}(0)}\right| \leq\beta_{\theta}(t),\ldots, \left|\frac{ \partial \overline{g}\,^{t}}{ \partial w_{k}(t-1)}\right| \leq\beta_{\theta}(t) \end{split} |
for some finite bound $\beta_{\theta}(t)>0$.
When we consider (1) for $t = N$, we can proceed to derive
\begin{split} y_{k}(N+1) =\;& f\left(y_{k}(N),\ldots,y_{k}(N-l),u_{k}(N),\ldots,u_{k}(N-n),N\right)\\ & +w_{k}(N)\\ =\;& f\Big(\overline{g}\,^{N-1},\ldots,\overline{g}\,^{N-1-l},u_{k}(N),\ldots,u_{k}(N-n),N\Big) +w_{k}(N)\\ \triangleq \;&\overline{g}\,^{N}\left(y_k(0),u_{k}(0),\ldots,u_{k}(N),w_{k}(0),\ldots,w_{k}(N)\right). \end{split} |
For the partial derivatives of $\overline{g}\,^{N}$, the chain rule leads to
\begin{split} &\frac{ \partial \overline{g}\,^{N}}{ \partial y_k(0)} = \sum\limits_{i = 0}^{l}\frac{ \partial f}{ \partial \overline{g}\,^{N-1-i}}\frac{ \partial \overline{g}\,^{N-1-i}}{ \partial y_k(0)}\\ &\frac{ \partial \overline{g}\,^{N}}{ \partial u_{k}(0)} = \sum\limits_{i = 0}^{l}\frac{ \partial f}{ \partial \overline{g}\,^{N-1-i}}\frac{ \partial \overline{g}\,^{N-1-i}}{ \partial u_{k}(0)}\\ &\qquad\qquad\qquad\vdots\\ &\frac{ \partial \overline{g}\,^{N}}{ \partial u_{k}(N-1)} = \frac{ \partial f}{ \partial \overline{g}\,^{N-1}}\frac{ \partial \overline{g}\,^{N-1}}{ \partial u_{k}(N-1)} +\frac{ \partial f}{ \partial u_{k}(N-1)}\\ &\frac{ \partial \overline{g}\,^{N}}{ \partial u_{k}(N)} = \frac{ \partial f}{ \partial u_{k}(N)} \end{split} |
and
\begin{split} &\frac{ \partial \overline{g}\,^{N}}{ \partial w_{k}(0)} = \sum\limits_{i = 0}^{l}\frac{ \partial f}{ \partial \overline{g}\,^{N-1-i}}\frac{ \partial \overline{g}\,^{N-1-i}}{ \partial w_{k}(0)}\\ &\qquad\qquad\qquad\vdots\\ &\frac{ \partial \overline{g}\,^{N}}{ \partial w_{k}(N-1)} = \frac{ \partial f}{ \partial \overline{g}\,^{N-1}}\frac{ \partial \overline{g}\,^{N-1}}{ \partial w_{k}(N-1)}\\ &\frac{ \partial \overline{g}\,^{N}}{ \partial w_{k}(N)} = 1. \end{split} |
Again with this hypothesis and by inserting (5) and (8), we can obtain
\begin{split} &\left|\frac{ \partial \overline{g}\,^{N}}{ \partial y_k(0)}\right| \leq\sum\limits_{i = 0}^{l}\left|\frac{ \partial f}{ \partial \overline{g}\,^{N-1-i}}\right| \left|\frac{ \partial \overline{g}\,^{N-1-i}}{ \partial y_k(0)}\right| \leq\beta_{\theta}(N)\\ &\left|\frac{ \partial \overline{g}\,^{N}}{ \partial u_{k}(0)}\right| \leq\sum\limits_{i = 0}^{l}\left|\frac{ \partial f}{ \partial \overline{g}\,^{N-1-i}}\right|\left|\frac{ \partial \overline{g}\,^{N-1-i}}{ \partial u_{k}(0)}\right| \leq\beta_{\theta}(N)\\ &\qquad\qquad\qquad\vdots\\ &\left|\frac{ \partial \overline{g}\,^{N}}{ \partial u_{k}(N-1)}\right| \leq\left|\frac{ \partial f}{ \partial \overline{g}\,^{N-1}}\right|\left|\frac{ \partial \overline{g}\,^{N-1}}{ \partial u_{k}(N-1)}\right| +\left|\frac{ \partial f}{ \partial u_{k}(N-1)}\right|\\ &\qquad\leq\beta_{\theta}(N)\\ &\frac{ \partial \overline{g}\,^{N}}{ \partial u_{k}(N)} = \frac{ \partial f}{ \partial u_{k}(N)}\in\left[\beta_{\underline{f}},\beta_{\overline{f}}\right] \end{split} |
and
\begin{split} &\left|\frac{ \partial \overline{g}\,^{N}}{ \partial w_{k}(0)}\right| \leq\sum\limits_{i = 0}^{l}\left|\frac{ \partial f}{ \partial \overline{g}\,^{N-1-i}}\right|\left|\frac{ \partial \overline{g}\,^{N-1-i}}{ \partial w_{k}(0)}\right| \leq\beta_{\theta}(N)\\ &\qquad\qquad\qquad\vdots\\ &\left|\frac{ \partial \overline{g}\,^{N}}{ \partial w_{k}(N-1)}\right| \leq\left|\frac{ \partial f}{ \partial \overline{g}\,^{N-1}}\right|\left|\frac{ \partial \overline{g}\,^{N-1}}{ \partial w_{k}(N-1)}\right|\leq \beta_{\theta}(N)\\ &\frac{ \partial \overline{g}\,^{N}}{ \partial w_{k}(N)} = 1 \end{split} |
where $\beta_{\theta}(N)$ denotes some finite bound.
Based on the analysis of the above Steps 1 and 2, we can conclude by induction that for any $t\in\mathbb{Z}_{T-1}$ and any $k\in\mathbb{Z}_+$,
\begin{split} &y_{k}(t+1) = \overline{g}\,^{t}\left(y_k(0),u_{k}(0),\ldots,u_{k}(t),w_{k}(0),\ldots,w_{k}(t)\right)\; \; {\rm{with}}\\ &\left\{ \begin{aligned} &\left|\frac{ \partial \overline{g}\,^{t}}{ \partial y_k(0)}\right| \leq\beta_{\theta}(t),\;\; \frac{ \partial \overline{g}\,^{t}}{ \partial u_{k}(t)} \in\left[\beta_{\underline{f}},\beta_{\overline{f}}\right], \;\;\frac{ \partial \overline{g}\,^{t}}{ \partial w_{k}(t)} = 1\\ &\left|\frac{ \partial \overline{g}\,^{t}}{ \partial u_{k}(0)}\right| \leq\beta_{\theta}(t),\ldots, \left|\frac{ \partial \overline{g}\,^{t}}{ \partial u_{k}(t-1)}\right|\leq\beta_{\theta}(t) \\ &\left|\frac{ \partial \overline{g}\,^{t}}{ \partial w_{k}(0)}\right| \leq\beta_{\theta}(t),\ldots, \left|\frac{ \partial \overline{g}\,^{t}}{ \partial w_{k}(t-1)}\right| \leq\beta_{\theta}(t) \end{aligned}\right. \end{split} |
where the mapping $\overline{g}^{t}$ is given by
\overline{g}^{t}:\;\;\underbrace{\mathbb{R}\times\mathbb{R}\times\cdots\times\mathbb{R}}_{2t+3}\to\mathbb{R} |
and, by applying the differential mean value theorem, we can deduce that for any two iterations $i$ and $j$,
\begin{split} y_{i}(t+1)-\;&y_{j}(t+1)\\ =\;& \left.\left[\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{1}},\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{2}},\ldots,\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{2t+3}}\right]\right|_{\left({\textit{z}}_{1},{\textit{z}}_{2},\ldots,{\textit{z}}_{2t+3}\right) = \left({\textit{z}}_{1}^{\ast},{\textit{z}}_{2}^{\ast},\ldots,{\textit{z}}_{2t+3}^{\ast}\right)}\\ &\times\left(\begin{bmatrix}y_i(0)\\u_{i}(0)\\\vdots\\u_{i}(t)\\w_{i}(0)\\\vdots\\w_{i}(t)\end{bmatrix} -\begin{bmatrix}y_j(0)\\u_{j}(0)\\\vdots\\u_{j}(t)\\w_{j}(0)\\\vdots\\w_{j}(t)\end{bmatrix}\right)\\ = \;&\left.\left[\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{2}},\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{3}},\ldots,\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{t+2}}\right]\right|_{\left({\textit{z}}_{1},{\textit{z}}_{2},\ldots,{\textit{z}}_{2t+3}\right) = \left({\textit{z}}_{1}^{\ast},{\textit{z}}_{2}^{\ast},\ldots,{\textit{z}}_{2t+3}^{\ast}\right)}\\ &\times\left(\begin{bmatrix}u_{i}(0)\\u_{i}(1)\\\vdots\\u_{i}(t)\end{bmatrix} -\begin{bmatrix}u_{j}(0)\\u_{j}(1)\\\vdots\\u_{j}(t)\end{bmatrix}\right)\\ &+\left.\left[\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{t+3}},\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{t+4}},\ldots,\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{2t+3}}\right]\right|_{\begin{array}{l}\left({\textit{z}}_{1},{\textit{z}}_{2},\ldots,{\textit{z}}_{2t+3}\right)\\ \substack{= \left({\textit{z}}_{1}^{\ast},{\textit{z}}_{2}^{\ast},\ldots,{\textit{z}}_{2t+3}^{\ast}\right)}\end{array}}\\ &\times\left(\begin{bmatrix}w_{i}(0)\\w_{i}(1)\\\vdots\\w_{i}(t)\end{bmatrix} -\begin{bmatrix}w_{j}(0)\\w_{j}(1)\\\vdots\\w_{j}(t)\end{bmatrix}\right)\\ &+\left.\frac{ \partial \overline{g}\,^{t}}{ \partial {\textit{z}}_{1}}\right|_{\left({\textit{z}}_{1},{\textit{z}}_{2},\ldots,{\textit{z}}_{2t+3}\right) = 
\left({\textit{z}}_{1}^{\ast},{\textit{z}}_{2}^{\ast},\ldots,{\textit{z}}_{2t+3}^{\ast}\right)}(\delta_i-\delta_j)\\[-15pt] \end{split} | (52) |
where
\begin{split} &\left({\textit{z}}_{1}^{\ast},{\textit{z}}_{2}^{\ast},\ldots,{\textit{z}}_{2t+3}^{\ast}\right) = \overline{\varpi}\left(y_i(0),u_{i}(0),\ldots,u_{i}(t),w_{i}(0),\ldots,w_{i}(t)\right)\\ &\qquad+\left(1-\overline{\varpi}\right)\left(y_j(0),u_{j}(0),\ldots,u_{j}(t),w_{j}(0),\ldots,w_{j}(t)\right) \end{split} |
for some $\overline{\varpi}\in(0,1)$, which completes the proof.
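The inductive derivative bounds above can be spot-checked numerically by using the dynamics of Example 1 as a concrete instance of the abstract map $\overline{g}^{\,t}$. In the sketch below (random hypothetical input sequence, central finite differences), the derivative of $y_k(t+1)$ with respect to $u_k(t)$ comes only from the direct input channel, so it should equal $(t+1)/(t+2)$ and lie in the assumed interval $[\beta_{\underline{f}},\beta_{\overline{f}}] = [1/2,1)$:

```python
import numpy as np

T = 20
h = 1e-6   # finite-difference step

def f(y1, y0, u1, u0, t):
    # Example 1 dynamics, used as a concrete instance of (1).
    return (np.sin(y1) + np.cos(y0)
            + (t + 1) / (t + 2) * u1 + np.cos(y1) * np.sin(u0))

def rollout(u):
    y = np.zeros(T + 1)
    y[0] = 1.5
    for t in range(T):
        y[t + 1] = f(y[t], y[t - 1] if t >= 1 else 0.0,
                     u[t], u[t - 1] if t >= 1 else 0.0, t)
    return y

rng = np.random.default_rng(1)
u = rng.normal(size=T)   # hypothetical input sequence

derivs = []
for t in range(T):
    up, um = u.copy(), u.copy()
    up[t] += h
    um[t] -= h
    # Central difference of y(t+1) with respect to u(t): only the direct
    # input channel contributes, so it should equal (t+1)/(t+2).
    derivs.append((rollout(up)[t + 1] - rollout(um)[t + 1]) / (2 * h))

assert all(abs(d - (t + 1) / (t + 2)) < 1e-4 for t, d in enumerate(derivs))
assert all(0.5 - 1e-4 <= d < 1.0 for d in derivs)
print("direct-channel derivatives lie in [1/2, 1)")
```

This mirrors the appendix conclusion that $\partial\overline{g}^{\,t}/\partial u_{k}(t)\in[\beta_{\underline{f}},\beta_{\overline{f}}]$ while the remaining partial derivatives stay bounded.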
[1] H. Zhang, Q. Wei, and Y. Luo, “A novel infinite-time optimal tracking control scheme for a class of discrete-time nonlinear systems via the greedy HDP iteration algorithm,” IEEE Trans. Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 38, no. 4, pp. 937–942, Aug. 2008. doi: 10.1109/TSMCB.2008.920269
[2] S. John and J. O. Pedro, “Neural network-based adaptive feedback linearization control of antilock braking system,” Int. Journal of Artificial Intelligence, vol. 10, no. S13, pp. 21–40, Mar. 2013.
[3] S. Preitl, R.-E. Precup, Z. Preitl, S. Vaivoda, S. Kilyeni, and J. K. Tar, “Iterative feedback and learning control. Servo systems applications,” IFAC Proceedings Volumes, vol. 40, no. 8, pp. 16–27, Jul. 2007. doi: 10.3182/20070709-3-RO-4910.00004
[4] J. Zhang and D. Meng, “Convergence analysis of saturated iterative learning control systems with locally Lipschitz nonlinearities,” IEEE Trans. Neural Networks and Learning Systems, vol. 31, no. 10, pp. 4025–4035, Oct. 2020. doi: 10.1109/TNNLS.2019.2951752
[5] D. Meng and K. L. Moore, “Contraction mapping-based robust convergence of iterative learning control with uncertain, locally-Lipschitz nonlinearity,” IEEE Trans. Systems, Man, and Cybernetics: Systems, vol. 50, no. 2, pp. 442–454, Feb. 2020. doi: 10.1109/TSMC.2017.2780131
[6] K. L. Barton and A. G. Alleyne, “A norm optimal approach to time-varying ILC with application to a multi-axis robotic testbed,” IEEE Trans. Control Systems Technology, vol. 19, no. 1, pp. 166–180, Jan. 2011. doi: 10.1109/TCST.2010.2040476
[7] W. He, T. Meng, X. He, and C. Sun, “Iterative learning control for a flapping wing micro aerial vehicle under distributed disturbances,” IEEE Trans. Cybernetics, vol. 49, no. 4, pp. 1524–1535, Apr. 2019. doi: 10.1109/TCYB.2018.2808321
[8] P. Janssens, G. Pipeleers, and J. Swevers, “A data-driven constrained norm-optimal iterative learning control framework for LTI systems,” IEEE Trans. Control Systems Technology, vol. 21, no. 2, pp. 546–551, Mar. 2013. doi: 10.1109/TCST.2012.2185699
[9] Q. Yu, Z. Hou, and J. Xu, “D-type ILC based dynamic modeling and norm optimal ILC for high-speed trains,” IEEE Trans. Control Systems Technology, vol. 26, no. 2, pp. 652–663, Mar. 2018. doi: 10.1109/TCST.2017.2692730
[10] J. Wang, Y. Wang, L. Cao, and Q. Jin, “Adaptive iterative learning control based on unfalsified strategy for Chylla-Haase reactor,” IEEE/CAA Journal of Automatica Sinica, vol. 1, no. 4, pp. 347–360, Oct. 2014. doi: 10.1109/JAS.2014.7004663
[11] D. Shen, “Iterative learning control with incomplete information: A survey,” IEEE/CAA Journal of Automatica Sinica, vol. 5, no. 5, pp. 885–901, Sept. 2018. doi: 10.1109/JAS.2018.7511123
[12] D. A. Bristow, M. Tharayil, and A. G. Alleyne, “A survey of iterative learning control: A learning-based method for high-performance tracking control,” IEEE Control Systems Magazine, vol. 26, no. 3, pp. 96–114, Jun. 2006. doi: 10.1109/MCS.2006.1636313
[13] H.-S. Ahn, Y. Chen, and K. L. Moore, “Iterative learning control: Brief survey and categorization,” IEEE Trans. Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 37, no. 6, pp. 1099–1121, Nov. 2007. doi: 10.1109/TSMCC.2007.905759
[14] M. Minakais, S. Mishra, and J. T. Wen, “Database-driven iterative learning for building temperature control,” IEEE Trans. Automation Science and Engineering, vol. 16, no. 4, pp. 1896–1906, Oct. 2019. doi: 10.1109/TASE.2019.2899377
[15] R.-H. Chi and Z.-S. Hou, “Dual-stage optimal iterative learning control for nonlinear non-affine discrete-time systems,” Acta Automatica Sinica, vol. 33, no. 10, pp. 1061–1065, Oct. 2007. doi: 10.1360/aas-007-1061
[16] R. Chi, Z. Hou, B. Huang, and S. Jin, “A unified data-driven design framework of optimality-based generalized iterative learning control,” Computers and Chemical Engineering, vol. 77, pp. 10–23, Jun. 2015. doi: 10.1016/j.compchemeng.2015.03.003
[17] R. Chi, Z. Hou, S. Jin, and B. Huang, “Computationally efficient data-driven higher order optimal iterative learning control,” IEEE Trans. Neural Networks and Learning Systems, vol. 29, no. 12, pp. 5971–5980, Dec. 2018. doi: 10.1109/TNNLS.2018.2814628
[18] Y. Hui, R. Chi, B. Huang, and Z. Hou, “Extended state observer-based data-driven iterative learning control for permanent magnet linear motor with initial shifts and disturbances,” IEEE Trans. Systems, Man, and Cybernetics: Systems, vol. 51, no. 3, pp. 1881–1891, Mar. 2021. doi: 10.1109/TSMC.2019.2907379
[19] Z. Hou and S. Jin, Model Free Adaptive Control: Theory and Applications. Boca Raton, USA: CRC Press, 2013.
[20] X. Bu, S. Wang, Z. Hou, and W. Liu, “Model free adaptive iterative learning control for a class of nonlinear systems with randomly varying iteration lengths,” Journal of the Franklin Institute, vol. 356, no. 5, pp. 2491–2504, Mar. 2019. doi: 10.1016/j.jfranklin.2019.01.003
[21] X. Bu, Q. Yu, Z. Hou, and W. Qian, “Model free adaptive iterative learning consensus tracking control for a class of nonlinear multiagent systems,” IEEE Trans. Systems, Man, and Cybernetics: Systems, vol. 49, no. 4, pp. 677–686, Apr. 2019. doi: 10.1109/TSMC.2017.2734799
[22] W. J. Rugh, Linear System Theory. Upper Saddle River, NJ, USA: Prentice Hall, 1996.
[23] D. Meng, Y. Jia, and J. Du, “Stability of varying two-dimensional Roesser systems and its application to iterative learning control convergence analysis,” IET Control Theory and Applications, vol. 9, no. 8, pp. 1221–1228, May 2015. doi: 10.1049/iet-cta.2014.0643
[24] D. Meng and K. L. Moore, “Robust iterative learning control for nonrepetitive uncertain systems,” IEEE Trans. Automatic Control, vol. 62, no. 2, pp. 907–913, Feb. 2017. doi: 10.1109/TAC.2016.2560961
[25] Q. Zhu, J.-X. Xu, D. Huang, and G.-D. Hu, “Iterative learning control design for linear discrete-time systems with multiple high-order internal models,” Automatica, vol. 62, pp. 65–76, Dec. 2015. doi: 10.1016/j.automatica.2015.09.017
[26] D. Meng, “Convergence conditions for solving robust iterative learning control problems under nonrepetitive model uncertainties,” IEEE Trans. Neural Networks and Learning Systems, vol. 30, no. 6, pp. 1908–1919, Jun. 2019. doi: 10.1109/TNNLS.2018.2874977
[27] J. Zhang, B. Cui, X. Dai, and Z. Jiang, “Iterative learning control for distributed parameter systems based on non-collocated sensors and actuators,” IEEE/CAA Journal of Automatica Sinica, vol. 7, no. 3, pp. 865–871, May 2020. doi: 10.1109/JAS.2019.1911663
[28] D. Meng and J. Zhang, “Robust tracking of nonrepetitive learning control systems with iteration-dependent references,” IEEE Trans. Systems, Man, and Cybernetics: Systems, vol. 51, no. 2, pp. 842–852, Feb. 2021. doi: 10.1109/TSMC.2018.2883383
[29] D. Meng and J. Zhang, “Convergence analysis of robust iterative learning control against nonrepetitive uncertainties: System equivalence transformation,” IEEE Trans. Neural Networks and Learning Systems, to be published. doi: 10.1109/TNNLS.2020.3016057
[30] X. Jin, “Fault-tolerant iterative learning control for mobile robots non-repetitive trajectory tracking with output constraints,” Automatica, vol. 94, pp. 63–71, Aug. 2018. doi: 10.1016/j.automatica.2018.04.011
[31] M. Yu, D. Huang, and W. He, “Robust adaptive iterative learning control for discrete-time nonlinear systems with both parametric and nonparametric uncertainties,” Int. Journal of Adaptive Control and Signal Processing, vol. 30, no. 7, pp. 972–985, Jul. 2016. doi: 10.1002/acs.2648
[32] J. Shi, F. Gao, and T.-J. Wu, “Integrated design and structure analysis of robust iterative learning control system based on a two-dimensional model,” Industrial and Engineering Chemistry Research, vol. 44, no. 21, pp. 8095–8105, Sept. 2005. doi: 10.1021/ie050211i
[33] J.-X. Xu, “A survey on iterative learning control for nonlinear systems,” Int. Journal of Control, vol. 84, no. 7, pp. 1275–1294, Jul. 2011. doi: 10.1080/00207179.2011.574236
[34] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge, UK: Cambridge University Press, 1985.
[35] H. K. Khalil, Nonlinear Systems. Upper Saddle River, NJ, USA: Prentice Hall, 2002.