
IEEE/CAA Journal of Automatica Sinica
Citation: Dianwei Qian, Chengdong Li, SukGyu Lee and Chao Ma, "Robust Formation Maneuvers Through Sliding Mode for Multi-agent Systems With Uncertainties," IEEE/CAA J. Autom. Sinica, vol. 5, no. 1, pp. 342-351, Jan. 2018. doi: 10.1109/JAS.2017.7510787
With the development of artificial intelligence, multi-agent systems have been hailed as a novel paradigm for conceptualizing, designing, and implementing intelligent systems [1]-[3]. A multi-agent system is a coupled network of agents that interact to achieve goals beyond the individual capacity or knowledge of any single agent [4], [5]. The advantages of multi-agent systems include, but are not limited to, efficiency, extensibility and reliability. Meanwhile, a growing number of real-world applications require agents to work together [6]. To enable these applications, the demand for coordination among agents has substantially increased.
As one type of coordination task, the consensus problem has emerged because it integrates both graph theory and control theory [7]. The consensus problem covers several typical control tasks, e.g., formation control, rendezvous, attitude alignment, flocking and foraging [8]. Among these tasks, formation control concentrates on forming up a multi-agent system and making the agents move in given geometric shapes. The task is rooted in real applications. For example, the agents have to maintain certain formations when they move at disaster sites, in warehouses and in hazardous areas [9]. See [10] for a complete review of recent philosophies in this field.
One scheme for multi-agent formations is called "leader-follower" [11]. As the name suggests, one agent in a multi-agent system is designated the leader and the other agents are designated followers. The sole leader takes charge of tracking a predefined trajectory. The followers keep tracking the leader to form a desired formation while the multi-agent system moves. The scheme has been successfully applied to the analysis and design of multi-agent formations.
Inherently, the leader-follower scheme is centralized: it depends heavily on the leader and suffers from the problem of a "single point of failure" [12]. Nevertheless, the scheme has received increasing attention because its dynamics can not only be experimentally modelled, but the internal formation stability can also be theoretically guaranteed [13]. Adopting this scheme, various control methods have been developed for multi-agent formations, e.g., neural network-based adaptive design [1], robust control [14], adaptive output feedback [15], nonlinear predictive mechanisms [16], and iterative learning techniques [17], to name but a few.
The methodology of sliding mode control (SMC) is popular due to its invariance property [18]. Several SMC-based methods have been proposed to solve the formation-control problem of multi-agent systems, e.g., fuzzy SMC [19], [20], first-order SMC [21], terminal SMC [22], and backstepping SMC [23], [24]. These contributions have verified the feasibility of the SMC methodology for multi-agent formations.
In a multi-agent system, uncertainties exist everywhere. Each agent may contain uncertainties, e.g., external disturbances, unmodelled dynamics and parameter perturbations. Originating from the uncertainties of the agents, the formation dynamics of the multi-agent system become uncertain. In previous works on SMC-based multi-agent formations, uncertainties are considered because they adversely affect the formation stability. Two solutions can be summarized from the aforementioned works. One is to discuss the formation stability by means of graph theory [19], [24]. The other is to analyze the formation stability in light of Lyapunov's theorem [20]-[23]. To guarantee the formation stability, the uncertainties are usually assumed to be bounded by a known bound. Unfortunately, this assumption is not mild because the uncertainties are rather hard to measure exactly or to know in advance. The lack of such a bound may result in severe problems, e.g., decreased formation robustness, deteriorated formation performance and even loss of formation stability. To obtain this important information, it is desirable to adaptively approximate the formation uncertainties.
The technique of the nonlinear disturbance observer (NDO) has been proven effective in handling uncertainties and improving robustness [25]. Applications of the NDO have been investigated in several practical cases [26], [27]. This technique can be considered an alternative way to attack the issue of uncertainties in multi-agent formations. So far, however, the problem of how to eliminate the adverse effects of uncertainties in multi-agent formations via an NDO has remained unsolved.
This paper addresses this problem and investigates a robust control design for formation maneuvers of a multi-agent system. The multi-agent system under consideration is leader-follower-based, and the communication topology is taken into account in order to strengthen the adaptability, reliability and practicability of the leader-follower scheme. Since the multi-agent system is subject to uncertainties, the robust control design contains two parts: an SMC-based controller and an NDO-based observer. The controller and observer work together to realize formation maneuvers of the multi-agent system in the presence of uncertainties. The main contributions of this paper can be summarized as follows: 1) a formation control design that integrates SMC and NDO is proposed for each follower agent; 2) the presented design with guaranteed stability is extended to the multi-agent system under a given communication topology; 3) comparisons are drawn to illustrate the feasibility and validity of the presented design.
The remainder of this paper is organized as follows. The modelling of a single agent and the communication topology of the agents are given in Section II. The formation design is presented in Section III. Simulation results are illustrated in Section IV. Finally, conclusions are drawn in Section V.
The multi-agent system under consideration consists of N mobile robots. The robots are identical and each robot can be treated as an agent. Fig. 1 displays a robot in the multi-agent system. The robot body is round with radius R, and its movement is actuated by two separately driven wheels placed on either side of the body. The index i is used to represent the robot. The Cartesian coordinate system in Fig. 1 specifies (x_{Li}, y_{Li}) as the center of the left wheel, (x_{Ri}, y_{Ri}) as the center of the right wheel, (x_{ci}, y_{ci}) as the center of the robot's body and (x_{hi}, y_{hi}) as the robot's head. In Fig. 1, x_{hi}=x_{ci}+h\cos\theta_i, y_{hi}=y_{ci}+h\sin\theta_i, x_{Li}=x_{ci}-l\sin\theta_i, y_{Li}=y_{ci}+l\cos\theta_i, x_{Ri}=x_{ci}+l\sin\theta_i and y_{Ri}=y_{ci}-l\cos\theta_i, where r is the radius of the wheels, l is the distance between the center of the robot and each wheel, h is the distance between the center and the head position and \theta_i is the rotation angle. Let us specify \boldsymbol{q}_i = [x_{hi}\ y_{hi}\ \theta_i]^T to describe the robot's posture.
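As a small illustration of this geometry, the snippet below computes the head position from an assumed body-center posture; all numerical values are hypothetical, chosen only to exercise the formulas above.

```python
import numpy as np

# Illustration of the posture geometry in Fig. 1: the head position
# (x_hi, y_hi) follows from the body center and the heading angle.
# All numerical values below are hypothetical.
h = 0.10                      # assumed center-to-head distance (m)
x_c, y_c = 1.0, 2.0           # assumed body-center coordinates
theta = np.pi / 3             # assumed rotation angle

x_h = x_c + h * np.cos(theta)
y_h = y_c + h * np.sin(theta)
q = np.array([x_h, y_h, theta])   # posture vector q_i = [x_hi, y_hi, theta_i]^T
```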
The Lagrangian equations of motion describing the agent take the form of (1) with respect to the vector \boldsymbol{q}_i.
\begin{equation}\label{eq1} \frac{\mathrm{d}}{\mathrm{d}t}\left(\frac{\partial L_i}{\partial\dot{\boldsymbol{q}}_i}\right)-\frac{\partial L_i}{\partial\boldsymbol{q}_i}=\boldsymbol{B}(\boldsymbol{q}_i)\tau_i \end{equation}
where L_i = K_i - P_i (K_i and P_i denote the kinetic and potential energy of the agent, respectively), \tau_i=[\tau_{Li}\ \tau_{Ri}]^T is the torque vector applied to the wheels and \boldsymbol{B}(\boldsymbol{q}_i) is a time-varying input matrix.
Since the agent's motion is restricted to the horizontal plane, its potential energy remains unchanged and P_i can be set to 0. Therefore, L_i can be written as
\begin{equation}\label{eq2} L_i =K_i=K_{bi}+K_{Li}+K_{Ri} \end{equation}
where K_{bi}, K_{Li} and K_{Ri} are the kinetic energies of the agent's body, left wheel and right wheel, respectively. The kinetic energies are formulated as K_{bi}=m_b(\dot{x}_{ci}^2+\dot{y}_{ci}^2)/2+I_b\dot{\theta}_i^2/2, K_{Li}=m_w(\dot{x}_{Li}^2+\dot{y}_{Li}^2)/2+I_w\dot{\theta}_i^2/2 and K_{Ri}=m_w(\dot{x}_{Ri}^2+\dot{y}_{Ri}^2)/2+I_w\dot{\theta}_i^2/2, where m_b and I_b are the mass and moment of inertia of the agent's body, respectively; m_w and I_w are the mass and moment of inertia of each wheel, respectively.
Let I=I_b+2I_w+2m_wl^2+m_bh^2 and m=m_b+2m_w. By the Lagrangian method, the dynamic model of the agent can be formulated by
\begin{equation}\label{eq3} \boldsymbol{M}(\boldsymbol{q}_i)\ddot{\boldsymbol{q}}_i+ \boldsymbol{C}(\boldsymbol{q}_i,\ \dot{\boldsymbol{q}}_i)\dot{\boldsymbol{q}}_i=\boldsymbol{B}(\boldsymbol{q}_i)\tau_i \end{equation}
where the matrices \boldsymbol{M}(\boldsymbol{q}_i), \boldsymbol{C}(\boldsymbol{q}_i,\ \dot{\boldsymbol{q}}_i) and \boldsymbol{B}^T(\boldsymbol{q}_i) are, in order, \left[\begin{array}{ccc} m & 0 & mh\sin\theta_i \\ 0 & m &-mh\cos\theta_i \\ mh\sin\theta_i &-mh\cos\theta_i & I \end{array} \right], \left[\begin{array}{ccc} 0 & 0 & mh\dot{\theta}_i\cos\theta_i \\ 0 & 0 & mh\dot{\theta}_i\sin\theta_i \\ 0 & 0 & 0 \end{array} \right] and \dfrac{1}{r}\left[\begin{array}{ccc} \cos\theta_i & \sin\theta_i &-1\\ \cos\theta_i & \sin\theta_i &1 \end{array} \right].
Two quantities of the agent in Fig. 1 remain unexplained: the linear velocity v_i and the rotational angular velocity \omega_i. Differentiating \boldsymbol{q}_i with respect to time t yields
\begin{equation}\label{eq4} \dot{\boldsymbol{q}}_i=\boldsymbol{T}(\boldsymbol{q}_i)\xi_i \end{equation}
where {\boldsymbol{{T}}}^T({\boldsymbol{{q}}}_i)=\left[ \begin{array}{ccc} \cos\theta_i & \sin\theta_i & 0 \\ -h\sin\theta_i & h\cos\theta_i & 1 \end{array} \right] and \xi_i=[v_i\ \omega_i]^T.
Substituting (4) into (3) gives
\begin{equation}\label{eq5} \tilde{\boldsymbol{M}}(\boldsymbol{q}_i)\dot{\xi}_i+ \tilde{\boldsymbol{C}}(\boldsymbol{q}_i,\ \dot{\boldsymbol{q}}_i)\xi_i=\tilde{\boldsymbol{B}}(\boldsymbol{q}_i)\tau_i \end{equation}
where {\tilde{\boldsymbol{M}}}({\boldsymbol{q}}_i), {\tilde{\boldsymbol{C}}}({\boldsymbol{q}}_i, \ \dot{{\boldsymbol{q}}}_i) and \tilde{{\boldsymbol{B}}}({\boldsymbol{q}}_i) in order are determined by {\boldsymbol{T}}^T({\boldsymbol{q}}_i){\boldsymbol{M}}({\boldsymbol{q}}_i){\boldsymbol{T}}({\boldsymbol{q}}_i), {\boldsymbol{T}}^T({\boldsymbol{q}}_i)[{\boldsymbol{M}}({\boldsymbol{q}}_i)\dot{{\boldsymbol{T}}}({\boldsymbol{q}}_i)+{\boldsymbol{C}}({\boldsymbol{q}}_i, \ \dot{{\boldsymbol{q}}}_i){\boldsymbol{T}}({\boldsymbol{q}}_i)]=\left[\begin{array}{cc} 0 & 0 \\ 0 & 0 \end{array} \right] and {\boldsymbol{T}}^T({\boldsymbol{q}}_i){\boldsymbol{B}}({\boldsymbol{q}}_i)=\dfrac{1}{r}\left[\begin{array}{cc} 1 & 1 \\ -1 & 1 \end{array} \right].
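As a numerical sanity check on this reduction, the sketch below builds \boldsymbol{M}, \boldsymbol{C}, \boldsymbol{B} and \boldsymbol{T} for a sample posture (with assumed values for m, h, I and r, which the paper does not specify) and verifies that \boldsymbol{T}^T(\boldsymbol{M}\dot{\boldsymbol{T}}+\boldsymbol{C}\boldsymbol{T}) vanishes and that \boldsymbol{T}^T\boldsymbol{B} equals the constant matrix above. Note that the cancellation holds with the (2,3) entry of \boldsymbol{C} taken as +mh\dot{\theta}_i\sin\theta_i.

```python
import numpy as np

# Numerical check of the reduction from (3) to (5): with q_i_dot = T(q_i) xi_i,
# the transformed Coriolis matrix T^T (M T_dot + C T) vanishes and T^T B is
# the constant matrix (1/r)[[1, 1], [-1, 1]]. Parameter values are assumed for
# illustration only; the cancellation requires the (2,3) entry of C to be
# +m*h*theta_dot*sin(theta).
m, h, I, r = 5.0, 0.10, 0.05, 0.03
theta, theta_dot = 0.7, 1.3
s, c = np.sin(theta), np.cos(theta)

M = np.array([[m, 0.0, m*h*s], [0.0, m, -m*h*c], [m*h*s, -m*h*c, I]])
C = np.array([[0.0, 0.0, m*h*theta_dot*c],
              [0.0, 0.0, m*h*theta_dot*s],
              [0.0, 0.0, 0.0]])
B = (1.0/r) * np.array([[c, c], [s, s], [-1.0, 1.0]])
T = np.array([[c, -h*s], [s, h*c], [0.0, 1.0]])
T_dot = theta_dot * np.array([[-s, -h*c], [c, -h*s], [0.0, 0.0]])

C_tilde = T.T @ (M @ T_dot + C @ T)      # should be the 2x2 zero matrix
B_tilde = T.T @ B                        # should equal (1/r)[[1, 1], [-1, 1]]

assert np.allclose(C_tilde, np.zeros((2, 2)))
assert np.allclose(B_tilde, (1.0/r) * np.array([[1.0, 1.0], [-1.0, 1.0]]))
```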
In (3), \det[\boldsymbol{M}(\boldsymbol{q}_i)]=0 if and only if I=mh^2. Since I\neq mh^2 for the agent, it is justified to assume that \tilde{\boldsymbol{M}}(\boldsymbol{q}_i) in (5) is invertible. Under this assumption, and noting that \tilde{\boldsymbol{C}}(\boldsymbol{q}_i,\ \dot{\boldsymbol{q}}_i) vanishes in (5), the equations of motion describing the behavior of the agent can be written as
\begin{equation}\label{eq6} \dot{\xi}_i=\tilde{\boldsymbol{M}}^{-1}(\boldsymbol{q}_i)\tilde{\boldsymbol{B}}(\boldsymbol{q}_i)\tau_i. \end{equation}
Recalling (4), the equations of motion of the agent at its head take the form
\begin{equation}\label{eq7} [\dot{x}_{hi}\ \dot{y}_{hi}]^T=\boldsymbol{H}[v_i\ \omega_i]^T \end{equation}
where {\boldsymbol{H}}=\left[\begin{array}{cc} \cos\theta_i &-h\sin\theta_i \\ \sin\theta_i & h\cos\theta_i \end{array} \right].
Differentiating (7) with respect to time t yields
\begin{equation}\label{eq8} \left[\begin{array}{c} \ddot{x}_{hi} \\ \ddot{y}_{hi} \end{array} \right]=\left[\begin{array}{c} \dot{v}_{xi} \\ \dot{v}_{yi} \end{array} \right]=\boldsymbol{H}\tilde{\boldsymbol{M}}^{-1}(\boldsymbol{q}_i)\tilde{\boldsymbol{B}}(\boldsymbol{q}_i)\tau_i+\left[\begin{array}{c} \rho_{1i} \\ \rho_{2i} \end{array} \right] \end{equation}
where \rho_{1i}=-v_i\omega_i\sin\theta_i-h\omega_i^2\cos\theta_i and \rho_{2i}=v_i\omega_i\cos\theta_i-h\omega_i^2\sin\theta_i.
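Equation (8) can be checked by finite differences: for arbitrary smooth test signals v(t) and \omega(t), the numerical derivative of the first row of (7) should match \dot{v}_{xi} computed from \boldsymbol{H} and \rho_{1i}. The signals and parameter value below are assumptions chosen only for illustration.

```python
import numpy as np

# Finite-difference check of (8): the head acceleration equals
# H [v_dot, w_dot]^T plus the drift terms rho_1i, rho_2i. The test
# signals v(t), w(t) and the value of h are arbitrary assumptions.
h = 0.10
v = lambda t: 0.5 + 0.1 * t                 # linear velocity
w = lambda t: 0.3 * np.sin(t)               # angular velocity
theta = lambda t: 0.3 * (1 - np.cos(t))     # so that theta_dot = w
xh_dot = lambda t: v(t) * np.cos(theta(t)) - h * w(t) * np.sin(theta(t))

t0, eps = 1.2, 1e-6
num = (xh_dot(t0 + eps) - xh_dot(t0 - eps)) / (2 * eps)   # numerical xh_ddot

v_dot, w_dot = 0.1, 0.3 * np.cos(t0)        # analytic derivatives of v, w
th, vt, wt = theta(t0), v(t0), w(t0)
rho1 = -vt * wt * np.sin(th) - h * wt**2 * np.cos(th)
ana = v_dot * np.cos(th) - h * w_dot * np.sin(th) + rho1   # first row of (8)

assert abs(num - ana) < 1e-5
```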
Define [u_{xi}\ u_{yi}]^T=\boldsymbol{H}\tilde{\boldsymbol{M}}^{-1}(\boldsymbol{q}_i)\tilde{\boldsymbol{B}}(\boldsymbol{q}_i)\tau_i+[\rho_{1i}\ \rho_{2i}]^T. Then (8) takes the form
\begin{equation*} \left[\begin{array}{c} \ddot{x}_{hi} \\ \ddot{y}_{hi} \end{array} \right]=\left[\begin{array}{c} \dot{v}_{xi} \\ \dot{v}_{yi} \end{array} \right]=\left[\begin{array}{c} u_{xi} \\ u_{yi} \end{array} \right]. \end{equation*}
Considering the agent's uncertainties, the equations of motion of the agent can be described by
\begin{equation}\label{eq9} \left[\begin{array}{c} \dot{x}_{hi} \\ \dot{v}_{xi}\\ \dot{y}_{hi}\\ \dot{v}_{yi} \end{array} \right]=\left[\begin{array}{c} v_{xi} \\ u_{xi}+\delta_{xi}\\ v_{yi}\\ u_{yi}+\delta_{yi} \end{array} \right] \end{equation}
where \delta_{xi} and \delta_{yi} denote the uncertain terms.
This paper deals with formation maneuvers of multi-agent systems in the presence of uncertainties. It is justified to assume that the uncertainties are bounded by unknown constants, that is, |\delta_{xi}| \leq \delta_{xi}^* and |\delta_{yi}| \leq \delta_{yi}^*, where \delta_{xi}^*>0 and \delta_{yi}^*>0 are constant but unknown. For the technique of the nonlinear disturbance observer to be implementable, the designed observer should estimate \delta_{xi} and \delta_{yi} much faster than their rates of change. In this sense, both \delta_{xi} and \delta_{yi} are assumed to be slowly time-varying, that is, \dot{\delta}_{xi}\simeq 0 and \dot{\delta}_{yi}\simeq 0.
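To see why the uncertain terms matter, the following minimal sketch integrates the x-axis dynamics of (9) under a simple stabilizing feedback and an assumed bounded, slowly varying disturbance; the gains and disturbance are illustrative assumptions, not values from the paper. The uncompensated disturbance leaves a persistent position offset, which is exactly what an adaptive approximation of the uncertainty is meant to remove.

```python
import numpy as np

# Open-loop illustration of the uncertain x-axis dynamics in (9):
# x_h_dot = v_x, v_x_dot = u_x + delta_x, with an assumed bounded,
# slowly varying disturbance and a simple (assumed) stabilizing
# feedback that does not compensate the disturbance.
dt, T = 0.001, 5.0
x_h, v_x = 0.0, 0.0
for k in range(int(T / dt)):
    t = k * dt
    delta_x = 0.2 * np.sin(0.1 * t)        # |delta_x| <= 0.2, slowly varying
    u_x = -2.0 * x_h - 3.0 * v_x           # feedback without compensation
    v_x += (u_x + delta_x) * dt            # Euler step of v_x_dot
    x_h += v_x * dt                        # Euler step of x_h_dot

# x_h settles near delta_x / 2 instead of 0: a persistent offset.
```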
Recall the multi-agent system. Its formation maneuvers are leader-follower-based. In the leader-follower scheme, the sole leader agent takes responsibility for tracking a pre-defined trajectory while the follower agents keep tracking the leader. This scheme indicates that the leader does not need to receive any information from the followers. On the other hand, the followers need to receive information over communication links in order to form a desired formation. Ideal communication conditions are considered here, i.e., no communication delay and no packet loss.
The communication topology of the multi-agent system can be modelled via algebraic graph theory. Define a directed graph \mathcal{G}=(\mathcal{V},\ \mathcal{E}) composed of a vertex set \mathcal{V} and an edge set \mathcal{E}, where \mathcal{V}=\{\nu_1,\ \nu_2,\ \ldots,\ \nu_N\}, \mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}, and the node \nu_i denotes the ith agent (i=1,\ 2,\ \ldots,\ N). This paper investigates the directed graph \mathcal{G} of the multi-agent system. Assuming that \mathcal{G} has a spanning tree, the zero eigenvalue of its Laplacian matrix \mathcal{L} (defined below) is simple. Consider the ith agent, whose collection of neighbors is defined as \mathcal{N}_i=\{\nu_j\in\mathcal{V}:\ (\nu_i,\ \nu_j)\in \mathcal{E}\}. The ordered pair (\nu_i,\ \nu_j)\in\mathcal{E} means that the jth agent can send information to the ith agent, but not vice versa.
The weighted adjacency matrix \mathcal{A} of \mathcal{G} has a form of
\begin{equation}\label{eq10} \mathcal{A}=\left[\begin{array}{cccc} a_{11}&a_{12}&\cdots&a_{1N} \\ a_{21}&a_{22}&\cdots&a_{2N} \\ \vdots&\vdots&\ddots&\vdots \\ a_{N1}&a_{N2}&\cdots&a_{NN} \end{array} \right]\in \mathbb{R}^{N\times N} \end{equation}
where a_{ij} indicates the weight of the pair (\nu_i,\ \nu_j): a_{ij}=1 if (\nu_i,\ \nu_j)\in \mathcal{E}, a_{ij}=0 if (\nu_i,\ \nu_j)\notin \mathcal{E}, and a_{ii}=0.
The degree matrix of \mathcal{G} is a diagonal matrix, determined by \mathcal{D}={\rm{diag}}\{d_1,\ d_2,\ \ldots,\ d_N\}\in\mathbb{R}^{N\times N}. In the diagonal matrix, d_i is the in-degree of \nu_i, formulated by d_i=\sum_{j=1}^{N}a_{ij} (i=1,\ 2,\ \ldots,\ N). Accordingly, the Laplacian matrix of \mathcal{G} can be defined by \mathcal{L} = \mathcal{D}-\mathcal{A} \in\mathbb{R}^{N\times N}. As proven in [4], \mathcal{L} has at least one zero eigenvalue, and all its other eigenvalues are located in the open right-half plane if \mathcal{G} is connected.
Concerning \mathcal{L}, its zero eigenvalue is simple. For the zero eigenvalue, an eigenvector of \mathcal{L} is {{\mathbf{1}}_{N}}, that is, \mathcal{L}{{\mathbf{1}}_{N}}={{\mathbf{0}}_{N}} holds, where {{\mathbf{1}}_{N}}=[1,\ 1,\ \ldots,\ 1]^T \in\mathbb{R}^{N\times 1} and {{\mathbf{0}}_{N}}=[0,\ 0,\ \ldots,\ 0]^T\in\mathbb{R}^{N\times 1}. Further, {\rm {rank}}(\mathcal{L}) =N-1 because the zero eigenvalue is simple [4].
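These Laplacian properties are easy to check numerically. The sketch below builds \mathcal{L} for an assumed 4-agent directed topology containing a spanning tree (not a topology taken from the paper) and verifies \mathcal{L}\mathbf{1}_N=\mathbf{0}_N and \mathrm{rank}(\mathcal{L})=N-1.

```python
import numpy as np

# Laplacian of a small directed graph, illustrating L @ 1 = 0 and
# rank(L) = N - 1 when the graph has a spanning tree. The 4-agent
# topology below is an assumed example.
A = np.array([[0, 1, 0, 1],    # a_ij = 1 iff agent j sends to agent i
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])   # agent 4 receives nothing (it is the root)
D = np.diag(A.sum(axis=1))     # in-degree matrix
L = D - A

assert np.allclose(L @ np.ones(4), np.zeros(4))    # L 1_N = 0_N
assert np.linalg.matrix_rank(L) == 3               # N - 1
```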
Without loss of generality, the Nth agent in the multi-agent system is named leader and other N-1 agents are followers, that is, a_{Ni}=0\ (i=1, \ 2, \ \ldots, \ N) and the Laplacian matrix \mathcal{L} of \mathcal{G} can be written as
\begin{equation}\label{eq11} \mathcal{L}=\left[\begin{array}{ccccc} d_{1}&-a_{12}&\cdots&-a_{1(N-1)}&-a_{1N} \\ -a_{21}&d_{2}&\cdots&-a_{2(N-1)} &-a_{2N} \\ \vdots&\vdots&\ddots&\vdots&\vdots \\ -a_{(N-1)1}&-a_{(N-1)2}&\cdots&d_{N-1}&-a_{(N-1)N}\\ 0&0&\cdots&0&0 \end{array} \right]. \end{equation}
Further, the communication topology among all the followers can be described by a directed graph \overline{\mathcal{G}}. Apparently, \overline{\mathcal{G}} is a subgraph of \mathcal{G}. The weighted adjacency matrix \overline{\mathcal{A}}\ \in\ \mathbb{R}^{(N-1)\times(N-1)} of \overline{\mathcal{G}} is defined by
\begin{equation}\label{eq12} \overline{\mathcal{A}}=\left[\begin{array}{cccc} a_{11}&a_{12}&\cdots&a_{1(N-1)} \\ a_{21}&a_{22}&\cdots&a_{2(N-1)} \\ \vdots&\vdots&\ddots&\vdots \\ a_{(N-1)1}&a_{(N-1)2}&\cdots&a_{(N-1)(N-1)} \end{array} \right]. \end{equation}
The degree matrix of \overline{\mathcal{G}} is determined by \overline{\mathcal{D}}= {\rm {diag}}\{\bar{d}_1, \ \bar{d}_2, \ \ldots, \ \bar{d}_{N-1}\}, where \bar{d}_i=\sum_{j=1}^{N-1}a_{ij}\ (i=1, \ 2, \ \ldots, \ N-1). Accordingly, the Laplacian matrix of \overline{\mathcal{G}} can be defined as \overline{\mathcal{D}}-\overline{\mathcal{A}}, formulated by
\begin{equation}\label{eq13} \overline{\mathcal{L}}=\left[\begin{array}{cccc} \bar{d}_1&-a_{12}&\cdots&-a_{1(N-1)} \\ -a_{21}&\bar{d}_2&\cdots&-a_{2(N-1)} \\ \vdots&\vdots&\ddots&\vdots \\ -a_{(N-1)1}&-a_{(N-1)2}&\cdots&\bar{d}_{N-1} \end{array}\right]. \end{equation}
Similarly, \overline{\mathcal{L}}{{\mathbf{1}}_{N-1}}={{\mathbf{0}}_{N-1}} holds by construction, where {{\mathbf{1}}_{N-1}}=[1,\ 1,\ \ldots,\ 1]^T \in\mathbb{R}^{(N-1)\times 1} and {{\mathbf{0}}_{N-1}}=[0,\ 0,\ \ldots,\ 0]^T \in\mathbb{R}^{(N-1)\times 1}. Moreover, define a matrix \overline{\mathcal{B}}={\rm{diag}}\{\bar{b}_1,\ \bar{b}_2,\ \ldots,\ \bar{b}_{N-1}\}, where \bar{b}_i=a_{iN}\ (i=1,\ 2,\ \ldots,\ N-1). Under the spanning-tree assumption, it holds that {\rm{rank}}(\overline{\mathcal{L}} + \overline{\mathcal{B}})={\rm{rank}}(\mathcal{L}) = N -1.
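The rank identity can likewise be checked numerically. The sketch below reuses an assumed 4-agent topology in which agent 4 is the leader, builds \overline{\mathcal{L}} and \overline{\mathcal{B}} over the three followers, and confirms \mathrm{rank}(\overline{\mathcal{L}}+\overline{\mathcal{B}})=N-1.

```python
import numpy as np

# Follower-only subgraph quantities for an assumed 4-agent topology
# (agent 4 is the leader): L_bar = D_bar - A_bar over followers 1-3,
# and B_bar collects the leader links b_i = a_iN.
A = np.array([[0, 1, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 0]])
A_bar = A[:3, :3]                    # adjacency among followers
D_bar = np.diag(A_bar.sum(axis=1))   # follower in-degree matrix
L_bar = D_bar - A_bar
B_bar = np.diag(A[:3, 3])            # b_i = a_iN

assert np.linalg.matrix_rank(L_bar + B_bar) == 3   # N - 1
```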
The formation maneuvers in this paper are leader-follower-based. Concerning the leader's duty, its control problem is the tracking-control problem of a single robot, which can be well handled by existing techniques [9]. In the multi-agent system, the Nth agent has been named the leader and can be treated as nominal in the formation-control problem, that is, \delta_{xN}=\delta_{yN}=0. Accordingly, the other N-1 agents act as followers and are equipped with the designed formation controllers to achieve the formation maneuvers of the multi-agent system.
In order to concentrate on the formation-control design of the ith follower (i=1,\ 2,\ \ldots,\ N-1), recall its equations of motion (9). The equations in (9) are decoupled in the x-axis and y-axis. Consequently, the formation-control design can be divided into the design of the x-axis subsystem and that of the y-axis subsystem. The design of the x-axis subsystem is considered first. From (9), the x-axis subsystem with uncertainties can be written as
\begin{equation}\label{eq14} \left[\begin{array}{c} \dot{x}_{hi} \\ \dot{v}_{xi} \end{array} \right]=\left[\begin{array}{c} v_{xi} \\ u_{xi}+\delta_{xi} \end{array} \right] \end{equation}
which can be rewritten in the following state-space representation:
\begin{equation}\label{eq15} \dot{\boldsymbol{x}}_{xi}=\mathbb{A}_{xi}\boldsymbol{x}_{xi}+ \mathbb{B}_{xi}u_{xi}+ \mathbb{B}_{xi}\delta_{xi} \end{equation}
where {\boldsymbol{x}}_{xi}=[x_{hi}\ v_{xi}]^T, \mathbb{A}_{xi}=\left[\begin{array}{cc} 0 & 1 \\ 0 & 0 \end{array} \right], \mathbb{B}_{xi}=[0\ 1]^T and u_{xi} is the control action.
Consider the x-axis subsystem (15) and design its NDO-based observer (16) [25].
\begin{equation}\label{eq16} \left\{ \begin{aligned} \dot{p}_{xi} &= -\mathbb{L}_{xi}^T\mathbb{B}_{xi}p_{xi} - \mathbb{L}_{xi}^T(\mathbb{B}_{xi}\mathbb{L}_{xi}^T\boldsymbol{x}_{xi} + \mathbb{A}_{xi}\boldsymbol{x}_{xi} + \mathbb{B}_{xi}u_{xi})\\ \hat{\delta}_{xi} &= p_{xi} + \mathbb{L}_{xi}^T\boldsymbol{x}_{xi} \end{aligned} \right. \end{equation}
where {p}_{xi} is the internal state variable of the observer, {{\hat{\delta}}}_{xi} is the approximation of {{\delta}}_{xi} and the gain vector {\mathbb{L}}_{xi} \in{\mathbb{R}^{2 \times 1}} is designed such that the constant \lambda_{xi}= {{\mathbb{L}}_{xi}^T}{{\mathbb{B}}_{xi}} is positive.
Define the estimation-error variable e_{{xi}_d} = \delta_{xi} - \hat{\delta}_{xi}. It is assumed that |e_{{xi}_d}| < e_{{xi}_d}^*, where e_{{xi}_d}^*>0 is constant but unknown. Differentiating the error variable with respect to time t and substituting (16) into the derivative of e_{{xi}_d} gives (17).
\begin{equation}\label{eq17} \begin{split} \dot{e}_{{xi}_d} &= \dot{\delta}_{xi} - \dot{\hat{\delta}}_{xi} \simeq -\dot{p}_{xi} - \mathbb{L}_{xi}^T\dot{\boldsymbol{x}}_{xi}\\ &= \lambda_{xi}p_{xi} + \mathbb{L}_{xi}^T(\mathbb{B}_{xi}\mathbb{L}_{xi}^T\boldsymbol{x}_{xi} + \mathbb{A}_{xi}\boldsymbol{x}_{xi} + \mathbb{B}_{xi}u_{xi})\\ &\quad - \mathbb{L}_{xi}^T(\mathbb{A}_{xi}\boldsymbol{x}_{xi} + \mathbb{B}_{xi}u_{xi} + \mathbb{B}_{xi}\delta_{xi})\\ &= \lambda_{xi}(\hat{\delta}_{xi} - \mathbb{L}_{xi}^T\boldsymbol{x}_{xi}) + \mathbb{L}_{xi}^T\mathbb{B}_{xi}\mathbb{L}_{xi}^T\boldsymbol{x}_{xi} - \mathbb{L}_{xi}^T\mathbb{B}_{xi}\delta_{xi}\\ &= \lambda_{xi}(\hat{\delta}_{xi} - \delta_{xi}) = -\lambda_{xi}e_{{xi}_d}. \end{split} \end{equation}
The solution of (17) is e_{{xi}_d} = \exp(-\lambda_{xi} t)e_{{xi}_d}(0), where e_{{xi}_d}(0) is the initial condition at t=0. Owing to \lambda_{xi}>0, the estimation-error variable e_{{xi}_d} converges exponentially to 0 as t\to\infty.
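A minimal simulation of the observer (16) on the x-axis subsystem, with an assumed gain vector and an assumed slowly varying disturbance, illustrates this exponential convergence.

```python
import numpy as np

# Sketch of the NDO (16) on the x-axis subsystem (15), with assumed
# gains; the estimation error decays roughly as exp(-lambda t) for a
# slowly varying disturbance. All numbers are illustrative.
dt, T = 0.001, 4.0
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([0.0, 1.0])
L_gain = np.array([0.0, 5.0])          # chosen so lambda = L^T B = 5 > 0
lam = L_gain @ B

x = np.array([0.0, 0.0])               # subsystem state [x_h, v_x]
p = 0.0                                # observer internal state
for k in range(int(T / dt)):
    t = k * dt
    delta = 0.3 + 0.05 * np.sin(0.2 * t)   # slowly varying disturbance
    u = -x[0] - 2.0 * x[1]                 # any stabilizing input
    # Observer dynamics, first line of (16):
    p_dot = -lam * p - L_gain @ (B * (L_gain @ x) + A @ x + B * u)
    p += p_dot * dt
    x += (A @ x + B * (u + delta)) * dt    # true plant with disturbance
delta_hat = p + L_gain @ x                 # observer output, (16)

assert abs(delta - delta_hat) < 0.05       # estimation error nearly gone
```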
The formation maneuvers of the multi-robot system need to achieve a designated formation pattern with velocity consensus, where the agents have to transmit information among local neighbors according to a designated communication topology. Therefore, the error function is defined as
\begin{equation}\label{eq18} \begin{split} e_{xi}=&\sum\limits_{j=1}^{N-1} a_{ij}\left[(x_{hi}-x_{hj}-d_{ij}^x) + \rho_{xi}(v_{xi}-v_{xj})\right]\\ &+ \bar{b}_i(x_{hi} - x_{hN} - d_{iN}^x) + \bar{b}_i\rho_{xi}(v_{xi} - v_{xN}) \end{split} \end{equation}
where \rho_{xi}>0 is a pre-defined constant, d_{ij}^x is the pre-defined relative position between the ith follower and the jth follower and d_{iN}^x is the pre-defined relative position between the ith follower and the leader.
Differentiating e_{xi} in (18) with respect to time t and substituting the x-axis subsystem (14) into the derivative of e_{xi} yields
\begin{equation}\label{eq19} \begin{split} \dot{e}_{xi}=&\sum\limits_{j = 1}^{N-1} a_{ij}\left[(v_{xi}-v_{xj})+\rho_{xi}(u_{xi}-u_{xj})+\rho_{xi}(\delta_{xi}-\delta_{xj})\right]\\ &+ \bar{b}_i\left[(v_{xi}-v_{xN}) + \rho_{xi}(u_{xi}- u_{xN}) + \rho_{xi}\delta_{xi}\right]. \end{split} \end{equation}
Differentiating \dot{e}_{xi} in (19) once more with respect to time t and substituting the x-axis subsystem (14) into the second derivative of e_{xi} yields
\begin{equation}\label{eq20} \begin{split} \ddot{e}_{xi}=&\sum\limits_{j = 1}^{N - 1} a_{ij}\Big[(u_{xi}-u_{xj}) + (\delta_{xi}-\delta_{xj}) + \rho_{xi}(\dot{u}_{xi}-\dot{u}_{xj})\\ &\qquad\quad + \rho_{xi}(\dot{\delta}_{xi}-\dot{\delta}_{xj})\Big]\\ &+ \bar{b}_i\left[u_{xi}-u_{xN} + \delta_{xi} + \rho_{xi}(\dot{u}_{xi}-\dot{u}_{xN}) + \rho_{xi}\dot{\delta}_{xi}\right]. \end{split} \end{equation}
With regard to the x-axis subsystem (14), a sliding surface with the output of the NDO-based observer (16) is defined as
\begin{equation}\label{eq21} s_{xi} = \dot{e}_{xi} + c_{xi}e_{xi} + \hat{\delta}_{xi} \end{equation}
where c_{xi}>0 is constant.
Differentiating the sliding-surface variable with respect to time t gives
\begin{equation}\label{eq22} \dot{s}_{xi} = \ddot{e}_{xi} + c_{xi}\dot{e}_{xi} + \dot{\hat{\delta}}_{xi}. \end{equation}
Substituting (19) and (20) into (22) yields
\begin{equation}\label{eq23} \begin{split} \dot{s}_{xi}=&\sum\limits_{j = 1}^{N - 1} a_{ij}\Big[(u_{xi}-u_{xj}) + (\delta_{xi}-\delta_{xj}) + \rho_{xi}(\dot{u}_{xi}-\dot{u}_{xj}) + \rho_{xi}(\dot{\delta}_{xi}-\dot{\delta}_{xj})\Big]\\ &+ \bar{b}_i\left[u_{xi}-u_{xN} + \delta_{xi} + \rho_{xi}(\dot{u}_{xi}-\dot{u}_{xN}) + \rho_{xi}\dot{\delta}_{xi}\right]\\ &+ c_{xi}\sum\limits_{j=1}^{N-1} a_{ij}\Big[(v_{xi}-v_{xj}) + \rho_{xi}(u_{xi}-u_{xj}) + \rho_{xi}(\delta_{xi}-\delta_{xj})\Big]\\ &+ \bar{b}_ic_{xi}\Big[(v_{xi}-v_{xN}) + \rho_{xi}(u_{xi}-u_{xN}) + \rho_{xi}\delta_{xi}\Big] + \dot{\hat{\delta}}_{xi}. \end{split} \end{equation}
Design the following formation-control law for the x-axis subsystem of the ith follower.
\begin{equation}\label{eq24} \begin{split} \frac{\rho_{xi}}{c_{xi}\rho_{xi}+1}\dot{u}_{xi}+u_{xi} =&\ \frac{1}{\bar{d}_i+\bar{b}_i}\sum\limits_{j=1}^{N-1}a_{ij}u_{xj} + \frac{\rho_{xi}}{(c_{xi}\rho_{xi}+1)(\bar{d}_i+\bar{b}_i)}\sum\limits_{j=1}^{N-1}a_{ij}\dot{u}_{xj}\\ &- \frac{c_{xi}}{(c_{xi}\rho_{xi}+1)(\bar{d}_i+\bar{b}_i)}\sum\limits_{j=1}^{N-1}a_{ij}(v_{xi}-v_{xj})\\ &- \frac{\bar{b}_ic_{xi}}{(c_{xi}\rho_{xi}+1)(\bar{d}_i+\bar{b}_i)}(v_{xi}-v_{xN})\\ &+ \frac{\bar{b}_i}{\bar{d}_i+\bar{b}_i}u_{xN} + \frac{\bar{b}_i\rho_{xi}}{(c_{xi}\rho_{xi}+1)(\bar{d}_i+\bar{b}_i)}\dot{u}_{xN}\\ &+ \frac{1}{\bar{d}_i+\bar{b}_i}\sum\limits_{j=1}^{N-1}a_{ij}\hat{\delta}_{xj} - \hat{\delta}_{xi}\\ &- \frac{\kappa_{xi}}{(c_{xi}\rho_{xi}+1)(\bar{d}_i+\bar{b}_i)}\operatorname{sgn}(s_{xi}) \end{split} \end{equation}
where \kappa_{xi}>0 is a predefined parameter and \operatorname{sgn}(\cdot) is the sign function. In (24), the control signal u_{xi} is determined by a first-order differential equation with zero initial condition. Further, the control signals of the other agents also contribute to u_{xi}; they are obtained through the given communication topology.
Substituting (24) into (23) and re-arranging {\dot{s}_{xi}} in (23) gives
\begin{equation}\label{eq25} \begin{split} \dot{s}_{xi}=&\ (\bar{d}_i + \bar{b}_i)(c_{xi}\rho_{xi} + 1)e_{{xi}_d}\\ &- (c_{xi}\rho_{xi} + 1)\sum\limits_{j = 1}^{N-1} a_{ij}e_{{xj}_d} - \kappa_{xi}\operatorname{sgn}(s_{xi}) + \dot{\hat{\delta}}_{xi}. \end{split} \end{equation}
From (17), {{\dot{\hat{\delta}}}_{xi}} = \lambda_{xi}{e_{xi_d}} can be obtained. Replacing \dot{\hat{\delta}}_{xi} with \lambda_{xi}{e_{xi_d}} in (25) yields
\begin{equation}\label{eq26} \begin{split} \dot{s}_{xi} =&\ (\bar{d}_i + \bar{b}_i)(c_{xi}\rho_{xi} + 1)e_{{xi}_d} - (c_{xi}\rho_{xi} + 1)\sum\limits_{j = 1}^{N-1} a_{ij}e_{{xi}_d}\\ &- \kappa_{xi}\operatorname{sgn}(s_{xi}) + \lambda_{xi}e_{{xi}_d}. \end{split} \end{equation}
Theorem 1: For the ith follower agent, consider its x-axis subsystem (14), design the NDO-based observer (16), define the sliding-mode surface (21) and utilize the SMC-based control law (24). The closed-loop control system of the x-axis subsystem is asymptotically stable if \kappa_{xi} > [({c_{xi}}{\rho _{xi}}+ 1){\bar b_i} + {\lambda _{xi}}]e_{{xi}_d}^*.
Proof: Choose the Lyapunov candidate function
\begin{equation}\label{eq27} V = \frac{1}{2}s_{xi}^2. \end{equation}
Differentiating V in (27) with respect to time t gives \dot{V} = s_{xi}\dot{s}_{xi}. Replacing \dot{s}_{xi} with (26), the derivative of V takes the form
\begin{equation}\label{eq28} \begin{split} \dot{V}=&\ s_{xi}\Big[(\bar{d}_i + \bar{b}_i)(c_{xi}\rho_{xi} + 1)e_{{xi}_d} - (c_{xi}\rho_{xi} + 1)\sum\limits_{j = 1}^{N-1} a_{ij}e_{{xi}_d}\\ &\quad - \kappa_{xi}\operatorname{sgn}(s_{xi}) + \lambda_{xi}e_{{xi}_d}\Big]\\ \leq&\ -\kappa_{xi}|s_{xi}| + \Big[(\bar{d}_i + \bar{b}_i)(c_{xi}\rho_{xi} + 1) - (c_{xi}\rho_{xi} + 1)\sum\limits_{j = 1}^{N-1} a_{ij} + \lambda_{xi}\Big]e_{{xi}_d}^*|s_{xi}|\\ =&\ \big\{-\kappa_{xi} + [(c_{xi}\rho_{xi} + 1)\bar{b}_i + \lambda_{xi}]e_{{xi}_d}^*\big\}|s_{xi}|. \end{split} \end{equation}
Selecting \kappa_{xi} > [(c_{xi}\rho_{xi}+1)\bar{b}_i + \lambda_{xi}]e_{{xi}_d}^* as in Theorem 1 guarantees \dot{V} < 0 for s_{xi}\neq 0. Since V\geq 0, the closed-loop control system of the x-axis subsystem is asymptotically stable in the sense of Lyapunov.
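As a sanity check on the overall design, the sketch below works out the simplest special case: one follower (i=1) and the leader (N=2), so \bar{d}_1=0 and \bar{b}_1=1, with a constant-velocity leader (u_{xN}=\dot{u}_{xN}=0). Under these assumptions the control law (24) reduces to a scalar first-order ODE for u_{xi}, run together with the NDO (16). All gains, the offset and the disturbance are illustrative assumptions; this is not the paper's simulation study.

```python
import numpy as np

# Single-follower special case of the x-axis design: with d_bar = 0,
# b_bar = 1 and u_xN = u_xN_dot = 0, (24) reduces to
#   u_dot = -((c*rho+1)/rho)*(u + delta_hat)
#           - (c/rho)*(v - vN) - (kappa/rho)*sign(s).
dt, T = 0.0005, 10.0
c, rho, kappa, lam = 2.0, 0.5, 2.0, 5.0     # assumed gains
d_off = -1.0                                # assumed desired offset d_1N^x
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([0.0, 1.0])
L_gain = np.array([0.0, lam])               # gain vector, L^T B = lam > 0

x, v = 0.0, 0.0                             # follower head position, velocity
xN, vN = 0.0, 0.3                           # leader state (constant velocity)
u, p = 0.0, 0.0                             # control signal, NDO state
for k in range(int(T / dt)):
    t = k * dt
    delta = 0.25 + 0.05 * np.sin(0.2 * t)   # assumed slow disturbance
    state = np.array([x, v])
    delta_hat = p + L_gain @ state          # NDO output, (16)
    e = (x - xN - d_off) + rho * (v - vN)   # error function (18) for N = 2
    e_dot = (v - vN) + rho * (u + delta)    # its derivative, (19)
    s = e_dot + c * e + delta_hat           # sliding surface (21)
    # Reduced control law (24):
    u_dot = (-(c * rho + 1.0) / rho * (u + delta_hat)
             - (c / rho) * (v - vN) - (kappa / rho) * np.sign(s))
    # NDO internal dynamics, first line of (16):
    p_dot = -lam * p - L_gain @ (B * (L_gain @ state) + A @ state + B * u)
    # Euler integration of follower, leader, controller and observer:
    x += v * dt
    v += (u + delta) * dt
    xN += vN * dt
    u += u_dot * dt
    p += p_dot * dt

assert abs(s) < 0.05                        # sliding variable has converged
assert abs(v - vN) < 0.1                    # near velocity consensus
```

Note that \kappa here satisfies the condition of Theorem 1 by a wide margin, since the NDO error bound e_{{xi}_d}^* becomes very small after the observer transient.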
The closed-loop stability of the x-axis subsystem can be extended to the multi-agent system. For the ith follower agent as a whole, define {{\boldsymbol{z}}_i} = {[{{x_{hi}}}\ {{y_{hi}}}]^T}\in \mathbb{R}^{2\times 1} so that the augmented vector of {{\boldsymbol{z}}_i} in the multi-agent system can be written as {\boldsymbol{z}} = {[{{\boldsymbol{z}}_1^T}\ {{\boldsymbol{z}}_2^T}\ {\ldots}\ {{\boldsymbol{z}}_{N-1}^T}]^T}\in \mathbb{R}^{2(N-1)\times 1}.
Similarly, define the vectors {{\boldsymbol{v}}_i} = {[{{v_{xi}}}\ {{v_{yi}}}]^T}, {{\boldsymbol{u}}_i} = {[{{u_{xi}}}\ {{u_{yi}}}]^T}, {{\mathbf{\Delta }}_{i}}= {[{{\delta_{xi}}}\ {{\delta_{yi}}}]^T}, {\hat{\bf{\Delta }}_i}= {[{{\hat{\delta}}_{xi}}\ {{\hat{\delta }}_{yi}}]^{T}}, {{\boldsymbol{e}}_i} = {[{{e_{xi}}}\ {{e_{yi}}}]^T}, {{\boldsymbol{e}}_{i_d}} = {[{{e_{{xi}_d}}}\ {{e_{{yi}_d}}}]^T}, {{\boldsymbol{d}}_{ij}} = {[{d_{ij}^x}\ {d_{ij}^y}]^T}, {{\boldsymbol{d}}_{iN}} = {[{d_{iN}^x}\ {d_{iN}^y}]^T} and {{\boldsymbol{s}}_i} = {[{{s_{xi}}}\ {{s_{yi}}}]^T}. Here {{\boldsymbol{v}}_i}, {{\boldsymbol{u}}_i}, {{\mathbf{\Delta }}_{i}}, {\hat{\bf{\Delta }}_i}, {{\boldsymbol{e}}_i}, {{\boldsymbol{e}}_{i_d}}, {\boldsymbol{d}_{ij}}, {\boldsymbol{d}}_{iN} and {{\boldsymbol{s}}_i} \in \mathbb{R}^{2\times 1}. Correspondingly, their augmented vectors are determined by {\boldsymbol{v}} = {[{{\boldsymbol{v}}_1^T}\ {{\boldsymbol{v}}_2^T}\ {\ldots}\ {{\boldsymbol{v}}_{N-1}^T}]^T}, {\boldsymbol{u}} = {[{{\boldsymbol{u}}_1^T}\ {{\boldsymbol{u}}_2^T}\ {\ldots}\ {{\boldsymbol{u}}_{N-1}^T}]^T}, \mathbf{\Delta }={{[\mathbf{\Delta }_{1}^{T}\ \mathbf{\Delta }_{2}^{T}\ \ldots \ \mathbf{\Delta }_{N-1}^{T}]}^{T}}, \hat{\bf{\Delta }} = {[{\hat{\bf{\Delta }}_1^T}\ {\hat{\bf{\Delta }}_2^T}\ {\ldots}\ {\hat{\bf{\Delta }}_{N-1}^T}]^T}, {\boldsymbol{e} }= {[{{\boldsymbol{e}}_1^T}\ {{\boldsymbol{e}}_2^T}\ \ldots\ {{\boldsymbol{e}}_{N-1}^T}]^T}, {\boldsymbol{e}}_{d} = {[{{\boldsymbol{e}}_{1_d}^T}\ {{\boldsymbol{e}}_{2_d}^T}\ {\ldots}\ {{\boldsymbol{e}}_{{(N-1)}_d}^T}]^T}, {\boldsymbol{d}}_i = {[{{\boldsymbol{d}}_{i1}^T}\ {{\boldsymbol{d}}_{i2}^T}\ {\ldots}\ {{\boldsymbol{d}}_{i(N-1)}^T}]^T}, {{\boldsymbol{d}}_N} = {[{{\boldsymbol{d}}_{1N}^T}\ {{\boldsymbol{d}}_{2N}^T}\ {\ldots}\ {{\boldsymbol{d}}_{(N-1)N}^T}]^T} and {\boldsymbol{s}} = {[{{\boldsymbol{s}}_1^T}\ {{\boldsymbol{s}}_2^T}\ {\ldots}\ {{\boldsymbol{s}}_{N-1}^T}]^T}.
Here {\boldsymbol{v}}, {\boldsymbol{u}}, \bf{\Delta }, \hat{\bf{\Delta }}, \boldsymbol{e}, {\boldsymbol{e}}_{d}, {\boldsymbol{d}}_i, {{\boldsymbol{d}}_N} and {\boldsymbol{s}} \in \mathbb{R}^{2(N-1)\times 1}.
For the Nth leader agent, the following augmented vectors can be defined, that is, {{\boldsymbol{z}}_N} = {[{{x_{hN}}}\ y_{hN}]^T}, {{\boldsymbol{u}}_N} = {[{{u_{xN}}}\ u_{yN}]^T} and {{\boldsymbol{v}}_N} = {[{{v_{xN}}}\ v_{yN}]^T}. Here {\boldsymbol{z}}_N, {\boldsymbol{u}}_N and {\boldsymbol{v}}_N \in \mathbb{R}^{2\times 1}.
Further, define the following diagonal matrices
\boldsymbol{\Upsilon} = \operatorname{diag}\left\{ {{\rho _{x1}}, \ {\rho _{y1}}, \ \ldots, \ {\rho _{x(N-1)}}, \ {\rho_{y(N-1)}}} \right\}, |
\boldsymbol{ c} = \operatorname{diag}\left\{ {{c_{x1}}, \ {c_{y1}}, \ \ldots, \ {c_{x(N-1)}}, \ {c_{y(N-1)}}} \right\}, |
\boldsymbol{\Lambda}=\operatorname{diag}\left\{{\lambda _{x1}}, \ {\lambda _{y1}}, \ \ldots, \ {\lambda _{x(N - 1)}}, \ {\lambda _{y(N - 1)}}\right\}, |
\boldsymbol{\kappa} =\operatorname{diag}\{ {\kappa_{x1}}, \ {\kappa_{y1}}, \ \ldots, \ {\kappa_{x(N- 1)}}, \ {\kappa_{y(N- 1)}}\} |
where \boldsymbol{\Upsilon}, \boldsymbol{c}, \boldsymbol{\Lambda} and \boldsymbol{\kappa} \in \mathbb{R}^{2(N-1)\times 2(N-1)}.
The augmented tracking-error vector {\boldsymbol{e}} can be written as
\begin{equation} \begin{split} {\boldsymbol{e}}=&[({\overline {\mathcal{L}} + {\overline {\mathcal{B}}}}) \otimes {{\boldsymbol{I}}_2}]({\boldsymbol{z}} - {{\boldsymbol{d}}_i}) + \Upsilon [({\overline {\mathcal{L}}} + {\overline{\mathcal{B}}}) \otimes{{\boldsymbol{I}}_2}]{\boldsymbol{v}}\\ & - ({\overline {\mathcal{B}}} \otimes {{\boldsymbol{I}}_2})({{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{z}}_N} - {{\boldsymbol{d}}_N}) - \Upsilon ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{v}}_N}\\ \end{split} \end{equation} | (29) |
where {\boldsymbol{I}}_2 is a 2\times 2 identity matrix and \otimes means the Kronecker product.
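As a sketch of this Kronecker structure, the snippet below uses a hypothetical two-follower topology (not the paper's example) to show how (\overline{\mathcal{L}}+\overline{\mathcal{B}})\otimes {\boldsymbol{I}}_2 applies the same follower coupling to the stacked x- and y-channels:

```python
import numpy as np

# Illustrative check of the Kronecker structure in (29): the same topology
# matrix acts on both coordinates of each agent. Matrices are hypothetical.
L_bar = np.array([[1.0, -1.0], [-1.0, 1.0]])   # follower Laplacian (N-1 = 2)
B_bar = np.diag([1.0, 0.0])                    # leader-adjacency diagonal
I2 = np.eye(2)

M = np.kron(L_bar + B_bar, I2)                 # shape 2(N-1) x 2(N-1)
print(M.shape)                                 # (4, 4)

z = np.array([1.0, 2.0, 3.0, 4.0])             # stacked [x1 y1 x2 y2]
print(M @ z)                                   # couples x with x, y with y
```

Each 2x2 block of M is a scalar entry of \overline{\mathcal{L}}+\overline{\mathcal{B}} times {\boldsymbol{I}}_2, so the x- and y-subsystems never mix.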
Differentiating {\boldsymbol{e}} in (29) with respect to time t gives
\begin{equation*} \begin{split} \dot{{\boldsymbol{e}}}=&[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}]\boldsymbol{v} + \Upsilon [({\overline {\mathcal{L}}} + {\overline {\mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}]{\boldsymbol{u}}\\ & + \Upsilon [({\overline {\mathcal{L}}} + {\overline {\mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}]\Delta - ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{v}}_N}\\ & - \Upsilon ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N-1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{u}}_N}. \end{split} \end{equation*} | (30) |
Considering the properties of uncertainties in (9), we have \dot{\Delta}\simeq{{\mathbf{0}}_{2(N-1)}}. Here {{\mathbf{0}}_{2(N-1)}}\in \mathbb{R}^{2(N-1)\times1} is a zero vector. Further, the second derivative of {\boldsymbol{e}} with respect to time t has the form of
\begin{equation*} ~~~~~~~~~\begin{array}{lllllllllllll} \ddot{{\boldsymbol{e}}}=\!\!\!\!\!&\left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]{\boldsymbol{u}} + \left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]\Delta\\ &+\Upsilon \left[({\overline {\mathcal{L}}} + {\overline {\mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]\dot{{\boldsymbol{u}}} - ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{u}}_N} \\ &- \Upsilon ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N-1}} \otimes {{\boldsymbol{I}}_2}){\dot{{\boldsymbol{u}}}_N}. \end{array} \end{equation*} | (31) |
The augmented sliding-surface vector is formulated by
\begin{equation} {\boldsymbol{s}}=\dot{{\boldsymbol{e}}} + {\boldsymbol{c}}{\boldsymbol{e}} + \hat \Delta. \end{equation} | (32) |
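The sliding surface (32) can be evaluated directly from the stacked vectors. The snippet below uses a hypothetical two-follower (N-1 = 2) example; only the gain value 9 echoes the later simulation, while the error and estimate values are illustrative:

```python
import numpy as np

# Illustrative evaluation of the augmented sliding surface (32):
# s = e_dot + c e + Delta_hat. All vectors are hypothetical.
e     = np.array([0.10, -0.05, 0.02, 0.04])    # stacked errors [x1 y1 x2 y2]
e_dot = np.array([0.01, 0.00, -0.02, 0.01])    # stacked error derivatives
c     = np.diag([9.0, 9.0, 9.0, 9.0])          # surface gains, as in Sec. V
D_hat = np.array([0.004, -0.003, 0.001, 0.0])  # NDO estimates of uncertainty

s = e_dot + c @ e + D_hat
print(s)
```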
Differentiating {\boldsymbol{s}} with respect to time t yields
\begin{equation*} \begin{array}{llllllllllllll} \dot{{\boldsymbol{s}}}=\!\!\!\!&\ddot{{\boldsymbol{e}}} + {\boldsymbol{c}}\dot{{\boldsymbol{e}}} + \dot{\hat \Delta}\\ ~=\!\!\!\!\!\!&\left[({\overline{\mathcal{ L}}}\! +\! {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]{\boldsymbol{u}}\!\! +\!\!\left[({\overline{\mathcal{ L}}}\!\! +\!\! {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]\Delta\!+\!\Upsilon \left[({\overline {\mathcal{L}}}\! +\! {\overline {\mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]\dot{{\boldsymbol{u}}}\\ & - ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{u}}_N} - \Upsilon ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N-1}} \otimes {{\boldsymbol{I}}_2}){\dot{{\boldsymbol{u}}}_N}\\ &+{\boldsymbol{c}}\left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]{\boldsymbol{v}} +{\boldsymbol{c}} \Upsilon \left[({\overline {\mathcal{L}}} + {\overline {\mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]{\boldsymbol{u}}\\ & + {\boldsymbol{c}}\Upsilon \left[({\overline {\mathcal{L}}} + {\overline {\mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]\Delta - {\boldsymbol{c}}({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{v}}_N}\\ & - {\boldsymbol{c}}\Upsilon ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N-1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{u}}_N}+ \dot{\hat \Delta}. \end{array} \end{equation*} | (33) |
Design the following control law:
\begin{equation*} \begin{array}{lllllllll} {\boldsymbol{u}} =\!\!\!\!&-\left[{{\boldsymbol{I}}_{2(N-1)}}+{\boldsymbol{c}} \Upsilon\right]^{-1}\left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]^{-1}\\ &\times\Big\{\Upsilon \left[({\overline {\mathcal{L}}} + {\overline {\mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]\dot{{\boldsymbol{u}}}+{\boldsymbol{c}}\left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]{\boldsymbol{v}}\\ &~~+\left[{{\boldsymbol{I}}_{2(N-1)}}+{\boldsymbol{c}} \Upsilon\right]\left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]\hat\Delta\\ &~~- {\boldsymbol{c}}({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{v}}_N}- \Upsilon ({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N-1}} \otimes {{\boldsymbol{I}}_2}){\dot{{\boldsymbol{u}}}_N}\\ &~~-\left[{\boldsymbol{c}}\Upsilon\!+\!{\boldsymbol{I}}_{2{(N-1)}}\right]({\overline {\mathcal{B}}}{{\boldsymbol{1}}_{N - 1}} \otimes {{\boldsymbol{I}}_2}){{\boldsymbol{u}}_N}\!+\!\kappa \operatorname{sgn}({\boldsymbol{s}})\Big\} \end{array} \end{equation*} | (34) |
where \operatorname{sgn}({\boldsymbol{s}}) = {[\operatorname{sgn}({{\boldsymbol{s}}_1^T})\ \operatorname{sgn}({{\boldsymbol{s}}_2^T})\ {\ldots}\ \operatorname{sgn}({{\boldsymbol{s}}_{N-1}^T})]^T}.
Substituting (34) into (33) gives
\begin{equation} \begin{split} \dot{{\boldsymbol{s}}}=&\left[{{\boldsymbol{I}}_{2(N-1)}}\!+\!{\boldsymbol{c}} \Upsilon\right]\left[({\overline{\mathcal{ L}}}\! +\! {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right](\Delta - \hat \Delta)-\kappa \operatorname{sgn}({\boldsymbol{s}})\!+\! \dot{\hat \Delta}.\\ \end{split} \end{equation} | (35) |
Theorem 2: Consider the multi-agent system and suppose that its communication graph contains a directed spanning tree. The stability of the leader-follower-based formation control is guaranteed if the controller parameters of each follower agent are designed according to Theorem 1.
Proof: Define a Lyapunov candidate function
\begin{equation} V'(t) = {\left\| {{{\boldsymbol{s}}}} \right\|_2} \end{equation} | (36) |
where \left\| \cdot \right\|_2 means 2-norm.
Differentiating V'(t) in (36) with respect to time t gives
\begin{equation} {\dot {V}'}(t) = \frac{{{\boldsymbol{s}}^T{{{\boldsymbol{\dot s}}}}}}{{{{\left\| {{{{\boldsymbol{s}}}}} \right\|}_2}}}. \end{equation} | (37) |
Replacing \dot{\boldsymbol{s}} in (37) with (35) yields
\begin{equation} \begin{split} {\dot {V}'}(t) =& \dfrac{{{{\boldsymbol{s}}}^T}}{{{{\left\| {{{{\boldsymbol{s}}}}} \right\|}_2}}}\Big\{\left[{{\boldsymbol{I}}_{2(N-1)}}+{\boldsymbol{c}} \Upsilon\right]\left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right](\Delta - \hat \Delta)\\ &-\kappa \operatorname{sgn}({\boldsymbol{s}})+ \dot{\hat \Delta}\Big\}. \end{split} \end{equation} | (38) |
Further, \dot{\hat \Delta}=\Lambda {\boldsymbol{e}}_{d} holds. Then, {\dot {V}'}(t) in (38) takes the form of
\begin{equation*} \begin{array}{llllllllll} {\dot {V}'}(t) =\!\!\!\!& \dfrac{{{{\boldsymbol{s}}}^T}}{{{{\left\| {{{{\boldsymbol{s}}}}} \right\|}_2}}}\cdot\Big\{\left[{{\boldsymbol{I}}_{2(N-1)}}+{\boldsymbol{c}} \Upsilon\right]\left[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}\right]{\boldsymbol{e}}_{d}\\ &-\kappa \operatorname{sgn}({\boldsymbol{s}})+ \Lambda {\boldsymbol{e}}_{d}\Big\}\\ ~~~~~~~\leq\!\!\!\!\!&-\|\kappa\|_1\dfrac{{{\boldsymbol{s}}}^T\operatorname{sgn} ({\boldsymbol{s}})}{{\| {\boldsymbol{s}} \|}_2}+\|\Lambda\|_1\dfrac{{{\boldsymbol{s}}^T{{\mathbf{e}}_{d}}}}{{\|{\boldsymbol{s} }\|}_2}\\ &+\left\|{{\boldsymbol{I}}_{2(N-1)}}+{\boldsymbol{c}} \Upsilon\right\|_1\dfrac{{{\boldsymbol{s}}}^T[({\overline{\mathcal{ L}}} + {\overline{ \mathcal{B}}}) \otimes {{\boldsymbol{I}}_2}]{\boldsymbol{e}}_{d}}{{\| {\boldsymbol{s}} \|}_2}. \end{array} \end{equation*} | (39) |
Note that a_{ii}=0\ (i=1, \ \ldots, \ N-1) in (12) such that [(\overline{\mathcal{L}} + \overline{\mathcal{B}}) \otimes {{\boldsymbol{I}}_2}]{\boldsymbol{1}}_{2(N-1)}=[\bar{b}_1\ \bar{b}_1\ \ldots\ \bar{b}_{N-1}\ \bar{b}_{N-1}]^T\in \mathbb{R}^{2(N-1)}. Let b^*=\max\{\bar{b}_1, \ \ldots, \ \bar{b}_{N-1}\}=\|\mathcal{B}\|_\infty. Subsequently, (39) can be re-arranged by
\begin{equation*} ~~~~~~~\begin{array}{llllllllllll} {\dot {V}'}\leq&\!\!\!-\|\kappa\|_1\dfrac{{\boldsymbol{s}}^T\operatorname{sgn} ({\boldsymbol{s}})}{{\| {\boldsymbol{s}} \|}_2}+\|\Lambda\|_1\dfrac{{{\boldsymbol{s}}^T{{\mathbf{1}}_{2(N-1)}}}}{{\| {\boldsymbol{s}} \|}_2}e_d^*\\ &\!\!\!+\left\|{{\boldsymbol{I}}_{2(N-1)}}+{\boldsymbol{c}} \Upsilon\right\|_1\dfrac{{\boldsymbol{s}}^T{{\mathbf{1}}_{2(N-1)}}}{{\|{ \boldsymbol{s}} \|}_2}b^*e_d^*\\ ~~~\leq&\!\!\! -\|\kappa\|_1 \dfrac{\|{\boldsymbol{s}}\|_1}{{\| {\boldsymbol{s}} \|}_2}+\|\Lambda\|_1 \dfrac{\|{\boldsymbol{s}}\|_1}{{\|{ \boldsymbol{s}} \|}_2}e_d^*\\ &\!\!\!+\left\|{{\boldsymbol{I}}_{2(N-1)}}+{\boldsymbol{c}} \Upsilon\right\|_1\dfrac{\|{\boldsymbol{s}}\|_1}{{\| {\boldsymbol{s} }\|}_2}b^*e_d^* \end{array} \end{equation*} | (40) |
where e_d^*=\|{\boldsymbol{e}}_d\|_\infty and \left\| \cdot \right\|_\infty means the \infty-norm.
If the controller parameters of each follower agent are selected according to Theorem 1, {\dot {V}'} < 0 can be deduced from (40). Considering V'\geq 0, the formation control of the multi-agent system is asymptotically stable in the sense of Lyapunov.
From Theorems 1 and 2, the formation stability depends on the tracking-error bound e_{{xi}_d}^*, which is constant but unknown; this makes it hard to determine \kappa_{xi} in Theorem 1 and \kappa in Theorem 2. To guarantee the formation stability, a conservative value of e_{{xi}_d}^* has to be designated, and from this aspect there seem to be no benefits earned from such a robust control method. However, e_{{xi}_d} under the presented method is exponentially convergent as proven, meaning that a small value of e_{{xi}_d}^* can be chosen. According to Theorem 1, this kind of formation-control design contributes to the decrease of the chattering phenomenon as well as the improvement of the formation performance.
This section implements some simulations on a multi-agent platform and discusses the results. The platform consists of four mobile robots, structured by the leader-follower scheme: one robot is designated as leader and the other three as followers. The follower agents are numbered by indexes 1, 2 and 3, respectively, and the sole leader is identified by index 4. The physical parameters of these agents are taken from [24], listed as l=0.0265 m, h=0.04 m, r=0.02 m, m_b=0.018 kg, m_w=0.007 kg, I_b=1.44\times 10^{-4} kg\cdot m^2 and I_w=1.44\times 10^{-6} kg\cdot m^2. The communication topology of the multi-agent system under consideration is illustrated in Fig. 2.
According to this communication topology, the communication graph \mathcal{G} in Fig. 2 contains a directed spanning tree, where the adjacency and Laplacian matrices are determined by
\mathcal{A}=\left[\begin{array}{cccc} 0&1&0&1 \\ 1&0&0&0 \\ 0&0&0&1 \\ 0&0&0&0 \end{array} \right]\ {\rm {and}}\ \mathcal{L}=\left[\begin{array}{cccc} 2&-1&0&-1 \\ -1&1&0&0 \\ 0&0&1&-1 \\ 0&0&0&0 \end{array} \right]. |
Further, the communication subgraph \overline{\mathcal{G}} is derived from \mathcal{G}, whose adjacency and Laplacian matrices are formulated by
\overline{\mathcal{A}}=\left[\begin{array}{ccc} 0&1&0 \\ 1&0&0 \\ 0&0&0 \\ \end{array} \right]\ {\rm {and}}\ \overline{\mathcal{L}}=\left[\begin{array}{ccc} 2&-1&0 \\ -1&1&0 \\ 0&0&1 \\ \end{array} \right]. |
Apparently, the subgraph \overline{\mathcal{G}} is itself a directed graph.
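These matrices can be checked numerically. The snippet below verifies that \overline{\mathcal{A}} and \overline{\mathcal{L}} are the principal submatrices of \mathcal{A} and \mathcal{L} obtained by deleting the leader's (4th) row and column, and that each row of the Laplacian \mathcal{L} sums to zero:

```python
import numpy as np

# Adjacency and Laplacian matrices of the four-agent topology in Fig. 2.
A = np.array([[0, 1, 0, 1],
              [1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], dtype=float)
L = np.array([[2, -1, 0, -1],
              [-1, 1, 0, 0],
              [0, 0, 1, -1],
              [0, 0, 0, 0]], dtype=float)

# Delete the leader's row/column to obtain the follower subgraph matrices.
A_bar = A[:3, :3]
L_bar = L[:3, :3]

print(np.allclose(L.sum(axis=1), 0))   # Laplacian row sums are zero
print(A_bar)                           # matches the stated A_bar
print(L_bar)                           # matches the stated L_bar
```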
For the ith follower agent (i=1, \ 2, \ 3), the presented robust control design of its x-axis subsystem can be implemented. The uncertain term of the x-axis subsystem is designed by \delta_{xi}=0.02\times \operatorname{rand}(), where \operatorname{rand}() is a uniformly distributed random number in the closed interval [-1, \ 1]. The parameters of the SMC-based controller are predefined as c_{xi}=9 and \kappa_{xi}=0.4. The gain vector of the NDO-based observer is chosen as \mathbb{L}=[0\ 6]^T by trial and error such that \lambda_{xi}=\mathbb{L}^T\mathbb{B}=6, and the constant \rho_{xi} in (18) is set as 1.0. The SMC-based controller and the NDO-based observer of the y-axis subsystem adopt the same parameters as those of the x-axis subsystem. Considering the motor load of the follower agents, both u_{xi} and u_{yi} are limited to |u_{xi}|\leq 0.5 and |u_{yi}|\leq 0.5.
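The simulated uncertainty and the input limit can be sketched as follows; the function names `uncertainty` and `saturate` are illustrative helpers, not part of the paper:

```python
import numpy as np

# Sketch of the simulated uncertainty delta = 0.02*rand(), rand() uniform on
# [-1, 1], and the actuator saturation of the control inputs at 0.5.
rng = np.random.default_rng()

def uncertainty():
    """One sample of the uncertain term, bounded by 0.02 in magnitude."""
    return 0.02 * rng.uniform(-1.0, 1.0)

def saturate(u, limit=0.5):
    """Clip a control input to the motor-load limit."""
    return float(np.clip(u, -limit, limit))

print(abs(uncertainty()) <= 0.02)
print(saturate(0.8))     # a demanded input of 0.8 is clipped to 0.5
```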
In order to achieve formation maneuvers of the multi-agent system, a given formation task is taken into consideration. In the formation task, the leader agent 4 moves along a straight line and the other follower agents keep tracking the leader and form up into a diamond-shaped formation.
The straight trajectory of the leader is presented as follows. In a Cartesian coordinate system, the initial head position of the leader is located at (0 m, 0.6 m), and its velocities in the x-direction and y-direction are set as 0.2 m/s and 0.1 m/s, respectively. In order to form up into the desired diamond in this coordinate system, the initial head positions of follower agent 1, follower agent 2 and follower agent 3 are placed at (0 m, 1.1 m), (0 m, 0.8 m) and (0 m, 0.3 m), respectively. Their relative coordinates with respect to the leader agent 4 are designated as (-0.2 m, 0.2 m), (-0.4 m, 0 m) and (-0.2 m, -0.2 m), respectively.
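The formation geometry can be sketched as below: the leader moves in a straight line and each follower's desired head position is the leader's position plus its fixed offset. The helper `desired_position` is illustrative, not part of the controller:

```python
import numpy as np

# Diamond formation geometry of the simulation: leader trajectory plus the
# per-follower offsets. Note the followers do NOT start in formation; they
# start in a string at x = 0 and converge to these desired positions.
v_leader = np.array([0.2, 0.1])     # leader velocity (m/s)
p_leader0 = np.array([0.0, 0.6])    # leader initial head position (m)
offsets = {1: (-0.2, 0.2), 2: (-0.4, 0.0), 3: (-0.2, -0.2)}

def desired_position(i, t):
    """Desired head position of follower i at time t (illustrative helper)."""
    return p_leader0 + v_leader * t + np.array(offsets[i])

print(desired_position(2, 10.0))  # leader at (2.0, 1.6) -> follower 2 at (1.6, 1.6)
```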
Fig. 3 displays the simulation results of the presented robust control method on the multi-agent system. In Fig. 3(a), the four agents form up into the diamond-shaped formation from a string while moving in straight lines, where filled triangles denote the initial positions of the agents and filled circles indicate the agents' positions in the dynamic process. In order to demonstrate the formation maneuver, dashed lines connect the agents at the same instants.
In Figs. 3(b)-(e), the position errors and the velocity errors of each follower agent in the x and y directions are illustrated. According to the communication topology in Fig. 2, the position errors of follower 1 are defined by {e_{px1}} = [{x_{h1}}-({x_{h2}}-d_{12}^x)] + [{x_{h1}}-({x_{h4}}-d_{14}^x)] and {e_{py1}} = [{y_{h1}}-({y_{h2}}-d_{12}^y)] + [{y_{h1}}-({y_{h4}}-d_{14}^y)]. Similarly, {e_{px2}} = {x_{h2}} -({x_{h1}} -d_{21}^x), {e_{py2}} = {y_{h2}} -({y_{h1}} -d_{21}^y), {e_{px3}} = {x_{h3}} -({x_{h2}} -d_{32}^x), {e_{py3}} = {y_{h3}} -({y_{h2}} -d_{32}^y), {e_{vx1}} = {v_{x1}} -{v_{x2}} + {v_{x1}} -{v_{x4}}, {e_{vy1}} = {v_{y1}} -{v_{y2}} + {v_{y1}} -{v_{y4}}, {e_{vx2}} = {v_{x2}} -{v_{x1}}, {e_{vy2}} = {v_{y2}} -{v_{y1}}, {e_{vx3}} = {v_{x3}} -{v_{x2}} and {e_{vy3}} = {v_{y3}} -{v_{y2}}. From Figs. 3(b)-(e), these defined errors converge to zero, indicating that the desired formation is achieved. This fact shows that the presented robust control method can achieve the formation maneuver of the multi-agent system in spite of uncertainties. Further, the formation-control law of each follower agent is illustrated in Figs. 3(f)-(g).
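As a numerical sanity check of the follower-1 error definition above, the snippet below places hypothetical positions exactly in the desired formation, taking d_{ij} as the desired displacement from agent i to agent j, so the error should evaluate to zero:

```python
import numpy as np

# Transcription of the follower-1 position-error definition under the Fig. 2
# topology. Positions are hypothetical and sit exactly in the desired diamond,
# with d_ij interpreted as the desired displacement p_j - p_i in formation.
p1 = np.array([1.8, 1.8])     # follower 1 head position (m)
p2 = np.array([1.6, 1.6])     # follower 2 head position (m)
p4 = np.array([2.0, 1.6])     # leader head position (m)
d12 = np.array([-0.2, -0.2])  # desired p2 - p1
d14 = np.array([0.2, -0.2])   # desired p4 - p1

e1 = (p1 - (p2 - d12)) + (p1 - (p4 - d14))   # [e_px1, e_py1] per the text
print(e1)  # -> [0. 0.]
```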
The results in Figs. 4 and 5 are adopted for performance comparisons in order to highlight the superiority of the presented control scheme. Fig. 4 illustrates the simulation results of the pure sliding-mode control approach on the same multi-agent system. In this formation-control system, the sliding-surface parameter c_{xi} is kept unchanged from the presented control method, and \kappa_{xi} is selected as 1.1, a conservative value that guarantees the formation stability. Compared with the results in Figs. 4(f)-(g), the presented robust control method in Figs. 3(f)-(g) apparently decreases the chattering phenomenon because its formation stability is related to the exponentially-convergent tracking error e_{{xi}_d}^*; this is also the benefit earned from the presented robust control method.
As another comparison, the simulation results of the adaptive fuzzy sliding-mode control approach [24] on the same multi-agent system are displayed in Fig. 5. From Fig. 5(a), the approach in [24] can also realize the same formation maneuver as the formation in Fig. 4(a). However, the presented robust control method achieves better control performance in Figs. 4(f)-(g) via the comparisons with Figs. 5(f)-(g) because it apparently decreases the magnitude of the control action. On the other hand, both the presented method and the approach in [24] focus on dealing with formation maneuvers in spite of uncertainties. In [24], a fuzzy inference system (FIS) is designed to resist the uncertainties, so the control performance is subject to the number of fuzzy logic rules; an FIS with a limited number of fuzzy rules can hardly maintain good performance against variations of the uncertainties. Moreover, the uncertainties in this paper are formulated by 0.02\times \operatorname{rand}(), compared with the expression of 0.005\times \operatorname{rand}() in [24].
This paper has investigated the formation-control problem of multiple agents, where the agents under consideration are wheeled mobile robots and the formation mechanism is leader-follower-based. The uncertainties originating from each individual agent result in the formation uncertainties, which are assumed to be bounded by an unknown boundary. In order to resist the formation uncertainties when forming up the agents, a robust control method that integrates an NDO-based observer and an SMC-based controller has been addressed. According to a given communication topology, the theoretical analysis has proven that the formation control of multiple agents in the presence of uncertainties is asymptotically stable. The control scheme has achieved the formation maneuvers on a multi-robot platform, and the simulation results have demonstrated its effectiveness through performance comparisons. In order to focus on the motivation of the control design, some practical difficulties, such as communication delays and collisions between agents, are not considered during the control design. The no-communication-delay and no-collision conditions are mild enough for small-scale formations but rather idealized for large-scale formations. Extending the presented robust control method to such practical settings is of our continuous interest, and some contributions are still in progress.
[1] L. Cheng, Z. G. Hou, M. Tan, Y. Z. Lin, and W. J. Zhang, "Neural-network-based adaptive leader-following control for multiagent systems with uncertainties," IEEE Trans. Neural Netw., vol. 21, no. 8, pp. 1351-1358, Aug. 2010. http://dl.acm.org/citation.cfm?id=1862021.1862035
[2] L. Cheng, Y. P. Wang, W. Ren, Z. G. Hou, and M. Tan, "On convergence rate of leader-following consensus of linear multi-agent systems with communication noises," IEEE Trans. Autom. Control, vol. 61, no. 11, pp. 3586-3592, Nov. 2016. http://arxiv.org/abs/1508.06927
[3] L. Cheng, Y. P. Wang, W. Ren, Z. G. Hou, and M. Tan, "Containment control of multiagent systems with dynamic leaders based on a PI^n-type approach," IEEE Trans. Cybern., vol. 46, no. 12, pp. 3004-3017, Dec. 2016. http://www.ncbi.nlm.nih.gov/pubmed/26571546
[4] W. Ren and R. W. Beard, Distributed Consensus in Multi-Vehicle Cooperative Control. London, UK: Springer, 2008. http://www.springerlink.com/content/978-1-84800-015-5
[5] C. L. P. Chen, G. X. Wen, Y. J. Liu, and F. Y. Wang, "Adaptive consensus control for a class of nonlinear multiagent time-delay systems using neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 25, no. 6, pp. 1217-1226, Jun. 2014. doi: 10.1109/TNNLS.2014.2302477
[6] H. G. Zhang, T. Feng, G. H. Yang, and H. J. Liang, "Distributed cooperative optimal control for multiagent systems on directed graphs: An inverse optimal approach," IEEE Trans. Cybern., vol. 45, no. 7, pp. 1315-1326, Jul. 2015. http://www.ncbi.nlm.nih.gov/pubmed/25216491
[7] H. Zhang, R. H. Yang, H. C. Yan, and F. W. Yang, "H∞ consensus of event-based multi-agent systems with switching topology," Inf. Sci., vol. 370-371, pp. 623-635, Nov. 2016. http://www.sciencedirect.com/science/article/pii/S0020025515008294
[8] H. Rezaee and F. Abdollahi, "Average consensus over high-order multiagent systems," IEEE Trans. Autom. Control, vol. 60, no. 11, pp. 3047-3052, Nov. 2015. doi: 10.1109/TAC.2015.2408576
[9] M. Biglarbegian, "A novel robust leader-following control design for mobile robots," J. Intell. Robot. Syst., vol. 71, no. 3-4, pp. 391-402, Sep. 2013. doi: 10.1007/s10846-012-9795-1
[10] J. Y. C. Chen and M. J. Barnes, "Human-agent teaming for multirobot control: A review of human factors issues," IEEE Trans. Hum. Mach. Syst., vol. 44, no. 1, pp. 13-29, Feb. 2014. doi: 10.1109/THMS.2013.2293535
[11] C. C. Hua, X. You, and X. P. Guan, "Leader-following consensus for a class of high-order nonlinear multi-agent systems," Automatica, vol. 73, pp. 138-144, Nov. 2016. http://www.sciencedirect.com/science/article/pii/S0005109816302527
[12] D. W. Qian, S. W. Tong, J. R. Guo, and S. Lee, "Leader-follower-based formation control of nonholonomic mobile robots with mismatched uncertainties via integral sliding mode," Proc. Inst. Mech. Eng. I J. Syst. Control Eng., vol. 229, no. 6, pp. 559-569, Jul. 2015. doi: 10.1177/0959651814568365
[13] D. W. Qian, S. W. Tong, and C. D. Li, "Leader-following formation control of multiple robots with uncertainties through sliding mode and nonlinear disturbance observer," ETRI J., vol. 38, no. 5, pp. 1008-1018, Oct. 2016. doi: 10.4218/etrij.16.0116.0048
[14] J. Dasdemir and A. Loría, "Robust formation tracking control of mobile robots via one-to-one time-varying communication," Int. J. Control, vol. 87, no. 9, pp. 1822-1832, Mar. 2014. http://www.sciencedirect.com/science/article/pii/S0378113510000088
[15] S. J. Yoo, "Formation tracker design of multiple mobile robots with wheel perturbations: Adaptive output-feedback approach," Int. J. Syst. Sci., vol. 47, no. 15, pp. 3619-3630, Dec. 2016. doi: 10.1080/00207721.2015.1107149
[16] T. P. Nascimento, A. G. S. Conceição, and A. P. Moreira, "Multi-robot nonlinear model predictive formation control: The obstacle avoidance problem," Robotica, vol. 34, no. 3, pp. 549-567, Mar. 2016. http://www.researchgate.net/publication/266675548_Multi-Robot_nonlinear_model_predictive_formation_control_the_obstacle_avoidance_problem
[17] Y. Liu and Y. M. Jia, "Robust formation control of discrete-time multi-agent systems by iterative learning approach," Int. J. Syst. Sci., vol. 46, no. 4, pp. 625-633, Apr. 2015. doi: 10.1007/s12555-012-0507-1
[18] V. I. Utkin, Sliding Modes in Control and Optimization. Berlin Heidelberg, Germany: Springer, 1992. http://www.springerlink.com/content/978-3-642-84379-2
[19] Y. H. Chang, C. W. Chang, C. L. Chen, and C. W. Tao, "Fuzzy sliding-mode formation control for multirobot systems: Design and implementation," IEEE Trans. Syst. Man Cybern. B Cybern., vol. 42, no. 2, pp. 444-457, Apr. 2012. http://www.ncbi.nlm.nih.gov/pubmed/22010151
[20] Y. Y. Dai, Y. Kim, S. Wee, D. Lee, and S. Lee, "Symmetric caging formation for convex polygonal object transportation by multiple mobile robots based on fuzzy sliding mode control," ISA Trans., vol. 60, pp. 321-332, Jan. 2016. http://www.ncbi.nlm.nih.gov/pubmed/26704719
[21] L. J. Dong, S. C. Chai, B. H. Zhang, and S. K. Nguang, "Sliding mode control for multi-agent systems under a time-varying topology," Int. J. Syst. Sci., vol. 47, no. 9, pp. 2193-2200, Sep. 2016. http://dl.acm.org/citation.cfm?id=2903600.2903618
[22] A. M. Zou, K. D. Kumar, and Z. G. Hou, "Distributed consensus control for multi-agent systems using terminal sliding mode and Chebyshev neural networks," Int. J. Robust Nonlinear Control, vol. 23, no. 3, pp. 334-357, Feb. 2013. doi: 10.1002/rnc.1829
[23] D. Zhao, T. Zou, S. Li, and Q. Zhu, "Adaptive backstepping sliding mode control for leader-follower multi-agent systems," IET Control Theory Appl., vol. 6, no. 8, pp. 1109-1117, May 2012. http://www.ams.org/mathscinet-getitem?mr=2985188
[24] Y. H. Chang, C. Y. Yang, W. S. Chan, H. W. Lin, and C. W. Chang, "Adaptive fuzzy sliding-mode formation controller design for multirobot dynamic systems," Int. J. Fuzzy Syst., vol. 16, no. 1, pp. 121-131, Mar. 2014. http://www.researchgate.net/publication/286794416_Adaptive_fuzzy_sliding-mode_formation_controller_design_for_multi-robot_dynamic_systems
[25] W. H. Chen, J. Yang, L. Guo, and S. H. Li, "Disturbance-observer-based control and related methods: An overview," IEEE Trans. Industr. Electron., vol. 63, no. 2, pp. 1083-1095, Feb. 2016. doi: 10.1109/TIE.2015.2478397
[26] B. Xiao, S. Yin, and O. Kaynak, "Tracking control of robotic manipulators with uncertain kinematics and dynamics," IEEE Trans. Industr. Electron., vol. 63, no. 10, pp. 6439-6449, Oct. 2016. doi: 10.1109/tie.2016.2569068
[27] T. Du, L. Guo, and J. Yang, "A fast initial alignment for SINS based on disturbance observer and Kalman filter," Trans. Inst. Meas. Control, vol. 38, no. 10, pp. 1261-1269, Oct. 2016. doi: 10.1177/0142331216649019