Jin Zhu, Qin Ding, Maksym Spiryagin and Wanqing Xie, "State and Mode Feedback Control for Discrete-time Markovian Jump Linear Systems With Controllable MTPM," IEEE/CAA J. Autom. Sinica, vol. 6, no. 3, pp. 830-837, May 2019. doi: 10.1109/JAS.2016.7510217

State and Mode Feedback Control for Discrete-time Markovian Jump Linear Systems With Controllable MTPM

doi: 10.1109/JAS.2016.7510217
Funds:

the National Natural Science Foundation of China 61374073

the National Natural Science Foundation of China 61503356

Anhui Provincial Natural Science Foundation 1608085QF153

  • Abstract: In this note, the state and mode feedback control problems for a class of discrete-time Markovian jump linear systems (MJLSs) with a controllable mode transition probability matrix (MTPM) are investigated. Most existing work on controller design for MJLSs focuses on state/output feedback control for stability, while the system cost incurred in practice is left out of consideration. In this paper, we propose a control mechanism consisting of two parts: a finite-path-dependent state feedback controller design with which uniform stability of MJLSs can be ensured, and a mode feedback control which aims to decrease the system cost. Differing from traditional state/output feedback controller design, the main novelty is that the proposed control mechanism not only guarantees system stability, but also decreases the system cost effectively by adjusting the occurrence probabilities of the system modes. The effectiveness of the proposed mechanism is illustrated via numerical examples.

     

  • Many practical systems are naturally subject to abrupt changes in structure, which may arise from sudden variation of the environment [1], communication faults in networked systems [2], or failures of subsystem connections [3], etc. Among the different approaches to modeling such phenomena, Markovian jump linear systems (MJLSs) are well suited to describe these dynamics. An MJLS, generally speaking, is composed of several subsystems, where the dynamic of each subsystem is called a "mode". The dynamic of the MJLS jumps among these modes, and the jumping rule is governed by a Markov process/chain. Since MJLS models came onto the stage, increasing interest in this area has been witnessed, with fruitful achievements in, for instance, stability analysis, filtering, and controller design.

    In the past decades, the stability and controller design problems of MJLSs have been intensively studied. A consensus has by now been reached that the system stability, the controller design, and the system cost of MJLSs largely depend on the mode transition rate/probability matrix (MTRM/MTPM) of the Markov process/chain. This consensus is based on the fact that the MTRM/MTPM determines the mode transition probabilities according to the theory of stochastic processes [4]. Consequently, existing work mainly focuses on stability analysis and controller design based on prior knowledge of the MTRM/MTPM. By assuming that the MTRM/MTPM is fully known, several kinds of stochastic stability have been introduced, e.g., $p$th moment stability (in particular, mean-square stability when $p = 2$) and stability in probability [5]-[7]. Based on this, further exploration has been carried out for the case where the MTRM/MTPM is only partially known. In this scenario, system stability as well as stabilization is discussed, where the notion of stochastic stability is mostly defined as mean-square stability. Reference [8] considers the admissible bound on the MTRM/MTPM such that stochastic stability can be guaranteed, while [9], [10] present sufficient conditions for robust stabilization. In these achievements, sufficient or necessary and sufficient conditions are established for mean-square stability (with stability in probability included as a direct consequence). On the basis of stability analysis, existing papers further discuss controller design and synthesis. For example, [11], [12] investigate state feedback controllers when the MTRM/MTPM is completely known, while [8], [10], [13] study the stability and $H_\infty$ control problems when the MTRM/MTPM is partially known.

    The above results are all given under the assumption that the MTRM/MTPM is unchangeable, no matter whether it is fully accessible or not. Nevertheless, in many practical MJLSs the MTRM/MTPM can be adjusted by human intervention [3], [14]. Consider a manufacturing system modeled as an MJLS with two modes, a work mode and a fault mode: its MTRM depends on the status of the machine itself, since an aging machine has a larger failure transition rate than a new one. For such manufacturing systems, human interventions such as proper maintenance and repair can help decrease the failure transition rate. This shows that control of the MTRM/MTPM is feasible and can contribute to decreasing the system cost. Noticing that the system cost of an MJLS can be regarded as a combination of the subsystems' costs with the mode transition probabilities as weighting factors, the system cost will change if control is performed on the MTRM/MTPM so that these weighting factors are altered. It is easy to see that the unchangeable MTRM/MTPM case can be regarded as a special scenario of the controllable MTRM/MTPM case in which no action is taken; therefore the state and MTRM/MTPM control pair offers more freedom than the traditional state feedback controller. However, recalling that stability analysis depends on the information of the MTRM/MTPM, if control is introduced to change the MTRM/MTPM, the designed state feedback controller may fail and redesigning the state feedback controller tends to be inevitable. This increases the design complexity, since the stability of MJLSs is usually established through the solution of coupled linear matrix inequalities (LMIs). An intuitive question then arises: taking system stability into account, can we change the MTRM/MTPM and meanwhile keep the designed state/output controller unchanged? This means we should first find an appropriate state/output feedback controller, independent of the MTRM/MTPM, to ensure system stability, and then perform control on the MTRM/MTPM to decrease the system cost.

    In this paper, we consider the state and mode feedback control of discrete-time MJLSs with controllable MTPM. The control mechanism contains two parts: a state feedback controller acting on the system state such that system stability is ensured, and a mode feedback controller acting on the MTPM so that the occurrence probability of each mode can be adjusted and stability can be maintained with less system cost. Two issues arise for the state feedback controller design. The first is that the desired system stability is usually in the mean-square sense, which restricts practical applications since the stability demands of many systems are extremely rigorous. The second is that the stability criteria are given in the form of coupled LMIs obtained via Lyapunov functions and the infinitesimal generator, which brings difficulties for non-conservative controller synthesis. To avoid solving coupled LMIs, some novel stability notions have been proposed, for instance, almost sure stability [15] and uniform stability [16]. Differing from the traditional mode-dependent state feedback controller obtained by solving an LQ problem, in this paper the state feedback controller is given in the form of a finite-path-dependent controller [16], [17]. The advantages of such a design lie in two aspects: first, the resulting stability is uniform instead of mean-square, so the system state is exponentially stable for every admissible mode sequence; second, the desired state feedback controller is independent of the MTPM, so controlling the MTPM only adjusts the occurrence probabilities of the system modes and does not affect the designed state feedback controller. It is worth mentioning that these results concentrate on state feedback controller design for novel stability notions of MJLSs without solving coupled LMIs, while the system cost is still not within their scope. This drawback naturally becomes the motivation of our research. Based on the finite-path-dependent controller given in existing results, we go further and design a mode feedback controller which performs control on the MTPM with the aim of decreasing the system cost. First, the mode indicator equation of MJLSs involving the mode feedback controller is introduced. Then we introduce a quadratic performance index from which the optimal mode feedback controller is deduced, together with a discussion of its admissible value set. Compared with finite-path-dependent state feedback controllers alone, the proposed state and mode feedback control pair changes the inherent stochastic property of the MJLS: the occurrence probabilities of the subsystems are adjusted so that a better system performance can be achieved while uniform stability is preserved.

    The paper is organized as follows. Section Ⅰ-A reviews the finite-path-dependent state feedback controllers together with the main results. Section Ⅰ-B investigates the design of the optimal mode feedback controller subject to the quadratic system performance index and discusses its admissible value set. Section Ⅰ-C presents numerical examples that verify the effectiveness of the proposed mechanism, and a brief conclusion is drawn in Section Ⅱ.

    Notations: The notations used in this paper are standard. The superscript "$*$" denotes the optimal value of a vector or matrix variable. $E\{\cdot\}$ denotes the mathematical expectation. If $X$ is a vector in $\mathbb{R}^n$, then $X^{.2}$ denotes the vector of element-wise squares of the entries of $X$.

    Consider the discrete-time MJLS represented by

    $$\begin{cases} x_{k+1} = a_{r_k}x_k + b_{r_k}u^{sta}_k\\ y_k = x_k \end{cases}\tag{1}$$

    where $x_k \in \mathbb{R}$ is the system state, $u^{sta}_k \in \mathbb{R}$ is the system input or controller, $y_k \in \mathbb{R}$ is the output or observation, and $a_{r_k} \in \mathbb{R}$, $b_{r_k} \in \mathbb{R}$ are system coefficients. Let $\{r_k, k \ge k_0\}$ be a Markov chain on the complete probability space $(\Omega, \mathcal{F}, \mathcal{P})$ taking values in a finite set $S = \{1, 2, \ldots, N\}$, where $r_k$ stands for the system mode. The initial mode transition probability is given as

    $$P(r_{k+1} = j \mid r_k = i) = p_{ij}$$

    where $i, j \in S$, $p_{ij} \ge 0$, $\sum_{j=1}^{N} p_{ij} = 1$, and $p_{ij}$ is the element of the MTPM $P = \{p_{ij}\}_{N\times N}$, which is controllable in this paper. For simplification, we write $a_{r_k}$ as $a_i$ when $r_k = i$.
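    To make the setup concrete, the following minimal sketch simulates a scalar MJLS of the form (1) under a given MTPM. The coefficients, the MTPM, and the zero input used here are illustrative assumptions rather than values taken from the paper.

```python
import numpy as np

def simulate_mjls(a, b, P, x0, r0, steps, u=None, rng=None):
    """Simulate (1): x_{k+1} = a[r_k]*x_k + b[r_k]*u_k, y_k = x_k, with Markov mode r_k."""
    rng = np.random.default_rng() if rng is None else rng
    x, r = x0, r0
    xs, rs = [x], [r]
    for k in range(steps):
        uk = 0.0 if u is None else u(k, x, r)   # state feedback input, if any
        x = a[r] * x + b[r] * uk                # state update of the active mode
        r = rng.choice(len(a), p=P[r])          # mode jump according to row r of the MTPM
        xs.append(x)
        rs.append(r)
    return np.array(xs), np.array(rs)

# Illustrative two-mode data (assumed values)
a = np.array([1.2, 0.5])
b = np.array([1.0, 1.0])
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
xs, rs = simulate_mjls(a, b, P, x0=1.0, r0=0, steps=50)
```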

    Definition 1 [15]: The MJLS (1) is said to be uniformly (exponentially) stable if there exist $c \ge 1$ and $\lambda \in (0, 1)$ such that

    $$|x_k| \le c\lambda^{k-k_0}|x_{k_0}| \tag{2}$$

    for all $k \ge k_0 \ge 0$ and $x_{k_0} \in \mathbb{R}$.

    It is remarkable that most existing results focus on the design of mode-dependent output/state feedback controllers, i.e., the control gain $K_i$ is given as a function of the mode $i$ such that stability in the mean-square sense is obtained. In [16], a finite-path-dependent output/state feedback controller is proposed, where the control gain $K$ depends on the history path of the system mode. For example, suppose that at time $k_0$ the system mode is $r_{k_0} = i_0$, at time $k_1$ the mode is $r_{k_1} = i_1$, and so on, until at time $k_L$ the mode is $r_{k_L} = i_L$. Then, for a finite path $(i_0, i_1, \ldots, i_L)$, where $M$ is the admissible path length and $L \le M$, there may exist a finite-path-dependent control gain $K_M = K_{i_0\cdots i_L}$ which makes it possible for the system state to be uniformly stable.

    Definition 2 [16]: If there exist a nonnegative integer $L$ and matrices $K_{i_0\cdots i_L} \in \mathbb{R}$, $i_0, \ldots, i_L \in S$, such that

    $$K_M = \begin{cases} K_{\underbrace{\scriptstyle r_0\cdots r_0}_{L-k}\,r_0\cdots r_k}, & k < L\\[1mm] K_{r_{k-L}\cdots r_k}, & k \ge L \end{cases}\tag{3}$$

    then the controller $u^{sta}_k = K_M x_k$ is said to be $L$-path-dependent or finite-path-dependent. In particular, if $u^{sta}_k$ is zero-path-dependent, $u^{sta}_k$ is said to be mode-dependent.
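    The following sketch shows one way such an $L$-path-dependent gain can be looked up at run time: the gain table is keyed by the last $L+1$ modes, and histories shorter than $L+1$ are padded with the initial mode, which is the reading of the $k < L$ branch of (3) adopted above. The gain values are assumed for illustration.

```python
from collections import deque

def make_path_gain(K_table, L, r0):
    """Return gain(history) implementing an L-path-dependent gain as in (3)."""
    def gain(mode_history):
        path = tuple(mode_history)[-(L + 1):]        # last L+1 observed modes
        path = (r0,) * (L + 1 - len(path)) + path    # pad with the initial mode if k < L
        return K_table[path]
    return gain

# Illustrative 2-mode, L = 1 gain table (values assumed)
K_table = {(0, 0): -0.9, (0, 1): -0.3, (1, 0): -0.8, (1, 1): -0.1}
gain = make_path_gain(K_table, L=1, r0=0)
history = deque(maxlen=2)
history.append(0)
print(gain(history))    # k = 0: padded path (0, 0)
history.append(1)
print(gain(history))    # k = 1: path (0, 1)
```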

    Theorem 1 [16]: The discrete-time MJLS is uniformly exponentially stable if and only if there exist a nonnegative integer $M$ and positive scalars $x_{j_1\cdots j_M} \in \mathbb{R}$, $j_1, \ldots, j_M \in S$, such that

    $$a_{i_M}^T x_{i_1\cdots i_M} a_{i_M} - x_{i_0\cdots i_{M-1}} < 0 \tag{4}$$

    for all $k \ge 0$, where

    $$x_M = \begin{cases} x_{\underbrace{\scriptstyle r(0)\cdots r(0)}_{M}}, & \text{if } k = 0\\[1mm] x_{\underbrace{\scriptstyle r(0)\cdots r(0)}_{M-k}\,r(0)\cdots r(k-1)}, & \text{if } 0 < k < M\\[1mm] x_{r(k-M)\cdots r(k-1)}, & \text{otherwise} \end{cases}$$

    whenever $(i_0, \ldots, i_M) \in \{1, \ldots, N\}^{M+1}$ is an admissible switching path of length $M$. Applying the Schur complement formula to (4) for the case $k \ge M > 0$, we find that (4) is equivalent to

    $$H_{r_{k-M}\cdots r_k} + K_M F_{r_k} < 0 \tag{5}$$

    where

    $$H_{r_{k-M}\cdots r_k} = \begin{bmatrix} -x^{-1}_{r_{k-M+1}\cdots r_k} & a_{r_k}\\ a_{r_k} & -x_{r_{k-M}\cdots r_{k-1}} \end{bmatrix},\qquad F_{r_k} = \begin{bmatrix} 0 & b_{r_k}\\ b_{r_k} & 0 \end{bmatrix}.$$

    The cases where $k < M$ or $M = 0$ are similar.

    Theorem 2 [16]: The discrete-time MJLS (1) is uniformly exponentially stabilizable via finite-path-dependent control if and only if there exist a nonnegative integer $M$ and positive scalars $R_{j_1\cdots j_M} \in \mathbb{R}$, $j_1, \ldots, j_M \in S$, such that

    $$a_{i_M} R_{i_0\cdots i_{M-1}} a_{i_M}^T - R_{i_1\cdots i_M} < 0 \tag{6}$$

    for all admissible switching paths $(i_0, \ldots, i_M) \in \{1, \ldots, N\}^{M+1}$ of length $M$. The controller design algorithms are given in more detail in [16].
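    Because condition (4) is linear in the positive scalars $x_{j_1\cdots j_M}$, its feasibility for a fixed path length $M$ can be checked with a small linear program. The sketch below does this for a scalar MJLS whose coefficient depends only on the current mode, treating every path as admissible; it is only a verification aid under these assumptions, not the controller design algorithm of [16].

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def check_condition_4(a_coef, M, eps=1e-6):
    """Feasibility of (4): find x_{j1...jM} > 0 with a_{iM}^2 x_{(i1..iM)} - x_{(i0..iM-1)} < 0."""
    N = len(a_coef)
    paths = list(itertools.product(range(N), repeat=M))     # index set of the scalars x
    idx = {p: i for i, p in enumerate(paths)}
    A_ub, b_ub = [], []
    for full in itertools.product(range(N), repeat=M + 1):  # switching paths (i0, ..., iM)
        row = np.zeros(len(paths))
        row[idx[full[1:]]] += a_coef[full[-1]] ** 2          # coefficient of x_{(i1..iM)}
        row[idx[full[:-1]]] -= 1.0                           # coefficient of x_{(i0..iM-1)}
        A_ub.append(row)
        b_ub.append(-eps)                                    # strict inequality with a margin
    res = linprog(c=np.zeros(len(paths)), A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(1.0, None)] * len(paths), method="highs")
    return res.success

# Illustrative scalar coefficients (assumed), e.g. closed-loop values of a 2-mode system
print(check_condition_4(np.array([-0.7, 0.5]), M=1))   # True: uniformly exponentially stable
```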

    From the above result, we can obtain a finite-path-dependent state feedback controller that is independent of the MTPM. Because of this property, redesigning the state feedback controller is unnecessary when a mode controller is applied to the MTPM, which brings great convenience for performing control on the MTPM. For example, if the MJLS (1) has $N$ modes and we take a path length of $M$, then we obtain at most $N^{M+1}$ new path-dependent modes. These new path-dependent modes are also governed by a Markov chain, and each of them has a different control gain. As a result, the new path-dependent MJLS can be described as

    $$\begin{cases} x_{k+1} = \bar{a}_{\theta_k} x_k\\ u^{sta}_k = K_M x_k \end{cases}\tag{7}$$

    where $\bar{a}_{\theta_k} = a_{r_k} + b_{r_k}K_M$. Obviously, $\theta_k$ is still a Markov process on the complete probability space $(\Omega, \mathcal{F}, \mathcal{P})$ taking values in the finite set $\bar{S} = \{1, 2, \ldots, N\}^{M+1}$.

    Thus, the new path-dependent mode transition probability matrix $\bar{P}$ is defined such that a transition from a mode $\theta_k = (i_0, \ldots, i_M) \in \bar{S}$ to another mode $\theta_{k+1} = (j_0, \ldots, j_M) \in \bar{S}$ occurs only if $(i_1, \ldots, i_M) = (j_0, \ldots, j_{M-1})$, where $i_0, \ldots, i_M \in S$, $j_0, \ldots, j_M \in S$. The cases where $k < M$ or $M = 0$ are similar. Then, the new path-dependent mode transition probability matrix (PTPM for brevity) can be described as

    $$\bar{P}(\theta_{k+1} = m \mid \theta_k = n) = \bar{p}_{nm} \tag{8}$$

    where

    $$\bar{p}_{nm} = \begin{cases} p_{i_M j_M}, & \text{if } (j_0, \ldots, j_{M-1}) = (i_1, \ldots, i_M)\\ 0, & \text{otherwise} \end{cases}$$

    $n, m \in \bar{S}$, $\bar{p}_{nm} \ge 0$, $\sum_{m=1}^{N^{M+1}} \bar{p}_{nm} = 1$, and $\bar{p}_{nm}$ is the element of the PTPM $\bar{P} = \{\bar{p}_{nm}\}_{N^{M+1}\times N^{M+1}}$, which is determined by the original MTPM.
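    The construction of the PTPM in (8) can be written down directly: a path-dependent mode $(i_0,\ldots,i_M)$ can only move to a mode whose first $M$ entries equal its own last $M$ entries, and the probability of that move is $p_{i_M j_M}$. The sketch below builds $\bar P$ from a given MTPM; the zero-based indexing of modes and the ordering of the paths are our own conventions.

```python
import itertools
import numpy as np

def build_ptpm(P, M):
    """Build the path-dependent transition probability matrix (8) from the MTPM P."""
    N = P.shape[0]
    paths = list(itertools.product(range(N), repeat=M + 1))   # all (M+1)-tuples of modes
    idx = {p: i for i, p in enumerate(paths)}
    pbar = np.zeros((len(paths), len(paths)))
    for n, pi in enumerate(paths):
        for jM in range(N):
            pj = pi[1:] + (jM,)                  # successor shares the last M modes of pi
            pbar[n, idx[pj]] = P[pi[-1], jM]     # probability p_{i_M j_M}
    return pbar, paths

# MTPM of the manufacturing example (mode 0 = fault, mode 1 = work), path length M = 1
P = np.array([[0.7, 0.3],
              [0.9, 0.1]])
pbar, paths = build_ptpm(P, M=1)
print(paths)    # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(pbar)     # coincides with the 4x4 PTPM shown in the example section below
```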

    However, these path-dependent modes differ from one another: for the same initial state, each of them incurs a different stabilization cost. For this reason, the stabilization cost of the MJLS can be regarded as a combination of the subsystems' costs governed by the PTPM. This motivates us to apply control to the PTPM so that path-dependent modes with lower cost occur with higher probability; adjusting the occurrence probabilities of the new modes may then lead to a lower system cost.

    Here we adopt the traditional control performance, given as the intuitive finite-path mode cost function

    $$J = \sum_{k=0}^{M}\left[x_k^T Q x_k + (u^{sta}_k)^T L u^{sta}_k\right] + x_M^T F x_M \tag{9}$$

    where $Q > 0$, $L > 0$, $F > 0$. As discussed above, for an $N$-mode system we obtain at most $N^{M+1}$ paths, and the finite-path mode cost $J$ corresponding to each path is different. Since the performance in (9) can eventually be written in the form $J = x_0^T R x_0$ for the same initial state $x_0$, we can sort the cost of each path such that $J_{n_1} < J_{n_2} < \cdots < J_{n_{N^{M+1}}}$, where $n_1, n_2, \ldots, n_{N^{M+1}} \in \bar{S}$. On the basis of this finite-path mode cost order, control is introduced to the PTPM to let the MJLS (7) spend more time in the mode $n_1$, as sketched below. Since the PTPM can be varied, changing the transition probabilities is a feasible way to decrease the cost of the MJLS (7). In the next section, we focus our attention on designing a mode feedback controller that minimizes the system cost.
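    As a sketch of how the per-path costs can be tabulated and ordered, the snippet below rolls out each path-dependent closed loop with its gain held fixed and evaluates the finite-horizon cost (9); all coefficients, gains, and weights here are illustrative assumptions, and the ordering of the resulting costs is what matters rather than their absolute values.

```python
import numpy as np

def path_cost(a, b, K, x0, Q, L, F, M):
    """Evaluate (9) for one path-dependent closed loop (scalar case, gain held fixed)."""
    x, J = x0, 0.0
    for k in range(M + 1):                 # running cost for k = 0, ..., M
        u = K * x
        J += Q * x * x + L * u * u
        if k < M:
            x = (a + b * K) * x            # closed-loop update; after the loop x = x_M
    return J + F * x * x                   # terminal penalty at x_M

def rank_paths(closed_loops, x0=1.0, Q=1.0, L=1.0, F=1.0, M=1):
    """Sort path labels from cheapest to most expensive finite-path mode cost."""
    costs = {p: path_cost(a, b, K, x0, Q, L, F, M) for p, (a, b, K) in closed_loops.items()}
    return sorted(costs, key=costs.get), costs

# Illustrative two-mode data with path length M = 1 (all values assumed)
closed_loops = {(1, 1): (1.5, 1.0, -1.2), (1, 2): (0.5, 1.0, -0.1),
                (2, 1): (1.5, 1.0, -0.9), (2, 2): (0.5, 1.0, -0.3)}
order, costs = rank_paths(closed_loops)
print(order)   # the mode feedback designed next should favor the first (cheapest) path
```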

    The purpose of this section is to find the optimal mode feedback controller that controls the PTPM such that the occurrence probability of each new mode can be adjusted with less system cost. Recall that for the continuous-time MJLS

    $$\dot{x}_t = A_{\theta_t}x_t$$

    there is the following mode indicator equation [1]

    $$d\phi_t = \Pi^T\phi_t\,dt + dM_t$$

    with $x_t \in \mathbb{R}$ being the system state, $\phi_t \in \mathbb{R}^{N^{M+1}}$ being the mode indicator, $\Pi$ being the transition rate matrix of the Markov process, and $M_t$ being a martingale process. For any given time $t$, the system mode $\theta_t$ resides in one certain mode, and thus the mode indicator $\phi_t$ is a vector of the following form: for $\theta_t = n \in \bar{S}$, the $n$th component of $\phi_t$ equals 1 and the other components are zero. If we choose the sampling time interval as $dt$, the discrete-time mode indicator equation can be written as

    $$\phi_{k+1} - \phi_k = \Pi^T\phi_k\,dt + M_{k+1} - M_k.$$

    Rewrite the above equation as

    $$\phi_{k+1} = (\Pi^T dt + I)\phi_k + M_{k+1} - M_k.$$

    Notice that the relationship between the transition rates and the transition probabilities is

    $$1 + \pi_{nn}\,dt = \bar{p}_{nn},\qquad \pi_{nm}\,dt = \bar{p}_{nm},\qquad n, m \in \bar{S},\ n \neq m.$$
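    As a quick numerical illustration of this first-order relation, a generator matrix $\Pi$ (rows summing to zero) is mapped to a transition probability matrix by $I + \Pi\,dt$ for a sufficiently small sampling interval; the rates and the interval below are assumed values.

```python
import numpy as np

def rates_to_probabilities(Pi, dt):
    """First-order discretization of a generator: P ~ I + Pi * dt for small dt."""
    P = np.eye(Pi.shape[0]) + Pi * dt
    assert (P >= 0).all() and np.allclose(P.sum(axis=1), 1.0), "dt too large for these rates"
    return P

# Illustrative generator (rows sum to zero) and sampling interval
Pi = np.array([[-2.0,  2.0],
               [ 0.5, -0.5]])
print(rates_to_probabilities(Pi, dt=0.1))   # [[0.8, 0.2], [0.05, 0.95]]
```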

    Considering the MJLS and PTPM given in (7) and (8), if the sampling time interval is small enough, the discrete-time mode indicator equation becomes

    $$\phi_{k+1} = \bar{P}^T\phi_k + M_{k+1} - M_k$$

    where $\bar{P}$ is the PTPM and $M_k$ is a discrete-time martingale. We now introduce control to the PTPM $\bar{P}$ and suggest that the mode indicator equation with the mode feedback controller be written as [16]

    $$\phi_{k+1} = (\bar{P}^T + U)\phi_k + M_{k+1} - M_k. \tag{10}$$

    Here $U = u_k B$ is the control matrix, where $B \in \mathbb{R}^{N^{M+1}\times N^{M+1}}$ is a matrix whose columns sum to zero (but which is not necessarily negative on its diagonal) and $u_k$ is the mode feedback control of the PTPM. In order to preserve the structure of the PTPM, the control matrix $B$ should be defined element-wise as

    $$B_{mn} = \begin{cases} b_{mn}, & \text{if } \bar{p}_{nm} \neq 0\\ 0, & \text{if } \bar{p}_{nm} = 0. \end{cases}$$
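    A valid control matrix $B$ thus has the same zero pattern as $\bar P^{\,T}$ (entry $(m,n)$ may be nonzero only when $\bar p_{nm}\neq 0$) and zero column sums, so that $\bar P^{\,T}+u_kB$ still has unit column sums. The sketch below builds one such $B$ by shifting probability weight, within each column, away from a chosen set of "expensive" destination modes toward the remaining ones; this particular weighting is an illustrative assumption, not the paper's choice.

```python
import numpy as np

def build_control_matrix(pbar, expensive, weight=1.0):
    """Construct B with B[m, n] nonzero only where pbar[n, m] != 0 and zero column sums."""
    B = np.zeros_like(pbar.T)
    for n in range(pbar.shape[0]):                    # one column of B per origin mode n
        dests = np.flatnonzero(pbar[n] > 0)           # destinations reachable from n
        bad = [m for m in dests if m in expensive]
        good = [m for m in dests if m not in expensive]
        if not bad or not good:
            continue                                  # nothing to shift in this column
        B[bad, n] = -weight / len(bad)                # drain probability from costly modes
        B[good, n] = weight / len(good)               # and add it to the cheaper ones
    return B

pbar = np.array([[0.7, 0.3, 0.0, 0.0],
                 [0.0, 0.0, 0.9, 0.1],
                 [0.7, 0.3, 0.0, 0.0],
                 [0.0, 0.0, 0.9, 0.1]])
B = build_control_matrix(pbar, expensive={0, 2}, weight=1.1)
print(B.sum(axis=0))    # all columns sum to zero
```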

    The admissible value set of the controller $u_k$ is given as $u_k \in U_{ad}$, which corresponds to the satisfaction of inequality (20) below. Here we define the quadratic performance index of the mode control as

    $$\tilde{J} = E\left\{\sum_{k=0}^{T-1}\left[\phi_k^T\tilde{Q}\phi_k + u_k^T\tilde{L}u_k\right]\,\Big|\,\phi_0, M_0\right\} + \phi_T^T\tilde{F}\phi_T \tag{11}$$

    where the weight matrices are

    $$\tilde{Q} = \begin{bmatrix} q_{11} & \cdots & q_{1N}\\ \vdots & \ddots & \vdots\\ q_{N1} & \cdots & q_{NN} \end{bmatrix} \ge 0,\qquad \tilde{F} = \begin{bmatrix} f_{11} & \cdots & f_{1N}\\ \vdots & \ddots & \vdots\\ f_{N1} & \cdots & f_{NN} \end{bmatrix} \ge 0,\qquad \tilde{L} = l.$$

    In the performance index (11), the weight matrix $\tilde{Q}$ reflects the corresponding penalty on the different path-dependent modes; thus, in this paper the diagonal components of $\tilde{Q}$ are the path-dependent mode costs given in (9) and the other components are zero. In general, the optimal mode feedback control problem is stated as follows:

    $$\min_{u_k}\ \tilde{J}\qquad \text{s.t. } (7),\ (10). \tag{12}$$

    Remark 1: Considering the optimization problem (12), it is easy to see that $0 \in U_{ad}$; thus for the MJLS (7) without any mode controller, i.e., $u_k = 0$, the evolution of the system mode is totally determined by the PTPM $\bar{P}$, and its performance is fixed as

    $$\tilde{J} = E\left\{\sum_{k=0}^{T-1}\phi_k^T\tilde{Q}\phi_k\,\Big|\,\phi_0, M_0\right\} + \phi_T^T\tilde{F}\phi_T.$$

    Now consider the introduction of the mode control $u_k$. Since $u_k = 0$ is a special case of mode control, in this sense the mode control $u_k$ helps to decrease the system cost $\tilde{J}$.

    Theorem 3: Consider the discrete-time MJLS (7) with the controllable Markov chain (8). The optimal mode feedback control is given by

    $$u_k^* = -\frac{1}{2l}\gamma_{k+1}^T B\phi_k \tag{13}$$

    where the vector $\gamma_k \in \mathbb{R}^{N^{M+1}}$ satisfies the equation

    $$\gamma_k = q + \bar{P}\gamma_{k+1} - \frac{1}{4l}\left(B^T\gamma_{k+1}\right)^{.2},\qquad \gamma_T = f \tag{14}$$

    with $q^T = (q_{11}, q_{22}, \ldots, q_{NN})$ and $f^T = (f_{11}, f_{22}, \ldots, f_{NN})$ being the diagonal elements of the weight matrices in (11). Furthermore, the minimum cost is given by

    $$\tilde{J}^* = \gamma_0^T\phi_0. \tag{15}$$
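    A compact sketch of the backward recursion (14) and of the feedback (13): `gamma[k]` is $\gamma_k$, and the control applied in path-dependent mode $n$ at time $k$ is $-(1/2l)(B^T\gamma_{k+1})_n$. The horizon, weights, PTPM, and $B$ fed in below are assumed placeholder values.

```python
import numpy as np

def mode_feedback_gamma(pbar, B, q, f, l, T):
    """Backward recursion (14): gamma_k = q + pbar @ gamma_{k+1} - (B^T gamma_{k+1})^{.2}/(4l)."""
    gamma = np.zeros((T + 1, len(q)))
    gamma[T] = f                                       # terminal condition gamma_T = f
    for k in range(T - 1, -1, -1):
        Bg = B.T @ gamma[k + 1]
        gamma[k] = q + pbar @ gamma[k + 1] - Bg ** 2 / (4.0 * l)
    return gamma

def optimal_mode_control(gamma_next, B, l, mode):
    """Evaluate (13) with phi_k equal to the indicator vector of `mode`."""
    return -(B.T @ gamma_next)[mode] / (2.0 * l)

# Illustrative 2-mode placeholder data (all values assumed): mode 0 costly, mode 1 cheap
pbar = np.array([[0.6, 0.4],
                 [0.2, 0.8]])
B = np.array([[-1.0, -1.0],     # columns sum to zero: positive u drains probability
              [ 1.0,  1.0]])    # from mode 0 and adds it to mode 1
q = np.array([5.0, 1.0])        # per-step mode penalties (diagonal of Q~)
f = np.array([1.0, 1.0])        # terminal penalties (diagonal of F~)
gamma = mode_feedback_gamma(pbar, B, q, f, l=2.0, T=20)
print(optimal_mode_control(gamma[1], B, l=2.0, mode=0))   # unconstrained control at k = 0
```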

    Proof: Considering MJLS (7), with the application of dynamic programming, we define the cost-to-go function

    $$V_k = \min_{u_k} E\left\{\sum_{s=k}^{T-1}\phi_s^T\tilde{Q}\phi_s + u_s^T\tilde{L}u_s\,\Big|\,\phi_k, \theta_k, M_k\right\} \tag{16}$$

    and there is

    $$V_k = \min_{u_k} E\left\{\phi_k^T\tilde{Q}\phi_k + u_k^T\tilde{L}u_k + V_{k+1}\right\}. \tag{17}$$

    Now, observing the mathematical form of (16) and (17), suppose that $V_k = \phi_k^T\Lambda_k\phi_k$ with components

    $$\Lambda_k = \begin{bmatrix} \gamma_{11}(k) & \cdots & \gamma_{1N}(k)\\ \vdots & \ddots & \vdots\\ \gamma_{N1}(k) & \cdots & \gamma_{NN}(k) \end{bmatrix}$$

    and we have

    $$\phi_k^T\Lambda_k\phi_k = \min_{u_k} E\left\{\phi_k^T\tilde{Q}\phi_k + u_k^T\tilde{L}u_k + \phi_{k+1}^T\Lambda_{k+1}\phi_{k+1}\right\}.$$

    Noticing that the mode indicator $\phi_k$ is a vector containing a single "1" with all other components "0", we have

    $$\phi_k^T\tilde{Q}\phi_k = q^T\phi_k,\qquad \phi_{k+1}^T\Lambda_{k+1}\phi_{k+1} = \gamma_{k+1}^T\phi_{k+1}$$

    where

    $$q^T = (q_{11}, q_{22}, \ldots, q_{NN}),\qquad \gamma_k^T = (\gamma_{11}(k), \gamma_{22}(k), \ldots, \gamma_{NN}(k))$$

    and thus the above equation yields

    $$\gamma_k^T\phi_k = \min_{u_k} E\left\{q^T\phi_k + u_k^T\tilde{L}u_k + \gamma_{k+1}^T\phi_{k+1}\right\}.$$

    Substituting (10) into the above equation, and bearing in mind that $E\{M_{k+1} \mid M_k\} = M_k$ with probability 1, we obtain

    $$\begin{aligned} \gamma_k^T\phi_k &= \min_{u_k} E\left\{q^T\phi_k + u_k^T\tilde{L}u_k + \gamma_{k+1}^T(\bar{P}^T + u_k B)\phi_k\right\}\\ &= \min_{u_k} E\left\{q^T\phi_k + l u_k^2 + \gamma_{k+1}^T(\bar{P}^T + u_k B)\phi_k\right\}. \end{aligned}\tag{18}$$

    Minimizing the right-hand side of (18) with respect to $u_k$ yields

    $$u_k^* = -\frac{1}{2l}\gamma_{k+1}^T B\phi_k. \tag{19}$$

    Substituting the optimal control back into (18) gives

    $$\gamma_k = q + \bar{P}\gamma_{k+1} - \frac{1}{4l}\left(B^T\gamma_{k+1}\right)^{.2},\qquad \gamma_T = f.$$

    Meanwhile we can get the minimum cost given as

    $$\tilde{J}^* = \gamma_0^T\phi_0.$$

    Remark 2: Notice also that, under the assumption that the discrete-time MJLS (7) with PTPM is uniformly exponentially stabilizable, the mode feedback control induces a variation $\Delta\bar{P} = [\Delta\bar{p}_{nm}]_{N^{M+1}\times N^{M+1}}$ of $\bar{P}$, which means the new PTPM is of the form

    $$\bar{P} + \Delta\bar{P} = [\bar{p}_{nm} + \Delta\bar{p}_{nm}]_{N^{M+1}\times N^{M+1}},\qquad \sum_{m=1}^{N^{M+1}}\Delta\bar{p}_{nm} = \sum_{m=1}^{N^{M+1}} u_k b_{mn} = 0$$

    where Δˉpnm satisfies

    $$\Delta\bar{p}_{nm} = u_k b_{mn} \ge -\bar{p}_{nm} \tag{20}$$

    for all $n = 1, 2, \ldots, N^{M+1}$, $n \neq m$, where $b_{mn}$ is the component of $B$ in the $m$th row and $n$th column. From the above analysis, $u_k$ is limited to take values in the admissible set $U_{ad}$.

    Remark 3: In the above analysis, we discussed in detail how to find the optimal mode feedback controller while the constraint $u_k \in U_{ad}$ was temporarily not considered. It is easy to see that the performance $\tilde{J}$ is a quadratic function of the controller $u_k$. For the standard optimization problem

    $$\min\ \tilde{J} = a + b^T u_k + u_k^T u_k\qquad \text{s.t. } u_k \in U_{ad}$$

    the minimum value is attained either at the unconstrained optimum $u_k = -b/2$ if $-b/2 \in U_{ad}$, or at the boundary of $U_{ad}$ if $-b/2 \notin U_{ad}$.
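    A small sketch of this projection for a scalar $u_k$: each entry constraint $\bar p_{nm}+u_kb_{mn}\ge 0$ from (20) gives a one-sided bound on $u_k$, and the unconstrained optimum is clipped to the resulting interval. The matrices used below are those of the example in the next section; the exact numerical bounds depend on any margin kept away from zero probability, so the printed values are illustrative.

```python
import numpy as np

def admissible_interval(pbar, B, margin=0.0):
    """Interval of scalar u such that every entry of pbar + u * B.T stays >= margin."""
    lo, hi = -np.inf, np.inf
    for n in range(pbar.shape[0]):
        for m in range(pbar.shape[1]):
            b = B[m, n]
            if b > 0:                                  # entry grows with u: lower bound
                lo = max(lo, (margin - pbar[n, m]) / b)
            elif b < 0:                                # entry shrinks with u: upper bound
                hi = min(hi, (margin - pbar[n, m]) / b)
    return lo, hi

def clip_to_admissible(u_star, pbar, B, margin=0.0):
    lo, hi = admissible_interval(pbar, B, margin)
    return float(np.clip(u_star, lo, hi))

pbar = np.array([[0.7, 0.3, 0.0, 0.0],
                 [0.0, 0.0, 0.9, 0.1],
                 [0.7, 0.3, 0.0, 0.0],
                 [0.0, 0.0, 0.9, 0.1]])
B = np.array([[-0.55, 0.0, -0.55, 0.0],
              [ 0.55, 0.0,  0.55, 0.0],
              [ 0.0, -0.72,  0.0, -0.72],
              [ 0.0,  0.72,  0.0,  0.72]])
print(admissible_interval(pbar, B))        # admissible range of u for this B
print(clip_to_admissible(2.4387, pbar, B)) # an unconstrained optimum clipped to that range
```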

    Remark 4: The proposed control mechanism consists of two parts: the finite-path-dependent state feedback controller in (7) and the mode feedback controller (19). Bearing in mind that the state feedback controller is independent of the MTPM, the variation of the MTPM does not cause any change to the pre-designed state feedback controller. Thus, compared with the existing results in [17], the complexity of our mechanism mainly comes from the design of the mode feedback control. For a finite set with $N$ modes, which means the mode indicator $\phi_k$ has $N$ possible values, the number of mode feedback controllers is also $N$ as time tends to infinity. Therefore, the introduction of mode feedback control can effectively decrease the system cost at a low design complexity. Considering that this work is of theoretical interest, it can be extended to practical environments with more factors taken into account, such as time delays, packet dropouts, and nonlinearities.

    In this section, a failure-prone manufacturing process example is presented to illustrate the effectiveness of the proposed method for the concerned MJLS model. For a production task, let $x_k$ denote the backlog penalty at time $k$ with initial value $x_0$, and let $u^{sta}_k$ denote the production rate; $y_k$ is the measurement of $x_k$. The manufacturing process can be modeled with two modes, a fault status and a work status:

    $$\text{fault status (mode 1): } \begin{cases} x_{k+1} = 8x_k + u^{sta}_k\\ y_k = x_k \end{cases}\qquad \text{work status (mode 2): } \begin{cases} x_{k+1} = 0.6x_k + u^{sta}_k\\ y_k = x_k. \end{cases}$$

    In general, we define the initial MTPM of the aforementioned manufacturing process as

    $$P = \begin{bmatrix} 0.7 & 0.3\\ 0.9 & 0.1 \end{bmatrix}$$

    which indicates that the work status jumps to the fault status with a rather high probability. However, many practical actions, such as proper preventive maintenance, can be adopted to change the probability of the fault mode occurring and to prevent the failure rate from increasing. Our aim is then to find the production rate $u^{sta}_k$ and proper maintenance actions $u_k$ that minimize the cost of operating the failure-prone manufacturing system.

    We first design the state feedback controllers $u^{sta}_k$, i.e., the production rates, for each mode. Here we set the path length to $M = 1$. According to the feedback control design algorithm in [16], we obtain the finite-path-dependent control gains of the four paths

    $$K_{11} = -8.7,\qquad K_{12} = -1.3,\qquad K_{21} = -7.5,\qquad K_{22} = -0.2.$$

    Then the four new path-dependent modes of the MJLS can be written as

    $$\text{Path}(1,1):\ \begin{cases} x_{k+1} = 8x_k + u^{sta}_k\\ u^{sta}_k = -8.7x_k\\ y_k = x_k \end{cases}\qquad \text{Path}(1,2):\ \begin{cases} x_{k+1} = 0.6x_k + u^{sta}_k\\ u^{sta}_k = -1.3x_k\\ y_k = x_k \end{cases}$$

    $$\text{Path}(2,1):\ \begin{cases} x_{k+1} = 8x_k + u^{sta}_k\\ u^{sta}_k = -7.5x_k\\ y_k = x_k \end{cases}\qquad \text{Path}(2,2):\ \begin{cases} x_{k+1} = 0.6x_k + u^{sta}_k\\ u^{sta}_k = -0.2x_k\\ y_k = x_k \end{cases}$$

    where Path(1,1) and Path(2,1) correspond to the fault status, and Path(1,2) and Path(2,2) correspond to the work status. The new MTPM of the manufacturing process, which is determined by the previous MTPM, is represented as

    $$\bar{P} = \begin{bmatrix} 0.7 & 0.3 & 0 & 0\\ 0 & 0 & 0.9 & 0.1\\ 0.7 & 0.3 & 0 & 0\\ 0 & 0 & 0.9 & 0.1 \end{bmatrix}.$$

    In addition, the traditional system cost $J$ in (9) consists of both the backlog cost and the production cost for each path, and we set $Q = 10$, $F = 2$, $L = 10$. Calculating each new path-dependent mode cost with $x_0 = 1$, we obtain

    $$J_{11} = 87.5162,\qquad J_{12} = 59.0122,\qquad J_{21} = 11.5200,\qquad J_{22} = 2.2628.$$

    Based on the state feedback controller for each mode and the cost ordering above, Path(2,2) obviously has the minimum cost, so our problem is to find proper maintenance actions $u_k$ that minimize the cost of operating the failure-prone manufacturing system. For such a manufacturing process, if preventive-maintenance actions such as dust cleaning and lubricant replenishment are taken, the machine stays in the work status with a higher probability.

    Now consider the MJLS (7) with the controllable mode probabilities given in (10), and define $B$ as follows:

    $$B = \begin{bmatrix} -0.55 & 0 & -0.55 & 0\\ 0.55 & 0 & 0.55 & 0\\ 0 & -0.72 & 0 & -0.72\\ 0 & 0.72 & 0 & 0.72 \end{bmatrix}.$$

    Then we go further and apply control to the PTPM with the aim of decreasing the system cost. The mode feedback controller $u_k$, which represents the maintenance actions, satisfies the mode indicator equation (10), $\phi_{k+1} = (\bar{P}^T + u_k B)\phi_k + M_{k+1} - M_k$ [16]. Consider the quadratic system cost with the constraint $u_k \in U_{ad}$ in (20), and define

    $$\tilde{Q} = \begin{bmatrix} 87.5162 & 0 & 0 & 0\\ 0 & 59.0122 & 0 & 0\\ 0 & 0 & 11.5200 & 0\\ 0 & 0 & 0 & 2.4648 \end{bmatrix},\qquad \tilde{F} = \mathrm{diag}(10,\ 1,\ 10,\ 8),\qquad \tilde{L} = l = 10.$$

    From Theorem 3, the equation for the optimal control is

    $$u_k^* = -\frac{1}{2l}\gamma_{k+1}^T B\phi_k$$

    and utilizing the formula (14), we obtain

    $$(\gamma_1\ \gamma_2\ \gamma_3\ \gamma_4)^T = \begin{pmatrix} 163.0172 & 169.1947 & 94.2074\\ 1076.0287 & 75.9667 & 68.7582\\ 160.9429 & 91.2517 & 17.9644\\ 105.8094 & 19.3289 & 12.19768 \end{pmatrix}.$$

    Then, we can obtain the optimal mode controllers for each new path-dependent mode given by

    $$(u_1\ u_2\ u_3\ u_4) = (2.4387,\ 2.6494,\ 0.6999,\ 0.0720).$$

    Taking the constraint $u_k \in U_{ad}$ into account, the optimal controllers are given as

    $$(u_1\ u_2\ u_3\ u_4) = (1.24,\ 1.24,\ 0.6999,\ 0.0720).$$

    Simultaneously, the cost of the preventive actions $u_k$ is added to the total cost in the end. By applying the optimal decisions to the previous PTPM, the renewed PTPM is obtained for each new path-dependent mode:

    $$\bar{P}_{(1,1)} = \begin{bmatrix} 0.0180 & 0.9820 & 0 & 0\\ 0 & 0 & 0.0072 & 0.9928\\ 0.0180 & 0.9820 & 0 & 0\\ 0 & 0 & 0.0072 & 0.9928 \end{bmatrix},\qquad \bar{P}_{(1,2)} = \begin{bmatrix} 0.0180 & 0.9820 & 0 & 0\\ 0 & 0 & 0.0072 & 0.9928\\ 0.0180 & 0.9820 & 0 & 0\\ 0 & 0 & 0.0072 & 0.9928 \end{bmatrix}$$

    $$\bar{P}_{(2,1)} = \begin{bmatrix} 0.3151 & 0.6849 & 0 & 0\\ 0 & 0 & 0.3961 & 0.6039\\ 0.3151 & 0.6849 & 0 & 0\\ 0 & 0 & 0.3961 & 0.6039 \end{bmatrix},\qquad \bar{P}_{(2,2)} = \begin{bmatrix} 0.6604 & 0.3396 & 0 & 0\\ 0 & 0 & 0.8482 & 0.1518\\ 0.6604 & 0.3396 & 0 & 0\\ 0 & 0 & 0.8482 & 0.1518 \end{bmatrix}.$$
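    Each renewed matrix above is, entry for entry, $\bar P + u_n B^{T}$ evaluated at the corresponding clipped $u_n$; this can be checked in a few lines with the $\bar P$ and $B$ displayed earlier.

```python
import numpy as np

pbar = np.array([[0.7, 0.3, 0.0, 0.0],
                 [0.0, 0.0, 0.9, 0.1],
                 [0.7, 0.3, 0.0, 0.0],
                 [0.0, 0.0, 0.9, 0.1]])
B = np.array([[-0.55, 0.0, -0.55, 0.0],
              [ 0.55, 0.0,  0.55, 0.0],
              [ 0.0, -0.72,  0.0, -0.72],
              [ 0.0,  0.72,  0.0,  0.72]])
u = {"(1,1)": 1.24, "(1,2)": 1.24, "(2,1)": 0.6999, "(2,2)": 0.0720}

for path, un in u.items():
    renewed = pbar + un * B.T        # mode feedback shifts probability toward the work paths
    print(path)
    print(np.round(renewed, 4))      # reproduces the renewed PTPMs listed above
```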

    Fig. 1 shows one sample of the path-dependent mode evolution without and with the mode feedback controller, respectively. It can be seen from Fig. 1 that with the mode feedback controller applied, the system mode tends to reside in path-dependent mode 4 (Path(2,2)) with higher probability.

    Figure  1.  (a) System mode without mode feedback control; (b) System mode with mode feedback control.

    Once the path-dependent mode evolution has been decided, Fig. 2 gives more details about the trajectories of the state and the control. As shown in Fig. 2, the state and the state feedback control converge to zero in less time with mode feedback control than without it.

    Figure  2.  (a) System state without/with mode feedback control; (b) Control without/with mode feedback control.

    Table Ⅰ shows the simulation results of the whole system cost for the two cases: the cost $J$ without mode feedback control and the cost $\tilde{J}$ with mode feedback control. For comparison, we sample 15 sets of cost data computed by

    Table Ⅰ. COMPARISON OF SYSTEM COST WITHOUT/WITH MODE FEEDBACK CONTROL

    Sample (N=15) | Cost without mode feedback control $J$ | Cost with mode feedback control $\tilde{J}$
    1 | 2.78×10² | 2.60×10²
    2 | 3.11×10² | 2.51×10²
    3 | 3.16×10² | 2.58×10²
    4 | 3.12×10² | 2.55×10²
    5 | 2.97×10² | 2.60×10²
    6 | 2.83×10² | 2.53×10²
    7 | 2.72×10² | 2.60×10²
    8 | 3.16×10² | 2.60×10²
    9 | 3.14×10² | 2.50×10²
    10 | 3.14×10² | 2.50×10²
    11 | 2.94×10² | 2.56×10²
    12 | 3.16×10² | 2.52×10²
    13 | 3.07×10² | 2.59×10²
    14 | 2.97×10² | 2.52×10²
    15 | 3.05×10² | 2.49×10²
    Average cost | 3.02×10² | 2.55×10²
    $$J = \sum_{k=0}^{Num}\left[x_k^T Q x_k + (u^{sta}_k)^T L u^{sta}_k\right] + x_{Num}^T F x_{Num}$$

    and

    $$\tilde{J} = \sum_{k=0}^{Num}\left[x_k^T Q x_k + (u^{sta}_k)^T L u^{sta}_k + u_k^T L u_k\right] + x_{Num}^T F x_{Num}$$

    with the same initial state $x_0 = 1$, $Num = 60$, $Q = 10$, $L = 1$, $F = 10$. It is easy to see that the average system cost with mode feedback control is lower than that without mode feedback control.

    In this paper, the state and mode feedback control for discrete-time MJLSs with controllable MTPM is discussed. Considering the fact that the controller design as well as the system cost of MJLSs largely depends on the MTRM/MTPM of the Markov chain, a finite-path-dependent state feedback controller is proposed which is independent of the MTPM, so that uniform stability of the system can be ensured. Based on this, we design a mode feedback controller to decrease the system cost by performing control on the MTPM, through which the occurrence probability of each mode can be adjusted. The simulation results demonstrate the effectiveness of this mechanism. In the future, we hope to extend this result to the multivariable case and to practical systems.

  • [1]
    M. Mariton, Jump Linear Systems in Automatic Control. Marcel Dekker: NY, USA, 1990.
    [2]
    J. Qiu, H. Gao, and S. Ding, "Recent advances on fuzzy-model-based nonlinear networked control systems: a survey, " IEEE Trans. Industrial Electronics, vol. 63, no. 2, pp. 1207-1217, 2016. doi: 10.1109/TIE.2015.2504351
    [3]
    J. Zhu, L. P. Wang, and M. Spiryagin, "Control and decision strategy for a class of Markovian jump systems in failure prone manufacturing process, " IET Control Theory & Applications, vol. 6, no. 12, pp. 1803-1811, 2012.
    [4]
    D. W. Stroock, An Introduction to Markov Processes. Springer-Verlag: Berlin Heidelberg, 2004.
    [5]
    P. Bolzern, P. Colaneri, and G. De Nicolao, "On almost sure stability of continuous-time Markov jump linear systems, " Automatica, vol. 42, no. 6, pp. 983-988, 2006. doi: 10.1016/j.automatica.2006.02.007
    [6]
    A. R. Fioravanti, A. Goncalves, and J. Geromel, "Discrete-time $H_\infty$ output feedback for Markov jump systems with uncertain transition probabilities, " Int. J. Robust and Nonlinear Control, vol. 23, no. 8, pp. 894-902, 2013. doi: 10.1002/rnc.v23.8
    [7]
    X. Mao, "Stability of stochastic differential equations with Markovian switching, " Stochastic Processes and Their Applications, vol. 79, no. 1, pp. 45-67, 1999. doi: 10.1016/S0304-4149(98)00070-2
    [8]
    M. Karan, P. Shi, and C. Y. Kaya, "Transition probability bounds for the stochastic stability robustness of continuous-and discrete-time Markovian jump linear systems, " Automatica, vol. 42, no. 12, pp. 2159-2168, 2006. doi: 10.1016/j.automatica.2006.07.002
    [9]
    J. Xiong, J. Lam, H. Gao, and D. W. C. Ho, "On robust stabilization of Markovian jump systems with uncertain switching probabilities, " Automatica, vol. 41, no. 5, pp. 897-903, 2005. doi: 10.1016/j.automatica.2004.12.001
    [10]
    L. Zhang and E. K. Boukas, "$H_\infty$ control for discrete-time Markovian jump linear systems with partly unknown transition probabilities, " Int. J. Robust and Nonlinear Control, vol. 19, no. 8, pp. 868-883, 2009. doi: 10.1002/rnc.v19:8
    [11]
    B. Lincoln and B. Bernhardsson, "LQR optimization of linear system switching, " IEEE Trans. Autom. Control, vol. 47, no. 10, pp. 1701-1705, 2002. doi: 10.1109/TAC.2002.803539
    [12]
    I. Kordonis and G. P. Papavassilopoulos, "On stability and LQ control of MJLS with a Markov chain with general state space, " IEEE Trans. Autom. Control, vol. 59, no. 2, pp. 535-540, 2014. doi: 10.1109/TAC.2013.2274688
    [13]
    J. Qiu, Y. Wei, and H. R. Karimi, "New approach to delay-dependent $H_\infty$ control for continuous-time Markovian jump systems with time-varying delay and deficient transition descriptions, " J. Franklin Institute, vol. 352, no. 1, pp. 189-215, 2015. doi: 10.1016/j.jfranklin.2014.10.022
    [14]
    P. A. Kawka and A. G. Alleyne, "Robust wireless servo control using a discrete-time uncertain Markovian jump linear model, " IEEE Trans. Control Syst. Technology, vol. 17, no. 3, pp. 733-742, 2009. doi: 10.1109/TCST.2008.2002321
    [15]
    M. Tanelli, B. Picasso, P. Bolzern, and P. Colaneri, "Almost sure stabilization of uncertain continuous-time Markov jump linear systems, " IEEE Trans. Autom. Control, vol. 55, no. 5, pp. 195-201, 2010.
    [16]
    J. Lee and G. E. Dullerud, "Uniform stabilization of discrete-time switched and Markovian jump linear systems, " Automatica, vol. 42, no. 2, pp. 205-218, 2006. doi: 10.1016/j.automatica.2005.08.019
    [17]
    J. Lee and G. E. Dullerud, "Optimal disturbance attenuation for discrete-time switched and Markovian jump linear systems, " SIAM J. Control and Optimization, vol. 45, no. 4, pp. 1329-1358, 2006. doi: 10.1137/050627538