Keyvan Majd, Mohammad Razeghi-Jahromi and Abdollah Homaifar, "A Stable Analytical Solution Method for Car-Like Robot Trajectory Tracking and Optimization," IEEE/CAA J. Autom. Sinica, vol. 7, no. 1, pp. 39-47, Jan. 2020. doi: 10.1109/JAS.2019.1911816

A Stable Analytical Solution Method for Car-Like Robot Trajectory Tracking and Optimization

Funds:  This work was partially supported by the Air Force Research Laboratory and Office of the Secretary of Defense (OSD) (FA8750-15-2-0116), the US Department of Transportation (USDOT), and Research and Innovative Technology Administration (RITA) under University Transportation Center (UTC) Program (DTRT13-G-UTC47)
In this paper, the car-like robot kinematic model trajectory tracking and control problem is revisited by exploring an optimal analytical solution which guarantees the global exponential stability of the tracking error. The problem is formulated in the form of tracking error optimization in which the quadratic errors of the position, velocity, and acceleration are minimized subject to the rear-wheel car-like robot kinematic model. The input-output linearization technique is employed to transform the nonlinear problem into a linear formulation. By using the variational approach, the analytical solution is obtained, which is guaranteed to be globally exponentially stable and is also appropriate for real-time applications. The simulation results demonstrate the validity of the proposed mechanism in generating an optimal trajectory and control inputs by evaluating the proposed method in an eight-shape tracking scenario.

     

CONTROL of the nonholonomic car-like robot (NCLR) kinematic model has been a cardinal research topic, since the model is widely prevalent in autonomous driving control, motion planning, robotics, and so on. The nonholonomic nature of the robot constraints makes this problem challenging, as this class of systems cannot be asymptotically stabilized around an equilibrium point by any smooth time-invariant state feedback [1]. The existing tracking control techniques can be divided into two categories: point-to-point stabilization and trajectory tracking [2]. In point-to-point stabilization, the control task is to solve a stabilization problem for an equilibrium point in the robot state space that the robot must reach from a given initial configuration. In trajectory tracking, the vehicle must follow a desired reference path with an associated timing law, starting from an initial point which may be on or off the reference path. The trajectory tracking problem is of interest in the literature and is also the scope of this paper, since point-to-point stabilization requires a copious amount of information and may produce sporadic and large control signals for a large initial configuration error.

The early study in [3] employed Lyapunov stability theory to design a locally asymptotically stable tracking controller for the kinematic model linearized around the reference trajectory. The dynamic feedback linearization techniques, as explained in [4] and [5], have motivated various recent control techniques, such as the linear parameter varying (LPV) control technique in [6], the kinematic-dynamic cascade controller design in [7], and the event-triggered tracking control structure proposed in [8], to achieve globally stable output tracking performance. The aforementioned frameworks have explored only a stabilized solution; better control performance, such as optimality of the trajectory and control inputs, has been generally neglected in these works.

Recently, the model predictive control (MPC) technique has moved to the forefront of car-like robot control strategies (e.g., [9]–[11]), as it can provide an optimal solution by predicting the vehicle motion and control actions at future time instants [12]. A major drawback of MPC is the computational overhead it imposes on the system due to the receding horizon calculation. The analytical solutions proposed in [11] and [13] made this method computationally inexpensive for real-time applications. Nevertheless, they reduced the solution space by linearizing the error dynamics around the reference trajectory, which causes sub-optimality. The performance of continuous nonlinear MPC (NMPC) and discrete NMPC is also analyzed in [14], which showed that discrete NMPC produces a large tracking error, compared with continuous NMPC, due to the variable sampling.

Generally speaking, the methods that address kinematic model control have mostly followed one of the two following strategies. First are the methods that transform the system error dynamics into the robot coordinate frame, where the car inputs explicitly appear in the model and, consequently, can be considered in the control, as in [3], [13], and [14]. However, for the purpose of continuous control design, the error dynamics are linearized around the reference trajectory, which indeed results in only local stability. Second are the methods that employ the input-output linearization technique by defining a linear relation from a defined auxiliary input vector to the outputs [15]. This strategy guarantees global stability as long as the internal dynamics are bounded. While the first strategy is more attractive in trajectory tracking due to the explicit consideration of control inputs, linearization around the reference trajectory causes sub-optimality. To this end, in this paper, the input-output linearization technique is employed to guarantee the global stability of the trajectory tracking, and an error minimization framework for the transformed model is designed to address the sub-optimality of the generated trajectory by using the flatness property of the kinematic model.

This paper proposes a novel globally exponentially stable trajectory optimization and tracking control framework to minimize the tracking error of position, velocity, and acceleration. The input-output linearization technique is employed to transform the car-like nonlinear tracking error model into a linear differential map from the defined auxiliary input to the output. Unlike the aforementioned studies, the proposed analytical solution drastically lowers the computational cost. The variational approach is used to solve the trajectory optimization problem, and the simulation results show that our trajectory planning technique is capable of following any arbitrary reference function with continuous first and second derivatives. In addition, the error cost is significantly lower than that of the numerical solution. A comparison of our proposed method with three of the most well-known tracking control techniques in the literature is shown in Table I.

Table I. Comparison of Methods

Method | Stability | Optimality | Model | Assumption | Description
Kanayama [3] | LAS* | No | Unicycle | Model linearization | State feedback + Lyapunov stability function
Luca [4] | GAS* | No | Unicycle and bicycle | Nonlinear model | Dynamic feedback linearization
CMPC [14] | LAS | No | Unicycle | Model linearization | Reference error dynamics tracking
Proposed method | GAS | Yes | Unicycle and bicycle | Nonlinear model | Input-output linearization + analytical solution
*LAS: locally asymptotically stable; GAS: globally asymptotically stable

    The remainder of this paper is organized as follows: Section II formulates the model and problem definition. The input-output linearization technique and the proposed solution method are explained in Sections III and IV, respectively. In Section V, the simulation results are discussed. Finally, the conclusions and future works are presented in Section VI.

The kinematics of a rear-wheel car-like robot model, as shown in Fig. 1, includes four states $\mathbf{x}(t) = [x(t)\ y(t)\ \psi(t)\ v(t)]^T$ and two control inputs $\mathbf{u}(t) = [\delta(t)\ a(t)]^T$, and can be expressed as follows [16]:

    Figure  1.  Rear-wheel car-like robot model.
\dot{x}(t) = v(t)\cos(\psi(t))
\dot{y}(t) = v(t)\sin(\psi(t))
\dot{\psi}(t) = \frac{v(t)}{\ell}\tan(\delta(t))
\dot{v}(t) = a(t), (1)

where $t$ refers to time, $(x, y)$ are the longitudinal and lateral positions of the rear-wheel axle midpoint, $v$ is the magnitude of the velocity, $\psi \in [-\pi, \pi]$ is the heading angle, $\delta \in (-\frac{\pi}{2}, \frac{\pi}{2})$ is the steering angle of the front wheels, $a$ refers to the acceleration, and $\ell$ is the wheelbase length. For the trajectory planning problem, we define the output of the system as

\mathbf{z}(t) = \begin{bmatrix} x(t) \\ y(t) \end{bmatrix}. (2)
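For concreteness, model (1) can be simulated directly; the sketch below uses plain explicit Euler integration and an assumed wheelbase of $\ell = 2.5$ m (the paper leaves $\ell$ symbolic):

```python
import math

ELL = 2.5  # wheelbase length ell (assumed value; the paper leaves it symbolic)

def kinematics(state, u):
    """Right-hand side of the rear-wheel model (1): state = (x, y, psi, v), u = (delta, a)."""
    x, y, psi, v = state
    delta, a = u
    return (v * math.cos(psi),            # x_dot
            v * math.sin(psi),            # y_dot
            (v / ELL) * math.tan(delta),  # psi_dot
            a)                            # v_dot

def euler_step(state, u, dt):
    """One explicit Euler integration step of the model."""
    return tuple(s + dt * d for s, d in zip(state, kinematics(state, u)))

# Sanity check: zero steering and zero acceleration give straight-line motion.
s = (0.0, 0.0, 0.0, 1.0)
for _ in range(100):
    s = euler_step(s, (0.0, 0.0), 0.01)
# after 1 s at v = 1 m/s: x ~= 1, y = 0, psi = 0
```

With zero steering and zero acceleration the model reduces to straight-line motion, which the check at the end exercises.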

    The trajectory planning is formulated in the form of an optimal control problem as follows

    Optimal Control Problem 1 (OPT1): Find an admissible and bounded control input u(t) that causes the system (1) to follow an admissible trajectory x(t) that minimizes the performance measure

J(\mathbf{u}(t)) = \frac{1}{2}\int_{t_0}^{t_f}\left[e_p^2(x(t), y(t), x_r(t), y_r(t)) + e_v^2(\dot{x}(t), \dot{y}(t), \dot{x}_r(t), \dot{y}_r(t)) + e_a^2(\ddot{x}(t), \ddot{y}(t), \ddot{x}_r(t), \ddot{y}_r(t))\right]dt, (3)

where $t_f$ is the fixed final time and $\mathbf{z}_r(t) = [x_r(t)\ y_r(t)]^T$ is the twice-differentiable geometric reference path, both given by the path planner.

    The error terms are defined as

e_p^2(\cdot) = \begin{bmatrix} x(t) - x_r(t) \\ y(t) - y_r(t) \end{bmatrix}^T\begin{bmatrix} q_1 & 0 \\ 0 & q_2 \end{bmatrix}\begin{bmatrix} x(t) - x_r(t) \\ y(t) - y_r(t) \end{bmatrix}
e_v^2(\cdot) = \begin{bmatrix} \dot{x}(t) - \dot{x}_r(t) \\ \dot{y}(t) - \dot{y}_r(t) \end{bmatrix}^T\begin{bmatrix} q_3 & 0 \\ 0 & q_4 \end{bmatrix}\begin{bmatrix} \dot{x}(t) - \dot{x}_r(t) \\ \dot{y}(t) - \dot{y}_r(t) \end{bmatrix}
e_a^2(\cdot) = \begin{bmatrix} \ddot{x}(t) - \ddot{x}_r(t) \\ \ddot{y}(t) - \ddot{y}_r(t) \end{bmatrix}^T\begin{bmatrix} r_1 & 0 \\ 0 & r_2 \end{bmatrix}\begin{bmatrix} \ddot{x}(t) - \ddot{x}_r(t) \\ \ddot{y}(t) - \ddot{y}_r(t) \end{bmatrix}, (4)

where $q_1, q_2, q_3, q_4, r_1$, and $r_2$ are positive weights. Taking the derivatives of $\dot{x}(t)$ and $\dot{y}(t)$ in (1) gives

\ddot{x}(t) = a(t)\cos(\psi(t)) - \frac{v^2(t)}{\ell}\tan(\delta(t))\sin(\psi(t))
\ddot{y}(t) = a(t)\sin(\psi(t)) + \frac{v^2(t)}{\ell}\tan(\delta(t))\cos(\psi(t)), (5)

then by substituting $\dot{x}(t)$, $\dot{y}(t)$, $\ddot{x}(t)$, and $\ddot{y}(t)$ in (4),

e_p^2(\cdot) = q_1[x(t) - x_r(t)]^2 + q_2[y(t) - y_r(t)]^2
e_v^2(\cdot) = q_3[v(t)\cos(\psi(t)) - \dot{x}_r(t)]^2 + q_4[v(t)\sin(\psi(t)) - \dot{y}_r(t)]^2
e_a^2(\cdot) = r_1\left[a(t)\cos(\psi(t)) - \frac{v^2(t)}{\ell}\tan(\delta(t))\sin(\psi(t)) - \ddot{x}_r(t)\right]^2 + r_2\left[a(t)\sin(\psi(t)) + \frac{v^2(t)}{\ell}\tan(\delta(t))\cos(\psi(t)) - \ddot{y}_r(t)\right]^2. (6)

The terms $e_v^2(\cdot)$ and $e_a^2(\cdot)$ are added because the control inputs are required to appear explicitly in the cost function, which allows the optimal control $\mathbf{u}(t)$ to be extracted without assuming any parametrized polynomial representation of the trajectory and control inputs. Therefore, $[\dot{x}(t)\ \dot{y}(t)]$ and $[\ddot{x}(t)\ \ddot{y}(t)]$ should follow $[\dot{x}_r(t)\ \dot{y}_r(t)]$ and $[\ddot{x}_r(t)\ \ddot{y}_r(t)]$, respectively, since the goal is for $[x(t)\ y(t)]$ to follow $[x_r(t)\ y_r(t)]$ and the reference trajectory is assumed to be twice differentiable. This framework keeps the NCLR error minimization problem tractable for higher-dimensional kinematic models: higher-order derivatives of the position error are simply added to the cost function (3).

The initial states $x(t_0)$, $y(t_0)$, $\psi(t_0)$, and $v(t_0)$ follow the initial configuration of the vehicle, and they are not necessarily aligned with the reference trajectory. However, since the error $e_v^2(\cdot)$ is expected to be zero at the final time $t_f$,

\dot{x}_r(t_f) = v(t_f)\cos(\psi(t_f)),\qquad \dot{y}_r(t_f) = v(t_f)\sin(\psi(t_f)). (7)

Accordingly, $\psi(t_f)$ and $v(t_f)$ can be found as

\psi(t_f) = \arctan\left(\frac{\dot{y}_r(t_f)}{\dot{x}_r(t_f)}\right)
v(t_f) = \begin{cases} \dfrac{\dot{x}_r(t_f)}{\cos(\psi(t_f))} = \dfrac{\dot{y}_r(t_f)}{\sin(\psi(t_f))}, & \psi(t_f) \notin \{0, \pm\frac{\pi}{2}, \pi\} \\ \dot{x}_r(t_f), & \psi(t_f) = 0 \\ -\dot{x}_r(t_f), & \psi(t_f) = \pi \\ \operatorname{sign}(\psi(t_f))\,\dot{y}_r(t_f), & \psi(t_f) = \pm\frac{\pi}{2}. \end{cases} (8)
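A compact implementation of (7) and (8) is sketched below; using `atan2` and `hypot` is an implementation choice that folds the piecewise cases of (8) into a single expression with $v(t_f) \ge 0$, not the paper's exact branch structure:

```python
import math

def terminal_heading_velocity(xr_dot, yr_dot):
    """psi(tf) and v(tf) from the reference velocity at tf, cf. (7)-(8).
    atan2 handles all quadrants at once; hypot is consistent with
    xr_dot = v cos(psi) and yr_dot = v sin(psi) when v >= 0."""
    psi = math.atan2(yr_dot, xr_dot)
    v = math.hypot(xr_dot, yr_dot)
    return psi, v
```

For example, a reference moving straight in the negative $y$ direction gives $\psi(t_f) = -\pi/2$.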

In order to simplify the problem formulation and avoid dealing with nonlinear differential equations, the input-output linearization technique introduced in [4] is used to transform the nonlinear kinematic model (1) into a linear model from input to output. Since the system inputs appear explicitly in the second derivative of the defined output $\mathbf{z}(t)$ in (5), it can be rewritten in the following matrix form

\begin{bmatrix} \ddot{x}(t) \\ \ddot{y}(t) \end{bmatrix} = \begin{bmatrix} \cos(\psi(t)) & -\frac{v^2(t)}{\ell}\sin(\psi(t)) \\ \sin(\psi(t)) & \frac{v^2(t)}{\ell}\cos(\psi(t)) \end{bmatrix}\begin{bmatrix} a(t) \\ \tan(\delta(t)) \end{bmatrix} = G(v, \psi)\begin{bmatrix} a(t) \\ \tan(\delta(t)) \end{bmatrix}. (9)

Defining an auxiliary input vector $\boldsymbol{\zeta}(t) = [\zeta_1(t)\ \zeta_2(t)]^T$ gives

\begin{bmatrix} a(t) \\ \tan(\delta(t)) \end{bmatrix} = G^{-1}(v, \psi)\boldsymbol{\zeta}(t), (10)

which results in two decoupled double integrators, linear from input to output:

\ddot{x}(t) = \zeta_1(t),\qquad \ddot{y}(t) = \zeta_2(t). (11)

The system has relative degree 2 in the region $\Omega = \{\mathbf{x}(t) \in \mathbb{R}^4 \mid v(t) \neq 0,\ \delta \in (-\frac{\pi}{2}, \frac{\pi}{2})\}$, which leaves two internal dynamics, $\psi(t)$ and $v(t)$.
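The maps (9) and (10) can be sketched as follows, again with an assumed wheelbase $\ell$; the round trip at the end only works on $\Omega$, where $v \neq 0$ keeps $G$ invertible:

```python
import math

ELL = 2.5  # assumed wheelbase length

def G_apply(v, psi, a, tan_delta):
    """Forward map of (9): (a, tan(delta)) -> (x_ddot, y_ddot)."""
    k = v * v / ELL
    return (math.cos(psi) * a - k * math.sin(psi) * tan_delta,
            math.sin(psi) * a + k * math.cos(psi) * tan_delta)

def G_inverse_apply(v, psi, zeta1, zeta2):
    """Inverse map of (10): auxiliary input zeta -> (a, tan(delta)); needs v != 0."""
    a = math.cos(psi) * zeta1 + math.sin(psi) * zeta2
    tan_delta = (ELL / (v * v)) * (-math.sin(psi) * zeta1 + math.cos(psi) * zeta2)
    return a, tan_delta

# Round trip on Omega: map the physical inputs forward, then invert.
z = G_apply(2.0, 0.3, 1.2, math.tan(0.1))
a, td = G_inverse_apply(2.0, 0.3, *z)
# a ~= 1.2 and td ~= tan(0.1) up to rounding
```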

If we form the errors $\ddot{x}(t) - \ddot{x}_r(t)$ and $\ddot{y}(t) - \ddot{y}_r(t)$ as

\ddot{x}(t) - \ddot{x}_r(t) = \zeta_1(t) - \ddot{x}_r(t) = \eta_1(t)
\ddot{y}(t) - \ddot{y}_r(t) = \zeta_2(t) - \ddot{y}_r(t) = \eta_2(t), (12)

and define the error vector $\mathbf{e}(t)$ as

\mathbf{e}(t) = \begin{bmatrix} e_1(t) \\ e_2(t) \\ e_3(t) \\ e_4(t) \end{bmatrix} = \begin{bmatrix} x(t) - x_r(t) \\ y(t) - y_r(t) \\ \dot{x}(t) - \dot{x}_r(t) \\ \dot{y}(t) - \dot{y}_r(t) \end{bmatrix}, (13)

the tracking error state equation $\dot{\mathbf{e}}(t) = A\mathbf{e}(t) + B\boldsymbol{\eta}(t)$ can be written as follows

\dot{\mathbf{e}}(t) = \begin{bmatrix} 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 \end{bmatrix}\mathbf{e}(t) + \begin{bmatrix} 0 & 0 \\ 0 & 0 \\ 1 & 0 \\ 0 & 1 \end{bmatrix}\boldsymbol{\eta}(t), (14)

which is completely controllable. OPT1 is then reformulated as

    Optimal Control Problem 2 (OPT2): Find an admissible control η(t) that causes the system (14) to follow an admissible trajectory e(t) that minimizes the performance measure

J(\boldsymbol{\eta}(t)) = \frac{1}{2}\int_{t_0}^{t_f}\left[\mathbf{e}^T(t)Q\mathbf{e}(t) + \boldsymbol{\eta}^T(t)R\boldsymbol{\eta}(t)\right]dt, (15)

    where

Q = \begin{bmatrix} q_1 & 0 & 0 & 0 \\ 0 & q_2 & 0 & 0 \\ 0 & 0 & q_3 & 0 \\ 0 & 0 & 0 & q_4 \end{bmatrix} \quad\text{and}\quad R = \begin{bmatrix} r_1 & 0 \\ 0 & r_2 \end{bmatrix} (16)

are positive definite weighting matrices.

    Clearly, the boundary conditions are changed to

\mathbf{e}(t_0) = \begin{bmatrix} x(t_0) - x_r(t_0) \\ y(t_0) - y_r(t_0) \\ v(t_0)\cos(\psi(t_0)) - \dot{x}_r(t_0) \\ v(t_0)\sin(\psi(t_0)) - \dot{y}_r(t_0) \end{bmatrix} \quad\text{and}\quad \mathbf{e}(t_f) = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}. (17)

    Remark 1: It should be noted that OPT2 is not a linear quadratic regulator (LQR) problem since, unlike the LQR problem formulation, e(tf) is assumed to be fixed.
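The controllability of (14) can be checked numerically via the rank of the controllability matrix $[B\ AB\ A^2B\ A^3B]$; a small illustrative sketch:

```python
import numpy as np

# State and input matrices of the tracking-error system (14)
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)
B = np.array([[0, 0],
              [0, 0],
              [1, 0],
              [0, 1]], dtype=float)

# Controllability matrix [B, AB, A^2 B, A^3 B]
C = np.hstack([np.linalg.matrix_power(A, k) @ B for k in range(4)])
rank = np.linalg.matrix_rank(C)  # full rank 4 => completely controllable
```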

Since the problem minimizes the cost functional (15) subject to the tracking error state equation (14), the variational method is employed to solve the optimization. The solution proposed in this section is not only optimal but also guarantees the exponential stability of the trajectory and control inputs.

    Lemma 1: An optimal control η(t) for t[t0,tf] which gives the optimal state e(t) that minimizes the cost function (15) is the solution of the following two-point boundary value (TPBV) state linear differential equations

\dot{e}_1(t) = e_3(t),\quad \dot{e}_2(t) = e_4(t),\quad \dot{e}_3(t) = \eta_1(t),\quad \dot{e}_4(t) = \eta_2(t), (18)

    and co-state linear differential equations

\dot{p}_1(t) = -q_1e_1(t) (19)
\dot{p}_2(t) = -q_2e_2(t) (20)
\dot{p}_3(t) = -q_3e_3(t) - p_1(t) (21)
\dot{p}_4(t) = -q_4e_4(t) - p_2(t), (22)

    with boundary conditions in (17). The optimal control laws are also given by

\eta_1(t) = -\frac{1}{r_1}p_3(t) = \dot{e}_3(t),\qquad \eta_2(t) = -\frac{1}{r_2}p_4(t) = \dot{e}_4(t). (23)

    Proof: To start solving the minimization (15), we form the Hamiltonian as follows

H(\mathbf{e}(t), \boldsymbol{\eta}(t), \mathbf{p}(t)) = \frac{1}{2}\mathbf{e}^T(t)Q\mathbf{e}(t) + \frac{1}{2}\boldsymbol{\eta}^T(t)R\boldsymbol{\eta}(t) + \mathbf{p}^T(t)\left(A\mathbf{e}(t) + B\boldsymbol{\eta}(t)\right), (24)

where the vector $\mathbf{p}(t) = [p_1(t)\ p_2(t)\ p_3(t)\ p_4(t)]^T$ contains the Lagrange multipliers.

    The necessary conditions of optimality are [17]

\dot{\mathbf{e}}(t) = \frac{\partial H}{\partial \mathbf{p}}(\mathbf{e}(t), \boldsymbol{\eta}(t), \mathbf{p}(t)) (25)
\dot{\mathbf{p}}(t) = -\frac{\partial H}{\partial \mathbf{e}}(\mathbf{e}(t), \boldsymbol{\eta}(t), \mathbf{p}(t)) (26)
\mathbf{0} = \frac{\partial H}{\partial \boldsymbol{\eta}}(\mathbf{e}(t), \boldsymbol{\eta}(t), \mathbf{p}(t)). (27)

The state and co-state equations are derived from the substitution of $H(\cdot)$ into (25) and (26), respectively. Also, (27) gives

\frac{\partial H(\cdot)}{\partial \eta_1} = r_1\eta_1(t) + p_3(t) = 0,\qquad \frac{\partial H(\cdot)}{\partial \eta_2} = r_2\eta_2(t) + p_4(t) = 0,

    from which the optimal control laws in (23) are obtained. ■

Taking the second and third derivatives of $p_1(t)$ and $p_2(t)$ from (19) and (20) gives

\ddot{p}_i(t) = -q_i\dot{e}_i(t) = -q_ie_j(t),\qquad \dddot{p}_i(t) = -q_i\dot{e}_j(t) = \frac{q_i}{r_i}p_j(t),\qquad (i, j) = (1, 3), (2, 4), (28)

where $\dot{e}_j(t)$ is given by substituting (23) into (18). Similarly, substituting $\dot{p}_j(t)$ into the fourth derivative of $p_i(t)$ gives

p_i^{(4)}(t) = \frac{q_i}{r_i}\dot{p}_j(t) = -\frac{q_iq_j}{r_i}e_j(t) - \frac{q_i}{r_i}p_i(t). (29)

From (28), we have $e_j(t) = -\frac{1}{q_i}\ddot{p}_i(t)$. Thus, equation (29) can be rewritten as

r_ip_i^{(4)}(t) - q_j\ddot{p}_i(t) + q_ip_i(t) = 0, (30)

which is a homogeneous linear ordinary differential equation. To solve the linear differential equation (30), the Laplace transform of $p_i(t)$ is first obtained as

P_i(s) = \frac{p_{i,t_0}s^3 + \dot{p}_{i,t_0}s^2 + \left(\ddot{p}_{i,t_0} - \frac{q_j}{r_i}p_{i,t_0}\right)s + \dddot{p}_{i,t_0} - \frac{q_j}{r_i}\dot{p}_{i,t_0}}{\left(s^2 + 2m_is + \sqrt{q_i/r_i}\right)\left(s^2 - 2m_is + \sqrt{q_i/r_i}\right)}, (31)

where $p_{i,t_0} = p_i(t)|_{t=t_0}$ and $m_i$ is the constant given later in (47). From (28) and (18), $\eta_i(t)$ satisfies

\dddot{p}_i(t) = -q_i\dot{e}_j(t) = -q_i\eta_i(t). (32)

Transforming (32) into the Laplace domain gives

s^3P_i(s) - s^2p_{i,t_0} - s\dot{p}_{i,t_0} - \ddot{p}_{i,t_0} = -q_iH_i(s), (33)

where $H_i(s)$ denotes the Laplace transform of $\eta_i(t)$.

Substitution of (31) in (33) gives

H_i(s) = -\frac{1}{q_i}\cdot\frac{\dddot{p}_{i,t_0}s^3 + \frac{1}{r_i}\left(q_j\ddot{p}_{i,t_0} - q_ip_{i,t_0}\right)s^2 - \frac{q_i}{r_i}\dot{p}_{i,t_0}s - \frac{q_i}{r_i}\ddot{p}_{i,t_0}}{\left(s^2 + 2m_is + \sqrt{q_i/r_i}\right)\left(s^2 - 2m_is + \sqrt{q_i/r_i}\right)}. (34)

Taking the inverse Laplace transform of (31) and substituting the resulting $p_i(t)$ into the state and co-state equations gives the analytical optimal solution of the OPT2 problem with eight unknown parameters $p_{i,t_0}$, $\dot{p}_{i,t_0}$, $\ddot{p}_{i,t_0}$, and $\dddot{p}_{i,t_0}$ (for $i = 1, 2$), which can be chosen to satisfy the eight initial and final boundary conditions in (17). However, (34) shows that the resulting $\eta_i(t)$ is unbounded due to the right-half-plane roots of the polynomial $(s^2 - 2m_is + \sqrt{q_i/r_i})$. Although this analytical solution satisfies the OPT2 assumptions, it fails to guarantee the stability of the system.
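The instability is structural: the characteristic polynomial of (30), $r_is^4 - q_js^2 + q_i = 0$, is even in $s$, so its roots come in $\pm$ pairs and two of them always lie in the right half-plane. A quick numerical check with arbitrarily chosen weights:

```python
import numpy as np

q_i, q_j, r_i = 1.0, 1.0, 1.0  # example weights (arbitrary choice)
# r_i s^4 - q_j s^2 + q_i = 0, the characteristic polynomial of (30)
roots = np.roots([r_i, 0.0, -q_j, 0.0, q_i])
unstable = [s for s in roots if s.real > 1e-9]  # right-half-plane roots
# exactly two of the four roots are unstable
```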

We approach the instability issue by relaxing $\mathbf{e}(t_f)$ to be free while setting four of the initial conditions in (31) to cancel the unstable poles. This new assumption changes the original problem into the infinite-time LQR problem formulated as follows.

    Optimal Control Problem 3 (OPT3): Find an admissible control η(t) that causes the origin e=0 of the closed loop system (14) to be globally exponentially stable with an admissible trajectory e(t) that minimizes the performance measure

J(\boldsymbol{\eta}(t)) = \frac{1}{2}\int_{t_0}^{t_f}\left[\mathbf{e}^T(t)Q\mathbf{e}(t) + \boldsymbol{\eta}^T(t)R\boldsymbol{\eta}(t)\right]dt, (35)

    where

Q = \begin{bmatrix} q_1 & 0 & 0 & 0 \\ 0 & q_2 & 0 & 0 \\ 0 & 0 & q_3 & 0 \\ 0 & 0 & 0 & q_4 \end{bmatrix} \quad\text{and}\quad R = \begin{bmatrix} r_1 & 0 \\ 0 & r_2 \end{bmatrix} (36)

are positive definite diagonal weighting matrices satisfying the boundary conditions

\mathbf{e}(t_0) = \begin{bmatrix} x(t_0) - x_r(t_0) \\ y(t_0) - y_r(t_0) \\ v(t_0)\cos(\psi(t_0)) - \dot{x}_r(t_0) \\ v(t_0)\sin(\psi(t_0)) - \dot{y}_r(t_0) \end{bmatrix} \quad\text{and}\quad \mathbf{e}(t_f) = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}. (37)
The resulting closed-form solutions for the three damping cases are as follows.

Underdamped ($f_i > 0$):

e_i(t) = \frac{n_i}{q_i\sqrt{f_i}}e^{-m_i(t-t_0)}\left[a_in_i\sin\left(\sqrt{f_i}(t-t_0)\right) - b_i\cos\left(\sqrt{f_i}(t-t_0) + \alpha_i\right)\right] (38)
e_j(t) = \frac{n_i^2}{q_i\sqrt{f_i}}e^{-m_i(t-t_0)}\left[a_in_i\cos\left(\sqrt{f_i}(t-t_0) + \alpha_i\right) + b_i\sin\left(\sqrt{f_i}(t-t_0) + 2\alpha_i\right)\right] (39)
\eta_i(t) = -\frac{n_i^3}{q_i\sqrt{f_i}}e^{-m_i(t-t_0)}\left[a_in_i\sin\left(\sqrt{f_i}(t-t_0) + 2\alpha_i\right) - b_i\cos\left(\sqrt{f_i}(t-t_0) + 3\alpha_i\right)\right]. (40)

Critically damped ($f_i = 0$):

e_i(t) = -\frac{1}{q_i}\left[b_ie^{-m_i(t-t_0)} - m_i(m_ia_i + b_i)(t-t_0)e^{-m_i(t-t_0)}\right] (41)
e_j(t) = \frac{m_i}{q_i}\left[(m_ia_i + 2b_i)e^{-m_i(t-t_0)} - m_i(m_ia_i + b_i)(t-t_0)e^{-m_i(t-t_0)}\right] (42)
\eta_i(t) = -\frac{m_i^2}{q_i}\left[(2m_ia_i + 3b_i)e^{-m_i(t-t_0)} - m_i(m_ia_i + b_i)(t-t_0)e^{-m_i(t-t_0)}\right]. (43)

Overdamped ($f_i < 0$):

e_i(t) = \frac{a_ic_{1i}c_{2i}}{2q_i\sqrt{|f_i|}}\left(e^{-c_{1i}(t-t_0)} - e^{-c_{2i}(t-t_0)}\right) + \frac{b_i}{2q_i\sqrt{|f_i|}}\left(c_{1i}e^{-c_{1i}(t-t_0)} - c_{2i}e^{-c_{2i}(t-t_0)}\right) (44)
e_j(t) = \frac{a_ic_{1i}c_{2i}}{2q_i\sqrt{|f_i|}}\left(c_{2i}e^{-c_{2i}(t-t_0)} - c_{1i}e^{-c_{1i}(t-t_0)}\right) + \frac{b_i}{2q_i\sqrt{|f_i|}}\left(c_{2i}^2e^{-c_{2i}(t-t_0)} - c_{1i}^2e^{-c_{1i}(t-t_0)}\right) (45)
\eta_i(t) = \frac{a_ic_{1i}c_{2i}}{2q_i\sqrt{|f_i|}}\left(c_{1i}^2e^{-c_{1i}(t-t_0)} - c_{2i}^2e^{-c_{2i}(t-t_0)}\right) + \frac{b_i}{2q_i\sqrt{|f_i|}}\left(c_{1i}^3e^{-c_{1i}(t-t_0)} - c_{2i}^3e^{-c_{2i}(t-t_0)}\right). (46)

All of the above converge to zero as $t_f$ approaches infinity.

Theorem 1: The globally exponentially stable analytical solutions of OPT3 are given by (38)–(46), where

f_i = \frac{1}{4}\left(2\sqrt{\frac{q_i}{r_i}} - \frac{q_j}{r_i}\right),\qquad m_i = \frac{1}{2}\sqrt{2\sqrt{\frac{q_i}{r_i}} + \frac{q_j}{r_i}}
n_i = \sqrt{f_i + m_i^2} = \sqrt[4]{\frac{q_i}{r_i}},\qquad \alpha_i = \arctan\left(\frac{m_i}{\sqrt{f_i}}\right)
c_{1i} = m_i - \sqrt{|f_i|} > 0,\qquad c_{2i} = m_i + \sqrt{|f_i|} > 0, (47)

and $a_i$ and $b_i$ are given in Table II, chosen to satisfy the initial boundary condition $\mathbf{e}(t_0)$ in (17).

Table II. $a_i$ and $b_i$ for Three Different Cases

Damping case | $a_i$ and $b_i$ coefficients
Underdamped ($f_i > 0$) | $a_i = \dfrac{q_i\sqrt{f_i}}{n_i^2\cos(\alpha_i)}\left[\dfrac{e_{j,t_0}}{n_i} + 2e_{i,t_0}\sin(\alpha_i)\right]$, $b_i = -\dfrac{q_i\sqrt{f_i}\,e_{i,t_0}}{n_i\cos(\alpha_i)}$
Critically damped ($f_i = 0$) | $a_i = \dfrac{1}{m_i^2}\left[q_ie_{j,t_0} + 2m_iq_ie_{i,t_0}\right]$, $b_i = -q_ie_{i,t_0}$
Overdamped ($f_i < 0$) | $a_i = \dfrac{1}{m_i^2 - |f_i|}\left[q_ie_{j,t_0} + 2m_iq_ie_{i,t_0}\right]$, $b_i = -q_ie_{i,t_0}$
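The constants in (47) and the Table II coefficients follow directly from the weights; the sketch below also classifies the damping case (the demo weight values are arbitrary):

```python
import math

def constants(q_i, q_j, r_i):
    """f_i, m_i, n_i of (47); n_i^2 = f_i + m_i^2 by construction."""
    f = 0.25 * (2.0 * math.sqrt(q_i / r_i) - q_j / r_i)
    m = 0.5 * math.sqrt(2.0 * math.sqrt(q_i / r_i) + q_j / r_i)
    n = (q_i / r_i) ** 0.25
    return f, m, n

def coefficients(q_i, q_j, r_i, e_i0, e_j0):
    """a_i, b_i of Table II for the initial errors (e_i0, e_j0)."""
    f, m, n = constants(q_i, q_j, r_i)
    if f > 0:    # underdamped
        alpha = math.atan2(m, math.sqrt(f))
        a = q_i * math.sqrt(f) / (n**2 * math.cos(alpha)) * (e_j0 / n + 2.0 * e_i0 * math.sin(alpha))
        b = -q_i * math.sqrt(f) * e_i0 / (n * math.cos(alpha))
    elif f == 0:  # critically damped
        a = (q_i * e_j0 + 2.0 * m * q_i * e_i0) / m**2
        b = -q_i * e_i0
    else:         # overdamped
        a = (q_i * e_j0 + 2.0 * m * q_i * e_i0) / (m**2 - abs(f))
        b = -q_i * e_i0
    return a, b

f, m, n = constants(1.0, 1.0, 1.0)  # f = 0.25 > 0: underdamped case
```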

Proof: As mentioned in Section IV-A, in order to guarantee the exponential stability of the origin of the closed-loop system (14), the initial conditions in (31) are tuned so as to cancel the unstable poles of the polynomial $(s^2 - 2m_is + \sqrt{q_i/r_i})$. To do so, the initial conditions of (31) are assigned as

p_{i,t_0} = a_i,\qquad \dot{p}_{i,t_0} = b_i
\ddot{p}_{i,t_0} = -2m_i(b_i + 2m_ia_i) + \frac{q_j}{r_i}a_i + a_i\sqrt{\frac{q_i}{r_i}}
\dddot{p}_{i,t_0} = (b_i + 2m_ia_i)\sqrt{\frac{q_i}{r_i}} + \frac{q_j}{r_i}b_i, (48)

    which gives

P_i(s) = \frac{a_is + b_i + 2m_ia_i}{(s + m_i)^2 + f_i}, (49)

where $a_i$ and $b_i$ are to be determined by the initial boundary conditions. Since $P_i(s)$ is a strictly proper rational function of $s$, its inverse Laplace transform $p_i(t)$ is well defined [18]. The location of the poles in (49) leads to three solution cases, underdamped, critically damped, and overdamped:

1) Underdamped ($f_i > 0$): If $P_i(s)$ has complex poles, the inverse Laplace transform of (49) gives $p_i(t)$ for $t \in [t_0, t_f]$ as

p_i(t) = \frac{1}{\sqrt{f_i}}e^{-m_i(t-t_0)}\left[a_in_i\cos\left(\sqrt{f_i}(t-t_0) - \alpha_i\right) + b_i\sin\left(\sqrt{f_i}(t-t_0)\right)\right]. (50)

By substituting the first derivative of $p_i(t)$ into (19) and (20),

q_ie_i(t) = \frac{n_i}{\sqrt{f_i}}e^{-m_i(t-t_0)}\left[a_in_i\sin\left(\sqrt{f_i}(t-t_0)\right) - b_i\cos\left(\sqrt{f_i}(t-t_0) + \alpha_i\right)\right], (51)

    which gives ei(t) as (38). Consequently, ej(t) is found by taking the first derivative of ei(t) as in (39).

2) Critically Damped ($f_i = 0$): This case occurs when the weights satisfy $q_j^2 - 4r_iq_i = 0$. In this case, the inverse Laplace transform of (49) gives $p_i(t)$ for $t \in [t_0, t_f]$ as

p_i(t) = a_ie^{-m_i(t-t_0)} + (m_ia_i + b_i)(t - t_0)e^{-m_i(t-t_0)}. (52)

Substitution of its first and second derivatives into (19), (20), and (28) gives the closed-form solutions for $e_i(t)$ and $e_j(t)$ as in (41) and (42).

3) Overdamped ($f_i < 0$): Equation (49) can be rewritten as

P_i(s) = \frac{a_is + b_i + 2m_ia_i}{(s + m_i)^2 - |f_i|}, (53)

where $|f_i|$ is the absolute value of $f_i$. Taking the inverse Laplace transform of $P_i(s)$,

p_i(t) = \frac{a_i}{2\sqrt{|f_i|}}\left(c_{2i}e^{-c_{1i}(t-t_0)} - c_{1i}e^{-c_{2i}(t-t_0)}\right) + \frac{b_i}{2\sqrt{|f_i|}}\left(e^{-c_{1i}(t-t_0)} - e^{-c_{2i}(t-t_0)}\right). (54)

As in the former cases, $e_i(t)$ and $e_j(t)$ are found as in (44) and (45). ■

Finding the errors gives us the optimal states as

x(t) = x_r(t) + e_1(t)
y(t) = y_r(t) + e_2(t)
\psi(t) = \arctan\left(\frac{\dot{y}(t)}{\dot{x}(t)}\right)
v(t) = \begin{cases} \dfrac{\dot{x}(t)}{\cos(\psi(t))} = \dfrac{\dot{y}(t)}{\sin(\psi(t))}, & \psi(t) \notin \{0, \pm\frac{\pi}{2}, \pi\} \\ \dot{x}(t), & \psi(t) = 0 \\ -\dot{x}(t), & \psi(t) = \pi \\ \operatorname{sign}(\psi(t))\,\dot{y}(t), & \psi(t) = \pm\frac{\pi}{2}, \end{cases} (55)

    where

\dot{x}(t) = \dot{x}_r(t) + e_3(t),\qquad \dot{y}(t) = \dot{y}_r(t) + e_4(t). (56)

    The optimal control inputs are found from (10) as

\begin{bmatrix} a(t) \\ \tan(\delta(t)) \end{bmatrix} = \begin{bmatrix} \cos(\psi(t)) & \sin(\psi(t)) \\ -\frac{\ell}{v^2(t)}\sin(\psi(t)) & \frac{\ell}{v^2(t)}\cos(\psi(t)) \end{bmatrix}\boldsymbol{\zeta}(t), (57)

where $\boldsymbol{\zeta}(t) = \boldsymbol{\eta}(t) + [\ddot{x}_r(t)\ \ \ddot{y}_r(t)]^T$.

Remark 2: Since $\dot{x}(t)$ and $\dot{y}(t)$ are stable and $\arctan(\cdot)$ is a continuous function of its argument on $(-\frac{\pi}{2}, \frac{\pi}{2})$, it can be concluded from (55) that the internal dynamics $\psi(t)$ and $v(t)$ are bounded.

The parameters $a_i$ and $b_i$ should be set to satisfy the boundary conditions. Substituting the initial conditions (37) into $e_i(t_0)$ and $e_j(t_0)$ gives $a_i$ and $b_i$ for each case, as listed in Table II. As mentioned in Section IV-A, the final-time boundary conditions may not be fulfilled exactly, since two unstable poles are eliminated in (31) to guarantee stability, and the unknown parameters $a_i$ and $b_i$ are used to satisfy the initial boundary conditions. However, the solution is a good approximation of $\mathbf{e}(t_f) \approx \mathbf{0}$ thanks to the decaying terms $e^{-c_{1i}(t_f-t_0)}$ and $e^{-c_{2i}(t_f-t_0)}$ in the overdamped case and $e^{-m_i(t_f-t_0)}$ in the underdamped and critically damped cases. Thus, there exists a trade-off between guaranteeing stability, as proved for the OPT3 problem, and meeting the exact final-time conditions, as elaborated in the solution of OPT2.
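As a concrete check of the construction, the sketch below implements the underdamped branch (38)-(39) with the Table II coefficients (the demo weights and initial errors are arbitrary); it reproduces the initial errors at $t_0$ and decays at the rate $m_i$:

```python
import math

def underdamped_errors(t, t0, e_i0, e_j0, q_i, q_j, r_i):
    """Closed-form (38)-(39) with Table II coefficients, valid when f_i > 0."""
    f = 0.25 * (2.0 * math.sqrt(q_i / r_i) - q_j / r_i)
    assert f > 0, "underdamped branch requires f_i > 0"
    m = 0.5 * math.sqrt(2.0 * math.sqrt(q_i / r_i) + q_j / r_i)
    n = (q_i / r_i) ** 0.25
    sf = math.sqrt(f)
    alpha = math.atan2(m, sf)
    # Table II, underdamped row
    a = q_i * sf / (n**2 * math.cos(alpha)) * (e_j0 / n + 2.0 * e_i0 * math.sin(alpha))
    b = -q_i * sf * e_i0 / (n * math.cos(alpha))
    tau = t - t0
    env = math.exp(-m * tau)
    e_i = n / (q_i * sf) * env * (a * n * math.sin(sf * tau) - b * math.cos(sf * tau + alpha))
    e_j = n**2 / (q_i * sf) * env * (a * n * math.cos(sf * tau + alpha) + b * math.sin(sf * tau + 2.0 * alpha))
    return e_i, e_j

e_i, e_j = underdamped_errors(0.0, 0.0, 0.5, -0.2, 1.0, 1.0, 1.0)
# e_i == 0.5 and e_j == -0.2 up to rounding; both decay like exp(-m t)
```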

Remark 3: From (47) it can be concluded that, for a fixed value of $r_i$, if the values of $q_i$ and $q_j$ are increased by factors of $\beta_i^2$ and $\beta_i$ (for $\beta_i > 1$), respectively, then the value of $m_i$ in the underdamped and critically damped cases, and the values of $c_{1i}$ and $c_{2i}$ in the overdamped case, increase as

m_i^{\rm new} = \sqrt{\beta_i}\,m_i^{\rm old},\qquad c_{1i,2i}^{\rm new} = \sqrt{\beta_i}\,c_{1i,2i}^{\rm old}, (58)

which determines how fast the tracking errors converge to zero.
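This scaling is easy to verify from (47): with $r_i$ fixed, replacing $(q_i, q_j)$ by $(\beta_i^2q_i, \beta_iq_j)$ scales both terms under the square root in $m_i$ by $\beta_i$. A quick check with an arbitrary $\beta$:

```python
import math

def m_of(q_i, q_j, r_i):
    """m_i from (47)."""
    return 0.5 * math.sqrt(2.0 * math.sqrt(q_i / r_i) + q_j / r_i)

beta = 4.0
m_old = m_of(1.0, 1.0, 1.0)
m_new = m_of(beta**2 * 1.0, beta * 1.0, 1.0)  # q_i scaled by beta^2, q_j by beta
# m_new / m_old == sqrt(beta): larger state weights speed up the error decay
```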

In this section, the effect of the weight choice on the controller performance and tracking error is evaluated. At the end, the proposed closed-form solution is compared with other well-known methods in the literature. The simulations are developed in MATLAB. The time-parametrized reference trajectory, for $t \in [0, 30]$, is an eight-shaped Lissajous curve expressed as

x_r(t) = 1.1 + 0.7\sin\left(\frac{2\pi t}{30}\right),\qquad y_r(t) = 0.9 + 0.7\sin\left(\frac{4\pi t}{30}\right). (59)
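The reference (59), together with the first two derivatives required by the cost (3), can be generated as in this sketch:

```python
import math

T = 30.0  # duration of the reference, as in (59)

def reference(t):
    """Eight-shaped Lissajous reference (59) with first and second derivatives."""
    w = 2.0 * math.pi / T
    xr = 1.1 + 0.7 * math.sin(w * t)
    yr = 0.9 + 0.7 * math.sin(2.0 * w * t)
    xr_d, yr_d = 0.7 * w * math.cos(w * t), 1.4 * w * math.cos(2.0 * w * t)
    xr_dd, yr_dd = -0.7 * w * w * math.sin(w * t), -2.8 * w * w * math.sin(2.0 * w * t)
    return (xr, yr), (xr_d, yr_d), (xr_dd, yr_dd)

p0, v0, a0 = reference(0.0)
# the curve starts at (1.1, 0.9) with zero acceleration and closes after T = 30 s
```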

The system starts from the initial configuration $x(0) = 1.1$, $y(0) = 0.8$, $v(0) = 1$, and $\psi(0) = 1.3$, and the errors are expected to be zero at the final time.

The tracking errors for three different cases are shown in Fig. 2. The errors can be kept within a desired bound by trading off among the $q_i$, $q_j$, and $r_i$ weights. The impact of the weight choice on the final-time error is also visible: decreasing the control effort increases the tracking time and, consequently, may unfavorably leave the final-time error away from zero. However, the trajectory error is globally exponentially stable and the control inputs are guaranteed to be bounded over $t \in [0, 30]$.

    Figure  2.  Tracking errors compared for different weighting parameters (UD = Underdamped, OD = Overdamped, and CD = Critically damped).

The vehicle trajectory, velocity, heading angle, and control variables for the three cases are illustrated in Figs. 3–5, respectively. As shown in Table III, the measured error $\mathbf{e}(t_f)$ is almost zero at $t_f$ = 30 s, 40 s, and 50 s for the three simulated cases, and converges to zero for large enough $t_f$ due to the exponential stability of the solution. The boundedness of the control effort and of the internal dynamics $v(t)$ and $\psi(t)$ is also evident in these figures.

    Figure  3.  Weighting parameters impact on final time errors for the eight-shaped trajectory tracking.
    Figure  4.  Eight-shaped trajectory tracking: (a) heading angle: ψ(t) and ψr(t); (b) velocity: v(t) and vr(t).
    Figure  5.  Eight-shaped trajectory tracking: (a) steering angle: δ(t); (b) acceleration: a(t).
Table III. Error Measurements at Three Time Instances ($t_f$ = 30 s, 40 s, and 50 s)

Error | Case | tf = 30 s | tf = 40 s | tf = 50 s
e1(t) | UD | –1.35E–20 | 2.69E–25 | 3.93E–31
e1(t) | OD | 3.09E–06 | 1.29E–07 | 5.36E–09
e1(t) | CD | 5.41E–06 | 8.23E–08 | 1.18E–09
e2(t) | UD | –7.47E–20 | 1.49E–24 | 2.18E–30
e2(t) | OD | 1.71E–05 | 7.13E–07 | 2.97E–08
e2(t) | CD | 3.00E–05 | 4.57E–07 | 6.52E–09
e3(t) | UD | –2.64E–19 | –5.76E–25 | –3.23E–31
e3(t) | OD | –9.82E–07 | –4.09E–08 | –1.70E–09
e3(t) | CD | –2.24E–06 | –3.48E–08 | –5.02E–10
e4(t) | UD | –1.47E–18 | –3.19E–24 | –1.79E–30
e4(t) | OD | –5.44E–06 | –2.27E–07 | –9.45E–09
e4(t) | CD | –1.24E–05 | –1.93E–07 | –2.78E–09
η1(t) | UD | 7.58E–19 | 7.06E–25 | –3.69E–31
η1(t) | OD | 3.12E–07 | 1.30E–08 | 5.41E–10
η1(t) | CD | 9.20E–07 | 1.46E–08 | 2.14E–10
η2(t) | UD | 4.20E–18 | 3.92E–24 | –2.05E–30
η2(t) | OD | 1.73E–06 | 7.21E–08 | 3.00E–09
η2(t) | CD | 5.10E–06 | 8.11E–08 | 1.19E–09

Finally, the proposed analytical solution is compared with our previous numerical solution (based on the MATLAB two-point boundary value problem solver bvp4c) in [19]. The integrand of the cost function (3), i.e., $e_p^2(\cdot) + e_v^2(\cdot) + e_a^2(\cdot)$, is plotted over time in Fig. 6. The weights are set to $q_i = 1$, $q_j = 1$, and $r_i = 1$ for both cases. The cost function value $J(\cdot)$ for the analytical solution is 9.87, nearly a five-fold reduction from the 47.45 of the numerical case.

Figure 6. Comparison of analytical and numerical methods based on their measured cost function integrand, i.e., $e_p^2(\cdot) + e_v^2(\cdot) + e_a^2(\cdot)$, over time.

    In this work, a novel globally exponentially stable trajectory optimization and tracking control formulation is proposed to generate the optimal trajectory and tracking control inputs for the car-like robot kinematic model.

    The velocity and acceleration errors are minimized in addition to the position error, which allows us to extract the optimal control u(t) without considering any parametrized geometric function representations for control inputs since they explicitly appear in the cost function. The input-output linearization technique is employed to linearize the system from the defined auxiliary input to output. The closed-form optimal trajectories and control inputs are obtained by the calculus of variation technique, and boundedness of the solution and internal dynamics are also proved.

The proposed framework can be easily extended to tackle the trajectory optimization problem of a higher-dimensional kinematic model with more states by adding the higher-order derivatives of $x(t)$ and $y(t)$ to the cost function and assuming that the reference trajectory is sufficiently smooth.

Our proposed optimal planning framework is an open-loop control strategy which yields continuous control signals (a sequence of inputs) as a function of the initial state of the vehicle. Thus, if the states evolve exactly according to the model, then applying the calculated optimal input signal reproduces the predicted state trajectory exactly. However, if the model is inaccurate or the system is subject to disturbances not included in the model, the result of applying the open-loop control signal may differ from what is expected. Thus, in our future work, we aim to apply this open-loop strategy repeatedly in the discrete-time domain to obtain the same feedback effect as recursive techniques, known in the literature as the receding horizon strategy.

[1] R. W. Brockett, "Asymptotic stability and feedback stabilization," Differential Geometric Control Theory, vol. 27, no. 1, pp. 181–191, 1983.
[2] B. Paden, M. Čáp, S. Z. Yong, D. Yershov, and E. Frazzoli, "A survey of motion planning and control techniques for self-driving urban vehicles," IEEE Trans. Intelligent Vehicles, vol. 1, no. 1, pp. 33–55, 2016. doi: 10.1109/TIV.2016.2578706
[3] Y. Kanayama, Y. Kimura, F. Miyazaki, and T. Noguchi, "A stable tracking control method for an autonomous mobile robot," in Proc. IEEE Int. Conf. Robotics and Automation, 1990.
[4] A. De Luca, G. Oriolo, and C. Samson, "Feedback control of a nonholonomic car-like robot," Robot Motion Planning and Control, pp. 171–253, 1998.
[5] B. d'Andréa-Novel, G. Campion, and G. Bastin, "Control of nonholonomic wheeled mobile robots by state feedback linearization," The Int. J. Robotics Research, vol. 14, no. 6, pp. 543–559, 1995. doi: 10.1177/027836499501400602
[6] P. Gáspár, Z. Szabó, and J. Bokor, "LPV design of fault-tolerant control for road vehicles," Int. J. Applied Mathematics and Computer Science, vol. 22, no. 1, pp. 173–182, 2012. doi: 10.2478/v10006-012-0013-x
[7] J. Fu, F. Tian, T. Chai, Y. Jing, Z. Li, and C.-Y. Su, "Motion tracking control design for a class of nonholonomic mobile robot systems," IEEE Trans. Systems, Man, and Cybernetics: Systems, no. 99, pp. 1–7.
[8] R. Postoyan, M. C. Bragagnolo, E. Galbrun, J. Daafouz, D. Nešić, and E. B. Castelan, "Event-triggered tracking control of unicycle mobile robots," Automatica, vol. 52, pp. 302–308, 2015. doi: 10.1016/j.automatica.2014.12.009
[9] D. Gu and H. Hu, "Receding horizon tracking control of wheeled mobile robots," IEEE Trans. Control Systems Technology, vol. 14, no. 4, pp. 743–749, 2006. doi: 10.1109/TCST.2006.872512
[10] Y. Zhang, S. Li, and X. Liu, "Neural network-based model-free adaptive near-optimal tracking control for a class of nonlinear systems," IEEE Trans. Neural Networks and Learning Systems, 2018.
[11] G. V. Raffo, G. K. Gomes, J. E. Normey-Rico, C. R. Kelber, and L. B. Becker, "A predictive controller for autonomous vehicle path tracking," IEEE Trans. Intelligent Transportation Systems, vol. 10, no. 1, pp. 92–102, 2009. doi: 10.1109/TITS.2008.2011697
[12] D. Q. Mayne, J. B. Rawlings, C. V. Rao, and P. O. Scokaert, "Constrained model predictive control: stability and optimality," Automatica, vol. 36, no. 6, pp. 789–814, 2000. doi: 10.1016/S0005-1098(99)00214-9
[13] G. Klančar and I. Škrjanc, "Tracking-error model-based predictive control for mobile robots in real time," Robotics and Autonomous Systems, vol. 55, no. 6, pp. 460–469, 2007. doi: 10.1016/j.robot.2007.01.002
[14] I. Škrjanc and G. Klančar, "A comparison of continuous and discrete tracking-error model-based predictive control for mobile robots," Robotics and Autonomous Systems, vol. 87, pp. 177–187, 2017. doi: 10.1016/j.robot.2016.09.016
[15] H. K. Khalil and J. Grizzle, Nonlinear Systems, vol. 3. Upper Saddle River, NJ: Prentice Hall, 2002.
[16] R. Rajamani, Vehicle Dynamics and Control. Springer Science & Business Media, 2011.
[17] D. E. Kirk, Optimal Control Theory: An Introduction. Courier Corporation, 2012.
[18] W. J. Rugh, Linear System Theory, vol. 2. Upper Saddle River, NJ: Prentice Hall, 1996.
[19] K. Majd, M. Razeghi-Jahromi, and A. Homaifar, "Optimal kinematic-based trajectory planning and tracking control of autonomous ground vehicle using the variational approach," in Proc. IEEE Intelligent Vehicles Symposium, 2018.
    [11]Xiaojun Tang, Jie Chen. Direct Trajectory Optimization and Costate Estimation of Infinite-horizon Optimal Control Problems Using Collocation at the Flipped Legendre-Gauss-Radau Points[J]. IEEE/CAA Journal of Automatica Sinica, 2016, 3(2): 174-183.
    [12]Lu Wang, Jianbo Su. Trajectory Tracking of Vertical Take-off and Landing Unmanned Aerial Vehicles Based on Disturbance Rejection Control[J]. IEEE/CAA Journal of Automatica Sinica, 2015, 2(1): 65-73.
    [13]Gang Chen, Ening Feng. Distributed Secondary Control and Optimal Power Sharing in Microgrids[J]. IEEE/CAA Journal of Automatica Sinica, 2015, 2(3): 304-312.
    [14]Zixuan Liang, Zhang Ren, Xingyue Shao. Decoupling Trajectory Tracking for Gliding Reentry Vehicles[J]. IEEE/CAA Journal of Automatica Sinica, 2015, 2(1): 115-120.
    [15]Bin Xian, Jianchuan Guo, Yao Zhang. Adaptive Backstepping Tracking Control of a 6-DOF Unmanned Helicopter[J]. IEEE/CAA Journal of Automatica Sinica, 2015, 2(1): 19-24.
    [16]Qiming Zhao, Hao Xu, Sarangapani Jagannathan. Near Optimal Output Feedback Control of Nonlinear Discrete-time Systems Based on Reinforcement Neural Network Learning[J]. IEEE/CAA Journal of Automatica Sinica, 2014, 1(4): 372-384.
    [17]Xiaoli Li, Kang Wang, Dexin Liu. An Improved Result of Multiple Model Iterative Learning Control[J]. IEEE/CAA Journal of Automatica Sinica, 2014, 1(3): 315-322.
    [18]Kyriakos G. Vamvoudakis. Event-Triggered Optimal Adaptive Control Algorithm for Continuous-Time Nonlinear Systems[J]. IEEE/CAA Journal of Automatica Sinica, 2014, 1(3): 282-293.
    [19]Jing Na, Guido Herrmann. Online Adaptive Approximate Optimal Tracking Control with Simplified Dual Approximation Structure for Continuous-time Unknown Nonlinear Systems[J]. IEEE/CAA Journal of Automatica Sinica, 2014, 1(4): 412-422.
    [20]Nhan T. Nguyen, Sivasubramanya N. Balakrishnan. Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty[J]. IEEE/CAA Journal of Automatica Sinica, 2014, 1(4): 423-434.

    Highlights

    • The trajectory tracking and control problem for the car-like robot kinematic model is revisited.
    • A globally exponentially stable trajectory optimization and tracking framework is proposed.
    • An analytical solution based on the variational approach is derived to solve the trajectory optimization problem.
    • The method can handle trajectory optimization for higher-dimensional kinematic models.
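The flavor of the tracking scheme summarized above can be illustrated with a minimal, self-contained sketch. This is not the paper's algorithm: it replaces the variational analytical solution with a plain look-ahead-point feedback linearization of the unicycle kinematics (a common simplification of the rear-wheel car-like model, where the steering angle would follow from ω = (v/L) tan φ), and the function name `track_eight` and the gains `d`, `k` are hypothetical choices for the eight-shape scenario.

```python
import math

def track_eight(d=0.1, k=3.0, dt=0.002, T=10.0):
    """Track a figure-eight reference with look-ahead-point feedback
    linearization of the unicycle model x' = v cos(th), y' = v sin(th),
    th' = w. Returns the initial and final look-ahead tracking errors."""
    a = 2.0  # hypothetical path scale
    xr = lambda t: a * math.sin(t)                  # Gerono-style eight shape
    yr = lambda t: a * math.sin(t) * math.cos(t)
    eps = 1e-5                                      # numeric feedforward derivatives
    dxr = lambda t: (xr(t + eps) - xr(t - eps)) / (2 * eps)
    dyr = lambda t: (yr(t + eps) - yr(t - eps)) / (2 * eps)

    x, y, th, t = 0.5, -0.5, 0.0, 0.0               # start off the path
    errs = []
    while t < T:
        c, s = math.cos(th), math.sin(th)
        px, py = x + d * c, y + d * s               # look-ahead point P
        ex, ey = xr(t) - px, yr(t) - py
        errs.append(math.hypot(ex, ey))
        ux, uy = dxr(t) + k * ex, dyr(t) + k * ey   # desired P-velocity: feedforward + feedback
        v = c * ux + s * uy                         # invert the decoupling matrix
        w = (-s * ux + c * uy) / d                  # [[c, -d*s], [s, d*c]], det = d
        x += v * c * dt                             # Euler-integrate the kinematics
        y += v * s * dt
        th += w * dt
        t += dt
    return errs[0], errs[-1]
```

With the proportional gain k, the look-ahead error obeys e' = −k e in closed loop, so the tracking error decays exponentially, which is the qualitative behavior (though not the optimality guarantee) of the analytical solution proposed in the paper.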
