ADAPTIVE control is a proven method for learning feedback controllers for systems with unknown dynamic models, exogenous disturbances, nonzero setpoints, and unmodeled nonlinearities. Adaptive control has been applied for years in process control, industry, aerospace systems, vehicle systems, and elsewhere. Reinforcement learning refers to a broad class of methods for improving control policies based on observation of the performance or value of current policies. Reinforcement learning allows the learning of optimal controls in real time using data measured along system trajectories.
Reinforcement learning is closely tied theoretically to both adaptive control and optimal control. Recent work has shown how to use reinforcement learning to develop new forms of adaptive controllers that converge to optimal control solutions in real time and effectively deal with some existing open problems in adaptive control, such as handling unmatched uncertainties. Developed methods include approximate dynamic programming, integral reinforcement learning, temporal difference learning, adaptive function approximation, and others.
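As a minimal illustration of one such building block, the sketch below implements temporal difference learning, TD(0), for policy evaluation on a small Markov chain driven by simulated trajectory data; the two-state transition model, rewards, and step size are illustrative assumptions rather than material from any paper in this issue.

```python
# A minimal TD(0) policy-evaluation sketch (assumed toy example, not from this issue).
import numpy as np

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],      # transition probabilities under a fixed policy
              [0.2, 0.8]])
r = np.array([1.0, 0.0])       # reward received in each state
gamma, alpha = 0.9, 0.05       # discount factor and learning rate

V = np.zeros(2)                # value-function estimate, learned from data
s = 0
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])
    # TD(0) update: move V(s) toward the bootstrapped target r(s) + gamma*V(s')
    V[s] += alpha * (r[s] + gamma * V[s_next] - V[s])
    s = s_next

# Compare the data-driven estimate with the exact solution V = (I - gamma*P)^{-1} r
print(V, np.linalg.solve(np.eye(2) - gamma * P, r))
```

The update uses only observed transitions, which is the sense in which these methods learn value functions, and ultimately controllers, from data measured along system trajectories.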
However, there are many methods in adaptive control, iterative learning control, model predictive control, and elsewhere that hold out the hope of improving the design of reinforcement learning for feedback control. On the other hand, there remain many techniques in reinforcement learning and iterative learning that have not yet been exploited for adaptive feedback control.
The purpose of this special issue is to present a body of work that shows how to more closely integrate and cross-fertilize techniques from adaptive control, reinforcement learning, and iterative learning. Applications of reinforcement learning and iterative learning in adaptive feedback control have by and large employed performance indices based on tracking errors measured along system trajectories. However, learning encompasses a far broader range of techniques, including planning, episodic learning, learning with reduced information, optimal control for Markov decision processes, and more, that have not been fully explored for feedback control design.
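To make the notion of a trajectory-based performance index concrete, the sketch below accumulates a discounted quadratic tracking cost along a simulated closed-loop trajectory; the scalar plant, feedback gain, setpoint, and weights are illustrative assumptions only.

```python
# A minimal sketch of a tracking-error performance index evaluated along a trajectory
# (assumed scalar plant and gains, for illustration only).
import numpy as np

a, b, k = 0.95, 0.1, 2.0       # plant x_{t+1} = a*x_t + b*u_t, error-feedback gain k
x_ref, gamma = 1.0, 0.99       # constant setpoint and discount factor

x, J = 0.0, 0.0
for t in range(500):
    e = x - x_ref              # tracking error measured along the trajectory
    u = -k * e                 # simple error-feedback control law
    J += (gamma ** t) * (e**2 + 0.1 * u**2)   # quadratic performance index
    x = a * x + b * u          # propagate the closed-loop system

print(f"accumulated tracking cost J = {J:.3f}")
```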
In response to the call for papers, we received submissions from all over the world. All manuscripts underwent a rigorous peer review process. We finally selected 20 full papers for this special issue. These full papers can be broadly organized into two main categories. The first category includes papers that propose theoretical approaches blending results from adaptive control and reinforcement learning. The second category includes papers that develop formal and rigorous learning-based control solutions to open application problems ranging from reactors to aircraft. Most of the first-category papers will appear in Issue 3, and the remaining ones, along with the second-category papers, will appear in Issue 4. Specifically:
Selected Papers for Issue 3, "Theoretical Approaches Integrating Adaptive Control, Reinforcement Learning, and Iterative Learning":
1) Off-Policy Reinforcement Learning with Gaussian Processes
2) Concurrent Learning-based Approximate Feedback-Nash Equilibrium Solution of N-player Nonzero-sum Differential Games
3) Clique-based Cooperative Multiagent Reinforcement Learning Using Factor Graphs
4) Reinforcement Learning Transfer Based on Subgoal Discovery and Subtask Similarity
5) Closed-loop P-Type Iterative Learning Control of Uncertain Linear Distributed Parameter Systems
6) Experience Replay for Least-Squares Policy Iteration
7) Event-Triggered Optimal Adaptive Control Algorithm for Continuous-Time Nonlinear Systems
8) Robust Adaptive Model Tracking for Distributed Parameter Control of Linear Infinite-dimensional Systems in Hilbert Space
9) Adaptive Iterative Learning Control for a Class of Nonlinear Time-varying Systems with Unknown Delays and Input Dead-zone
10) An Improved Result of Multiple Model Iterative Learning Control
11) Continuous Action Reinforcement Learning for Control-Affine Systems with Unknown Dynamics
Selected Papers for Issue 4, "Formal Learning-Based Solutions to Open Control Problems Ranging from Reactors to Aircraft":
1) Coordinated Adaptive Control for Coordinated Path-following Surface Vessels with a Time-invariant Orbital Velocity
2) Adaptive Iterative Learning Control Based on Unfalsified Strategy for Chylla-Haase Reactor
3) Parameters Tuning of Model Free Adaptive Control Based on Minimum Entropy
4) Near Optimal Output Feedback Control of Nonlinear Discrete-time Systems Based on Reinforcement Neural Network Learning
5) An Adaptive Obstacle Avoidance Algorithm for Unmanned Surface Vehicle in Complicated Marine Environments
6) Adaptive Pinpoint and Fuel Efficient Mars Landing Using Reinforcement Learning
7) Online Adaptive Approximate Optimal Tracking Control with Simplified Dual Approximation Structure for Continuous-time Unknown Nonlinear Systems
8) Bi-Objective Optimal Control Modification Adaptive Control for Systems with Input Uncertainty
9) Reinforcement Learning Based Controller Synthesis for Flexible Aircraft Wings
Finally, we take this opportunity to thank all the authors for their submissions and all the reviewers who took the time to rigorously review these papers. Deputy Editor-in-Chief Derong Liu provided enormous assistance throughout the preparation of this special issue, and we are glad to have had the chance to work with him.