A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation.

Current Issue

Vol. 11, No. 11, 2024

PAPERS
On Zero Dynamics and Controllable Cyber-Attacks in Cyber-Physical Systems and Dynamic Coding Schemes as Their Countermeasures
Mahdi Taheri, Khashayar Khorasani, Nader Meskin
2024, 11(11): 2191-2203. doi: 10.1109/JAS.2024.124692
Abstract:

In this paper, we study stealthy cyber-attacks on actuators of cyber-physical systems (CPS), namely zero dynamics and controllable attacks. In particular, under certain assumptions, we investigate and propose conditions under which one can execute zero dynamics and controllable attacks in the CPS. These conditions are derived based on the Markov parameters of the CPS and elements of the system observability matrix. Consequently, in addition to specifying the number of actuators that must be attacked, these conditions characterize the minimum system knowledge needed to perform zero dynamics and controllable cyber-attacks. As a countermeasure against these stealthy cyber-attacks, we develop a dynamic coding scheme that increases, to its maximum possible value, the minimum number of CPS actuators that must be attacked to carry out zero dynamics and controllable cyber-attacks. It is shown that if at least one secure input channel exists, the proposed dynamic coding scheme can prevent adversaries from executing zero dynamics and controllable attacks even if they have complete knowledge of the coding system. Finally, two illustrative numerical case studies are provided to demonstrate the effectiveness and capabilities of the derived conditions and proposed methodologies.
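
The attack conditions above are stated in terms of the Markov parameters and the observability matrix of the CPS. For readers unfamiliar with these objects, the following minimal NumPy sketch computes them for a generic discrete-time LTI model x_{k+1} = A x_k + B u_k, y_k = C x_k; the example matrices are arbitrary, and the paper's actual attack-construction conditions are not reproduced here.

```python
import numpy as np

def markov_parameters(A, B, C, k):
    """Markov parameters M_i = C A^(i-1) B, i = 1..k, of x+ = Ax + Bu, y = Cx."""
    params, Ai = [], np.eye(A.shape[0])
    for _ in range(k):
        params.append(C @ Ai @ B)
        Ai = Ai @ A
    return params

def observability_matrix(A, C):
    """Stacked observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

# Example on an arbitrary second-order system
A = np.array([[1.0, 0.1], [0.0, 0.9]])
B = np.array([[0.0], [0.1]])
C = np.array([[1.0, 0.0]])
print(markov_parameters(A, B, C, 3))
print(np.linalg.matrix_rank(observability_matrix(A, C)))
```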

Boosting Adaptive Weighted Broad Learning System for Multi-Label Learning
Yuanxin Lin, Zhiwen Yu, Kaixiang Yang, Ziwei Fan, C. L. Philip Chen
2024, 11(11): 2204-2219. doi: 10.1109/JAS.2024.124557
Abstract:

Multi-label classification is a challenging problem that has attracted significant attention from researchers, particularly in the domain of image and text attribute annotation. However, multi-label datasets are prone to serious intra-class and inter-class imbalance problems, which can significantly degrade classification performance. To address these issues, we propose the multi-label weighted broad learning system (MLW-BLS) from the perspective of label imbalance weighting and label correlation mining. Further, we propose the multi-label adaptive weighted broad learning system (MLAW-BLS) to adaptively adjust the label-specific weights and values of MLW-BLS and construct an efficient imbalanced classifier set. Extensive experiments are conducted on various datasets to evaluate the effectiveness of the proposed model, and the results demonstrate its superiority over other advanced approaches.
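
As a loose illustration of per-label imbalance weighting in a broad-learning-style readout (not the authors' MLW-BLS), the sketch below fits a weighted ridge regression for each label, up-weighting whichever class is in the minority for that label; the feature matrix H stands in for the concatenated feature and enhancement nodes of a BLS, and the weighting rule is an assumption.

```python
import numpy as np

def weighted_ridge_readout(H, Y, lam=1e-2):
    """Per-label weighted ridge readout, roughly the output layer of a BLS.
    H: (n, d) feature matrix; Y: (n, L) binary label matrix.
    Each label gets sample weights that up-weight its minority class."""
    n, L = Y.shape
    W = np.zeros((H.shape[1], L))
    for l in range(L):
        y = Y[:, l].astype(float)
        pos = max(y.sum(), 1.0)
        w = np.where(y > 0.5, n / (2.0 * pos), n / (2.0 * max(n - pos, 1.0)))
        Hw = H * w[:, None]                       # row-weighted design matrix
        W[:, l] = np.linalg.solve(H.T @ Hw + lam * np.eye(H.shape[1]), Hw.T @ y)
    return W  # predictions: H @ W, thresholded per label
```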

A State-Migration Particle Swarm Optimizer for Adaptive Latent Factor Analysis of High-Dimensional and Incomplete Data
Jiufang Chen, Kechen Liu, Xin Luo, Ye Yuan, Khaled Sedraoui, Yusuf Al-Turki, MengChu Zhou
2024, 11(11): 2220-2235. doi: 10.1109/JAS.2024.124575
Abstract:

High-dimensional and incomplete (HDI) matrices arise in a wide range of big-data-related practical applications. A latent factor analysis (LFA) model is capable of conducting efficient representation learning on an HDI matrix, and its hyper-parameter adaptation can be implemented through a particle swarm optimizer (PSO) to meet scalability requirements. However, conventional PSO suffers from premature convergence, which leads to accuracy loss in the resultant LFA model. To address this issue, this study merges the information of each particle's state migration into its evolution process, following the principle of a generalized momentum method, to improve its search ability, thereby building a state-migration particle swarm optimizer (SPSO), whose theoretical convergence is rigorously proved in this study. It is then incorporated into an LFA model for implementing efficient hyper-parameter adaptation without accuracy loss. Experiments on six HDI matrices indicate that an SPSO-incorporated LFA model outperforms state-of-the-art LFA models in terms of prediction accuracy for the missing data of an HDI matrix, with competitive computational efficiency. Hence, SPSO ensures efficient and reliable hyper-parameter adaptation in an LFA model, thus enabling practical and accurate representation learning for HDI matrices.
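
The abstract does not give the exact SPSO update, so the sketch below only illustrates the general idea of augmenting a standard PSO velocity update with a momentum-like term built from each particle's recent position change (its state migration); the coefficient beta and all other settings are illustrative assumptions, not the paper's.

```python
import numpy as np

def momentum_pso(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, beta=0.3, seed=0):
    """Generic PSO with an extra momentum-like term built from each particle's
    recent position change; an illustrative variant only, not the paper's SPSO."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim)); v = np.zeros((n, dim)); x_prev = x.copy()
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = (w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
             + beta * (x - x_prev))               # momentum from state migration
        x_prev, x = x, x + v
        fx = np.array([f(p) for p in x])
        improved = fx < pval
        pbest[improved], pval[improved] = x[improved], fx[improved]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# e.g. best, val = momentum_pso(lambda z: np.sum(z**2), dim=5)
```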

Revisiting the LQR Problem of Singular Systems
Komeil Nosrati, Juri Belikov, Aleksei Tepljakov, Eduard Petlenkov
2024, 11(11): 2236-2252. doi: 10.1109/JAS.2024.124665
Abstract:

In the development of linear quadratic regulator (LQR) algorithms, the Riccati equation approach offers two important characteristics: it is recursive and readily meets the existence condition. However, these attributes are applicable only to transformed singular systems, and the efficiency of the regulator may be undermined if constraints are violated in nonsingular versions. To address this gap, we introduce a direct approach to the LQR problem for linear singular systems that avoids the need for any transformations and eliminates the need for regularity assumptions. To achieve this goal, we begin by formulating a quadratic cost function to derive the LQR algorithm through a penalized and weighted regression framework, and then connect it to a constrained minimization problem using Bellman's criterion. We then employ a backward dynamic programming strategy over a finite horizon to develop an LQR algorithm for the original system. To accomplish this, we carry out the stability and convergence analysis under the reachability and observability assumptions of a hypothetical system constructed from the pencil of augmented matrices and connected using the Hamiltonian diagonalization technique.
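
For context on the backward dynamic-programming strategy mentioned above, the following sketch implements the classical finite-horizon LQR recursion for a standard (nonsingular) state-space system; the paper's direct algorithm for singular systems is more involved and is not reproduced here.

```python
import numpy as np

def finite_horizon_lqr(A, B, Q, R, QN, N):
    """Backward Riccati recursion for min sum_k (x'Qx + u'Ru) + x_N'QN x_N
    subject to x_{k+1} = A x_k + B u_k (standard, nonsingular state-space case)."""
    P = QN.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)   # u_k = -K_k x_k
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # K_0, ..., K_{N-1}
```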

Image Enhancement via Associated Perturbation Removal and Texture Reconstruction Learning
Kui Jiang, Ruoxi Wang, Yi Xiao, Junjun Jiang, Xin Xu, Tao Lu
2024, 11(11): 2253-2269. doi: 10.1109/JAS.2024.124521
Abstract:

Degradation under challenging conditions such as rain, haze, and low light not only diminishes content visibility, but also results in additional degradation side effects, including detail occlusion and color distortion. However, current technologies have barely explored the correlation between perturbation removal and background restoration, and consequently struggle to generate high-naturalness content in challenging scenarios. In this paper, we rethink the image enhancement task from the perspective of joint optimization: perturbation removal and texture reconstruction. To this end, we devise an efficient yet effective image enhancement model, termed the perturbation-guided texture reconstruction network (PerTeRNet). It contains two sub-networks designed for the perturbation elimination and texture reconstruction tasks, respectively. To facilitate texture recovery, we develop a novel perturbation-guided texture enhancement module (PerTEM) to connect these two tasks, where informative background features are extracted from the input with the guidance of predicted perturbation priors. To alleviate the learning burden and computational cost, we suggest performing perturbation removal in a sub-space and exploiting super-resolution to infer high-frequency background details. Our PerTeRNet has demonstrated significant superiority over typical methods in both quantitative and qualitative measures, as evidenced by extensive experimental results on popular image enhancement and joint detection tasks. The source code is available at https://github.com/kuijiang94/PerTeRNet.
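
The sketch below is not the authors' PerTeRNet (which is available at the repository above); it is only a minimal PyTorch illustration of the two-branch idea, where one branch predicts a perturbation prior and the other reconstructs texture under the guidance of that prediction.

```python
import torch
import torch.nn as nn

class TwoBranchEnhancer(nn.Module):
    """Minimal two-branch sketch: one branch predicts a perturbation map,
    the other reconstructs texture guided by that prediction. This only
    illustrates the idea; the real PerTeRNet is in the linked repository."""
    def __init__(self, ch=32):
        super().__init__()
        self.perturb = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))            # predicted perturbation prior
        self.texture = nn.Sequential(
            nn.Conv2d(6, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 3, 3, padding=1))            # guided texture branch

    def forward(self, x):
        p = self.perturb(x)                            # e.g. rain/haze component
        coarse = x - p                                 # coarse background estimate
        out = self.texture(torch.cat([coarse, p], 1))  # texture recovery with guidance
        return out + coarse                            # residual reconstruction

# y = TwoBranchEnhancer()(torch.rand(1, 3, 64, 64))
```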

Two-Stage Approach for Targeted Knowledge Transfer in Self-Knowledge Distillation
Zimo Yin, Jian Pu, Yijie Zhou, Xiangyang Xue
2024, 11(11): 2270-2283. doi: 10.1109/JAS.2024.124629
Abstract:

Knowledge distillation (KD) enhances student network generalization by transferring dark knowledge from a complex teacher network. To optimize computational expenditure and memory utilization, self-knowledge distillation (SKD) extracts dark knowledge from the model itself rather than an external teacher network. However, previous SKD methods performed distillation indiscriminately on full datasets, overlooking the analysis of representative samples. In this work, we present a novel two-stage approach to providing targeted knowledge on specific samples, named two-stage approach self-knowledge distillation (TOAST). We first soften the hard targets using class medoids generated based on logit vectors per class. Then, we iteratively distill the under-trained data with past predictions of half the batch size. The two-stage knowledge is linearly combined, efficiently enhancing model performance. Extensive experiments conducted on five backbone architectures show our method is model-agnostic and achieves the best generalization performance. Besides, TOAST is strongly compatible with existing augmentation-based regularization methods. Our method also obtains a speedup of up to 2.95x compared with a recent state-of-the-art method.
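
As an illustration of the first stage described above (class-medoid softening of hard targets), the following sketch computes, for each class, the stored logit vector with the smallest total distance to the others of its class and linearly mixes its softmax with the one-hot target; the mixing weight alpha and temperature T are assumptions, and the second stage and the paper's exact combination rule are omitted.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(-1, keepdims=True)

def class_medoid_targets(logits, labels, num_classes, alpha=0.3, T=4.0):
    """Soften one-hot targets with per-class medoid logits (the stored logit
    vector with minimal total distance to the others of its class); a sketch."""
    soft = np.eye(num_classes)[labels].astype(float)
    for c in range(num_classes):
        Z = logits[labels == c]
        if len(Z) == 0:
            continue
        d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1).sum(1)
        medoid = Z[d.argmin()]                      # class medoid in logit space
        soft[labels == c] = (1 - alpha) * soft[labels == c] + alpha * softmax(medoid, T)
    return soft
```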

Privacy Preserving Distributed Bandit Residual Feedback Online Optimization Over Time-Varying Unbalanced Graphs
Zhongyuan Zhao, Zhiqiang Yang, Luyao Jiang, Ju Yang, Quanbo Ge
2024, 11(11): 2284-2297. doi: 10.1109/JAS.2024.124656
Abstract:

This paper considers the distributed online optimization (DOO) problem over time-varying unbalanced networks, where gradient information is explicitly unknown. To address this issue, a privacy-preserving distributed online one-point residual feedback (OPRF) optimization algorithm is proposed. This algorithm updates decision variables by leveraging one-point residual feedback to estimate the true gradient information. It can achieve the same performance as the two-point feedback scheme while requiring only a single function value query per iteration. Additionally, it effectively eliminates the effect of time-varying unbalanced graphs by dynamically constructing row stochastic matrices. Furthermore, compared to other distributed optimization algorithms that only consider explicitly unknown cost functions, this paper also addresses the issue of privacy information leakage of nodes. Theoretical analysis demonstrates that the method attains sublinear regret while protecting the privacy information of agents. Finally, numerical experiments on a distributed collaborative localization problem and federated learning confirm the effectiveness of the algorithm.
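
For intuition, the sketch below implements a centralized, noise-free one-point residual feedback gradient estimator of the kind referenced above: each iteration queries the cost function once and reuses the previous query to form the gradient estimate. The step sizes are illustrative, and the paper's distributed, privacy-preserving algorithm over time-varying unbalanced graphs adds substantially more machinery.

```python
import numpy as np

def oprf_gradient_descent(f, x0, steps=500, delta=0.05, eta=0.01, seed=0):
    """Zeroth-order descent with a one-point residual feedback gradient estimate:
    each iteration issues a single new function query and reuses the previous one.
    A centralized, noise-free sketch only."""
    rng = np.random.default_rng(seed)
    d = len(x0)
    x = np.array(x0, dtype=float)
    u_prev = rng.standard_normal(d); u_prev /= np.linalg.norm(u_prev)
    f_prev = f(x + delta * u_prev)            # the only stored past query
    for _ in range(steps):
        u = rng.standard_normal(d); u /= np.linalg.norm(u)
        f_curr = f(x + delta * u)             # single new query this iteration
        g = (d / delta) * (f_curr - f_prev) * u
        x -= eta * g
        u_prev, f_prev = u, f_curr
    return x

# e.g. x_star = oprf_gradient_descent(lambda z: np.sum((z - 1.0)**2), np.zeros(5))
```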

A Double Sensitive Fault Detection Filter for Positive Markovian Jump Systems With A Hybrid Event-Triggered Mechanism
Junfeng Zhang, Baozhu Du, Suhuan Zhang, Shihong Ding
2024, 11(11): 2298-2315. doi: 10.1109/JAS.2024.124677
Abstract:

This paper is concerned with the double sensitive fault detection filter for positive Markovian jump systems. A new hybrid adaptive event-triggered mechanism is proposed by introducing a non-monotonic adaptive law. A linear adaptive event-triggered threshold is established by virtue of 1-norm inequality. Under such a triggering strategy, the original system can be transformed into an interval uncertain system. By using a stochastic copositive Lyapunov function, an asynchronous fault detection filter is designed for positive Markovian jump systems (PMJSs) in terms of linear programming. The presented filter satisfies both $ L_{-} $-gain ($ \ell_{-} $-gain) fault sensitivity and $ L_{1} $ ($ \ell_{1} $) internal differential privacy sensitivity. The proposed approach is also extended to the discrete-time case. Finally, two examples are provided to illustrate the effectiveness of the proposed design.
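
As a rough illustration of a 1-norm event-triggered transmission rule of the general kind described above (the paper's hybrid non-monotonic adaptive law is different), the sketch below transmits a measurement only when the 1-norm deviation from the last transmitted value exceeds an adaptive threshold; the threshold update and its bounds are illustrative assumptions.

```python
import numpy as np

def event_triggered_stream(Y, sigma0=0.2, rho=0.05):
    """Illustrative 1-norm event trigger: the measurement y_k is transmitted only
    when ||y_k - y_last||_1 exceeds sigma_k * ||y_k||_1, with a simple adaptive
    threshold update (not the paper's hybrid non-monotonic law)."""
    y_last = Y[0]
    sigma = sigma0
    sent = [0]
    for k in range(1, len(Y)):
        err = np.linalg.norm(Y[k] - y_last, 1)
        if err > sigma * np.linalg.norm(Y[k], 1):
            y_last = Y[k]
            sent.append(k)
            sigma = max(0.05, sigma - rho * sigma)   # tighten after a transmission
        else:
            sigma = min(0.5, sigma + rho * sigma)    # relax while the error stays small
    return sent  # indices of transmitted samples
```
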
General Lyapunov Stability and Its Application to Time-Varying Convex Optimization
Zhibao Song, Ping Li
2024, 11(11): 2316-2326. doi: 10.1109/JAS.2024.124374
Abstract:

In this article, a general Lyapunov stability theory for nonlinear systems is put forward that covers asymptotic, finite-time, fast finite-time, and fixed-time stability. In particular, a more accurate estimate of the settling-time function is exhibited for fixed-time stability, and it remains independent of the initial conditions. This can be applied to obtain a less conservative convergence time for practical systems without knowledge of the initial conditions. As an application, the given fixed-time stability theorem is used to solve the time-varying (TV) convex optimization problem. Using Newton's method, two classes of new dynamical systems are constructed to guarantee that the solution of the dynamic system tracks the optimal trajectory of the unconstrained and equality-constrained TV convex optimization problems in fixed time, respectively. Without exact knowledge of the time derivative of the cost function gradient, a non-smooth fixed-time dynamical system is established to handle the robust TV convex optimization problem. Two examples are provided to illustrate the effectiveness of the proposed TV convex optimization algorithms. Subsequently, the fixed-time stability theory is extended to predefined-time and practical predefined-time stability, whose bound on the convergence time can be arbitrarily specified in advance without tuning the system parameters, and the TV convex optimization problem is solved under these theories. The previous two examples are used to demonstrate the validity of the predefined-time TV convex optimization algorithms.
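
To make the Newton-type construction concrete, the sketch below integrates a classical (asymptotic, not fixed-time) Newton tracking flow for a simple time-varying quadratic cost; the cost, gain, and step size are illustrative assumptions, and the paper's fixed-time and predefined-time designs differ.

```python
import numpy as np

def track_tv_quadratic(T=10.0, dt=1e-3, alpha=5.0):
    """Euler integration of the classical Newton-type tracking flow
        x_dot = -H(x,t)^{-1} * (alpha * grad(x,t) + d/dt grad(x,t))
    for f(x,t) = 0.5*||x - r(t)||^2 with r(t) = (sin t, cos t), so grad = x - r,
    H = I and d/dt grad = -r'. A standard asymptotic flow, shown only for context."""
    x = np.zeros(2)
    t, errs = 0.0, []
    while t < T:
        r = np.array([np.sin(t), np.cos(t)])
        rdot = np.array([np.cos(t), -np.sin(t)])
        xdot = -(alpha * (x - r) - rdot)          # H = I for this cost
        x, t = x + dt * xdot, t + dt
        errs.append(np.linalg.norm(x - r))
    return errs  # tracking error decays and stays small
```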

Probabilistic Automata-Based Method for Enhancing Performance of Deep Reinforcement Learning Systems
Min Yang, Guanjun Liu, Ziyuan Zhou, Jiacun Wang
2024, 11(11): 2327-2339. doi: 10.1109/JAS.2024.124818
Abstract:

Deep reinforcement learning (DRL) has demonstrated significant potential in industrial manufacturing domains such as workshop scheduling and energy system management. However, due to the model’s inherent uncertainty, rigorous validation is requisite for its application in real-world tasks. Specific tests may reveal inadequacies in the performance of pre-trained DRL models, while the “black-box” nature of DRL poses a challenge for testing model behavior. We propose a novel performance improvement framework based on probabilistic automata, which aims to proactively identify and correct critical vulnerabilities of DRL systems, so that the performance of DRL models in real tasks can be improved with minimal model modifications. First, a probabilistic automaton is constructed from the historical trajectory of the DRL system by abstracting the state to generate probabilistic decision-making units (PDMUs), and a reverse breadth-first search (BFS) method is used to identify the key PDMU-action pairs that have the greatest impact on adverse outcomes. This process relies only on the state-action sequence and final result of each trajectory. Then, under the key PDMU, we search for the new action that has the greatest impact on favorable results. Finally, the key PDMU, undesirable action and new action are encapsulated as monitors to guide the DRL system to obtain more favorable results through real-time monitoring and correction mechanisms. Evaluations in two standard reinforcement learning environments and three actual job scheduling scenarios confirmed the effectiveness of the method, providing certain guarantees for the deployment of DRL models in real-world applications.
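
The counting step of building a probabilistic automaton from abstracted trajectories can be illustrated as follows; the abstraction function, the reverse breadth-first search over PDMU-action pairs, and the runtime monitors are the paper's contribution and are not reproduced here.

```python
from collections import defaultdict

def build_probabilistic_automaton(trajectories, abstract):
    """Build transition-probability and outcome statistics from trajectories.
    trajectories: list of (states, actions, success_flag) with
    len(states) == len(actions) + 1; abstract(s) maps a raw state to an abstract
    state (a PDMU-like unit). Only the counting step is sketched here."""
    counts = defaultdict(lambda: defaultdict(int))   # (q, a) -> {q': count}
    outcomes = defaultdict(lambda: [0, 0])           # (q, a) -> [failures, visits]
    for states, actions, success in trajectories:
        qs = [abstract(s) for s in states]
        for k, a in enumerate(actions):
            counts[(qs[k], a)][qs[k + 1]] += 1
            outcomes[(qs[k], a)][0] += 0 if success else 1
            outcomes[(qs[k], a)][1] += 1
    probs = {key: {q2: c / sum(nxt.values()) for q2, c in nxt.items()}
             for key, nxt in counts.items()}
    fail_rate = {key: f / v for key, (f, v) in outcomes.items()}
    return probs, fail_rate
```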

LETTERS
A Linear Programming-Based Reinforcement Learning Mechanism for Incomplete-Information Games
Baosen Yang, Changbing Tang, Yang Liu, Guanghui Wen, Guanrong Chen
2024, 11(11): 2340-2342. doi: 10.1109/JAS.2024.124464
A Distributed Adaptive Second-Order Latent Factor Analysis Model
Jialiang Wang, Weiling Li, Xin Luo
2024, 11(11): 2343-2345. doi: 10.1109/JAS.2024.124371
A Transfer Learning Framework for Deep Multi-Agent Reinforcement Learning
Yi Liu, Xiang Wu, Yuming Bo, Jiacun Wang, Lifeng Ma
2024, 11(11): 2346-2348. doi: 10.1109/JAS.2023.124173
Multi-USV Formation Collision Avoidance via Deep Reinforcement Learning and COLREGs
Cheng-Cheng Wang, Yu-Long Wang, Li Jia
2024, 11(11): 2349-2351. doi: 10.1109/JAS.2023.123846
Prediction-Based State Estimation and Compensation Control for Networked Systems With Communication Constraints and DoS Attacks
Zhong-Hua Pang, Qian Cao, Haibin Guo, Zhe Dong
2024, 11(11): 2352-2354. doi: 10.1109/JAS.2024.124605