A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: D. Su, J. Han, C. Yang, and W. Gui, “Optimization algorithms based on double-integral coevolutionary neurodynamics in deep learning,” IEEE/CAA J. Autom. Sinica, 2025. doi: 10.1109/JAS.2025.125210

Optimization Algorithms Based on Double-Integral Coevolutionary Neurodynamics in Deep Learning

doi: 10.1109/JAS.2025.125210
Funds: This work was supported by the National Natural Science Foundation of China (62394340, 62394345, 62473383), and was carried out in part using computing resources at the High Performance Computing Center of Central South University.
  • Deep neural networks are increasingly exposed to attack threats, and at the same time the need for privacy protection is growing. Developing neural networks that are robust and generalize well while preserving privacy has therefore become a pressing challenge. Training under privacy constraints, typically by adding noise to the data or to the model, limits privacy leakage. However, the injected noise can cause gradient directions to deviate from the optimal trajectory during training, leading to unstable parameter updates, slow convergence, and reduced generalization capability. To overcome these challenges, we propose an optimization algorithm based on double-integral coevolutionary neurodynamics (DICND), designed to accelerate convergence and improve generalization under noisy conditions. Theoretical analysis establishes the global convergence of the DICND algorithm and shows that it converges efficiently to near-global minima in the presence of noise. Numerical simulations and image-classification experiments further confirm the algorithm’s significant advantages in generalization performance.
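The paper’s exact DICND update rule is not reproduced on this page. As a rough illustration of the two ingredients the abstract describes, the sketch below minimizes a toy quadratic objective with (i) Gaussian noise injected into each gradient, in the style of noise-based private training, and (ii) a hypothetical update that augments the noisy gradient with single- and double-integral feedback terms, the general idea behind integral-enhanced neurodynamics. The update form, the gains k1 and k2, the noise scale, and the function names are all illustrative assumptions, not the authors’ formulation.

```python
import numpy as np

# Illustrative sketch only: a gradient update augmented with single- and
# double-integral feedback, in the spirit of integral-enhanced neurodynamics.
# The update form and the gains k1, k2 are assumptions, not the paper's
# actual DICND rule.

def grad(theta):
    """Gradient of the toy objective f(theta) = 0.5 * ||theta - 3||^2."""
    return theta - 3.0

def noisy_double_integral_descent(theta0, steps=500, lr=0.1, k1=0.5, k2=0.05,
                                  noise_std=0.3, seed=0):
    """Assumed update: theta <- theta - lr * (g + k1*I1 + k2*I2), where
    g is a noise-corrupted gradient (privacy-style noise injection),
    I1 accumulates g, and I2 accumulates I1 (the "double integral").
    The integral terms average out zero-mean noise over the iterations."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    i1 = np.zeros_like(theta)   # running integral of the noisy gradient
    i2 = np.zeros_like(theta)   # running integral of i1
    for _ in range(steps):
        g = grad(theta) + noise_std * rng.standard_normal(theta.shape)
        i1 += lr * g            # first integral (Euler accumulation)
        i2 += lr * i1           # double integral
        theta -= lr * (g + k1 * i1 + k2 * i2)
    return theta

# Despite noisy gradients, the iterate should settle near the optimum at 3.
print(noisy_double_integral_descent([10.0, -4.0]))
```

With these gains the continuous-time error dynamics satisfy e''' + e'' + k1 e' + k2 e = 0, which is stable whenever k1 > k2 > 0, so the illustrative choice k1 = 0.5, k2 = 0.05 damps the zero-mean gradient noise rather than amplifying it.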


