A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: M. Wei, W. Yu, D. Chen, M. Kang, and G. Cheng, “Privacy distributed constrained optimization over time-varying unbalanced networks and its application in federated learning,” IEEE/CAA J. Autom. Sinica, vol. 12, no. 2, pp. 1–12, Feb. 2025.

Privacy Distributed Constrained Optimization Over Time-Varying Unbalanced Networks and Its Application in Federated Learning

Funds: This work was supported in part by the National Key Research and Development Program of China (2022ZD0120001), the National Natural Science Foundation of China (62233004, 62273090, 62073076), and the Jiangsu Provincial Scientific Research Center of Applied Mathematics (BK20233002).
Abstract
  • This paper investigates a class of constrained distributed zeroth-order optimization (ZOO) problems over time-varying unbalanced graphs while ensuring privacy preservation among individual agents. Notwithstanding recent progress, existing works address these concerns separately, and there remains a lack of solutions offering theoretical guarantees for both privacy protection and constrained ZOO over time-varying unbalanced graphs. We hereby propose a novel algorithm, termed the differential privacy (DP) distributed push-sum based zeroth-order constrained optimization algorithm (DP-ZOCOA). Operating over time-varying unbalanced graphs, DP-ZOCOA obviates the need to solve supplemental sub-optimization problems, thereby reducing overhead in comparison to distributed primal-dual methods. DP-ZOCOA is specifically tailored to tackle constrained ZOO problems over time-varying unbalanced graphs, offering a guarantee of convergence to the optimal solution while robustly preserving privacy. Moreover, we provide rigorous proofs of convergence and privacy for DP-ZOCOA, underscoring its efficacy in attaining optimal convergence even in the absence of constraints. To enhance its applicability, we incorporate DP-ZOCOA into the federated learning framework and formulate a decentralized zeroth-order constrained federated learning algorithm (ZOCOA-FL) to address challenges stemming from time-varying and unbalanced communication topologies. Finally, the performance and effectiveness of the proposed algorithms are thoroughly evaluated through simulations on distributed least squares (DLS) and decentralized federated learning (DFL) tasks.
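    As a rough illustration of the ingredients the abstract describes (push-sum averaging over a time-varying unbalanced digraph, two-point zeroth-order gradient estimates, Laplace perturbation for privacy, and projection onto a local constraint set), the Python sketch below combines them on a toy distributed least-squares problem. It is not the authors' DP-ZOCOA implementation: the graph sequence, noise schedule, step size, box constraint, and all symbol names are illustrative assumptions, and the noise scale here is not a calibrated (ε, δ) differential-privacy parameter.

    ```python
    # Minimal sketch (NOT the paper's DP-ZOCOA): push-sum + two-point zeroth-order
    # gradients + Laplace masking + a projection heuristic, over a time-varying
    # directed (column-stochastic, hence unbalanced) graph. All parameters are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)
    n, d, T = 5, 3, 2000            # agents, dimension, iterations (assumed)
    radius = 5.0                    # assumed box constraint: |x_j| <= radius
    targets = rng.normal(size=(n, d))

    def local_loss(i, x):
        # Agent i only evaluates its private loss f_i(x); no gradients are queried.
        return 0.5 * np.sum((x - targets[i]) ** 2)

    def zo_grad(i, x, mu=1e-3):
        # Two-point zeroth-order gradient estimate along a random Gaussian direction.
        u = rng.normal(size=d)
        return (local_loss(i, x + mu * u) - local_loss(i, x - mu * u)) / (2 * mu) * u

    def mixing_matrix(n, t):
        # Time-varying directed ring; columns sum to 1, as push-sum requires.
        A = np.zeros((n, n))
        shift = 1 + (t % 2)         # alternate neighbors to make the graph time-varying
        for i in range(n):
            A[i, i] = 0.5
            A[(i + shift) % n, i] = 0.5
        return A

    x = np.zeros((n, d))            # push-sum numerator states
    w = np.ones(n)                  # push-sum weights
    noise_scale = 0.5               # masking strength; privacy accounting is in the paper, not here

    for t in range(T):
        A = mixing_matrix(n, t)
        # Each agent perturbs the state it shares with decaying Laplace noise.
        shared = x + rng.laplace(scale=noise_scale / (t + 1), size=(n, d))
        x = A @ shared
        w = A @ w
        z = x / w[:, None]          # de-biased estimates despite the unbalanced graph
        alpha = 1.0 / np.sqrt(t + 1)
        for i in range(n):
            x[i] -= alpha * zo_grad(i, z[i])
            # Simple projection heuristic on the de-biased iterate (the paper's
            # constrained update is more involved).
            x[i] = w[i] * np.clip(x[i] / w[i], -radius, radius)

    print("consensus estimate:", z.mean(axis=0))
    print("average of local targets:", targets.mean(axis=0))
    ```

    Under these assumptions the two printed vectors should be close, since the global minimizer of the summed quadratic losses is the average of the local targets; the decaying noise and diminishing step size trade privacy masking against accuracy, which is the tension the paper analyzes formally.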


  • [1]
    J. Verbraeken, M. Wolting, J. Katzy, J. Kloppenburg, T. Verbelen, and J. S. Rellermeyer, “A survey on distributed machine learning,” ACM Computing Surveys (csur), vol. 53, no. 2, pp. 1–33, 2020.
    [2]
    J. Liu, J. Huang, Y. Zhou, X. Li, S. Ji, H. Xiong, and D. Dou, “From distributed machine learning to federated learning: A survey,” Knowledge and Information Systems, vol. 64, no. 4, pp. 885–917, 2022. doi: 10.1007/s10115-022-01664-x
    [3]
    X. Wang, Y. Han, C. Wang, Q. Zhao, X. Chen, and M. Chen, “In-edge ai: Intelligentizing mobile edge computing, caching and communication by federated learning,” IEEE Network, vol. 33, no. 5, pp. 156–165, 2019. doi: 10.1109/MNET.2019.1800286
    [4]
    T. Yang, X. Yi, J. Wu, Y. Yuan, D. Wu, Z. Meng, Y. Hong, H. Wang, Z. Lin, and K. H. Johansson, “A survey of distributed optimization,” Annu. Reviews in Control, vol. 47, pp. 278–305, 2019. doi: 10.1016/j.arcontrol.2019.05.006
    [5]
    A. Nedić and A. Olshevsky, “Distributed optimization over time-varying directed graphs,” IEEE Trans. Autom. Control, vol. 60, no. 3, pp. 601–615, 2015. doi: 10.1109/TAC.2014.2364096
    [6]
    A. Nedic, A. Olshevsky, and W. Shi, “Achieving geometric convergence for distributed optimization over time-varying graphs,” SIAM J. Optimization, vol. 27, no. 4, pp. 2597–2633, 2017. doi: 10.1137/16M1084316
    [7]
    A. Nedic, A. Olshevsky, W. Shi, and C. A. Uribe, “Geometrically convergent distributed optimization with uncoordinated step-sizes,” in Proc. American Control Conf., 2017, pp. 3950–3955.
    [8]
    S. Liang and G. Yin, “Dual averaging push for distributed convex optimization over time-varying directed graph,” IEEE Trans. Autom. Control, vol. 65, no. 4, pp. 1785–1791, 2019.
    [9]
    C. Wang, S. Xu, D. Yuan, Y. Chu, and Z. Zhang, “Cooperative convex optimization with subgradient delays using push-sum distributed dual averaging,” J. Franklin Institute, vol. 358, no. 14, pp. 7254–7269, 2021. doi: 10.1016/j.jfranklin.2021.07.015
    [10]
    G. Scutari and Y. Sun, “Distributed nonconvex constrained optimization over time-varying digraphs,” Mathematical Programming, vol. 176, pp. 497–544, 2019. doi: 10.1007/s10107-018-01357-w
    [11]
    J. Li, C. Gu, Z. Wu, and T. Huang, “Online learning algorithm for distributed convex optimization with time-varying coupled constraints and bandit feedback,” IEEE Trans. Cybern., vol. 52, no. 2, pp. 1009–1020, 2020.
    [12]
    X. Li, X. Yi, and L. Xie, “Distributed online optimization for multi-agent networks with coupled inequality constraints,” IEEE Trans. Autom. Control, vol. 66, no. 8, pp. 3575–3591, 2020.
    [13]
    Z. He, J. He, C. Chen, and X. Guan, “Constrained distributed nonconvex optimization over time-varying directed graphs,” in Proc. 59th IEEE Conf. Decision and Control , 2020, pp. 378–383.
    [14]
    Q. Lü, H. Li, Z. Wang, Q. Han, and W. Ge, “Performing linear convergence for distributed constrained optimisation over time-varying directed unbalanced networks,” IET Control Theory & Applications, vol. 13, no. 17, pp. 2800–2810, 2019.
    [15]
    S. Lee, A. Nedić, and M. Raginsky, “Stochastic dual averaging for decentralized online optimization on time-varying communication graphs,” IEEE Trans. Autom. Control, vol. 62, no. 12, pp. 6407–6414, 2017. doi: 10.1109/TAC.2017.2650563
    [16]
    C. Wang and S. Xu, “Distributed online constrained optimization with feedback delays,” IEEE Trans. Neural Networks and Learning Systems, pp. 1–13, 2022.
    [17]
    D. Wang, J. Liu, J. Lian, Y. Liu, Z. Wang, and W. Wang, “Distributed delayed dual averaging for distributed optimization over time-varying digraphs,” Automatica, vol. 150, p. 110869, 2023. doi: 10.1016/j.automatica.2023.110869
    [18]
    W. Yu, H. Liu, W. X. Zheng, and Y. Zhu, “Distributed discrete-time convex optimization with nonidentical local constraints over time-varying unbalanced directed graphs,” Automatica, vol. 134, p. 109899, 2021. doi: 10.1016/j.automatica.2021.109899
    [19]
    J. Zimmermann, T. Tatarenko, V. Willert, and J. Adamy, “Projected push-sum gradient descent-ascent for convex optimization with application to economic dispatch problems,” in Proc. 59th IEEE Conf. Decision and Control, 2020, pp. 542–547.
    [20]
    C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3-4, pp. 211–407, 2014.
    [21]
    Y. Xiong, X. Li, K. You, and L. Wu, “Distributed online optimization in time-varying unbalanced networks without explicit subgradients,” IEEE Trans. Signal Processing, vol. 70, pp. 4047–4060, 2022. doi: 10.1109/TSP.2022.3194369
    [22]
    C. Gu, L. Jiang, J. Li, and Z. Wu, “Privacy-preserving dual stochastic push-sum algorithm for distributed constrained optimization,” J. Optimization Theory and Applications, pp. 1–29, 2023.
    [23]
    Q. Lü, X. Liao, T. Xiang, H. Li, and T. Huang, “Privacy masking stochastic subgradient-push algorithm for distributed online optimization,” IEEE Trans. Cybernetics, vol. 51, no. 6, pp. 3224–3237, 2020.
    [24]
    S. Mao, Y. Tang, Z. Dong, K. Meng, Z. Y. Dong, and F. Qian, “A privacy preserving distributed optimization algorithm for economic dispatch over time-varying directed networks,” IEEE Trans. Industrial Informatics, vol. 17, no. 3, pp. 1689–1701, 2021. doi: 10.1109/TII.2020.2996198
    [25]
    Q. Xu, C. Yu, X. Yuan, Z. Fu, and H. Liu, “A privacy-preserving distributed subgradient algorithm for the economic dispatch problem in smart grid,” IEEE/CAA J. Autom. Sinica, 2022.
    [26]
    D. Han, K. Liu, Y. Lin, and Y. Xia, “Differentially private distributed online learning over time-varying digraphs via dual averaging,” Int. J. Robust and Nonlinear Control, vol. 32, no. 5, pp. 2485–2499, 2022. doi: 10.1002/rnc.5635
    [27]
    K. Zhang, X. Liao, and Q. Lü, “Privacy-protected decentralized dual averaging push with edge-based correlated perturbations over time-varying directed networks,” IEEE Trans. Network Science and Engineering, vol. 9, no. 6, pp. 4145–4158, 2022. doi: 10.1109/TNSE.2022.3195953
    [28]
    M. Wei, J. Yang, Z. Zhao, X. Zhang, J. Li, and Z. Deng, “Defedhdp: Fully decentralized online federated learning for heart disease prediction in computational health systems,” IEEE Trans. Computational Social Systems, pp. 1–14, 2024.
    [29]
    T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, “Federated learning: Challenges, methods, and future directions,” IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50–60, 2020. doi: 10.1109/MSP.2020.2975749
    [30]
    J. Zhao, H. Zhu, F. Wang, R. Lu, Z. Liu, and H. Li, “PVD-FL: A privacy-preserving and verifiable decentralized federated learning framework,” IEEE Trans. Infor. Forensics and Security, vol. 17, pp. 2059–2073, 2022. doi: 10.1109/TIFS.2022.3176191
    [31]
    Q. Chen, Z. Wang, H. Wang, and X. Lin, “Feddual: Pair-wise gossip helps federated learning in large decentralized networks,” IEEE Trans. Inform. Forensics and Security, vol. 18, pp. 335–350, 2023. doi: 10.1109/TIFS.2022.3222935
    [32]
    Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Trans. Intelligent Systems and Technology, vol. 10, no. 2, pp. 1–19, 2019.
    [33]
    Y. Yuan, J. Liu, D. Jin, et al., “DECEFL: A principled decentralized federated learning framework,” arXiv preprint arXiv: 2107.07171, 2021.
    [34]
    Y. Lu, Z. Yu, and N. Suri, “Privacy-preserving decentralized federated learning over time-varying communication graph,” arXiv preprint arXiv: 2210.00325, 2022.
    [35]
    S. Savazzi, M. Nicoli, and V. Rampa, “Federated learning with cooperating devices: A consensus approach for massive iot networks,” IEEE Internet of Things J., vol. 7, no. 5, pp. 4641–4654, 2020. doi: 10.1109/JIOT.2020.2964162
    [36]
    Y. Lu, X. Huang, K. Zhang, S. Maharjan, and Y. Zhang, “Blockchain empowered asynchronous federated learning for secure data sharing in internet of vehicles,” IEEE Trans. Vehicular Technology, vol. 69, no. 4, pp. 4298–4311, 2020. doi: 10.1109/TVT.2020.2973651
    [37]
    X. Yi, X. Li, T. Yang, L. Xie, T. Chai, and K. H. Johansson, “Distributed bandit online convex optimization with time-varying coupled inequality constraints,” IEEE Trans. Autom. Control, vol. 66, no. 10, pp. 4620–4635, 2021. doi: 10.1109/TAC.2020.3030883
    [38]
    D. Yuan, B. Zhang, D. W. Ho, W. X. Zheng, and S. Xu, “Distributed online bandit optimization under random quantization,” Automatica, vol. 146, p. 110590, 2022. doi: 10.1016/j.automatica.2022.110590
    [39]
    C. Wang, S. Xu, D. Yuan, D. Yuan, B. Zhang, and Z. Zhang, “Push-sum distributed online optimization with bandit feedback,” IEEE Trans. Cybern., pp. 1–11, 2020.
    [40]
    W. Fang, Z. Yu, Y. Jiang, Y. Shi, C. N. Jones, and Y. Zhou, “Communication-efficient stochastic zeroth-order optimization for federated learning,” IEEE Trans. Signal Processing, vol. 70, pp. 5058–5073, 2022. doi: 10.1109/TSP.2022.3214122
    [41]
    Z. Li and L. Chen, “Communication-efficient decentralized zeroth-order method on heterogeneous data,” in Proc. IEEE 13th Int. Conf. Wireless Communi. and Signal Processing, 2021, pp. 1–6.
    [42]
    L. Deng, “The mnist database of handwritten digit images for machine learning research[best of the web],” IEEE Signal Proc. Magazine, vol. 29, no. 6, pp. 141–142, 2012. doi: 10.1109/MSP.2012.2211477
    [43]
    S. Liu, P.-Y. Chen, B. Kailkhura, G. Zhang, A. O. Hero III, and P. K. Varshney, “A primer on zeroth-order optimization in signal processing and machine learning: Principals, recent advances, and applications,” IEEE Signal Proc. Magazine, vol. 37, no. 5, pp. 43–54, 2020. doi: 10.1109/MSP.2020.3003837
    [44]
    G. Hinton, “The forward-forward algorithm: Some preliminary investigations,” arXiv preprint arXiv: 2212.13345, 2022.
    [45]
    A. Nedic, A. Ozdaglar, and P. A. Parrilo, “Constrained consensus and optimization in multi-agent networks,” IEEE Trans. Autom. Control, vol. 55, no. 4, pp. 922–938, 2010. doi: 10.1109/TAC.2010.2041686
    [46]
    H. Liu, W. X. Zheng, and W. Yu, “Distributed discrete-time algorithms for convex optimization with general local constraints on weight-unbalanced digraph,” IEEE Trans. Control of Network Systems, vol. 8, no. 1, pp. 51–64, 2020.
    [47]
    M. J. Islam, S. Ahmad, F. Haque, M. B. I. Reaz, M. A. S. Bhuiyan, and M. R. Islam, “Application of min-max normalization on subject-invariant emg pattern recognition,” IEEE Trans. Instrumentation and Measurement, vol. 71, pp. 1–12, 2022.
    [48]
    B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-efficient learning of deep networks from decentralized data,” in Proc. Artificial Intelligence and Statistics, 2017, pp. 1273–1282.
    [49]
    S. Warnat-Herresthal, H. Schultze, K. L. Shastry, et al., “Swarm learning for decentralized and confidential clinical machine learning,” Nature, vol. 594, no. 7862, pp. 265–270, 2021. doi: 10.1038/s41586-021-03583-3
