A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical and experimental research and development in all areas of automation
Volume 11 Issue 11
Nov. 2024

IEEE/CAA Journal of Automatica Sinica

  • JCR Impact Factor: 15.3, Top 1 (SCI Q1)
  • CiteScore: 23.5, Top 2% (Q1)
  • Google Scholar h5-index: 77, Top 5
Citation: Z. Zhao, Z. Yang, L. Jiang, J. Yang, and Q. Ge, “Privacy preserving distributed bandit residual feedback online optimization over time-varying unbalanced graphs,” IEEE/CAA J. Autom. Sinica, vol. 11, no. 11, pp. 2284–2297, Nov. 2024. doi: 10.1109/JAS.2024.124656

Privacy Preserving Distributed Bandit Residual Feedback Online Optimization Over Time-Varying Unbalanced Graphs

doi: 10.1109/JAS.2024.124656
Funds: This work was supported by the National Natural Science Foundation of China (62033010, U23B2061) and the Qing Lan Project of Jiangsu Province (R2023Q07)
Abstract
  • This paper considers the distributed online optimization (DOO) problem over time-varying unbalanced networks, where gradient information is explicitly unknown. To address this issue, a privacy-preserving distributed online one-point residual feedback (OPRF) optimization algorithm is proposed. The algorithm updates decision variables using one-point residual feedback to estimate the true gradient information, achieving the same performance as the two-point feedback scheme while requiring only a single function-value query per iteration. It also effectively eliminates the effect of time-varying unbalanced graphs by dynamically constructing row-stochastic matrices. Furthermore, whereas other distributed optimization algorithms consider only explicitly unknown cost functions, this paper additionally addresses the leakage of nodes' private information. Theoretical analysis demonstrates that the method attains sublinear regret while protecting the privacy of agents. Finally, numerical experiments on a distributed collaborative localization problem and on federated learning confirm the effectiveness of the algorithm. A minimal sketch of the residual-feedback estimator is given below.
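To make the one-point residual feedback idea concrete, here is a minimal sketch in Python. The function name `residual_feedback_grad`, the stand-in cost, and the parameter values `delta` and `eta` are illustrative assumptions, not the paper's exact construction; the paper's estimator, step-size schedule, and projection set may differ.

```python
import numpy as np

def residual_feedback_grad(f_t, x_t, prev_value, delta, rng):
    """One-point residual-feedback gradient estimate (illustrative sketch).

    Only one function value is queried per round; the previous round's
    perturbed value serves as the baseline, which is what lets the scheme
    match two-point feedback with a single query.
    """
    d = x_t.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)              # random direction on the unit sphere
    value = f_t(x_t + delta * u)        # the single function query this round
    grad = (d / delta) * (value - prev_value) * u
    return grad, value                  # carry `value` into the next round

# Toy usage: bandit online gradient descent with a stand-in cost.
rng = np.random.default_rng(0)
x = np.zeros(5)
prev_value = 0.0
delta, eta = 0.1, 0.05                  # smoothing radius and step size (assumed)
for t in range(200):
    f_t = lambda z: float(np.sum((z - 1.0) ** 2))  # hypothetical time-varying cost
    grad, prev_value = residual_feedback_grad(f_t, x, prev_value, delta, rng)
    x -= eta * grad                     # projection onto the feasible set omitted
```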

     

  • [1]
    Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Trans. Intelligent Systems and Technology, vol. 10, no. 2, pp. 1–19, 2019.
    [2]
    Z. Hu, K. Shaloudegi, G. Zhang, and Y. Yu, “Federated learning meets multi-objective optimization,” IEEE Trans. Network Science and Engineering, vol. 9, no. 4, pp. 2039–2051, 2022. doi: 10.1109/TNSE.2022.3169117
    [3]
    S. Wenzhi, H. Zhang, M.-L. Tseng, Z. Weipeng, and L. Xinyang, “Hierarchical energy optimization management of active distribution network with multi-microgrid system,” J. Industrial and Production Engineering, vol. 39, no. 3, pp. 210–229, 2022. doi: 10.1080/21681015.2021.1972478
    [4]
    R. Gao and A. Kleywegt, “Distributionally robust stochastic optimization with wasserstein distance,” Mathematics of Operations Research, vol. 48, no. 2, pp. 603–655, 2023. doi: 10.1287/moor.2022.1275
    [5]
    W. Tang and P. Daoutidis, “Fast and stable nonconvex constrained distributed optimization: The ellada algorithm,” Optimization and Engineering, vol. 23, no. 1, pp. 259–301, 2022. doi: 10.1007/s11081-020-09585-w
    [6]
    D. Yuan, D. W. Ho, and Y. Hong, “On convergence rate of distributed stochastic gradient algorithm for convex optimization with inequality constraints,” SIAM J. Control and optimization, vol. 54, no. 5, pp. 2872–2892, 2016. doi: 10.1137/15M1048896
    [7]
    Y. Zhang, Y. Lou, and Y. Hong, “An approximate gradient algorithm for constrained distributed convex optimization,” IEEE/CAA J. Autom. Sinica, vol. 1, no. 1, pp. 61–67, 2014. doi: 10.1109/JAS.2014.7004621
    [8]
    Y. Sun, G. Scutari, and A. Daneshmand, “Distributed optimization based on gradient tracking revisited: Enhancing convergence rate via surrogation,” SIAM J. Optimization, vol. 32, no. 2, pp. 354–385, 2022. doi: 10.1137/19M1259973
    [9]
    X. Ren, D. Li, Y. Xi, and H. Shao, “Distributed subgradient algorithm for multi-agent optimization with dynamic stepsize,” IEEE/CAA J. Autom. Sinica, vol. 8, no. 8, pp. 1451–1464, 2021. doi: 10.1109/JAS.2021.1003904
    [10]
    P. Nazari, D. A. Tarzanagh, and G. Michailidis, “Dadam: A consensus-based distributed adaptive gradient method for online optimization,” IEEE Trans. Signal Processing, vol. 70, pp. 6065–6079, 2022. doi: 10.1109/TSP.2022.3223214
    [11]
    H. Zhang, L. Du, and J. Shen, “Hybrid MPC system for platoon based cooperative lane change control using machine learning aided distributed optimization,” Transportation Research Part B: Methodo logical, vol. 159, pp. 104–142, 2022. doi: 10.1016/j.trb.2021.10.006
    [12]
    C. Sun, M. Ye, and G. Hu, “Distributed time-varying quadratic optimization for multiple agents under undirected graphs,” IEEE Trans. Autom. Control, vol. 62, no. 7, pp. 3687–3694, 2017. doi: 10.1109/TAC.2017.2673240
    [13]
    X. Yi, X. Li, L. Xie, and K. H. Johansson, “Distributed online convex optimization with time-varying coupled inequality constraints,” IEEE Trans. Signal Processing, vol. 68, pp. 731–746, 2020. doi: 10.1109/TSP.2020.2964200
    [14]
    Y. Pang and G. Hu, “Randomized gradient-free distributed online optimization with time-varying cost functions,” in Proc. IEEE 58th Conf. Decision and Control. Nice, France, 2019, pp. 4910–4915.
    [15]
    M. Akbari, B. Gharesifard, and T. Linder, “Distributed online convex optimization on time-varying directed graphs,” IEEE Trans. Control of Network Systems, vol. 4, no. 3, pp. 417–428, 2015.
    [16]
    Y. Xiong, J. Xu, K. You, J. Liu, and L. Wu, “Privacy-preserving distributed online optimization over unbalanced digraphs via subgradient rescaling,” IEEE Trans. Control of Network Systems, vol. 7, no. 3, pp. 1366–1378, 2020. doi: 10.1109/TCNS.2020.2976273
    [17]
    S. Xia, Z. Yao, Y. Li, and S. Mao, “Online distributed offloading and computing resource management with energy harvesting for heterogeneous mec-enabled IOT,” IEEE Trans. Wireless Communica tions, vol. 20, no. 10, pp. 6743–6757, 2021. doi: 10.1109/TWC.2021.3076201
    [18]
    Y. Xiong, X. Li, K. You, and L. Wu, “Distributed online optimization in time-varying unbalanced networks without explicit subgradients,” IEEE Trans. Signal Processing, vol. 70, pp. 4047–4060, 2022. doi: 10.1109/TSP.2022.3194369
    [19]
    A. D. Flaxman, A. T. Kalai, and H. B. McMahan, “Online convex optimization in the bandit setting: Gradient descent without a gradient,” in Proc. 16th Annual ACM-SIAM Symposium Discrete Algorithm, SODA, 2005, pp. 385−394.
    [20]
    K. K. Patel, L. Wang, A. Saha, and N. Srebro, “Federated online and bandit convex optimization,” in Proc. Int. Conf. Machine Learning. PMLR, 2023, pp. 27439–27460.
    [21]
    C. Wang, S. Xu, D. Yuan, B. Zhang, and Z. Zhang, “Push-sum distributed online optimization with bandit feedback,” IEEE Trans. Cybernetics, vol. 52, no. 4, pp. 2263–2273, 2020.
    [22]
    Y. Pang and G. Hu, “Randomized gradient-free distributed optimization methods for a multiagent system with unknown cost function,” IEEE Trans. Autom. Control, vol. 65, no. 1, pp. 333–340, 2019.
    [23]
    O. Shamir, “An optimal algorithm for bandit and zero-order convex optimization with two-point feedback,” The J. Machine Learning Research, vol. 18, no. 1, pp. 1703–1713, 2017.
    [24]
    J. Li, C. Li, W. Yu, X. Zhu, and X. Yu, “Distributed online bandit learning in dynamic environments over unbalanced digraphs,” IEEE Trans. Network Science and Engineering, vol. 8, no. 4, pp. 3034–3047, 2021. doi: 10.1109/TNSE.2021.3093536
    [25]
    Y. Zhang, Y. Zhou, K. Ji, and M. M. Zavlanos, “A new one-point residual-feedback oracle for black-box learning and control,” Automatica, vol. 136, p. 110006, 2022.
    [26]
    Y. Zhang, Y. Zhou, K. Ji, Y. Shou, and M. M. Zavlanos, “Boosting one-point derivative-free online optimization via residual feedback,” IEEE Trans. Autom. Control, vol. 69, no. 9, pp. 6309−6316, 2014.
    [27]
    S. Mao, Y. Tang, Z. Dong, K. Meng, Z. Y. Dong, and F. Qian, “A privacy preserving distributed optimization algorithm for economic dispatch over time-varying directed networks,” IEEE Trans. Industrial Informatics, vol. 17, no. 3, pp. 1689–1701, 2020.
    [28]
    M. T. Hale and M. Egerstedt, “Cloud-enabled differentially private multiagent optimization with constraints,” IEEE Trans. Control of Network Systems, vol. 5, no. 4, pp. 1693–1706, 2017.
    [29]
    Z. Huang, R. Hu, Y. Guo, E. Chan-Tin, and Y. Gong, “DP-ADMM: ADMM-based distributed learning with differential privacy,” IEEE Trans. Information Forensics and Security, vol. 15, pp. 1002–1012, 2019.
    [30]
    W. Chen, L. Liu, and G.-P. Liu, “Privacy-preserving distributed economic dispatch of microgrids: A dynamic quantization-based consensus scheme with homomorphic encryption,” IEEE Trans. Smart Grid, vol. 14, no. 1, pp. 701–713, 2022.
    [31]
    C. Dwork, “Differential privacy,” in Proc. Autom., Languages and Programming: 33rd Int. Colloquium, Part Ⅱ. Venice, Italy: Springer, 2006, pp. 1–12.
    [32]
    T. Ding, S. Zhu, J. He, C. Chen, and X. Guan, “Consensus-based distributed optimization in multi-agent systems: Convergence and differential privacy,” in Proc. IEEE Conf. Decision and Control. Miami, USA, 2018, pp. 3409–3414.
    [33]
    Q. Lü, X. Liao, T. Xiang, H. Li, and T. Huang, “Privacy masking stochastic subgradient-push algorithm for distributed online optimization,” IEEE Trans. Cybernetics, vol. 51, no. 6, pp. 3224–3237, 2020.





    Highlights

    • Differential privacy in distributed online optimization with precise noise control
    • Derives boundedness of the one-point residual-feedback estimate, reducing estimation variance
    • A distributed algorithm combining privacy preservation and one-point feedback over unbalanced communication graphs (see the sketch after this list)
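The first and third highlights pair differential privacy with consensus over unbalanced digraphs. The sketch below illustrates the two generic building blocks under stated assumptions: Laplace noise injected into the broadcast state (the standard differential-privacy mechanism; the paper's noise schedule and sensitivity bound may differ) and uniform row-stochastic mixing, where normalizing over whatever a node actually receives keeps each row of the implied mixing matrix summing to one without out-degree knowledge. Names such as `private_broadcast` are hypothetical.

```python
import numpy as np

def private_broadcast(x_i, epsilon_t, sensitivity, rng):
    """Perturb the local state with Laplace noise before sending it to
    out-neighbours (generic differential-privacy step; schedule assumed)."""
    scale = sensitivity / epsilon_t
    return x_i + rng.laplace(0.0, scale, size=x_i.shape)

def mix_row_stochastic(x_i, received_states):
    """Average the node's own state with the noisy states it received.
    Normalizing over what is actually received makes each row of the
    implied mixing matrix sum to one, so the update works on a
    time-varying unbalanced digraph without out-degree information."""
    states = [x_i] + list(received_states)
    return sum(states) / len(states)

# Toy round for a single node with one in-neighbour (values hypothetical).
rng = np.random.default_rng(1)
x_i = np.array([0.5, -0.2])
neighbour_state = private_broadcast(np.array([1.0, 0.3]), 1.0, 0.1, rng)
x_i = mix_row_stochastic(x_i, [neighbour_state])
```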

