A journal of IEEE and CAA, publishing high-quality papers in English on original theoretical/experimental research and development in all areas of automation
Volume 8, Issue 8, Aug. 2021

IEEE/CAA Journal of Automatica Sinica

Citation: X. X. Ren, D. W. Li, Y. G. Xi, and H. B. Shao, "Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize," IEEE/CAA J. Autom. Sinica, vol. 8, no. 8, pp. 1451-1464, Aug. 2021. doi: 10.1109/JAS.2021.1003904

Distributed Subgradient Algorithm for Multi-Agent Optimization With Dynamic Stepsize

doi: 10.1109/JAS.2021.1003904
Funds: This work was supported by the Key Research and Development Project in Guangdong Province (2020B0101050001), the National Natural Science Foundation of China (61973214, 61590924, 61963030), and the Natural Science Foundation of Shanghai (19ZR1476200). Part of this work was presented at the 16th International Conference on Control & Automation, Oct. 9–11, 2020, Sapporo, Hokkaido, Japan (held virtually). Compared with the conference paper, this paper also includes the asynchronous case and more theoretical and numerical results.
  • Abstract: In this paper, we consider distributed convex optimization problems on multi-agent networks. We develop and analyze a distributed gradient method that allows each agent to compute its dynamic stepsize from a time-varying estimate of its local function value at the global optimal solution. Our approach applies to both synchronous and asynchronous communication protocols. Specifically, we propose the distributed subgradient with uncoordinated dynamic stepsizes (DS-UD) algorithm for the synchronous protocol and the AsynDGD algorithm for the asynchronous protocol. Theoretical analysis shows that the proposed algorithms guarantee that all agents reach a consensus on the solution to the multi-agent optimization problem. Moreover, the proposed approach with dynamic stepsizes eliminates the requirement of diminishing stepsizes in existing works. Numerical examples of distributed estimation in sensor networks are provided to illustrate the effectiveness of the proposed approach.
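    The full DS-UD and AsynDGD updates and their analysis are given in the paper itself. As a rough, non-authoritative illustration of the idea, the Python sketch below implements one synchronous round of a distributed subgradient method with a Polyak-style dynamic stepsize, in which each agent's stepsize is computed from a running estimate of its local optimal value. The function name, the mixing-matrix interface, and in particular the estimate-refinement heuristic are assumptions made for illustration, not the authors' specification.

    ```python
    import numpy as np

    def distributed_subgradient_step(X, W, grads, f_vals, f_est, delta=1e-3):
        """One synchronous round of a distributed subgradient method with
        Polyak-style dynamic stepsizes.

        Illustrative sketch only, not the paper's exact DS-UD update; the
        estimate-refinement rule below is a simple placeholder heuristic.

        X      : (n_agents, dim) current iterates, one row per agent
        W      : (n_agents, n_agents) doubly stochastic mixing matrix
        grads  : (n_agents, dim) local subgradients at the current iterates
        f_vals : (n_agents,) local function values f_i(x_i)
        f_est  : (n_agents,) running estimates of each f_i at the optimum
        """
        # Consensus step: each agent averages with its neighbours via W.
        X_mix = W @ X
        # Polyak-type dynamic stepsize per agent: gap between the current
        # local value and the running estimate of the optimal local value,
        # over the squared subgradient norm (guarded against division by 0).
        gaps = np.maximum(f_vals - f_est, 0.0)
        norms_sq = np.maximum(np.sum(grads ** 2, axis=1), 1e-12)
        steps = gaps / norms_sq
        # Subgradient step with uncoordinated (per-agent) stepsizes.
        X_new = X_mix - steps[:, None] * grads
        # Placeholder estimate refinement: push the estimate down whenever
        # the current value has (nearly) reached it, so the target tightens.
        f_est_new = np.where(f_vals <= f_est + delta, f_est - delta, f_est)
        return X_new, f_est_new
    ```

    Because the stepsize is recomputed from quantities each agent already holds locally, no global constant (Lipschitz bound, network size) has to be coordinated in advance, which is the practical appeal the abstract points to.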


  • [1]
    P. Yi, Y. Hong and F. Liu, “Distributed gradient algorithm for constrained optimization with application to load sharing in power systems,” Systems &Control Letters, vol. 83, pp. 45–52, 2015.
    [2]
    S. Nabavi, J. Zhang, and A. Chakrabortty, “Distributed optimization algorithms for wide-area oscillation monitoring in power systems using interregional pmu-pdc architectures,” IEEE Trans. Smart Grid, vol. 6, no. 5, pp. 2529–2538, 2015. doi: 10.1109/TSG.2015.2406578
    [3]
    P. Braun, L. Grune, C. M. Kellett, S. R. Weller, and K. Worthmann, “A distributed optimization algorithm for the predictive control of smart grids,” IEEE Trans. Automatic Control, vol. 61, no. 12, pp. 3898–3911, 2016. doi: 10.1109/TAC.2016.2525808
    [4]
    J. Lavaei, S. H. Low, R. Baldick, B. Zhang, D. K. Molzahn, F. Dorfler, and H. Sandberg, “Guest editorial distributed control and efficient optimization methods for smart grid,” IEEE Trans. Smart Grid, vol. 8, no. 6, pp. 2939–2940, 2017. doi: 10.1109/TSG.2017.2758907
    [5]
    S. Patterson, Y. C. Eldar, and I. Keidar, “Distributed compressed sensing for static and time-varying networks,” IEEE Trans. Signal Processing, vol. 62, no. 19, pp. 4931–4946, 2014. doi: 10.1109/TSP.2014.2340812
    [6]
    G. Mateos, J. A. Bazerque, and G. B. Giannakis, “Distributed sparse linear regression,” IEEE Trans. Signal Processing, vol. 58, no. 10, pp. 5262–5276, 2010. doi: 10.1109/TSP.2010.2055862
    [7]
    C. Mu and Y. Zhang, “Learning-based robust tracking control of quadrotor with time-varying and coupling uncertainties,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 1, pp. 259–273, 2019.
    [8]
    P. Richtárik and M. Takáč, “Distributed coordinate descent method for learning with big data,” The Journal of Machine Learning Research, vol. 17, no. 1, pp. 2657–2681, 2016.
    [9]
    C. Sun and C. Mu, “Important scientific problems of multi-agent deep reinforcement learning,” Acta Automatica Sinica, vol. 46, no. 7, pp. 1301–1312, 2020.
    [10]
    A. Nedic and A. Ozdaglar, “Distributed subgradient methods for multiagent optimization,” IEEE Trans. Automatic Control, vol. 54, no. 1, pp. 48–61, 2009. doi: 10.1109/TAC.2008.2009515
    [11]
    A. Nedic, A. Ozdaglar, and P. A. Parrilo, “Constrained consensus and optimization in multi-agent networks,” IEEE Trans. Automatic Control, vol. 55, no. 4, pp. 922–938, 2010. doi: 10.1109/TAC.2010.2041686
    [12]
    J. C. Duchi, A. Agarwal, and M. J. Wainwright, “Dual averaging for distributed optimization: Convergence analysis and network scaling,” IEEE Trans. Automatic Control, vol. 57, no. 3, pp. 592–606, 2012.
    [13]
    D. Jakovetic, J. Xavier, and J. M. F. Moura, “Fast distributed gradient methods,” IEEE Trans. Automatic Control, vol. 59, no. 5, pp. 1131–1146, 2014. doi: 10.1109/TAC.2014.2298712
    [14]
    A. Nedic, A. Olshevsky, and W. Shi, “Achieving geometric convergence for distributed optimization over time-varying graphs,” SIAM Journal on Optimization, vol. 27, no. 4, pp. 2597–2633, 2017. doi: 10.1137/16M1084316
    [15]
    K. Seaman, F. Bach, S. Bubeck, Y. T. Lee, and L. Massoulié, “Optimal algorithms for smooth and strongly convex distributed optimization in networks, ” in Proc. 34th Int. Conf. on Machine Learning, vol. 70, pp. 3027–3036, 2017.
    [16]
    W. Shi, Q. Ling, G. Wu, and W. Yin, “Extra: An exact first-order algorithm for decentralized consensus optimization,” SIAM Journal on Optimization, vol. 25, no. 2, pp. 944–966, 2015. doi: 10.1137/14096668X
    [17]
    C. Xi, V. S. Mai, R. Xin, E. H. Abed, and U. A. Khan, “Linear convergence in optimization over directed graphs with row-stochastic matrices,” IEEE Trans. Automatic Control, vol. 63, no. 10, pp. 3558–3565, 2018. doi: 10.1109/TAC.2018.2797164
    [18]
    F. Saadatniaki, R. Xin, and U. A. Khan, “Decentralized optimization over time-varying directed graphs with row and columnstochastic matrices, ” IEEE Trans. Automatic Control, 2020, DOI: 10.1109/TAC.2020.2969721.
    [19]
    A. Nedic, “Asynchronous broadcast-based convex optimization over a network,” IEEE Trans. Automatic Control, vol. 56, no. 6, pp. 1337–1351, 2011. doi: 10.1109/TAC.2010.2079650
    [20]
    S. Lee and A. Nedić, “Asynchronous gossip-based random projection algorithms over networks,” IEEE Trans. Automatic Control, vol. 61, no. 4, pp. 953–968, 2016. doi: 10.1109/TAC.2015.2460051
    [21]
    J. Lei, H. F. Chen, and H. T. Fang, “Asymptotic properties of primaldual algorithm for distributed stochastic optimization over random networks with imperfect communications,” SIAM Journal on Control and Optimization, vol. 56, no. 3, pp. 2159–2188, 2018. doi: 10.1137/16M1086133
    [22]
    A. Nedić and A. Olshevsky, “Distributed optimization over time-varying directed graphs,” IEEE Trans. Automatic Control, vol. 60, no. 3, pp. 601–615, 2015. doi: 10.1109/TAC.2014.2364096
    [23]
    P. Wang, P. Lin, W. Ren, and Y. Song, “Distributed subgradientbased multiagent optimization with more general step sizes,” IEEE Trans. Automatic Control, vol. 63, no. 7, pp. 2295–2302, 2018. doi: 10.1109/TAC.2017.2763782
    [24]
    K. Yuan, Q. Ling, and W. Yin, “On the convergence of decentralized gradient descent,” SIAM Journal on Optimization, vol. 26, no. 3, pp. 1835–1854, 2016. doi: 10.1137/130943170
    [25]
    B. T. Polyak, Introduction to Optimization. New York, USA: Optimization Software, Inc., 1987.
    [26]
    S. Boyd, “Subgradient methods. Lecture notes of EE364b, ” Tech. Rep, Standford Univ., Stanford, CA, USA, 2013.
    [27]
    R. A. Horn, R. A. Horn, and C. R. Johnson, Matrix analysis. Cambridge, UK: Cambridge University Press, 1990.
    [28]
    S. Boyd, A. Ghosh, B. Prabhakar, and D. Shah, “Randomized gossip algorithms,” IEEE Trans. on Information Theory, vol. 52, no. 6, pp. 2508–2530, 2006. doi: 10.1109/TIT.2006.874516



    Highlights

    • This paper proposes a novel dynamic stepsize selection approach for the distributed gradient algorithm and develops the associated distributed gradient algorithms for synchronous and asynchronous gossip-like communication protocols.
    • Unlike existing distributed algorithms, whose diminishing or fixed stepsizes are determined before the algorithm is run, the proposed algorithms use dynamic stepsizes that rely on time-varying estimates of the optimal function values generated at each iteration.
    • On the one hand, the dynamic stepsize requires no global knowledge of the network or the objective functions, which makes the proposed algorithms more widely applicable than distributed algorithms with fixed stepsizes. On the other hand, the dynamic stepsize avoids the inefficient computations caused by a diminishing stepsize, so the proposed algorithms can achieve faster convergence than distributed algorithms with diminishing stepsizes (a toy numerical illustration follows this list).
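    Building on the sketch after the abstract, here is a toy usage run in the spirit of the sensor-network estimation examples the abstract mentions: five agents on a ring jointly solve a small least-squares problem. The data, network, and iteration count are invented for illustration; with noiseless data each local optimal value is zero, so the optimal-value estimates can simply start at zero.

    ```python
    import numpy as np

    # Toy distributed least-squares estimation on a 5-agent ring. Each
    # "sensor" i holds (A_i, b_i) and the agents jointly minimize
    # sum_i ||A_i x - b_i||^2. This only exercises the sketch above;
    # it is not the paper's experiment.
    rng = np.random.default_rng(0)
    n_agents, m, dim = 5, 4, 3
    x_true = rng.normal(size=dim)
    A = rng.normal(size=(n_agents, m, dim))               # local measurements
    b = np.stack([A[i] @ x_true for i in range(n_agents)])  # noiseless readings

    # Doubly stochastic mixing matrix for a ring: self + two neighbours, 1/3 each.
    W = np.zeros((n_agents, n_agents))
    for i in range(n_agents):
        for j in (i - 1, i, i + 1):
            W[i, j % n_agents] = 1.0 / 3.0

    X = np.zeros((n_agents, dim))   # every agent starts from the origin
    f_est = np.zeros(n_agents)      # noiseless data: each optimal value is 0
    for _ in range(300):
        residuals = np.stack([A[i] @ X[i] - b[i] for i in range(n_agents)])
        grads = np.stack([2.0 * A[i].T @ residuals[i] for i in range(n_agents)])
        f_vals = np.sum(residuals ** 2, axis=1)
        X, f_est = distributed_subgradient_step(X, W, grads, f_vals, f_est)

    # All agents should end up close to x_true, and hence close to consensus.
    print(np.max(np.abs(X - x_true)))
    ```

    Note that no stepsize constant was tuned anywhere in this run; each agent's step followed entirely from its own residual and its optimal-value estimate, which is the behaviour the highlights describe.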
