Decentralized optimization on compact submanifolds: Linear consensus and gradient-type methods
Research Discussion Seminar Series
Title: Decentralized optimization on compact submanifolds: Linear consensus and gradient-type methods
Speaker: Jiang Hu (户将), Harvard Medical School and Massachusetts General Hospital
Place: Tencent Meeting, ID 954 817 795
Time: 19:00–20:00, Friday, September 8, 2023
Inviter: Huajie Chen (陈华杰)
Abstract
Due to its wide-ranging applications and growing concerns over privacy and robustness, decentralized manifold optimization has attracted significant attention. In this talk, we consider decentralized nonconvex optimization over a compact submanifold, where each local agent's objective function, defined by its local dataset, is smooth. First, by leveraging the proximal smoothness of the compact submanifold, we show that a convexity-like regularity condition, referred to as the restricted secant inequality, always holds in an explicitly characterized neighborhood of the solution set of the nonconvex consensus problem. This allows us to establish local linear convergence of the projected gradient descent method and the Riemannian gradient method with unit step size for solving the consensus problem over the compact submanifold, which plays a central role in the design and analysis of decentralized algorithms. Second, we propose two decentralized methods: the decentralized projected Riemannian gradient descent (DPRGD) method and the decentralized projected Riemannian gradient tracking (DPRGT) method. We establish their convergence rates of $\mathcal{O}(1/\sqrt{K})$ and $\mathcal{O}(1/K)$, respectively, for reaching a stationary point. To the best of our knowledge, DPRGT is the first decentralized algorithm to achieve exact convergence for decentralized optimization over a compact submanifold. The key ingredients in the proofs are Lipschitz-type inequalities for the projection operator and for smooth functions on the compact submanifold, which may be of independent interest. Finally, we demonstrate the effectiveness of the proposed methods compared with state-of-the-art ones through numerical experiments on eigenvalue problems and low-rank matrix completion.
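To make the flavor of the methods concrete, the following is a minimal sketch of a DPRGD-style iteration instantiated on the Stiefel manifold for a decentralized eigenvalue problem. It is an illustration under stated assumptions, not the algorithm as presented in the talk: the local objectives f_i(X) = -tr(X^T A_i X)/2, the mixing matrix W, the step size alpha, and all function names (proj_stiefel, riem_grad, dprgd) are hypothetical choices made for this sketch.

```python
import numpy as np

def proj_stiefel(Y):
    """Metric projection onto St(n, r) = {X : X^T X = I_r} via the polar
    decomposition (an assumed choice of projection for this sketch)."""
    U, _, Vt = np.linalg.svd(Y, full_matrices=False)
    return U @ Vt

def riem_grad(X, G):
    """Riemannian gradient: project the Euclidean gradient G onto the
    tangent space of the Stiefel manifold at X."""
    XtG = X.T @ G
    return G - X @ (XtG + XtG.T) / 2

def dprgd(A_list, W, r, alpha=0.05, K=500, seed=0):
    """Sketch of a DPRGD-style loop for min_X -0.5 * sum_i tr(X^T A_i X)
    over St(n, r), i.e. computing leading eigenvectors of sum_i A_i.

    Agent i mixes its neighbors' iterates with weights W[i, j] (W assumed
    doubly stochastic), takes a local Riemannian gradient step, and
    projects back onto the manifold.
    """
    rng = np.random.default_rng(seed)
    n, d = A_list[0].shape[0], len(A_list)
    X0 = proj_stiefel(rng.standard_normal((n, r)))
    X = [X0.copy() for _ in range(d)]  # common initialization for all agents
    for _ in range(K):
        X = [
            proj_stiefel(
                sum(W[i, j] * X[j] for j in range(d))         # consensus/mixing step
                - alpha * riem_grad(X[i], -A_list[i] @ X[i])  # local gradient step
            )
            for i in range(d)
        ]
    return X
```

Note that setting alpha = 0 reduces the loop to the projected consensus iteration x_i ← P_M(Σ_j W_ij x_j), whose local linear convergence on proximally smooth compact submanifolds is the first result described in the abstract; the gradient-tracking variant (DPRGT) additionally maintains a tracking variable per agent and is not shown here.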
About the Speaker
Jiang Hu (户将) is a postdoctoral fellow at Harvard Medical School and Massachusetts General Hospital. He received his Ph.D. from Peking University in 2020 under the supervision of Prof. Zaiwen Wen (文再文). From 2021 to 2022 he was a postdoctoral researcher at The Chinese University of Hong Kong, and since March 2022 he has been a postdoctoral researcher at Harvard Medical School and Massachusetts General Hospital. His main research interests include smooth and nonsmooth optimization, distributed optimization and federated learning, and their applications in medical image analysis. He has published a number of papers in SIAM journals, among others, and has coauthored the textbooks 《最优化:建模、算法与理论》 (Optimization: Modeling, Algorithms and Theory) and 《最优化计算方法》 (Computational Methods for Optimization).