Article

Decentralized Stochastic Optimization and Machine Learning: A Unified Variance-Reduction Framework for Robust Performance and Fast Convergence

Journal

IEEE SIGNAL PROCESSING MAGAZINE
Volume 37, Issue 3, Pages 102-113

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/MSP.2020.2974267

Keywords

-

Funding

  1. National Science Foundation [CCF-1350264, CCF-1513936, CMMI-1903972, CBET-1935555]

Abstract

Decentralized methods to solve finite-sum minimization problems are important in many signal processing and machine learning tasks where the data samples are distributed across a network of nodes, and raw data sharing is not permitted due to privacy and/or resource constraints. In this article, we review decentralized stochastic first-order methods and provide a unified algorithmic framework that combines variance reduction with gradient tracking to achieve robust performance and fast convergence. We provide explicit theoretical guarantees of the corresponding methods when the objective functions are smooth and strongly convex and show their applicability to nonconvex problems via numerical experiments. Throughout the article, we provide intuitive illustrations of the main technical ideas by casting appropriate tradeoffs and comparisons among the methods of interest and by highlighting applications to decentralized training of machine learning models.
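The core idea the abstract describes is pairing a gradient-tracking consensus update with a variance-reduced local gradient estimator. As a rough illustration only (not the authors' exact algorithm), the sketch below combines gradient tracking with a SAGA-style estimator on a toy decentralized least-squares problem; the ring topology, node and sample counts, and step size are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, d = 4, 5, 3                    # nodes, samples per node, dimension (illustrative)
A = rng.normal(size=(n, m, d))       # sample a_ij at node i
b = rng.normal(size=(n, m))          # label b_ij at node i

# Global minimizer of (1/(n*m)) * sum_ij 0.5 * (a_ij^T x - b_ij)^2
H = sum(np.outer(A[i, j], A[i, j]) for i in range(n) for j in range(m))
g = sum(A[i, j] * b[i, j] for i in range(n) for j in range(m))
x_star = np.linalg.solve(H, g)

def grad(i, j, x):
    """Gradient of the j-th component function at node i."""
    a = A[i, j]
    return a * (a @ x - b[i, j])

# Doubly stochastic mixing matrix on a ring (lazy uniform weights)
W = np.eye(n) / 2
for i in range(n):
    W[i, (i - 1) % n] += 0.25
    W[i, (i + 1) % n] += 0.25

alpha, T = 0.01, 5000                # step size and iteration count (illustrative)
x = np.zeros((n, d))
# SAGA gradient table: most recent component gradient seen at each node
table = np.array([[grad(i, j, x[i]) for j in range(m)] for i in range(n)])
v = table.mean(axis=1)               # local variance-reduced estimators
y = v.copy()                         # gradient trackers (track the network-average gradient)

for _ in range(T):
    x = W @ x - alpha * y            # consensus step + descent along tracked direction
    v_new = np.empty_like(v)
    for i in range(n):
        s = rng.integers(m)          # sample one local component function
        gs = grad(i, s, x[i])
        # SAGA estimator: unbiased, with variance shrinking as the table warms up
        v_new[i] = gs - table[i, s] + table[i].mean(axis=0)
        table[i, s] = gs
    y = W @ y + v_new - v            # gradient-tracking update
    v = v_new

err = np.linalg.norm(x.mean(axis=0) - x_star)
consensus_gap = np.linalg.norm(x[0] - x[1])
```

On this strongly convex toy problem the iterates reach the global minimizer at a linear rate despite each node sampling only one component gradient per step, which is the kind of behavior the unified framework in the article formalizes.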

