Article
Physics, Multidisciplinary
Hao Tang, Boning Li, Guoqing Wang, Haowei Xu, Changhao Li, Ariel Barr, Paola Cappellaro, Ju Li
Summary: This work presents a communication-efficient quantum algorithm for solving the least-squares fitting and softmax regression problems in distributed machine learning. The algorithm achieves a communication complexity of O(log₂(N)/ε), providing a communication advantage over classical and other quantum methods. The quantum bipartite correlator algorithm used in this work can be further applied to other information-processing tasks.
PHYSICAL REVIEW LETTERS
(2023)
Article
Computer Science, Artificial Intelligence
Yifei Cheng, Shuheng Shen, Xianfeng Liang, Jingchang Liu, Joya Chen, Tie Zhang, Enhong Chen
Summary: Communication efficiency is crucial for the performance of federated learning, but non-IID data distributions add to the communication cost. To tackle this challenge, the SQUARFA algorithm is proposed, which achieves an optimal convergence rate and communication complexity for both convex and non-convex objectives under specific conditions. Experimental results demonstrate the superiority of the proposed algorithm.
Article
Computer Science, Artificial Intelligence
Xingcai Zhou, Le Chang, Pengfei Xu, Shaogao Lv
Summary: Communication efficiency and robustness are two major issues in modern distributed learning frameworks. This paper proposes two communication-efficient and robust distributed learning algorithms for convex problems. The algorithms are provably robust against Byzantine failures and achieve optimal statistical rates. Simulated and real data experiments are conducted to demonstrate the numerical performance of the algorithms.
PATTERN RECOGNITION
(2023)
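The paper's specific aggregation rules aren't detailed above; as a point of reference, here is a minimal sketch of one standard Byzantine-robust aggregator, the coordinate-wise median (all names are illustrative, not the paper's):

```python
import numpy as np

def robust_aggregate(worker_grads):
    """Coordinate-wise median of worker gradients: a classic
    Byzantine-robust aggregator. With fewer than half the workers
    faulty, each output coordinate lies within the honest range."""
    stacked = np.stack(worker_grads)        # shape: (n_workers, dim)
    return np.median(stacked, axis=0)

# Example: 4 honest workers plus 1 Byzantine worker sending garbage.
honest = [np.array([1.0, 2.0]) + 0.1 * np.random.randn(2) for _ in range(4)]
byzantine = [np.array([1e6, -1e6])]
print(robust_aggregate(honest + byzantine))  # stays near [1, 2]
```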
Article
Biology
Changgee Chang, Zhiqi Bu, Qi Long
Summary: Electronic health records (EHRs) provide opportunities for precision medicine, but sharing data across sites is a challenge. We propose a method that aggregates information from external sites by treating their unshared data as missing data, and further suggest incorporating posterior samples from remote sites to improve parameter estimates.
Article
Engineering, Electrical & Electronic
Kun Cheng, Fengxian Guo, Mugen Peng
Summary: Distributed machine learning (DML) is a promising computing paradigm for enabling edge intelligence in wireless networks. This paper studies the convergence and system implementation of DML over a wireless device-to-device (D2D) network. After introducing the DML training process and system model and analyzing the convergence rate and delay, the paper proposes a system implementation approach that improves the convergence rate and reduces the delay. Experimental results show that the proposed D2D framework effectively reduces training delay and improves computation efficiency.
IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY
(2023)
Article
Engineering, Electrical & Electronic
Haoyang Hu, Youlong Wu, Yuanming Shi, Songze Li, Chunxiao Jiang, Wei Zhang
Summary: Distributed multi-task learning can improve generalization performance by jointly learning multiple models and exploiting task-related information. However, it suffers from communication bottlenecks, especially in large-scale scenarios. To address this, the paper proposes coded computing schemes for flexible and fixed data placements, reducing communication loads and speeding up training.
IEEE TRANSACTIONS ON COMMUNICATIONS
(2023)
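The paper's coded schemes are tailored to multi-task data placement; the general flavor of coded computing can be sketched with classic gradient coding (in the style of Tandon et al.), where redundant data placement lets a server recover the full gradient from any subset of workers. The 3-worker, 1-straggler scheme below is a textbook example, not the paper's construction:

```python
import numpy as np

# Gradient coding: 3 workers, each holding 2 of 3 data partitions,
# tolerating any single straggler.  g[i] is partition i's gradient;
# the server wants g[0] + g[1] + g[2].
rng = np.random.default_rng(0)
g = [rng.standard_normal(4) for _ in range(3)]

# Each worker transmits a single coded vector instead of raw gradients.
sent = {
    1: 0.5 * g[0] + g[1],   # worker 1 holds partitions 0, 1
    2: g[1] - g[2],         # worker 2 holds partitions 1, 2
    3: 0.5 * g[0] + g[2],   # worker 3 holds partitions 0, 2
}

# Decoding coefficients for every pair of surviving workers.
decode = {(1, 2): (2, -1), (1, 3): (1, 1), (2, 3): (1, 2)}

full = g[0] + g[1] + g[2]
for (a, b), (ca, cb) in decode.items():
    assert np.allclose(ca * sent[a] + cb * sent[b], full)
print("full gradient recovered from any 2 of 3 workers")
```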
Article
Computer Science, Theory & Methods
Yuhao Zhou, Qing Ye, Jiancheng Lv
Summary: In this article, the authors propose an innovative framework for federated learning called Overlap-FedAvg, which reduces communication overhead by parallelizing model training and model communication phases. Extensive experiments demonstrate that the proposed framework substantially reduces communication overhead and achieves good performance on multiple tasks and datasets.
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
(2022)
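A minimal sketch of the overlap idea: hide communication latency by transmitting the previous round's update on a background thread while the current round trains (the training and transfer stand-ins are toy placeholders, not the paper's API):

```python
import threading
import time

def local_train(model):
    time.sleep(0.05)    # stand-in for one round of local training
    return model + 1    # toy "update"

def communicate(update):
    time.sleep(0.05)    # stand-in for upload and aggregation

model, pending = 0, None
for step in range(10):
    # Start sending last round's update while this round trains.
    if pending is not None:
        tx = threading.Thread(target=communicate, args=(pending,))
        tx.start()
    model = local_train(model)
    if pending is not None:
        tx.join()       # communication cost was hidden behind training
    pending = model
```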
Article
Computer Science, Theory & Methods
Haozhao Wang, Song Guo, Zhihao Qu, Ruixuan Li, Ziming Liu
Summary: The communication bottleneck poses a challenge in large-scale decentralized training systems. This work proposes ECSD-SGD, a method that accelerates decentralized training through error-compensated sparsification, supported by theoretical analysis and experimental validation.
IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS
(2022)
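A minimal sketch of error-compensated top-k sparsification, the primitive at the heart of the method (the choice of k and the buffer handling are illustrative; the paper's decentralized variant adds neighbor averaging on top):

```python
import numpy as np

class ErrorCompensatedTopK:
    """Send only the k largest-magnitude entries; fold the dropped
    remainder into the next round so no signal is permanently lost."""

    def __init__(self, dim, k):
        self.residual = np.zeros(dim)
        self.k = k

    def compress(self, grad):
        corrected = grad + self.residual       # error compensation
        idx = np.argpartition(np.abs(corrected), -self.k)[-self.k:]
        sparse = np.zeros_like(corrected)
        sparse[idx] = corrected[idx]
        self.residual = corrected - sparse     # remember what was dropped
        return sparse                          # only k values are transmitted

comp = ErrorCompensatedTopK(dim=8, k=2)
print(comp.compress(np.arange(8.0)))   # keeps the two largest entries
print(comp.residual)                   # the rest waits for the next round
```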
Article
Engineering, Electrical & Electronic
Sagar Shrestha, Xiao Fu
Summary: Classic and deep learning-based generalized canonical correlation analysis (GCCA) algorithms aim to find low-dimensional common representations of data entities from multiple views using linear transformations and neural networks, respectively. Federated learning-based GCCA is motivated when the views are stored at different computing agents and data sharing is undesired. This work proposes a communication-efficient federated learning framework for linear and deep GCCA that addresses the high communication overhead through aggressive compression of the exchanged information. The proposed algorithm achieves a substantial reduction in communication overhead without sacrificing accuracy or convergence speed relative to the unquantized version.
IEEE TRANSACTIONS ON SIGNAL PROCESSING
(2023)
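The paper's compression scheme is not spelled out above; a minimal sketch of one standard ingredient, unbiased stochastic (QSGD-style) quantization of the exchanged vectors, conveys the flavor (the level count and function name are illustrative):

```python
import numpy as np

def stochastic_quantize(x, levels=4):
    """Unbiased s-level quantization: each entry is randomly rounded to
    one of `levels` discrete magnitudes so the expectation equals x."""
    norm = np.linalg.norm(x)
    if norm == 0:
        return x
    scaled = np.abs(x) / norm * levels
    lower = np.floor(scaled)
    # Round up with probability equal to the fractional part.
    q = lower + (np.random.rand(*x.shape) < scaled - lower)
    return np.sign(x) * q * norm / levels

x = np.random.randn(5)
print(x)
print(stochastic_quantize(x))  # few discrete magnitudes, same overall scale
```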
Article
Engineering, Electrical & Electronic
Guangfeng Yan, Tan Li, Shao-Lun Huang, Tian Lan, Linqi Song
Summary: This paper proposes an Adaptively-Compressed Stochastic Gradient Descent (AC-SGD) strategy that adjusts the quantization and sparsification parameters based on the norm of the gradients, the communication budget, and the remaining number of iterations. By solving an optimization problem, the authors obtain an enhanced compression algorithm that significantly improves model accuracy under a given communication budget.
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
(2022)
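A toy sketch of the adaptive idea: choose each round's quantization resolution from the current gradient norm and the bit budget still available (the allocation rule below is a crude stand-in, not the paper's optimized solution):

```python
def pick_quantization_levels(grad_norm, bits_left, rounds_left, dim):
    """Crude stand-in for AC-SGD's optimized parameter choice: split the
    remaining bit budget evenly over rounds, then spend relatively more
    resolution when the current gradient is large."""
    bits_this_round = bits_left / max(rounds_left, 1)
    bits_per_coord = max(1.0, bits_this_round / dim)
    levels = 2.0 ** bits_per_coord
    return max(2, int(levels * min(1.0, grad_norm)))

# Large gradients and a generous budget buy fine quantization ...
print(pick_quantization_levels(grad_norm=5.0, bits_left=1e6,
                               rounds_left=100, dim=1000))
# ... small gradients near the end of a tight budget get coarse levels.
print(pick_quantization_levels(grad_norm=0.05, bits_left=1e4,
                               rounds_left=100, dim=1000))
```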
Article
Engineering, Electrical & Electronic
Yo-Seb Jeon, Mohammad Mohammadi Amiri, Namyoon Lee
Summary: This paper proposes a communication-efficient strategy for federated learning over multiple-input multiple-output (MIMO) multiple access channels (MACs). The strategy compresses high-dimensional local gradients into lower-dimensional vectors using block sparsification and performs joint MIMO detection and sparse local-gradient recovery at the receiver. Simulation results demonstrate that the proposed method significantly reduces communication cost while maintaining classification accuracy.
IEEE TRANSACTIONS ON COMMUNICATIONS
(2022)
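The MIMO detection side is channel-specific, but the device-side block sparsification step can be sketched simply: partition the gradient into blocks and keep only the highest-energy ones (block count, selection rule, and names are illustrative):

```python
import numpy as np

def block_sparsify(grad, num_blocks, keep):
    """Split grad into equal blocks and zero out all but the `keep`
    highest-energy blocks; only kept values and block indices are sent."""
    blocks = np.array_split(grad, num_blocks)
    energy = [np.linalg.norm(b) for b in blocks]
    selected = set(np.argsort(energy)[-keep:].tolist())
    out = [b if i in selected else np.zeros_like(b)
           for i, b in enumerate(blocks)]
    return np.concatenate(out), sorted(selected)

g = np.random.randn(12)
sparse_g, idx = block_sparsify(g, num_blocks=4, keep=1)
print(idx, sparse_g)   # one surviving block of three entries
```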
Article
Engineering, Electrical & Electronic
Myung Cho, Lifeng Lai, Weiyu Xu
Summary: This paper explores the convergence rate of distributed dual coordinate ascent for machine learning on a general tree-structured network. By analyzing the network effect and optimizing the algorithm considering communication delays, the study aims to maximize convergence speed. Numerical experiments demonstrate the algorithm's usability in tree networks where direct communication to a central node is not possible.
IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS
(2021)
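The dual updates themselves are problem-specific, but the communication pattern, aggregating partial results along tree edges instead of through a central hub, can be sketched generically (tree layout and payloads are illustrative):

```python
import numpy as np

# Each node adds its children's partial sums to its own local update and
# forwards the result to its parent: one message per tree edge, and no
# node ever needs a direct link to a central server.
tree = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}   # node -> children
local = {n: np.random.randn(3) for n in tree}        # per-node updates

def aggregate(node):
    total = local[node].copy()
    for child in tree[node]:
        total += aggregate(child)
    return total

print(aggregate(0))            # equals the sum of all local updates
print(sum(local.values()))
```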
Article
Mathematics
Xingcai Zhou, Hao Shen
Summary: Distributed learning has become increasingly important in the era of big data. This article proposes a novel efficient distributed sparse learning algorithm, CSLSVM, which is based on a communication-efficient surrogate likelihood framework for high-dimensional problems with convex or nonconvex penalties. Numerical experiments show that the proposed approach is highly competitive with the centralized method.
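The communication-efficient surrogate likelihood (CSL) framework underlying the algorithm has a simple core: a hub machine minimizes its own local loss shifted by the gap between the global and local gradients, so each round costs only one gradient exchange. A sketch for plain least squares (the paper's SVM loss and sparsity penalties are omitted):

```python
import numpy as np

rng = np.random.default_rng(1)
theta_true = rng.standard_normal(3)
# Data split across 4 machines; machine 0 acts as the hub.
Xs = [rng.standard_normal((50, 3)) for _ in range(4)]
ys = [X @ theta_true + 0.1 * rng.standard_normal(50) for X in Xs]

def grad(X, y, theta):
    return X.T @ (X @ theta - y) / len(y)

theta = np.zeros(3)
for _ in range(5):
    # One communication round: workers send local gradients at theta.
    g_global = np.mean([grad(X, y, theta) for X, y in zip(Xs, ys)], axis=0)
    shift = g_global - grad(Xs[0], ys[0], theta)
    # Hub minimizes its local loss plus a linear correction term;
    # for least squares this surrogate has a closed-form solution.
    H = Xs[0].T @ Xs[0] / len(ys[0])
    theta = np.linalg.solve(H, Xs[0].T @ ys[0] / len(ys[0]) - shift)

print(np.round(theta - theta_true, 3))   # near zero after a few rounds
```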
Article
Computer Science, Artificial Intelligence
Dimitris Stripelis, Paul M. Thompson, Jose Luis Ambite
Summary: Federated Learning is a promising machine learning method for learning from distributed data without actually sharing it. However, in heterogeneous environments, existing approaches show poor performance. This study introduces an energy-efficient Semi-Synchronous Federated Learning protocol that achieves fast convergence and minimal idle time by mixing local models.
ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY
(2022)
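A toy sketch of the semi-synchronous idea: rather than waiting for every client (synchronous) or mixing one at a time (asynchronous), the server mixes whatever local work completed within a fixed time window, weighting by the number of steps each client finished (timings, the training stand-in, and the weighting are illustrative):

```python
import numpy as np

def semi_sync_round(global_model, client_speeds, window):
    """Every client trains for the whole window; faster clients finish
    more local steps and contribute proportionally more to the mix."""
    updates, weights = [], []
    for speed in client_speeds:
        steps = int(speed * window)           # steps done within the window
        local = global_model - 0.01 * steps   # toy stand-in for training
        updates.append(local)
        weights.append(steps)
    return np.average(updates, weights=weights)

model = 1.0
for _ in range(3):
    model = semi_sync_round(model, client_speeds=[1.0, 2.5, 4.0], window=10)
print(model)   # no client ever idles waiting for the slowest one
```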
Article
Automation & Control Systems
Tianyi Chen, Kaiqing Zhang, Georgios B. Giannakis, Tamer Basar
Summary: This article discusses the problem of distributed policy optimization in reinforcement learning and proposes a novel policy gradient approach that reduces communication overhead without degrading learning performance.
IEEE TRANSACTIONS ON CONTROL OF NETWORK SYSTEMS
(2022)
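One common way to realize such savings, consistent with the summary, is lazy aggregation: a worker uploads a fresh policy gradient only when it differs enough from the last one it sent, and the server reuses stale gradients otherwise (the threshold rule below is illustrative):

```python
import numpy as np

class LazyWorker:
    """Uploads a new gradient only if it changed enough since the last
    upload; otherwise the server keeps using the previously sent one."""

    def __init__(self, dim, threshold):
        self.last_sent = np.zeros(dim)
        self.threshold = threshold

    def maybe_send(self, grad):
        if np.linalg.norm(grad - self.last_sent) > self.threshold:
            self.last_sent = grad.copy()
            return grad        # one round of communication spent
        return None            # communication skipped this round

w = LazyWorker(dim=4, threshold=0.5)
for t in range(5):
    g = 0.1 * t * np.ones(4)   # slowly drifting gradient
    msg = w.maybe_send(g)
    print(t, "sent" if msg is not None else "skipped")
```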