Article

Stability-Based Generalization Analysis of Distributed Learning Algorithms for Big Data

Journal

IEEE Transactions on Neural Networks and Learning Systems

Publisher

IEEE - Institute of Electrical and Electronics Engineers Inc.
DOI: 10.1109/TNNLS.2019.2910188

Keywords

Big data; distributed learning algorithms; distributed simulations; generalization

Funding

  1. National Key Research and Development Program of China [2018YFB1305104]
  2. National Natural Science Foundation of China [61673118, 61533019]
  3. Beijing Municipal Science and Technology Commission [Z181100008918007]
  4. Intel Collaborative Research Institute for Intelligent and Automated Connected Vehicles (ICRI-IACV)
  5. Shanghai Talents Development Funds [201629]

Abstract

As one of the most efficient approaches to dealing with big data, divide-and-conquer distributed algorithms, such as distributed kernel regression, bootstrap, and structured perceptron training algorithms, have been proposed and are broadly used in learning systems. Learning theories have been developed to analyze the feasibility, approximation, and convergence bounds of these distributed learning algorithms. However, much less work has studied their stability. In this paper, we discuss the generalization bounds of distributed learning algorithms from the viewpoint of algorithmic stability. First, we introduce a definition of uniform distributed stability for distributed algorithms and derive generalization risk bounds for algorithms satisfying it. Then, we analyze the stability properties and generalization risk bounds of a class of regularization-based distributed algorithms. The two generalization risk bounds obtained show that the difference between the generalization distributed risk and the empirical distributed or leave-one-computer-out risk is closely related to the sample size n and the number of working computers m, scaling as O(m/n^{1/2}). Furthermore, the results indicate that, for a regularized distributed kernel algorithm to generalize well, the regularization parameter lambda should be adjusted as the term m/n^{1/2} changes. These theoretical findings provide useful guidance for deploying distributed algorithms on practical big data platforms. We illustrate our theoretical analyses with two simulation experiments. Finally, we discuss several problems concerning the sufficient number of working computers, nonequivalence, and generalization for distributed learning, and we show that rules that hold for computation on a single computer may not always hold for distributed learning.
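To make the divide-and-conquer setting concrete, below is a minimal Python sketch of a distributed kernel ridge regression of the kind the abstract refers to: the n samples are split across m working computers, each solves a local regularized kernel problem, and the global predictor averages the local ones. The Gaussian kernel, the toy data, and the specific scaling constant for lambda are illustrative assumptions, not the paper's construction; only the idea that lambda should be adjusted with the term m/n^{1/2} is taken from the abstract.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between rows of A and rows of B."""
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * sq)

def local_krr(X, y, lam, gamma=1.0):
    """One worker's kernel ridge regression: solve (K + lam*n*I) alpha = y."""
    n = X.shape[0]
    K = gaussian_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam * n * np.eye(n), y)
    return X, alpha

def distributed_krr(X, y, m, lam, gamma=1.0):
    """Divide-and-conquer KRR: split the data over m workers, fit locally."""
    parts = zip(np.array_split(X, m), np.array_split(y, m))
    return [local_krr(Xj, yj, lam, gamma) for Xj, yj in parts]

def predict(models, X_test, gamma=1.0):
    """Global predictor: average of the m local kernel predictors."""
    preds = [gaussian_kernel(X_test, Xj, gamma) @ alpha for Xj, alpha in models]
    return np.mean(preds, axis=0)

# Toy usage: n samples split over m workers, with lambda scaled by m / sqrt(n),
# mirroring (heuristically) the suggestion that lambda should track m / n^(1/2).
rng = np.random.default_rng(0)
n, m = 2000, 8
X = rng.uniform(-1, 1, size=(n, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(n)
lam = 0.1 * m / np.sqrt(n)          # illustrative scaling, not the paper's exact rule
models = distributed_krr(X, y, m, lam)
X_test = np.linspace(-1, 1, 200)[:, None]
y_hat = predict(models, X_test)
```

The averaging step is what makes the number of working computers m enter the stability analysis: replacing one sample perturbs only a single local estimator, but that local estimator is trained on roughly n/m points, so its sensitivity (and hence the bound) grows with m for a fixed n.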

