Article

A robust multi-batch L-BFGS method for machine learning

Journal

OPTIMIZATION METHODS & SOFTWARE
Volume 35, Issue 1, Pages 191-219

Publisher

TAYLOR & FRANCIS LTD
DOI: 10.1080/10556788.2019.1658107

Keywords

L-BFGS; multi-batch; fault-tolerant; sampling; consistency; overlap

Funding

  1. DARPA Lagrange award [HR-001117S0039]
  2. U.S. National Science Foundation [CCF-1618717, CMMI-1663256, CCF-1740796]

Abstract

This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time. A similar challenge occurs in a multi-batch approach in which the data points used to compute the function and gradient are purposely changed at each iteration to accelerate the learning process. Difficulties arise because L-BFGS employs gradient differences to update the Hessian approximations, and when these gradients are computed using different data points the updating process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, studies the convergence properties for both convex and non-convex functions, and illustrates the behaviour of the algorithm on a distributed computing platform on logistic regression binary classification and neural network training problems that arise in machine learning.
