Article

Gradient sliding for composite optimization

Journal

MATHEMATICAL PROGRAMMING
Volume 159, Issue 1-2, Pages 201-235

Publisher

SPRINGER HEIDELBERG
DOI: 10.1007/s10107-015-0955-5

Keywords

Convex programming; Complexity; Gradient sliding; Nesterov's method; Data analysis

Funding

  1. NSF CAREER Award [CMMI-1254446, CMMI-1537414]
  2. NSF [DMS-1319050]
  3. ONR [N00014-13-1-0036]
  4. Div Of Civil, Mechanical, & Manufact Inn
  5. Directorate For Engineering [1637474] Funding Source: National Science Foundation

We consider in this paper a class of composite optimization problems whose objective function is given by the summation of a general smooth and nonsmooth component, together with a relatively simple nonsmooth term. We present a new class of first-order methods, namely the gradient sliding algorithms, which can skip the computation of the gradient for the smooth component from time to time. As a consequence, these algorithms require only O(1/√ε) gradient evaluations for the smooth component in order to find an ε-solution for the composite problem, while still maintaining the optimal O(1/ε²) bound on the total number of subgradient evaluations for the nonsmooth component. We then present a stochastic counterpart of these algorithms and establish similar complexity bounds for solving an important class of stochastic composite optimization problems. Moreover, if the smooth component in the composite function is strongly convex, the developed gradient sliding algorithms can significantly reduce the number of gradient and subgradient evaluations for the smooth and nonsmooth components to O(log(1/ε)) and O(1/ε), respectively. Finally, we generalize these algorithms to the case when the smooth component is replaced by a nonsmooth one possessing a certain bilinear saddle point structure.
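The core idea described above — evaluating the gradient of the smooth component only once per outer iteration while running several cheap subgradient steps on the nonsmooth component in between — can be illustrated with a minimal sketch. This is not the paper's exact scheme (it omits the accelerated averaging and the paper's step-size policy); the toy problem f(x) = ½‖Ax − b‖², h(x) = λ‖x‖₁ and the function name `gradient_sliding_sketch` are assumptions for illustration only.

```python
import numpy as np

def gradient_sliding_sketch(A, b, lam, x0, outer_iters=50, inner_iters=20,
                            step_inner=0.01):
    """Illustrative gradient-sliding loop (simplified; not the paper's exact method).

    Smooth component:    f(x) = 0.5 * ||A x - b||^2
    Nonsmooth component: h(x) = lam * ||x||_1  (handled by subgradient steps)
    """
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of grad f
    step_outer = 1.0 / L
    x = x0.copy()
    for _ in range(outer_iters):
        # One gradient evaluation of the smooth component per outer iteration.
        g = A.T @ (A @ x - b)
        u = x.copy()
        # Inner "sliding" phase: subgradient steps on the prox-linearized model
        #   <g, u> + h(u) + (1 / (2 * step_outer)) * ||u - x||^2,
        # which reuses g instead of recomputing the gradient of f.
        for _ in range(inner_iters):
            sub_h = lam * np.sign(u)   # a subgradient of lam * ||u||_1
            u = u - step_inner * (g + sub_h + (u - x) / step_outer)
        x = u
    return x
```

Each outer iteration thus costs one gradient of f and `inner_iters` subgradients of h, matching the skewed evaluation counts the abstract describes.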
