Article

A coordinate gradient descent method for linearly constrained smooth optimization and support vector machines training

Journal

COMPUTATIONAL OPTIMIZATION AND APPLICATIONS
Volume 47, Issue 2, Pages 179-206

Publisher

SPRINGER
DOI: 10.1007/s10589-008-9215-4

Keywords

Support vector machine; Quadratic program; Continuous quadratic knapsack problem; Linear constraints; Conformal realization; Coordinate gradient descent; Global convergence; Linear convergence rate; Error bound

Funding

  1. National Science Foundation [DMS-0511283]

Abstract

Support vector machine (SVM) training may be posed as a large quadratic program (QP) with bound constraints and a single linear equality constraint. We propose a (block) coordinate gradient descent method for solving this problem and, more generally, linearly constrained smooth optimization. Our method is closely related to decomposition methods currently popular for SVM training. We establish global convergence and, under a local error bound assumption (which is satisfied by the SVM QP), a linear rate of convergence for our method when the coordinate block is chosen by a Gauss-Southwell-type rule to ensure sufficient descent. We show that, for the SVM QP with n variables, this rule can be implemented in O(n) operations using Rockafellar's notion of conformal realization. Thus, for SVM training, our method requires only O(n) operations per iteration and, in contrast to existing decomposition methods, achieves linear convergence without additional assumptions. We report our numerical experience with the method on some large SVM QPs arising from two-class data classification. Our experience suggests that the method can be efficient for SVM training with nonlinear kernels.
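
To make the iteration concrete, here is a minimal Python sketch of the kind of coordinate descent step the abstract describes, applied to the SVM dual QP: minimize (1/2) a^T Q a - e^T a subject to y^T a = 0 and 0 <= a_i <= C. The working set is a two-coordinate block chosen by a maximal-violating-pair rule, a simple Gauss-Southwell-type choice in the spirit of the decomposition methods the paper relates to; the authors' O(n) conformal-realization selection and general coordinate blocks are not reproduced here, and the function name svm_dual_cgd and all implementation details are illustrative.

```python
import numpy as np

def svm_dual_cgd(Q, y, C, tol=1e-3, max_iter=100_000):
    """Illustrative two-coordinate descent for the SVM dual QP

        min_a  0.5 a^T Q a - e^T a
        s.t.   y^T a = 0,  0 <= a_i <= C  for all i,

    where Q[i, j] = y_i y_j K(x_i, x_j) is assumed positive semidefinite.
    The working set is picked by a maximal-violating-pair rule, a simple
    Gauss-Southwell-type choice; this is NOT the paper's O(n)
    conformal-realization selection.
    """
    n = len(y)
    a = np.zeros(n)                # feasible start: y^T a = 0, bounds hold
    grad = -np.ones(n)             # grad f(a) = Q a - e, evaluated at a = 0

    for _ in range(max_iter):
        # Coordinates along which a can still move without leaving [0, C].
        can_up  = ((y > 0) & (a < C)) | ((y < 0) & (a > 0))
        can_low = ((y > 0) & (a > 0)) | ((y < 0) & (a < C))
        score = -y * grad          # KKT-violation scores
        i = int(np.argmax(np.where(can_up, score, -np.inf)))
        j = int(np.argmin(np.where(can_low, score, np.inf)))
        if score[i] - score[j] <= tol:   # approximate KKT point reached
            break

        # Exact minimization over the 2-variable block: step along
        # d = y_i e_i - y_j e_j, which preserves y^T a = 0.
        curv = Q[i, i] + Q[j, j] - 2.0 * y[i] * y[j] * Q[i, j]
        t = (score[i] - score[j]) / max(curv, 1e-12)
        # Clip the step so both coordinates stay inside [0, C].
        t = min(t,
                C - a[i] if y[i] > 0 else a[i],
                a[j] if y[j] > 0 else C - a[j])

        a[i] += y[i] * t
        a[j] -= y[j] * t
        grad += t * (y[i] * Q[:, i] - y[j] * Q[:, j])
    return a

# Toy usage (hypothetical data): linear kernel on separable 2-D points.
X = np.array([[1.0, 1.0], [2.0, 2.0], [-1.0, -1.0], [-2.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
Q = (X @ X.T) * np.outer(y, y)
alpha = svm_dual_cgd(Q, y, C=1.0)
```

Note that each iteration touches only two columns of Q, so the per-iteration cost is O(n) once those columns are available, consistent with the per-iteration cost the abstract highlights (assuming kernel columns are cached or cheap to form).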
