Article

Nonconvex Regularizations for Feature Selection in Ranking With Sparse SVM

Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2013.2286696

Keywords

Feature selection; forward-backward splitting algorithms; learning to rank; nonconvex regularizations; regularized support vector machines; sparsity

Funding

  1. CALMIP [2012-32]
  2. Research Federation FREMIT [FR3424]
  3. Conseil General of Midi-Pyrenees [10009108]

Abstract

Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, only a few works have focused on integrating feature selection into the learning process. In this paper, we propose a general framework for feature selection in learning to rank using support vector machines with a sparse regularization term. We investigate both classical convex regularizations, such as ℓ1 or weighted ℓ1, and nonconvex regularization terms, such as the log penalty, the minimax concave penalty (MCP), or the ℓp pseudo-norm with p < 1. Two algorithms are proposed: the first is an accelerated proximal approach for solving the convex problems; the second is a reweighted ℓ1 scheme to address the nonconvex regularizations. We conduct extensive experiments on nine datasets from the LETOR 3.0 and LETOR 4.0 corpora. Numerical results show that the proposed nonconvex regularizations lead to sparser models while preserving prediction performance; the number of selected features is reduced by up to a factor of six compared with ℓ1 regularization. In addition, the software is publicly available on the web.
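
The abstract names the two solvers without detail, so a small illustration may help. The Python sketch below is not the authors' released software; it shows, under stated assumptions, the generic recipe the abstract describes: an accelerated (FISTA-style) proximal gradient step for a weighted ℓ1-regularized loss, wrapped in a reweighted ℓ1 outer loop that handles a nonconvex penalty (here the log penalty). The squared hinge loss on preference-pair difference vectors, the step size, and the parameter eps are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of the weighted l1 norm: elementwise shrinkage.
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def prox_grad_weighted_l1(D, lam, weights, w0, n_iter=500):
        # FISTA-style accelerated proximal gradient for
        #   min_w  sum_k max(0, 1 - <w, d_k>)^2 + lam * sum_i weights_i * |w_i|,
        # where each row d_k of D is a preference-pair difference x_i - x_j
        # (squared hinge pairwise loss: an assumption for illustration).
        L = 2.0 * np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
        w, z, t = w0.copy(), w0.copy(), 1.0
        for _ in range(n_iter):
            margin = 1.0 - D @ z
            grad = -2.0 * (D.T @ np.maximum(margin, 0.0))  # gradient of the loss
            w_new = soft_threshold(z - grad / L, lam * weights / L)
            t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
            z = w_new + ((t - 1.0) / t_new) * (w_new - w)   # momentum step
            w, t = w_new, t_new
        return w

    def reweighted_l1(D, lam, eps=0.1, n_outer=5):
        # Reweighted-l1 scheme for the nonconvex log penalty
        #   sum_i log(1 + |w_i| / eps):
        # each outer step linearizes the penalty around the current iterate
        # and solves the resulting weighted-l1 convex subproblem.
        d = D.shape[1]
        w, weights = np.zeros(d), np.ones(d)
        for _ in range(n_outer):
            w = prox_grad_weighted_l1(D, lam, weights, w)
            weights = 1.0 / (eps + np.abs(w))  # derivative of the log penalty
        return w

    # Usage on synthetic data (200 preference pairs, 30 features):
    rng = np.random.default_rng(0)
    D = rng.normal(size=(200, 30))
    w = reweighted_l1(D, lam=5.0)
    print(np.count_nonzero(np.abs(w) > 1e-8), "features selected")

The design point the abstract highlights is visible here: the convex weighted-ℓ1 subproblem is cheap to solve with proximal steps, and the nonconvexity enters only through the weight update, which drives more coefficients exactly to zero than plain ℓ1.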
