Journal
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
Volume 25, Issue 6, Pages 1118-1130
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TNNLS.2013.2286696
Keywords
Feature selection; forward-backward splitting algorithms; learning to rank; nonconvex regularizations; regularized support vector machines; sparsity
Funding
- CALMIP [2012-32]
- Research Federation FREMIT [FR3424]
- Conseil General of Midi-Pyrenees [10009108]
Abstract
Feature selection in learning to rank has recently emerged as a crucial issue. Whereas several preprocessing approaches have been proposed, few studies integrate feature selection into the learning process itself. In this paper, we propose a general framework for feature selection in learning to rank using support vector machines with a sparse regularization term. We investigate both classical convex regularizations, such as ℓ1 or weighted ℓ1, and nonconvex regularization terms, such as the log penalty, the minimax concave penalty, or the ℓp pseudo-norm with p < 1. Two algorithms are proposed: first, an accelerated proximal approach for solving the convex problems; second, a reweighted ℓ1 scheme to address the nonconvex regularizations. We conduct intensive experiments on nine datasets from the Letor 3.0 and Letor 4.0 corpora. Numerical results show that the proposed nonconvex regularizations lead to sparser models while preserving prediction performance: the number of selected features is reduced by up to a factor of six compared to ℓ1 regularization. The software is publicly available on the web.
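The abstract names two algorithmic ingredients: a proximal (forward-backward splitting) method for the convex ℓ1-regularized problem and a reweighted ℓ1 scheme for nonconvex penalties such as the log penalty. The paper's actual SVM ranking objective is not reproduced here; as a hedged illustration only, the sketch below applies the same two ideas to a least-squares surrogate loss, with all function names and parameter choices being this sketch's own assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the (possibly weighted) l1 norm.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_gradient(A, b, lam, w=None, n_iter=500):
    # ISTA (forward-backward splitting) for
    #   min_x 0.5 * ||A x - b||^2 + lam * sum_i w_i |x_i|
    if w is None:
        w = np.ones(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)           # forward (gradient) step
        x = soft_threshold(x - grad / L, lam * w / L)  # backward (prox) step
    return x

def reweighted_l1(A, b, lam, eps=1e-3, n_outer=5):
    # Majorize the nonconvex log penalty sum_i log(|x_i| + eps)
    # by a sequence of weighted l1 problems.
    w = np.ones(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_outer):
        x = prox_gradient(A, b, lam, w)
        w = 1.0 / (np.abs(x) + eps)  # small coefficients are penalized harder next round
    return x

if __name__ == "__main__":
    # Toy demo: recover a 2-sparse vector from noiseless measurements.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[3], x_true[11] = 2.0, -3.0
    b = A @ x_true
    print(np.count_nonzero(np.abs(reweighted_l1(A, b, 0.1)) > 1e-6))
```

On such toy data the reweighted scheme typically returns a support no larger than the plain ℓ1 solution's, mirroring the sparsity gain the abstract reports for the nonconvex penalties.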