Journal
IEEE TRANSACTIONS ON COMPUTERS
Volume 62, Issue 6, Pages 1221-1233
Publisher
IEEE COMPUTER SOC
DOI: 10.1109/TC.2012.62
Keywords
Learning-to-rank; sparse models; ranking algorithm; Fenchel duality
Funding
- National Science Foundation of China [61003045, 61003241, 61033010]
- Natural Science Foundation of Guangdong Province, China [10451027501005667]
- Educational Commission of Guangdong Province, China
- Fundamental Research Funds for the Central Universities
Abstract
Learning-to-rank for information retrieval has gained increasing interest in recent years. Inspired by the success of sparse models, we consider the problem of sparse learning-to-rank, where the learned ranking models are constrained to have only a few nonzero coefficients. We begin by formulating the sparse learning-to-rank problem as a convex optimization problem with a sparsity-inducing l1 constraint. Since the l1 constraint is nondifferentiable, the critical issue is how to solve the optimization problem efficiently. To address this issue, we propose a learning algorithm from the primal-dual perspective. Furthermore, we prove that, after at most O(1/epsilon) iterations, the proposed algorithm is guaranteed to obtain an epsilon-accurate solution. This convergence rate is better than that of the popular subgradient descent algorithm, i.e., O(1/epsilon^2). Empirical evaluation on several public benchmark data sets demonstrates the effectiveness of the proposed algorithm: 1) Compared to methods that learn dense models, learning a ranking model with a sparsity constraint significantly improves ranking accuracy. 2) Compared to other methods for sparse learning-to-rank, the proposed algorithm tends to obtain sparser models and achieves superior gains in both ranking accuracy and training time. 3) Compared to several state-of-the-art algorithms, the ranking accuracies of the proposed algorithm are very competitive and stable.
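The abstract describes learning a linear ranking model whose weight vector is constrained to an l1 ball, which is what induces sparsity. As a minimal illustrative sketch only (this is a generic projected-subgradient approach on a pairwise hinge loss, not the paper's Fenchel-duality primal-dual algorithm, and all function names and parameters below are hypothetical), the constrained problem can be written as follows:

```python
import numpy as np

def project_l1_ball(w, radius=1.0):
    """Euclidean projection of w onto the set {v : ||v||_1 <= radius}."""
    if np.abs(w).sum() <= radius:
        return w
    u = np.sort(np.abs(w))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    ks = np.arange(1, len(u) + 1)
    rho = np.max(np.nonzero(u - (css - radius) / ks > 0)[0])
    theta = (css[rho] - radius) / (rho + 1)
    # soft-threshold: components below theta are zeroed, giving sparsity
    return np.sign(w) * np.maximum(np.abs(w) - theta, 0.0)

def train_sparse_ranker(X, y, radius=1.0, lr=0.1, iters=200):
    """Projected subgradient descent for a linear ranker w under
    ||w||_1 <= radius, using a pairwise hinge loss: for every pair
    with y[i] > y[j], we want (x_i - x_j) @ w >= 1."""
    n, d = X.shape
    w = np.zeros(d)
    pairs = [(i, j) for i in range(n) for j in range(n) if y[i] > y[j]]
    for _ in range(iters):
        g = np.zeros(d)
        for i, j in pairs:
            if (X[i] - X[j]) @ w < 1.0:   # hinge active: margin violated
                g -= X[i] - X[j]
        # subgradient step, then project back onto the l1 ball
        w = project_l1_ball(w - lr * g / max(len(pairs), 1), radius)
    return w
```

The projection step is what keeps the learned model sparse: whenever the unconstrained step leaves the l1 ball, soft-thresholding zeroes out the smallest coefficients. A subgradient scheme like this converges at the slower O(1/epsilon^2) rate the abstract mentions, which is the baseline the paper's O(1/epsilon) primal-dual method improves upon.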