Article; Proceedings Paper

Multi-Objective Ranked Bandits for Recommender Systems

Journal

NEUROCOMPUTING
Volume 246, Pages 12-24

Publisher

ELSEVIER SCIENCE BV
DOI: 10.1016/j.neucom.2016.12.076

Keywords

Recommender systems; Online recommendation; Multi-armed bandits

Funding

  1. CNPq
  2. CEFET-MG [PROPESQ-10314/14]
  3. FAPEMIG [APQ-01400-14]
  4. Microsoft Azure Sponsorship CEFET-MG

Abstract

This paper addresses recommender systems that work with implicit feedback in dynamic scenarios and provide online recommendations, such as news article and ad recommendation in Web portals. In these dynamic scenarios, user feedback is given to the system through clicks, and this feedback needs to be quickly exploited to improve subsequent recommendations. For this setting, we propose an algorithm named multi-objective ranked bandits, which, in contrast with current methods in the literature, is able to recommend lists of items that are accurate, diverse and novel. The algorithm relies on four main components: a scalarization function, a set of recommendation quality metrics, a dynamic prioritization scheme for weighting these metrics, and a base multi-armed bandit algorithm. Results show that our algorithm provides improvements of 7.8% and 10.4% in click-through rate on two real-world large-scale datasets when compared to the single-objective state-of-the-art algorithm. (C) 2017 Elsevier B.V. All rights reserved.
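The abstract names four components (scalarization, quality metrics, dynamic metric weighting, and a base bandit) without giving the algorithm itself. The minimal Python sketch below shows one way these components might fit together: linear scalarization over per-metric value estimates, an epsilon-greedy base bandit standing in for the paper's base algorithm, and a naive gap-based weight update standing in for the dynamic prioritization scheme. All class and method names are hypothetical, not the authors' implementation.

```python
import random


class MultiObjectiveRankedBandit:
    """Hypothetical sketch of a scalarized multi-objective bandit for ranked recommendation."""

    def __init__(self, n_items, metrics=("accuracy", "diversity", "novelty"), epsilon=0.1):
        self.n_items = n_items
        self.metrics = metrics
        self.epsilon = epsilon  # exploration rate of the epsilon-greedy base bandit (assumed)
        # Per-metric, per-item value estimates and per-item pull counts.
        self.values = {m: [0.0] * n_items for m in metrics}
        self.counts = [0] * n_items
        # Dynamic prioritization: one weight per quality metric, updated from feedback.
        self.weights = {m: 1.0 / len(metrics) for m in metrics}

    def scalarize(self, item):
        """Linear scalarization: weighted sum of the per-metric estimates for one item."""
        return sum(self.weights[m] * self.values[m][item] for m in self.metrics)

    def recommend(self, k):
        """Return a ranked list of k items using an epsilon-greedy base bandit."""
        if random.random() < self.epsilon:
            return random.sample(range(self.n_items), k)
        return sorted(range(self.n_items), key=self.scalarize, reverse=True)[:k]

    def update(self, item, rewards):
        """Update estimates from click feedback; `rewards` maps each metric to a score in [0, 1]."""
        self.counts[item] += 1
        n = self.counts[item]
        for m in self.metrics:
            self.values[m][item] += (rewards[m] - self.values[m][item]) / n
        # Naive dynamic prioritization: shift weight toward metrics that currently score low.
        gaps = {m: 1.0 - self.values[m][item] for m in self.metrics}
        total = sum(gaps.values())
        if total > 0:
            for m in self.metrics:
                self.weights[m] = gaps[m] / total
```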
