Article

Dual constraints and adversarial learning for fair recommenders

Journal

KNOWLEDGE-BASED SYSTEMS
Volume 239

Publisher

ELSEVIER
DOI: 10.1016/j.knosys.2021.108058

Keywords

Fair recommendation; Graph neural network; Recommender systems; Adversarial learning

Funding

  1. National Natural Science Foundation of China [61772103, 62076046, 61976036, 62006034]
  2. Ministry of Education Humanities and Social Science Project [19YJCZH199]


Recommender systems have a profound impact on people's lifestyles, but fairness problems have been identified. The presence of sensitive information in user behavior data leads to unfairness. To address this, a fairness-aware recommender model with dual fairness constraints is proposed, utilizing an adversarial graph neural network and fairness constraints to improve the fairness of recommendations.
Recommender systems, which are built on common artificial intelligence technologies, have a profound impact on people's lifestyles. However, recent studies have demonstrated that recommender systems suffer from fairness problems: users with certain attributes are treated unfairly. A fair recommender is one in which users with different attributes achieve the same recommendation accuracy. In particular, recommender systems rely entirely on users' behavior data to learn preferences, which makes unfairness highly likely because behavior data usually contain users' sensitive information. Unfortunately, few studies have explored fairness problems in recommender systems. To alleviate this problem, we present a novel fairness-aware recommender with dual fairness constraints (FRFC) to improve fairness in recommendations and protect users' sensitive information from being exposed. This model has several advantages: first, an adversarial-based graph neural network (GNN) is proposed to prevent the target user's representation from being contaminated by the sensitive features of neighboring users; second, two fairness constraints are proposed to address the failure of the adversarial classifier on the whole dataset and unfair ranking losses. With this design, the FRFC model can effectively filter out users' sensitive information and give users with different attributes the same training opportunities, which helps produce fair recommendations. Finally, extensive experiments demonstrate that the proposed model can significantly improve the fairness of recommendation results. (c) 2021 Elsevier B.V. All rights reserved.
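The adversarial-filtering idea described in the abstract can be illustrated with a minimal sketch: an adversary tries to predict a sensitive attribute from user embeddings, while the embeddings are updated to increase the adversary's loss, suppressing the leaked attribute. This is a generic toy illustration of adversarial debiasing, not the paper's FRFC model; all names, dimensions, and hyperparameters below are hypothetical, and the recommendation loss that would normally accompany the adversarial term is omitted for brevity.

```python
import numpy as np

# Toy setup (illustrative, not from the paper): embeddings of 200 users whose
# first dimension leaks a binary sensitive attribute, standing in for the
# output of a GNN encoder.
rng = np.random.default_rng(0)
n_users, dim = 200, 8

s = rng.integers(0, 2, n_users).astype(float)   # sensitive attribute (0/1)
z = rng.normal(size=(n_users, dim))             # stand-in user embeddings
z[:, 0] += 3.0 * (s - 0.5)                      # inject a leak along dimension 0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def class_gap(z, s):
    """Difference of class means along the leaked dimension."""
    return z[s == 1, 0].mean() - z[s == 0, 0].mean()

gap_before = class_gap(z, s)

w = np.zeros(dim)        # adversary: logistic-regression weights
lr, lam = 0.05, 1.0      # learning rate and adversarial weight (illustrative)

for _ in range(60):
    p = sigmoid(z @ w)
    # Adversary step: gradient descent on its cross-entropy loss,
    # trying to recover the sensitive attribute from z.
    w += lr * z.T @ (s - p) / n_users
    # Encoder step: gradient *ascent* on the adversary's loss w.r.t. z
    # (the -lam * L_adv term of a combined objective L = L_rec - lam * L_adv),
    # which pushes the two sensitive groups' embeddings together.
    z += lr * lam * (p - s)[:, None] * w

gap_after = class_gap(z, s)
```

After the alternating updates, the separation between the two sensitive groups along the leaked dimension shrinks, so a classifier has less signal for recovering the attribute, which is the intuition behind filtering sensitive information out of the learned representations.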

Authors

