4.4 Article

Fast-adapting and privacy-preserving federated recommender system

Journal

VLDB JOURNAL
Volume 31, Issue 5, Pages 877-896

Publisher

SPRINGER
DOI: 10.1007/s00778-021-00700-6

Keywords

Recommender system; Federated learning; Meta-learning

Funding

  1. ARC Discovery Project [DP190101985]
  2. ARC Future Fellowship [FT210100624]


In the mobile Internet era, recommender systems have become an irreplaceable tool for helping users discover useful items, thus alleviating the information overload problem. Recent research on deep neural network (DNN)-based recommender systems has made significant progress in improving prediction accuracy, largely attributed to widely accessible large-scale user data. Such data are commonly collected from users' personal devices and then centrally stored on a cloud server to facilitate model training. However, with rising public concern over user privacy leakage on online platforms, online users are becoming increasingly anxious about abuses of their personal data. Therefore, it is urgent and beneficial to develop a recommender system that achieves both high prediction accuracy and strong privacy protection. To this end, we propose a DNN-based recommendation model called PrivRec that runs in a decentralized federated learning (FL) environment, which ensures that a user's data are fully retained on her/his personal device while still contributing to training an accurate model. Furthermore, to better embrace data heterogeneity in FL (e.g., users' data vary significantly in scale and quality), we introduce a first-order meta-learning method that enables fast on-device personalization with only a few data points. To defend against potentially malicious participants that pose a serious security threat to other users, we further develop a user-level differentially private model, namely DP-PrivRec, so that attackers are unable to identify any arbitrary user from the trained model. To compensate for the performance loss incurred by adding noise during model updates, we introduce a two-stage training approach. Finally, we conduct extensive experiments on two large-scale datasets in a simulated FL environment, and the results validate the superiority of both PrivRec and DP-PrivRec.
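The abstract combines three ingredients: federated local training, a first-order meta-learning (Reptile-style) server update for fast per-user personalization, and user-level differential privacy via clipped, noised aggregate updates. The following minimal Python sketch illustrates how these pieces can fit together; it is not the authors' PrivRec/DP-PrivRec implementation, and all function names (local_sgd, clip_update, federated_round), the toy linear model, and the hyperparameters are illustrative assumptions.

# Illustrative sketch only: Reptile-style federated meta-learning with optional
# user-level DP (clipping + Gaussian noise). Not the paper's actual code.
import numpy as np

def local_sgd(global_weights, user_data, lr=0.01, steps=5):
    """Run a few SGD steps on one user's device; a toy linear model with
    squared loss stands in for the DNN recommender."""
    w = global_weights.copy()
    X, y = user_data
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def clip_update(update, clip_norm=1.0):
    """Clip a user's update to bound its L2 norm (user-level sensitivity)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def federated_round(global_weights, users, meta_lr=0.5,
                    dp=False, clip_norm=1.0, noise_mult=1.0, rng=None):
    """One round: each user adapts locally; the server applies a Reptile-style
    first-order meta-update, optionally adding DP noise to the average."""
    rng = rng or np.random.default_rng(0)
    updates = []
    for user_data in users:
        adapted = local_sgd(global_weights, user_data)
        delta = adapted - global_weights           # first-order meta-gradient proxy
        if dp:
            delta = clip_update(delta, clip_norm)  # bound each user's influence
        updates.append(delta)
    mean_update = np.mean(updates, axis=0)
    if dp:
        # Gaussian noise scaled to the clipped per-user sensitivity.
        sigma = noise_mult * clip_norm / len(users)
        mean_update += rng.normal(0.0, sigma, size=mean_update.shape)
    return global_weights + meta_lr * mean_update

# Usage: simulate users with heterogeneous data volumes, then train.
rng = np.random.default_rng(42)
dim = 8
true_w = rng.normal(size=dim)
users = []
for n in rng.integers(5, 50, size=20):             # data scale varies per user
    X = rng.normal(size=(int(n), dim))
    users.append((X, X @ true_w + 0.1 * rng.normal(size=int(n))))

w = np.zeros(dim)
for _ in range(30):
    w = federated_round(w, users, dp=True, noise_mult=0.5, rng=rng)

In this sketch, personalization corresponds to running local_sgd from the final global weights on a user's own few data points, while the dp flag switches the round from a PrivRec-like setup to a DP-PrivRec-like one.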
