Article

Differentially Private Byzantine-Robust Federated Learning

Journal

IEEE Transactions on Parallel and Distributed Systems

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TPDS.2022.3167434

Keywords

Collaborative work; Privacy; Servers; Differential privacy; Computational modeling; Training; Data models; Federated learning; differential privacy; byzantine-robust

Funding

  1. National Natural Science Foundation of China [62072132, 61960206014, 62032012]


This article introduces an efficient differentially private Byzantine-robust federated learning scheme that can effectively prevent adversarial attacks and protect the privacy of individual participants.
Federated learning is a collaborative machine learning framework in which a global model is trained by different organizations under privacy restrictions. Promising as it is, privacy and robustness issues emerge when an adversary attempts to infer private information from the exchanged parameters or to compromise the global model. Various protocols have been proposed to counter these security risks; however, it remains challenging to make federated learning protocols robust against Byzantine adversaries while preserving the privacy of individual participants. In this article, we propose a differentially private Byzantine-robust federated learning scheme (DPBFL) with high computation and communication efficiency. The proposed scheme is effective in preventing adversarial attacks launched by Byzantine participants and achieves differential privacy through a novel aggregation protocol in the shuffle model. Theoretical analysis indicates that the scheme converges to an approximate optimal solution with a learning error that depends on the differential privacy budget and the number of Byzantine participants. Experimental results on MNIST, FashionMNIST and CIFAR10 demonstrate that the proposed scheme is effective and efficient.
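The abstract combines two ingredients: a Byzantine-robust aggregation rule and differentially private noise addition. The sketch below illustrates those two ingredients generically with a coordinate-wise median and Gaussian noise; it is not DPBFL's actual shuffle-model protocol, and all constants (client counts, clipping norm, noise scale) are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (all values hypothetical): 10 clients send 5-dimensional
# gradient updates, 3 of which are Byzantine and report garbage.
n_clients, dim, n_byz = 10, 5, 3
true_grad = np.ones(dim)
updates = true_grad + 0.1 * rng.standard_normal((n_clients, dim))
updates[:n_byz] = 100.0  # Byzantine clients submit large malicious values

# Byzantine-robust aggregation: the coordinate-wise median ignores the
# outliers as long as honest clients form a majority. (DPBFL defines its
# own aggregation protocol; the median is only a common stand-in here.)
robust = np.median(updates, axis=0)

# Differential privacy: clip the aggregate and add Gaussian noise. In a
# real scheme sigma is calibrated to the privacy budget (epsilon, delta);
# here it is just an illustrative constant.
clip_norm = 5.0
clipped = robust * min(1.0, clip_norm / np.linalg.norm(robust))
sigma = 0.05
private = clipped + sigma * rng.standard_normal(dim)

print(np.round(private, 2))  # stays near the honest gradient despite attackers
```

Because only three of ten clients are malicious, the median per coordinate lands among honest values, so the noised result tracks the true gradient; the noise scale then trades accuracy for privacy, echoing the paper's learning-error dependence on the privacy budget.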
