Article

Byzantine-Resilient Secure Federated Learning

Journal

IEEE Journal on Selected Areas in Communications

Publisher

Institute of Electrical and Electronics Engineers (IEEE)
DOI: 10.1109/JSAC.2020.3041404

Keywords

Federated learning; privacy-preserving machine learning; Byzantine-resilience; distributed training in mobile networks

Funding

  1. Defense Advanced Research Projects Agency (DARPA) [HR001117C0053]
  2. Army Research Office (ARO) [W911NF1810400]
  3. NSF [CCF-1703575, CCF-1763673]
  4. Office of Naval Research (ONR) [N00014-16-1-2189]
  5. U.S. Department of Defense (DOD) [W911NF1810400]

Summary

Secure federated learning improves machine learning models by training on data collected from mobile users in a privacy-preserving manner. The BREA framework provides Byzantine-resilient secure aggregation through stochastic quantization, verifiable outlier detection, and secure model aggregation, guaranteeing privacy and convergence. Experiments demonstrate convergence and accuracy comparable to conventional benchmarks even in the presence of adversarial users.
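
As an illustration of the first of these building blocks, the sketch below shows unbiased stochastic quantization of a real-valued update onto an integer grid, the kind of preprocessing that lets updates be handled with finite-field (cryptographic) operations. This is a minimal toy sketch, not the authors' implementation; the function names and the levels parameter are illustrative assumptions.

```python
# Toy sketch (not the paper's implementation): unbiased stochastic quantization
# of a real-valued model update onto an integer grid, so that it can later be
# processed with finite-field operations. `levels` is an illustrative choice.
import numpy as np

def stochastic_quantize(update, levels=1024, rng=None):
    """Round each coordinate up or down at random so the quantized value
    is an unbiased estimate of the original."""
    rng = np.random.default_rng() if rng is None else rng
    scaled = update * levels                    # move onto the quantization grid
    low = np.floor(scaled)                      # nearest grid point below
    frac = scaled - low                         # fractional part in [0, 1)
    round_up = rng.random(scaled.shape) < frac  # round up with probability = frac
    return (low + round_up).astype(np.int64)

def dequantize(q, levels=1024):
    return q / levels

# Unbiasedness check: averaging many quantizations recovers the input.
x = np.array([0.1234, -0.5678, 0.9999])
avg = np.mean([dequantize(stochastic_quantize(x)) for _ in range(2000)], axis=0)
print(np.round(avg, 3))   # close to [0.123, -0.568, 1.0]
```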

Abstract

Secure federated learning is a privacy-preserving framework to improve machine learning models by training over large volumes of data collected by mobile users. This is achieved through an iterative process where, at each iteration, users update a global model using their local datasets. Each user then masks its local update via random keys, and the masked models are aggregated at a central server to compute the global model for the next iteration. As the local updates are protected by random masks, the server cannot observe their true values. This presents a major challenge for the resilience of the model against adversarial (Byzantine) users, who can manipulate the global model by modifying their local updates or datasets. Towards addressing this challenge, this paper presents the first single-server Byzantine-resilient secure aggregation framework (BREA) for secure federated learning. BREA is based on an integrated stochastic quantization, verifiable outlier detection, and secure model aggregation approach to guarantee Byzantine-resilience, privacy, and convergence simultaneously. We provide theoretical convergence and privacy guarantees and characterize the fundamental trade-offs in terms of the network size, user dropouts, and privacy protection. Our experiments demonstrate convergence in the presence of Byzantine users, and comparable accuracy to conventional federated learning benchmarks.
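
To make the masking step described above concrete, the following toy sketch shows additive masking with pairwise random keys, as in conventional secure aggregation: each pair of users shares a mask, one adds it and the other subtracts it, so the masks cancel when the server sums the masked updates while no individual update is revealed. This is a minimal sketch under simplifying assumptions (masks sampled directly rather than derived through key agreement, no user dropouts, illustrative field size and model dimension); it is not BREA's verifiable secret-sharing and outlier-detection protocol.

```python
# Toy sketch of pairwise additive masking for secure aggregation (simplifying
# assumptions; not BREA itself). The server sees only masked updates, yet their
# sum equals the true aggregate because the pairwise masks cancel.
import numpy as np

p = 2**31 - 1            # prime modulus for finite-field arithmetic (illustrative)
d = 4                    # model dimension (illustrative)
n_users = 3
rng = np.random.default_rng(0)

# Quantized local updates, already mapped into the field [0, p).
updates = rng.integers(0, 1000, size=(n_users, d), dtype=np.int64)

# Pairwise shared masks; in a real protocol users i and j would derive these
# from a shared key (e.g. via key agreement) rather than a global rng.
pair_mask = {(i, j): rng.integers(0, p, size=d, dtype=np.int64)
             for i in range(n_users) for j in range(i + 1, n_users)}

def masked_update(i):
    """User i adds the mask shared with every j > i and subtracts the mask
    shared with every j < i, hiding its true update from the server."""
    y = updates[i].copy()
    for j in range(n_users):
        if i < j:
            y = (y + pair_mask[(i, j)]) % p
        elif j < i:
            y = (y - pair_mask[(j, i)]) % p
    return y

# The masks cancel in the sum, so the server recovers the aggregate mod p.
server_sum = sum(masked_update(i) for i in range(n_users)) % p
print(server_sum)                  # matches the line below
print(updates.sum(axis=0) % p)
```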

Authors

Jinhyun So, Başak Güler, A. Salman Avestimehr

Reviews

Primary rating: 4.7 (insufficient ratings)
Secondary ratings (novelty, significance, scientific rigor): not yet rated