Journal
IEEE INTERNET OF THINGS JOURNAL
Volume 8, Issue 24, Pages 17308-17319
Publisher
IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/JIOT.2021.3079472
Keywords
Training; Servers; Hidden Markov models; Data models; Convergence; Computational modeling; Biological system modeling; Convergence bound; defensive mechanism; federated learning (FL); unreliable clients
Funding
- National Natural Science Foundation of China [61872184, 62002170, 62071296]
- National Key Project [2020YFB1807700, 2018YFB1801102]
- Sciences and Technology Commission of Shanghai (STCSM) [20JC1416502]
- U.S. National Science Foundation [CCF-1908308]
This article examines the security risks that federated learning faces when training machine learning models among distributed clients, proposes a defensive mechanism named DeepSA, and validates its effectiveness through theoretical analysis and experiments, comparing it against other state-of-the-art defensive mechanisms.
Owing to its low communication costs and privacy-promoting capabilities, federated learning (FL) has become a promising tool for training effective machine learning models among distributed clients. However, with this distributed architecture, low-quality models could be uploaded to the aggregator server by unreliable clients, leading to a degradation or even a collapse of training. In this article, we model these unreliable client behaviors and propose a defensive mechanism to mitigate such a security risk. Specifically, we first investigate the impact of unreliable clients on the trained models by deriving a convergence upper bound on the loss function based on the gradient descent updates. Our bound reveals that, with a fixed amount of total computational resources, there exists an optimal number of local training iterations in terms of convergence performance. We further design a novel defensive mechanism, named deep neural network-based secure aggregation (DeepSA). Our experimental results validate the theoretical analysis. In addition, the effectiveness of DeepSA is verified by comparison with other state-of-the-art defensive mechanisms.
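The threat model above can be illustrated with a toy simulation. The sketch below is not the authors' DeepSA (the paper uses a trained deep neural network to score uploaded models; that detail is not reproduced here): instead, a simple median-distance filter stands in for the learned scorer, and all names (`local_update`, `fedavg`, `robust_aggregate`, `train`) and numeric settings are hypothetical. It shows how plain federated averaging is dragged away from the optimum by unreliable clients, while filtering suspicious updates before averaging restores convergence.

```python
def local_update(w, lr=0.1):
    # One gradient-descent step on a toy quadratic loss f(w) = w**2,
    # whose optimum is w = 0 (each reliable client does honest work).
    return w - lr * (2 * w)

def fedavg(updates):
    # Plain federated averaging: trust every uploaded model equally.
    return sum(updates) / len(updates)

def robust_aggregate(updates, threshold=0.5):
    # Stand-in for a learned filter such as DeepSA (assumed behavior):
    # discard updates far from the median, then average the rest.
    med = sorted(updates)[len(updates) // 2]
    kept = [u for u in updates if abs(u - med) <= threshold]
    return sum(kept) / len(kept)

def train(aggregate, rounds=30, n_clients=10, n_bad=3, w0=1.0):
    # n_bad unreliable clients upload a fixed corrupted model each round.
    w = w0
    for _ in range(rounds):
        good = [local_update(w) for _ in range(n_clients - n_bad)]
        bad = [8.0] * n_bad
        w = aggregate(good + bad)
    return w

print(train(fedavg))            # pulled toward the corrupted value, far from 0
print(train(robust_aggregate))  # converges close to the true optimum 0
```

With averaging, each round mixes in the corrupted uploads, so the global model settles far from the optimum; the filtered aggregator keeps only the mutually consistent updates and recovers ordinary convergence. The actual DeepSA replaces the hand-set `threshold` rule with a neural network that learns to recognize low-quality models.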
Authors