Journal
IEEE NETWORK
Volume 36, Issue 1, Pages 84-90
Publisher
IEEE (Institute of Electrical and Electronics Engineers)
DOI: 10.1109/MNET.011.2000783
Keywords
Collaborative work; Training; Servers; Neurons; Computational modeling; Aggregates; Task analysis
Funding
- National Natural Science Foundation of China [61972296]
- NSFC [U20B2049, 61822207]
Abstract
Federated learning enables distributed training of deep learning models among user equipment (UE) to obtain a high-quality global model. A centralized server aggregates the updates submitted by UEs without knowledge of the local training data or process. Despite its privacy-preserving merit, we reveal a severe security concern: malicious UEs can manipulate their training data by injecting a backdoor trigger, so the global model that aggregates those malicious updates may make false predictions on samples carrying the trigger. However, the effect of a single backdoor trigger is quickly diluted by subsequent benign updates. In this work, we present an effective coordinated backdoor attack against federated learning using multiple local triggers; the global trigger consists of several separate local triggers. Moreover, in contrast to using random triggers, we propose model-dependent triggers (i.e., generated from the attackers' local models) to conduct backdoor attacks. We conduct extensive experiments to assess the effectiveness of the proposed attacks on the MNIST and CIFAR-10 datasets. Experimental results show that our method outperforms both coordinated attacks using random triggers and single-trigger backdoor attacks in terms of attack success rate. We also show that Byzantine-resilient aggregation methods are not robust to our attacks.
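The coordinated attack described in the abstract can be illustrated with a minimal toy sketch: a global pixel-pattern trigger is split into disjoint local patches, one per malicious client, each client stamps only its own patch into its shard and flips the labels, and the server performs plain FedAvg over the submitted updates. All names, shapes, and the row-of-patches trigger layout below are illustrative assumptions, not the authors' implementation (in particular, the paper's model-dependent trigger generation is not reproduced here).

```python
import numpy as np

IMG = 28  # MNIST-sized images (assumption for this toy example)

def make_local_triggers(n_attackers, size=4):
    """Split one global trigger (a row of patches) into per-attacker masks."""
    triggers = []
    for k in range(n_attackers):
        mask = np.zeros((IMG, IMG), dtype=bool)
        mask[0:size, k * size:(k + 1) * size] = True  # disjoint patches
        triggers.append(mask)
    return triggers

def poison(images, labels, mask, target_label):
    """Stamp a local trigger into each image and flip labels to the target."""
    poisoned = images.copy()
    poisoned[:, mask] = 1.0  # max pixel intensity inside the patch
    return poisoned, np.full_like(labels, target_label)

def fedavg(updates, weights):
    """Plain FedAvg: weighted mean of the clients' model updates."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    return sum(wi * ui for wi, ui in zip(w, updates))

# Toy run: 3 attackers each poison their own shard with a different local
# patch; at test time the global trigger is the union of all local patches.
rng = np.random.default_rng(0)
imgs = rng.random((6, IMG, IMG))
labels = rng.integers(0, 10, size=6)
local_masks = make_local_triggers(3)
shards = [poison(imgs[i * 2:(i + 1) * 2], labels[i * 2:(i + 1) * 2],
                 mask, target_label=7)
          for i, mask in enumerate(local_masks)]
global_trigger = np.logical_or.reduce(local_masks)
```

The key property mirrored here is that no single client ever submits the full global trigger, which is what makes each individual contribution look closer to a benign update than a single-trigger attack would.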