Article

Coordinated Backdoor Attacks against Federated Learning with Model-Dependent Triggers

Journal

IEEE NETWORK
Volume 36, Issue 1, Pages 84-90

Publisher

IEEE (Institute of Electrical and Electronics Engineers, Inc.)
DOI: 10.1109/MNET.011.2000783

Keywords

Collaborative work; Training; Servers; Neurons; Computational modeling; Aggregates; Task analysis

Funding

  1. National Natural Science Foundation of China [61972296]
  2. NSFC [U20B2049, 61822207]


This article investigates security issues in federated learning and proposes an effective coordinated backdoor attack that uses multiple local triggers against federated learning models. Experimental results show that this method outperforms both coordinated attacks using random triggers and single-trigger backdoor attacks in terms of attack success rate. The authors also show that Byzantine-resilient aggregation methods are not robust to these attacks.
Federated learning enables distributed training of deep learning models among user equipment (UE) to obtain a high-quality global model. A centralized server aggregates the updates submitted by UEs without knowledge of the local training data or process. Despite its privacy-preserving merit, we reveal a severe security concern. Malicious UEs can manipulate their training data by injecting a backdoor trigger, so the global model that aggregates those malicious updates may make false predictions on samples containing the backdoor trigger. However, the effect of a single backdoor trigger is quickly diluted by subsequent benign updates. In this work, we present an effective coordinated backdoor attack against federated learning that uses multiple local triggers; the global trigger consists of several separate local triggers. Moreover, in contrast to using random triggers, we propose model-dependent triggers (i.e., triggers generated from the attackers' local models) to conduct backdoor attacks. We conduct extensive experiments to assess the effectiveness of our proposed backdoor attacks on the MNIST and CIFAR-10 datasets. Experimental results show that our proposed method outperforms both coordinated attacks using random triggers and single-trigger backdoor attacks in terms of attack success rate. We also show that Byzantine-resilient aggregation methods are not robust to our proposed attacks.
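The coordinated-attack idea in the abstract can be illustrated with a toy sketch: a global trigger patch is split into local strips, each attacker stamps only its own strip onto a fraction of its training images and relabels them with the target class, and at inference time the full patch activates the backdoor. This is a hypothetical NumPy illustration on MNIST-shaped arrays; the patch size, placement, split scheme, and poisoning rate are assumptions, and the paper's model-dependent trigger generation (which uses the attackers' local models) is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_trigger(trigger, n_attackers):
    # Split a global trigger patch into vertical strips, one per attacker.
    return np.array_split(trigger, n_attackers, axis=1)

def stamp(images, patch, row, col):
    # Overwrite a (h, w) region at (row, col) on every image in the batch.
    out = images.copy()
    h, w = patch.shape
    out[:, row:row + h, col:col + w] = patch
    return out

def poison(images, labels, patch, row, col, target, rate=0.3):
    # Poison the first `rate` fraction of the batch with one attacker's
    # local trigger strip and flip those labels to the target class.
    n = int(len(images) * rate)
    images, labels = images.copy(), labels.copy()
    images[:n] = stamp(images[:n], patch, row, col)
    labels[:n] = target
    return images, labels

# Toy MNIST-like data: 8 grayscale images of 28x28, labels 0..7.
X = rng.random((8, 28, 28), dtype=np.float32)
y = np.arange(8)

global_trigger = np.ones((4, 4), dtype=np.float32)  # white square patch
strips = split_trigger(global_trigger, n_attackers=2)

# Each attacker stamps only its own strip during local training; the
# backdoor is activated at test time by applying the full 4x4 patch.
Xp0, yp0 = poison(X, y, strips[0], row=0, col=0, target=7)
Xp1, yp1 = poison(X, y, strips[1], row=0, col=2, target=7)
```

The point of the decomposition is that no single attacker's update carries the whole trigger, which (per the abstract) makes each local contribution harder to dilute or filter out than one shared global trigger submitted by a single malicious UE.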

Authors


