Article

Resisting membership inference attacks through knowledge distillation

Journal

NEUROCOMPUTING
Volume 452, Pages 114-126

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2021.04.082

Keywords

Privacy; Membership inference attacks; Knowledge distillation; Membership privacy; Deep learning

Funding

  1. National Key R&D Program of China [2020YFB1005702]
  2. National Natural Science Foundation of China [61772035, 61972005, 61932001]


Two deep learning algorithms, complementary knowledge distillation (CKD) and pseudo complementary knowledge distillation (PCKD), have been proposed to handle the trade-off between privacy and utility effectively.
Recently, membership inference attacks (MIAs) against machine learning models have been proposed. Using MIAs, adversaries can infer whether a data record is in the training set of the target model. Defense methods that use differential privacy mechanisms or adversarial training cannot handle the trade-off between privacy and utility well. Other methods based on knowledge transfer improve model utility but need public unlabeled data drawn from the same distribution as the private data, a requirement that may not be satisfied in some scenarios. To handle the trade-off between privacy and utility better, we propose two deep learning algorithms, i.e., complementary knowledge distillation (CKD) and pseudo complementary knowledge distillation (PCKD). In CKD, the transfer data for knowledge distillation all come from the private training set, but their soft targets are generated by a teacher model trained on their complementary set. With a similar idea, we propose PCKD, which reduces the training set of each teacher model and uses model averaging to generate the soft targets of the transfer data. Because a smaller training set leads to lower utility, PCKD uses pre-training to improve the utility of the teacher models. Experimental results on widely used datasets show that CKD and PCKD both decrease attack accuracy by nearly 25% on average with negligible utility loss. The training time of PCKD is nearly 40% lower than that of CKD. Compared with existing defense methods such as DMP, adversarial regularization, dropout, and DPSGD, CKD and PCKD have great advantages in handling the trade-off between privacy and utility. (c) 2021 Elsevier B.V. All rights reserved.
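The abstract's core mechanism is standard knowledge distillation with complementary soft targets: each private record receives soft labels from a teacher that never trained on it. A minimal NumPy sketch of the distillation ingredients is given below; the function names, the temperature value, and the two-partition schematic in the comments are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax, computed stably."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_targets(teacher_logits, temperature=4.0):
    """Soft targets from a teacher's logits (Hinton-style distillation)."""
    return softmax(teacher_logits, temperature)

def kd_loss(student_logits, soft_targets, temperature=4.0):
    """Cross-entropy between teacher soft targets and the student's
    tempered predictions, averaged over the batch."""
    p = softmax(student_logits, temperature)
    return -np.sum(soft_targets * np.log(p + 1e-12), axis=-1).mean()

# CKD idea (schematic, assuming a two-way split of the private set):
# each half's soft targets come from a teacher trained on the *other*
# half, so no teacher ever labels records it has seen in training:
#   targets_a = distillation_targets(teacher_b.predict(half_a))
#   targets_b = distillation_targets(teacher_a.predict(half_b))
# The student is then trained on (half_a, targets_a) + (half_b, targets_b).
```

Because the teacher providing a record's soft target never memorized that record, the student's outputs carry less membership signal, which is the intuition behind the reported drop in attack accuracy.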
