Article

Privacy in Neural Network Learning: Threats and Countermeasures

Journal

IEEE Network
Vol. 32, No. 4, pp. 61-67

Publisher

IEEE (Institute of Electrical and Electronics Engineers), Inc.
DOI: 10.1109/MNET.2018.1700447

Keywords

-

Funding

  1. National Natural Science Foundation of China [61672151, 61370205, 61772340, 61472255, 61420106010]
  2. Fundamental Research Funds for the Central Universities [EG2018028]
  3. Shanghai Rising-Star Program [17QA1400100]
  4. DHU Distinguished Young Professor Program

Abstract

Algorithmic breakthroughs, the feasibility of collecting huge amounts of data, and increasing computational power all contribute to the remarkable achievements of neural networks (NNs). In particular, since deep neural network (DNN) learning has delivered astonishing results in speech and image recognition, the number of sophisticated applications built on it has exploded. However, a growing number of privacy leakage incidents have been reported, and their severe consequences have raised serious concerns in this area. In this article, we focus on privacy issues in NN learning. First, we identify the privacy threats that arise during NN training and present privacy-preserving training schemes based on centralized and distributed approaches. Second, we consider the privacy of prediction requests and discuss privacy-preserving protocols for NN prediction. We also analyze the privacy vulnerabilities of trained models: three types of attacks on the private information embedded in trained NN models are discussed, and a differential privacy-based solution is introduced.
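The abstract mentions a differential privacy-based defense but does not spell out the mechanism. Below is a minimal, illustrative sketch of one widely used approach to differentially private NN training: per-example gradient clipping followed by Gaussian noise, in the style of DP-SGD. The function name and the `clip_norm` and `noise_multiplier` parameters are assumptions for illustration, not details taken from the article.

```python
import numpy as np

def dp_noisy_gradient(per_example_grads, clip_norm=1.0, noise_multiplier=1.1):
    """Privatize a batch gradient via clipping + Gaussian noise (DP-SGD style).

    per_example_grads: array of shape (batch_size, num_params).
    Returns a noisy average gradient suitable for one SGD step.
    """
    # Clip each example's gradient to L2 norm <= clip_norm, bounding any
    # single example's influence (the sensitivity of the sum).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale

    # Sum the clipped gradients, then add Gaussian noise calibrated to
    # the clipping norm.
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        scale=noise_multiplier * clip_norm, size=clipped.shape[1]
    )
    return noisy_sum / len(per_example_grads)
```

In practice the noise scale is derived from a target (epsilon, delta) privacy budget via a privacy accountant; this sketch omits that bookkeeping.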
