Article

Defense against neural trojan attacks: A survey

Journal

NEUROCOMPUTING
Volume 423, Issue -, Pages 651-667

Publisher

ELSEVIER
DOI: 10.1016/j.neucom.2020.07.133

Keywords

Deep learning; Trojan attacks; Backdoor attacks; Defense

Funding

  1. Dongguk University
  2. Basic Science Research Program through the National Research Foundation of Korea (NRFK) - Ministry of Education [2018R1D1A1B07041981]
  3. National Research Foundation of Korea [2018R1D1A1B07041981] Funding Source: Korea Institute of Science & Technology Information (KISTI), National Science & Technology Information Service (NTIS)

Abstract

Deep learning techniques have become significantly prevalent in many real-world problems, including a variety of detection, recognition, and classification tasks. Obtaining high-performance neural networks requires enormous training datasets, memory, and time-consuming computation, which has increased the demand for outsourced training among users. As a result, machine-learning-as-a-service (MLaaS) providers or a third party can gain an opportunity to put the model's security at risk by training the model with malicious inputs. The malicious functionality inserted into the neural network by the adversary is activated only in the presence of specific inputs. These attacks on neural networks, called trojan or backdoor attacks, are very stealthy and hard to detect because they do not affect the network's performance on clean datasets. In this paper, we describe two important threat models and focus on recently proposed detection and mitigation techniques against these types of attacks on neural networks. We summarize, discuss, and compare the defense methods and their corresponding results. (c) 2020 Elsevier B.V. All rights reserved.
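The attack mechanism the abstract describes — an adversary poisons a fraction of the training data with a trigger pattern and a flipped label, so the trained model behaves normally on clean inputs but misclassifies triggered ones — can be illustrated with a minimal data-poisoning sketch. This is an assumption-laden toy (the function names, the corner-patch trigger, and the toy arrays are all hypothetical illustrations, not the paper's method):

```python
import numpy as np

def add_trigger(image, patch_value=1.0, patch_size=3):
    """Stamp a small square 'trigger' patch in the bottom-right corner
    (a BadNets-style trigger; purely illustrative)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = patch_value
    return poisoned

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Return copies of the dataset in which a fraction of samples carry the
    trigger and have their labels flipped to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Toy "dataset": 20 random 8x8 grayscale images with labels 0..4.
imgs = np.random.default_rng(1).random((20, 8, 8))
labs = np.arange(20) % 5
p_imgs, p_labs, idx = poison_dataset(imgs, labs, target_label=0,
                                     poison_fraction=0.2)
```

A model trained on `p_imgs`/`p_labs` would learn to associate the corner patch with class 0 while its accuracy on the untouched samples stays intact, which is exactly why the abstract calls these attacks stealthy.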
