4.7 Article

Small perturbations are enough: Adversarial attacks on time series prediction

Journal

INFORMATION SCIENCES
Volume 587, Pages 794-812

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2021.11.007

Keywords

Time-series data; Time-series prediction; Adversarial attacks; Adversarial time series

Funding

  1. Natural Science Foundation of Chongqing [cstc2020jcyj-msxmX0804]
  2. National Natural Science Foundation of China [61802039, 62106030, 61772091, 61802035]
  3. Postdoctoral Science Foundation of Chongqing [cstc2021jcyj-bsh0176]
  4. National Key R&D Program of China [2018YFB0904900, 2018YFB0904905]
  5. Sichuan Science and Technology Program [2021JDJQ0021, 2020YJ0481]

This study examines the problem of adversarial attacks in time-series prediction and proposes an attack strategy based on importance measurement. The proposed method is validated through comprehensive experiments on real-world datasets and demonstrates its effectiveness, transferability, and low perturbation requirements across different prediction models.
Time-series data are widespread in real-world industrial scenarios. To recover and infer missing information in real-world applications, the problem of time-series prediction has been widely studied as a classical research topic in data mining. Deep learning architectures have been viewed as next-generation time-series prediction models. However, recent studies have shown that deep learning models are vulnerable to adversarial attacks. In this study, we prospectively examine the problem of adversarial attacks on time-series prediction and propose an attack strategy that generates an adversarial time series by adding malicious perturbations to the original time series, degrading the performance of time-series prediction models. Specifically, a perturbation-based adversarial example generation algorithm is proposed that uses the gradient information of the prediction model. In practice, unlike the imperceptibility to humans required in the field of image processing, time-series data are more sensitive to abnormal perturbations, and there are more stringent requirements on the amount of perturbation. To address this challenge, we craft the adversarial time series based on an importance measurement so that the original data are only slightly perturbed. Comprehensive experiments on real-world time-series datasets verify that the proposed adversarial attack methods not only effectively fool the target time-series prediction model LSTNet but also attack state-of-the-art CNN-, RNN-, and MHANET-based models. The results also show that the proposed methods achieve good transferability; that is, adversarial examples generated for a specific prediction model can significantly degrade the performance of the other models. Moreover, a comparison with existing adversarial attack approaches shows that much smaller perturbations are sufficient for the proposed importance-measurement-based adversarial attack method. The methods described in this paper are significant for understanding the impact of adversarial attacks on time-series prediction and for promoting the robustness of such prediction technologies.

(c) 2021 Elsevier Inc. All rights reserved.
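The abstract does not spell out the attack algorithm; as a rough illustration of the gradient-based perturbation idea it describes, the following is a minimal FGSM-style sketch in PyTorch. The ToyPredictor model, the epsilon budget, and the random data are placeholders, and the paper's importance-measurement step (which restricts where perturbations are placed) is not reproduced here.

```python
# Minimal sketch of a gradient-based adversarial perturbation on a time-series
# predictor. NOT the paper's exact algorithm; it only illustrates perturbing the
# input along the sign of the loss gradient. Model, epsilon, and data are toy
# placeholders.
import torch
import torch.nn as nn

class ToyPredictor(nn.Module):
    """Stand-in single-step forecaster: LSTM encoder + linear head."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_features)

    def forward(self, x):              # x: (batch, time, features)
        out, _ = self.rnn(x)
        return self.head(out[:, -1])   # predict the next time step

def fgsm_perturb(model, x, y, epsilon=0.01):
    """Return x + epsilon * sign(grad_x loss): an FGSM-style adversarial input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.mse_loss(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = ToyPredictor(n_features=4)
    x = torch.randn(8, 24, 4)          # 8 series, 24 time steps, 4 variables
    y = torch.randn(8, 4)              # ground-truth next values
    x_adv = fgsm_perturb(model, x, y, epsilon=0.01)
    with torch.no_grad():
        clean_err = nn.functional.mse_loss(model(x), y).item()
        adv_err = nn.functional.mse_loss(model(x_adv), y).item()
    print(f"MSE clean: {clean_err:.4f}  MSE adversarial: {adv_err:.4f}")
```

In the paper's setting, the gradient would instead be taken with respect to the target model's forecasting loss on real data, and the perturbation budget would be constrained by the importance measurement rather than applied uniformly across all time steps.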
