Article
Computer Science, Artificial Intelligence
Yueyue Hu, Shiliang Sun
Summary: This paper investigates the adversarial robustness of RL agents and proposes RL-VAEGAN, a novel defense framework based on the idea of style transfer. The framework effectively defends against state-of-the-art attack methods in both white-box and black-box scenarios across perturbations of diverse magnitudes.
KNOWLEDGE-BASED SYSTEMS
(2021)
Article
Engineering, Civil
Yi Ding, Guiqin Zhu, Dajiang Chen, Xue Qin, Mingsheng Cao, Zhiguang Qin
Summary: This paper focuses on the classification of encrypted traffic in Intelligent Transportation Systems, adversarial sample attacks on that classification, and methods for resisting those attacks. The encrypted traffic data is converted into images and classified using deep learning algorithms. Various methods are employed to generate adversarial samples for attacking the classification network, and both passive and active defense methods are proposed to resist these attacks. Extensive experiments are conducted to evaluate the effectiveness of these methods.
IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS
(2022)
Article
Computer Science, Hardware & Architecture
Xiaozhang Liu, Lang Li, Xueyang Wang, Li Hu
Summary: This paper studies the nature of adversarial-sample attacks from the perspective of main and minor features, finding that deep learning models mainly learn the main features, and proposes a method for generating adversarial samples in the sample subspace.
COMPUTER STANDARDS & INTERFACES
(2022)
Article
Computer Science, Information Systems
Gabor Szucs, Richard Kiss
Summary: The rapid progress of deep learning methods has led to breakthroughs in image classification, but these models remain highly sensitive to adversarial perturbations. We propose a combined defense method that both enhances model accuracy and detects adversarial examples: by filtering inputs based on a detector's decision, we improve accuracy. Additionally, we developed a novel defense method called 2N labeling, which maintains constant classification performance in the presence of adversarial attacks.
MULTIMEDIA TOOLS AND APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Sheng-lin Yin, Xing-lan Zhang, Li-yu Zuo
Summary: This paper proposes a new defense model called the Adversarial Memory Variational AutoEncoder (AdMVAE), which can transform adversarial images into clean images. Experimental results show that this method outperforms existing defense methods on multiple benchmark datasets.
Article
Computer Science, Artificial Intelligence
Shilin Qiu, Qihe Liu, Shijie Zhou, Wen Huang
Summary: This article systematically summarizes the current progress of adversarial techniques in the field of natural language processing. It covers aspects such as the particularity, categorization, and evaluation metrics of textual adversarial examples, as well as commonly used datasets, adversarial attack applications, defense strategies, and future research directions.
Article
Computer Science, Information Systems
Lin Shi, Teyi Liao, Jianfeng He
Summary: Adversarial attacks deceive deep neural network models by adding imperceptibly small but well-designed attack data to the model input. Various defense methods have been proposed; this article presents a Noise-Fusion Method (NFM) that adds noise to both the model input and the training data to improve the robustness of the model.
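As the summary describes it, NFM injects noise on both the training side and the inference side. A minimal sketch of that two-sided idea, not the authors' implementation (the Gaussian noise model, the sigma value, and all array shapes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(x, sigma, rng):
    """Perturb an array with zero-mean Gaussian noise of scale sigma."""
    return x + rng.normal(0.0, sigma, size=x.shape)

# Hypothetical training set: 100 samples, 8 features.
X_train = rng.normal(size=(100, 8))

# Training-side fusion: augment the data with noisy copies so the
# model is fitted on perturbed inputs as well as clean ones.
X_aug = np.concatenate([X_train, add_gaussian_noise(X_train, 0.1, rng)])

# Inference-side fusion: noise the (possibly adversarial) input before
# it reaches the model, disrupting small crafted perturbations.
x_adv = rng.normal(size=(8,))
x_in = add_gaussian_noise(x_adv, 0.1, rng)

print(X_aug.shape)  # augmented set has twice the original sample count
```

In practice the noise scale would be tuned so that it degrades adversarial perturbations more than clean accuracy; the paper's actual noise distributions and schedules may differ.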
Article
Computer Science, Information Systems
Yalin E. Sagduyu, Yi Shi, Tugba Erpek
Summary: An adversarial deep learning approach is used to launch over-the-air spectrum poisoning attacks, where an adversary learns the behavior of a transmitter and falsifies the spectrum sensing data. The attacks are energy efficient, hard to detect, and substantially reduce throughput. A dynamic defense is designed to manipulate the adversary's training data and sustain the transmitter's throughput.
IEEE TRANSACTIONS ON MOBILE COMPUTING
(2021)
Article
Telecommunications
Shilian Zheng, Linhui Ye, Xuanye Wang, Jinyin Chen, Huaji Zhou, Caiyi Lou, Zhijin Zhao, Xiaoniu Yang
Summary: This paper introduces the primary user adversarial attack (PUAA) to test the robustness of deep learning-based spectrum sensing models, and proposes a defense method named DeepFilter. Experimental results show that PUAA methods significantly reduce detection probability, while DeepFilter effectively defends against PUAA.
CHINA COMMUNICATIONS
(2021)
Article
Computer Science, Information Systems
Kaleel Mahmood, Phuong Ha Nguyen, Lam M. Nguyen, Thanh Nguyen, Marten Van Dijk
Summary: Most existing adversarial machine learning defenses are designed to mitigate static, white-box attacks, but their robustness against adaptive black-box attacks is still unknown. This paper focuses on the black-box threat model and makes two main contributions: first, it proposes an enhanced adaptive black-box attack that is significantly more effective than previous approaches; second, it tests 10 recent defenses and introduces a new black-box defense called barrier zones, which demonstrates significant improvements in security.
Article
Computer Science, Artificial Intelligence
Huali Ren, Teng Huang, Hongyang Yan
Summary: Deep learning technology is vulnerable to adversarial examples, which bring serious security risks to systems. This paper provides a comprehensive overview of adversarial attacks and defenses in the real physical world, analyzing challenges faced by applications and summarizing works on generating adversarial examples and defense strategies in various tasks. Based on this, potential research directions for adversarial examples in the physical world are proposed.
INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS
(2021)
Review
Computer Science, Information Systems
Guillermo Iglesias, Edgar Talavera, Alberto Diaz-Alvarez
Summary: In recent years, deep learning has been revolutionized by the significant impact of Generative Adversarial Networks (GANs), which provide a unique architecture and generate incredible results. Due to the continuous development and wide range of applications, keeping up with the latest research in GANs becomes challenging. This survey aims to provide an overview of GANs, including the latest architectures, optimizations, validation metrics, and application areas, with the goal of guiding future researchers in achieving better results.
COMPUTER SCIENCE REVIEW
(2023)
Article
Computer Science, Artificial Intelligence
Mengting Xu, Tao Zhang, Zhongnian Li, Mingxia Liu, Daoqiang Zhang
Summary: This study evaluates the robustness of deep diagnostic models against adversarial attacks in medical image diagnosis, finding that adversarial examples can lead to false high-confidence outputs. By conducting adversarial attacks on three models, it is shown that they are not reliable when facing adversarial examples. New defense methods are designed, which significantly improve the models' robustness against adversarial attacks.
MEDICAL IMAGE ANALYSIS
(2021)
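The false high-confidence outputs described above are typically produced by gradient-based perturbations. A standard illustration of the mechanism (not the study's attack or models) is the fast gradient sign method on a toy logistic "diagnostic" classifier; the weights, input, and epsilon below are all invented for the sketch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear binary classifier: weights w, bias b (assumed values).
w = np.array([2.0, -1.5, 0.5, 1.0])
b = -0.2

def predict(x):
    """Probability of the positive class."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """Fast Gradient Sign Method on the binary cross-entropy loss.
    For logistic regression, dL/dx = (p - y) * w, so the attack steps
    the input by eps in the sign of that gradient."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, -0.3, 0.8, 0.1])  # a clean input with label y = 1
p_clean = predict(x)
x_adv = fgsm(x, y=1.0, eps=0.5)
p_adv = predict(x_adv)
print(round(p_clean, 3), round(p_adv, 3))
```

With these assumed numbers, the clean input is classified positive with high confidence while the perturbed input falls below the 0.5 decision threshold, flipping the prediction; on deep models the same gradient-sign step can be made visually imperceptible.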
Article
Computer Science, Hardware & Architecture
Qi-Xian Huang, Lin-Kuan Chiang, Min-Yi Chiu, Hung-Min Sun
Summary: With the increased use of deep learning, there is a growing concern about the trustworthiness of the results. The focus of research has shifted to interpreting deep learning models instead of just predicting outcomes, especially in fields like medicine. Adversarial attacks pose a direct threat to these models, as they can manipulate both the results and interpretations. This study introduces a targeted adversarial attack algorithm, called the focus-shifting attack (FS Attack), which manipulates the interpretation of the model, making it challenging to detect.
IEEE TRANSACTIONS ON RELIABILITY
(2023)
Article
Computer Science, Artificial Intelligence
Yang Li, Quan Pan, Erik Cambria
Summary: Recent developments in adversarial attacks have made reinforcement learning more vulnerable. The key challenge lies in choosing the right timing for the attack. Existing approaches struggle with designing evaluation functions and lack appropriate assessment indicators. To address these issues and make attacks more intelligent, a reinforcement learning-based attacking framework is proposed along with a novel evaluation metric. Experimental results demonstrate the effectiveness of the proposed model and the soundness of the evaluation metric. Furthermore, the model's transferability and robustness under adversarial training are validated.
KNOWLEDGE-BASED SYSTEMS
(2022)