Article
Computer Science, Hardware & Architecture
Atiye Sadat Hashemi, Saeed Mozaffari, Shahpour Alirezaee
Summary: This paper proposes a new cost function for training convolutional neural networks to improve their adversarial robustness. By utilizing information from the Softmax layer and features extracted from convolutional layers, the model achieves better performance on adversarial inputs.
Article
Computer Science, Theory & Methods
Yashar Deldjoo, Tommaso Di Noia, Felice Antonio Merra
Summary: Latent-factor models based on collaborative filtering are widely used in recommender systems but are vulnerable to adversarial attacks. Recent research shows the susceptibility of these models to adversarial examples, highlighting the need for stronger security measures. Adversarial machine learning techniques have been applied successfully both to improve the security of recommender systems and, through generative adversarial networks, to generative applications.
ACM COMPUTING SURVEYS
(2021)
Article
Computer Science, Artificial Intelligence
Ahmed Aldahdooh, Wassim Hamidouche, Olivier Deforges
Summary: Deep neural networks (DNNs) used in security-sensitive applications are vulnerable to adversarial examples, and existing defense and detection techniques have limited success against the full range of attacks. This study proposes SFAD, a novel unsupervised ensemble detection mechanism for adversarial examples that exploits the model's uncertainty and processes the outputs of the model's layers to improve detection performance.
APPLIED INTELLIGENCE
(2023)
Article
Computer Science, Artificial Intelligence
Shao-Yuan Lo, Vishal M. Patel
Summary: This paper proposes a video recognition defense strategy against multiple types of adversarial videos using multiple independent batch normalization (BN) layers and a learning-based BN selection module. The proposed method, MultiBN, exhibits stronger multi-perturbation robustness against various types of adversarial videos, including Lp-bounded attacks and physically realizable attacks, compared to existing adversarial training approaches. Extensive analysis has been conducted to study the properties of the multiple BN structure.
IEEE TRANSACTIONS ON IMAGE PROCESSING
(2022)
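The multiple-BN design summarized above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the softmax-weighted combination of branches and all parameter names are illustrative assumptions standing in for the paper's learning-based BN selection module.

```python
import numpy as np

def batch_norm(x, mean, var, gamma, beta, eps=1e-5):
    # Normalize x with the statistics and affine parameters of one BN branch.
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

def multi_bn(x, branches, select_scores):
    # branches: list of dicts holding per-branch BN parameters.
    # select_scores: unnormalized scores from a (hypothetical) selection module;
    # a softmax turns them into mixing weights over the BN branches.
    weights = np.exp(select_scores - select_scores.max())
    weights /= weights.sum()
    outs = [batch_norm(x, b["mean"], b["var"], b["gamma"], b["beta"])
            for b in branches]
    return sum(w * o for w, o in zip(weights, outs))
```

With equal scores the branches are averaged; a trained selection module would instead weight the branch matching the (clean or adversarial) input distribution.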
Article
Computer Science, Artificial Intelligence
Ningyi Liao, Shufan Wang, Liyao Xiang, Nanyang Ye, Shuo Shao, Pengzhi Chu
Summary: This study demonstrates a correlation between the sparsity of network weights and model robustness, showing that sparsity improves robustness. The proposed inverse weights inheritance method enhances the robustness of large networks by inheriting weights from smaller networks and imposing a sparse weight distribution.
Article
Computer Science, Information Systems
Andrew McCarthy, Essam Ghadafi, Panagiotis Andriotis, Phil Legg
Summary: Machine learning plays a key role in protecting computer networks and organizations against cyber security attacks by detecting malicious network activity. Adversarial machine learning has gained attention because it investigates how machine learning models can be compromised into producing misclassified outputs. This challenge arises in every application of machine learning where an attacker seeks to induce unintended behavior, including cyber security and network traffic analysis.
JOURNAL OF INFORMATION SECURITY AND APPLICATIONS
(2023)
Article
Computer Science, Artificial Intelligence
Korn Sooksatra, Pablo Rivas
Summary: Motivated by the remarkable achievements of deep learning models, this paper introduces the Sensitivity-inspired Constrained Evaluation Method (SICEM), which quantifies how vulnerable specific regions of the input space are to adversarial attacks.
NEURAL COMPUTING & APPLICATIONS
(2022)
Article
Computer Science, Information Systems
Shutong Xu, Zhaohong Li, Zhenzhen Zhang, Junhui Liu
Summary: This paper proposes an end-to-end video steganography method based on a GAN and a multi-scale deep learning network. By introducing a noise layer, the model is able to resist video compression and achieves good experimental results.
Article
Computer Science, Artificial Intelligence
Hong Joo Lee, Youngjoon Yu, Yong Man Ro
Summary: Recent research has shown that deep neural networks (DNNs) are highly susceptible to adversarial attacks. Adversarial training (AT) has been recognized as the most effective defense strategy against such attacks, although it may compromise natural accuracy. To address this issue, this article proposes a new approach that uses an external signal, called a booster signal, to enhance adversarial robustness. The booster signal, optimized alongside the model parameters, is injected outside the image without overlapping the original content, improving both adversarial and natural accuracy. Experimental results demonstrate that the booster signal effectively enhances the performance of existing AT methods, and its optimization method is flexible and broadly applicable.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2023)
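The idea of injecting a signal outside the image can be sketched in a few lines, assuming a simple border-padding layout. This is an illustrative assumption, not the authors' code: the function name, the canvas layout, and the fixed padding are all hypothetical.

```python
import numpy as np

def inject_booster(image, booster, pad):
    # Place the image at the center of a larger canvas whose border
    # region carries the booster signal; the original image content
    # is never overwritten, matching the "no overlap" constraint.
    h, w = image.shape
    canvas = booster.copy()              # booster fills the padded canvas
    canvas[pad:pad + h, pad:pad + w] = image
    return canvas
```

In the paper's setting the booster values would be optimized jointly with the model parameters (e.g. by gradient descent on the adversarial training loss); here they are simply supplied as an array.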
Article
Computer Science, Artificial Intelligence
Maxwell T. West, Shu-Lok Tsang, Jia S. Low, Charles D. Hill, Christopher Leckie, Lloyd C. L. Hollenberg, Sarah M. Erfani, Muhammad Usman
Summary: Machine learning algorithms are powerful but vulnerable to adversarial attacks. Integrating quantum computing with machine learning can improve accuracy and provide better defense against such attacks.
NATURE MACHINE INTELLIGENCE
(2023)
Article
Computer Science, Information Systems
Quanyu Dai, Xiao Shen, Zimu Zheng, Liang Zhang, Qiang Li, Dan Wang
Summary: The objective of network embedding is to learn compact node representations for downstream learning tasks such as link prediction and node classification. Existing methods often focus on preserving network structure and properties while overlooking noise in the network, which can leave the learned representations lacking robustness. This paper introduces adversarial training (AdvT) as a local regularization method that enhances model robustness and generalization by defining adversarial perturbations in the embedding space under adaptive l2-norm constraints.
INFORMATION SCIENCES
(2021)
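An embedding-space perturbation under an l2-norm budget can be sketched as follows. This is an FGSM-style NumPy illustration under stated assumptions: the paper's adaptive per-node constraint schedule is omitted, and the gradient is taken as given rather than computed from a specific embedding loss.

```python
import numpy as np

def adversarial_perturbation(grad, epsilon):
    # Worst-case first-order perturbation of an embedding under an
    # l2-norm budget: step in the loss-gradient direction, scaled to
    # have length exactly epsilon.
    norm = np.linalg.norm(grad)
    if norm == 0:
        return np.zeros_like(grad)
    return epsilon * grad / norm

def perturbed_embedding(z, grad, epsilon):
    # Regularize by training against z + delta instead of z alone.
    return z + adversarial_perturbation(grad, epsilon)
```

Training then minimizes the loss on both the clean and the perturbed embeddings, which acts as the local regularizer described in the summary.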
Article
Computer Science, Artificial Intelligence
Xiaochen Yang, Yiwen Guo, Mingzhi Dong, Jing-Hao Xue
Summary: This article proposes a metric learning method that imposes an adversarial margin in the input space to improve generalization and robustness. By minimizing a perturbation loss, the adversarial margin is enlarged, increasing robustness to instance perturbations. Experimental results demonstrate the superiority of the proposed method in both discrimination accuracy and robustness.
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
(2022)
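A triplet-style stand-in for the perturbation-loss idea — penalizing configurations where a different-class instance sits too close under the learned metric — might look like the sketch below. This is a simplified illustration, not the paper's exact formulation; the metric parameterization `L` and the hinge form are assumptions.

```python
import numpy as np

def mahalanobis_dist(x, y, L):
    # Learned metric d(x, y) = ||L (x - y)||, parameterized by matrix L.
    return np.linalg.norm(L @ (x - y))

def perturbation_loss(x, x_same, x_diff, L, margin):
    # Hinge-style loss: zero once the different-class instance is at
    # least `margin` farther than the same-class instance; minimizing
    # it pushes the decision boundary away, enlarging the margin.
    return max(0.0, margin
               + mahalanobis_dist(x, x_same, L)
               - mahalanobis_dist(x, x_diff, L))
```

Minimizing this loss over L (e.g. by gradient descent) enlarges the separation that an input-space perturbation would have to cross to flip the prediction.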
Review
Computer Science, Theory & Methods
Sicong Han, Chenhao Lin, Chao Shen, Qian Wang, Xiaohong Guan
Summary: Rather than reviewing technical progress in adversarial attacks and defenses, this survey presents a framework that organizes recent work on theoretically explaining adversarial examples into three perspectives. Drawing on the reviewed literature, it identifies current problems and challenges and highlights potential future research directions for investigating adversarial examples.
ACM COMPUTING SURVEYS
(2023)
Article
Computer Science, Artificial Intelligence
Aleksei Kuvshinov, Stephan Günnemann
Summary: This research presents a method to obtain a lower bound on the distance to the decision boundary (DtDB) for a deep neural network classifier by solving a convex quadratic programming task, which serves as a robustness certificate for the classifier around a given sample. The approach shows better or competitive results compared to a wide range of existing techniques.
Article
Computer Science, Artificial Intelligence
Hesamodin Mohammadian, Ali A. Ghorbani, Arash Habibi Lashkari
Summary: Intrusion detection systems play a crucial role in defending networks against security threats. Deep neural networks have shown excellent performance in intrusion detection, but they are vulnerable to adversarial attacks. This paper proposes a new approach that uses the Jacobian saliency map to generate adversarial examples against deep learning-based classifiers of malicious network activity. Experiments demonstrate that the proposed method achieves better performance while perturbing fewer features than other attacks.
APPLIED SOFT COMPUTING
(2023)
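The Jacobian saliency computation behind JSMA-style attacks can be sketched for a toy linear softmax classifier. This NumPy illustration shows only the saliency scoring; the paper's feature selection for network traffic is not reproduced, and `W`, `x`, and the function name are assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def jacobian_saliency(x, W, target):
    # JSMA-style saliency for a linear softmax classifier p = softmax(Wx):
    # a feature scores highly when increasing it raises the target-class
    # probability while lowering the other classes' probabilities.
    p = softmax(W @ x)
    # Jacobian of the softmax outputs w.r.t. the inputs: J[c, i] = dp_c/dx_i
    J = (np.diag(p) - np.outer(p, p)) @ W
    dt = J[target]                # gradient of the target class
    do = J.sum(axis=0) - dt       # summed gradients of the other classes
    return np.where((dt > 0) & (do < 0), dt * np.abs(do), 0.0)
```

An attack would then perturb the highest-scoring features first, which is why saliency-guided attacks can succeed while modifying only a few input features.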
Article
Computer Science, Theory & Methods
Amir Mahdi Sadeghzadeh, Behrad Tajali, Rasool Jalili
Summary: Adversarial Website Adaptation (AWA) uses adversarial deep learning to defend against website fingerprinting attacks, creating a unique transformer for each website to evade an adversary's classifier; both universal and non-universal versions are available. By incorporating secret random elements into the training phase to generate sets of transformers, AWA effectively resists adversaries' classifiers while keeping bandwidth overhead low.
IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY
(2021)