Article

Characterizing and Evaluating Adversarial Examples for Offline Handwritten Signature Verification

Journal

IEEE Transactions on Information Forensics and Security
Publisher

IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
DOI: 10.1109/TIFS.2019.2894031

Keywords

Adversarial machine learning; signature verification; biometrics

Funding

  1. Fonds de recherche du Quebec-Nature et technologies (FRQNT)
  2. CNPq [206318/2014-6, RGPIN-2015-04490]
  3. NSERC of Canada

Abstract

The phenomenon of adversarial examples is attracting increasing interest from the machine learning community, due to its significant impact on the security of machine learning systems. Adversarial examples are samples that are perceptually similar to samples from the data distribution, yet fool a machine learning classifier. For computer vision applications, these are images with carefully crafted but almost imperceptible changes that are misclassified. In this paper, we characterize this phenomenon under an existing taxonomy of threats to biometric systems, in particular identifying new attacks for offline handwritten signature verification systems. We conducted an extensive set of experiments on four widely used datasets: MCYT-75, CEDAR, GPDS-160, and the Brazilian PUC-PR, considering both a CNN-based system and a system using a handcrafted feature extractor. We found that attacks aiming to get a genuine signature rejected are easy to generate, even in a limited-knowledge scenario where the attacker has access to neither the trained classifier nor the signatures used for training. Attacks that get a forgery accepted are harder to produce, and often require a higher level of noise that, in most cases, is no longer imperceptible, in contrast to previous findings in object recognition. We also evaluated the impact of two countermeasures on the success rate of the attacks and on the amount of noise required to generate successful attacks.
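The core idea of the abstract, perturbing an input in the direction of the loss gradient so a classifier changes its decision, can be illustrated with a minimal FGSM-style sketch. This is not the paper's actual method, models, or data: the linear "verifier", its weights, and the feature values below are all hypothetical, chosen only to show a decision flipping under a small perturbation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step on a logistic model: x_adv = x + eps * sign(dL/dx),
    where L is the cross-entropy loss for true label y."""
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w          # gradient of cross-entropy w.r.t. the input
    return x + eps * np.sign(grad_x)

# Hypothetical toy "verifier": accept as genuine when w.x + b > 0
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.3, 0.1])          # a genuine sample, currently accepted

# Attack that gets the genuine sample rejected (a "Type I" style attack)
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.2)
print((w @ x + b) > 0, (w @ x_adv + b) > 0)   # True False
```

The perturbation budget `eps` bounds the per-feature change, which is how "almost imperceptible" is operationalized in pixel space; the paper's finding is that forgery-acceptance attacks tend to need a larger budget than this kind of rejection attack.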

