Journal
NATURE COMMUNICATIONS, Volume 10
Publisher
NATURE PORTFOLIO
DOI: 10.1038/s41467-019-08931-6
Funding
- JHU Office of Undergraduate Research
- JHU Science of Learning Institute
Abstract
Does the human mind resemble the machine-learning systems that mirror its performance? Convolutional neural networks (CNNs) have achieved human-level benchmarks in classifying novel images. These advances support technologies such as autonomous vehicles and machine diagnosis; beyond this, they serve as candidate models of human vision itself. However, unlike humans, CNNs are fooled by adversarial examples: nonsense patterns that machines recognize as familiar objects, or seemingly irrelevant image perturbations that nevertheless alter the machine's classification. Such bizarre behaviors challenge the promise of these new advances; but do human and machine judgments fundamentally diverge? Here, we show that human and machine classification of adversarial images are robustly related: in 8 experiments on 5 prominent and diverse adversarial image sets, human subjects correctly anticipated the machine's preferred label over relevant foils, even for images described as totally unrecognizable to human eyes. Human intuition may be a surprisingly reliable guide to machine (mis)classification, with consequences for minds and machines alike.
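The second kind of adversarial example mentioned above, a small perturbation that flips a classifier's label, can be sketched in a few lines. The following is a minimal, illustrative fast-gradient-sign-style attack on a toy linear classifier; the weights, "image", and epsilon here are assumptions for demonstration, not the CNNs or stimuli studied in the paper.

```python
import numpy as np

def classify(W, x):
    """Predicted class: argmax of the class scores W @ x."""
    return int(np.argmax(W @ x))

def fgsm_perturb(W, x, true_label, eps=0.1):
    """FGSM-style step: move every pixel by +/- eps in the direction that
    raises the runner-up class's score relative to the true class.
    For a linear model, that gradient is simply W[runner_up] - W[true]."""
    scores = W @ x
    runner_up = int(np.argsort(scores)[-2])  # second-highest-scoring class
    grad = W[runner_up] - W[true_label]
    # Perturb and keep pixel values in the valid [0, 1] range.
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))      # toy model: 3 classes, 16-pixel "image"
x = rng.uniform(size=16)
label = classify(W, x)
x_adv = fgsm_perturb(W, x, label, eps=0.25)
print(label, classify(W, x_adv))  # the perturbed image may receive a new label
```

Each pixel moves by at most eps, so the perturbed image stays visually close to the original while the true class's score margin over the runner-up shrinks, which is the phenomenon the abstract describes.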