Analysis of classifiers’ robustness to adversarial perturbations

Authors
Alhussein Fawzi, Omar Fawzi, Pascal Frossard
Keywords
Adversarial examples, Classification robustness, Random noise, Instability, Deep networks
Journal
Machine Learning
Volume 107, Issue 3, Pages 481-508
Publisher
Springer Nature
Online
2017-08-26
DOI
10.1007/s10994-017-5663-3
