Article

What Do Different Evaluation Metrics Tell Us About Saliency Models?

Publisher

IEEE Computer Society
DOI: 10.1109/TPAMI.2018.2815601

Keywords

Saliency models; evaluation metrics; benchmarks; fixation maps; saliency applications

Funding

  1. postgraduate scholarship (PGS-D) from the Natural Sciences and Engineering Research Council of Canada
  2. Toyota Research Institute / MIT CSAIL Joint Research Center

Abstract

How best to evaluate a saliency model's ability to predict where humans look in images is an open research question. The choice of evaluation metric depends on how saliency is defined and how the ground truth is represented. Metrics differ in how they rank saliency models, a difference that stems from how false positives and false negatives are treated, whether viewing biases are accounted for, whether spatial deviations are factored in, and how the saliency maps are pre-processed. In this paper, we provide an analysis of 8 different evaluation metrics and their properties. With the help of systematic experiments and visualizations of metric computations, we add interpretability to saliency scores and more transparency to the evaluation of saliency models. Building on the differences in metric properties and behaviors, we make recommendations for metric selection under specific assumptions and for specific applications.
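To make the flavor of these metrics concrete, below is a minimal sketch (not taken from the paper itself) of two of the metrics it analyzes: NSS, which scores the map at discrete fixation locations, and CC, which compares the map against a continuous fixation density. The sketch assumes the predicted saliency map and the ground truth are NumPy arrays of the same shape; all function and variable names are illustrative.

    import numpy as np

    def nss(saliency_map, fixation_map):
        # Normalized Scanpath Saliency: standardize the saliency map
        # (zero mean, unit variance), then average its values at the
        # binary fixation locations.
        s = (saliency_map - saliency_map.mean()) / saliency_map.std()
        return float(s[fixation_map.astype(bool)].mean())

    def cc(saliency_map, fixation_density):
        # Pearson's Correlation Coefficient: standardize both maps and
        # take the mean of their elementwise product.
        a = (saliency_map - saliency_map.mean()) / saliency_map.std()
        b = (fixation_density - fixation_density.mean()) / fixation_density.std()
        return float((a * b).mean())

Even in this simplified form, the two metrics embody different assumptions about the ground truth: NSS penalizes a model directly for low values at fixated pixels, while CC weighs the entire predicted and ground-truth distributions against each other, which is one way the ranking differences discussed in the abstract can arise.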
