Article

A general framework for the statistical analysis of the sources of variance for classification error estimators

Journal

Pattern Recognition
Volume 46, Issue 3, Pages 855-864

Publisher

Elsevier Ltd
DOI: 10.1016/j.patcog.2012.09.007

Keywords

Supervised classification; Error estimation; Prediction error; Sensitivity analysis; Sources of variance; Model selection

Funding

  1. Saiotek and Research Groups programs (Basque Government) [IT-242-07]
  2. Spanish Ministry of Science and Innovation [TIN2008-06815-C02-01, CSD2007-00018]
  3. COMBIOMED network in computational biomedicine (Carlos III Health Institute)

Abstract

Estimating the prediction error of classifiers induced by supervised learning algorithms is important not only to predict their future error, but also to choose a classifier from a given set (model selection). If the goal is to estimate the prediction error of a particular classifier, the desired estimator should have low bias and low variance. However, if the goal is model selection, the chosen estimator should have low variance so that comparisons are fair, assuming that the bias term is independent of the classifier considered. This paper follows the analysis proposed in [1] of the statistical properties of k-fold cross-validation estimators and extends it to the most popular error estimators: resubstitution, holdout, repeated holdout, simple bootstrap and 0.632 bootstrap, with and without stratification. We present a general framework for analyzing the decomposition of the variance of different error estimators, considering the nature of the variance (irreducible/reducible variance) and the different sources of sensitivity (internal/external sensitivity). An extensive empirical study was performed for these estimators with naive Bayes and C4.5 classifiers over training sets drawn from assorted probability distributions. The empirical analysis consists of decomposing the variances following the proposed framework and checking the independence assumption between the bias and the classifier considered. Based on the results obtained, we propose the most appropriate error estimators for model selection under different experimental conditions. (C) 2012 Elsevier Ltd. All rights reserved.
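
The internal/external decomposition the abstract refers to can be illustrated with a short simulation. The following is a minimal sketch, not the paper's code: it uses scikit-learn's GaussianNB as a stand-in for naive Bayes and 10-fold cross-validation as the estimator, and splits the estimator's variance into an internal component (randomness of the fold split for a fixed training set) and an external component (variation across training sets) via the law of total variance. The pool size, training-set size and repetition counts are arbitrary choices for illustration.

```python
# Minimal sketch (not the authors' code) of decomposing an error
# estimator's variance into internal and external components.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

def kfold_error(X, y, seed, k=10):
    """k-fold cross-validation error estimate; the random fold split
    is the internal source of variance."""
    cv = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    return 1.0 - cross_val_score(GaussianNB(), X, y, cv=cv).mean()

# A large pool stands in for the underlying distribution; drawing a
# fresh training set from it is the external source of variance.
X_pool, y_pool = make_classification(n_samples=20000, n_features=10,
                                     random_state=0)
rng = np.random.default_rng(0)

n_train_sets, n_internal = 20, 20
estimates = np.empty((n_train_sets, n_internal))
for i in range(n_train_sets):
    idx = rng.choice(len(X_pool), size=200, replace=False)
    X, y = X_pool[idx], y_pool[idx]
    for j in range(n_internal):
        estimates[i, j] = kfold_error(X, y, seed=j)

# Law of total variance:
#   total = E[Var | training set] + Var(E[. | training set])
internal = estimates.var(axis=1).mean()  # sensitivity to fold splits
external = estimates.mean(axis=1).var()  # sensitivity to training set
print(f"total    : {estimates.var():.6f}")
print(f"internal : {internal:.6f}")
print(f"external : {external:.6f}")
```

The same loop structure applies to the other estimators the paper studies (holdout, repeated holdout, bootstrap variants): only the body of kfold_error changes, while the two nested repetition levels continue to separate internal from external sensitivity.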
