Journal
INTERNATIONAL JOURNAL OF REMOTE SENSING
Volume 32, Issue 19, Pages 5321-5330
Publisher
TAYLOR & FRANCIS LTD
DOI: 10.1080/01431161.2010.498841
Funding
- National Aeronautics and Space Administration (NASA) [NAG13-99021]
- South Dakota School of Mines and Technology, National Science Foundation [EPS-0091948, EPS-9720642]
- South Dakota Space Grant Consortium
Reference data and accuracy assessments via error matrices build the foundation for measuring the success of classifications. An error matrix is often based on the traditional holdout method, which utilizes only one training/test dataset. If the training/test dataset does not fully represent the variability in a population, accuracy may be over- or underestimated. Furthermore, reference data may be flawed by spatial errors or autocorrelation, which may lead to overoptimistic results. For a forest study we first corrected spatially erroneous ground data and then used aerial photography to sample additional reference data around the field-sampled plots (Mannel et al. 2006). These reference data were used to classify forest cover and subsequently determine classification success. Cross-validation randomly separates datasets into several training/test sets and is well documented to provide a more precise accuracy measure than the traditional holdout method. However, random cross-validation of autocorrelated data may overestimate accuracy, which in our case was between 5% and 8% for a 90% confidence interval. In addition, we observed accuracies differing by up to 35% for different land cover classes depending on which training/test datasets were used. The observed discrepancies illustrate the need for paying attention to autocorrelation and for utilizing more than one permanent training/test dataset, for example, through a k-fold holdout method.
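The abstract's central contrast — a single holdout accuracy versus the fold-to-fold spread revealed by k-fold cross-validation — can be sketched as follows. This is a hypothetical illustration, not the authors' actual workflow: the synthetic two-class data, the nearest-centroid classifier, and the choice of k = 5 are all assumptions made for the example.

```python
# Illustrative sketch: k-fold cross-validation exposes the variability that a
# single holdout accuracy hides. Data and classifier are toy assumptions.
# Note: with spatially autocorrelated reference data (the paper's warning),
# a purely random split like this would still be optimistic; spatially
# blocked folds would be needed in practice.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "land cover" samples: two classes with overlapping 2-D features.
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
               rng.normal(1.5, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def nearest_centroid_accuracy(X_tr, y_tr, X_te, y_te):
    """Train a nearest-centroid classifier and return test-set accuracy."""
    centroids = np.array([X_tr[y_tr == c].mean(axis=0) for c in (0, 1)])
    dists = np.linalg.norm(X_te[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y_te).mean())

def k_fold_accuracies(X, y, k=5, seed=1):
    """Shuffle once, split into k folds, and score each held-out fold."""
    idx = np.random.default_rng(seed).permutation(len(y))
    folds = np.array_split(idx, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        accs.append(nearest_centroid_accuracy(X[train], y[train],
                                              X[test], y[test]))
    return accs

accs = k_fold_accuracies(X, y)
print("per-fold accuracies:", [round(a, 3) for a in accs])
print("mean accuracy:", round(float(np.mean(accs)), 3))
print("fold-to-fold spread:", round(max(accs) - min(accs), 3))
```

Reporting the per-fold spread alongside the mean, rather than a single holdout number, is exactly the kind of reporting the abstract argues for: a lucky or unlucky single split can land anywhere in that spread.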