Article

Use of the data depth function to differentiate between case of interpolation and extrapolation in hydrological model prediction

Journal

JOURNAL OF HYDROLOGY
Volume 477, Pages 213-228

Publisher

ELSEVIER
DOI: 10.1016/j.jhydrol.2012.11.034

Keywords

DIE algorithm; Predictive uncertainty; Critical events; ICE algorithm; Calibration and validation data

Funding

  1. New Zealand Ministry of Science and Innovation [C01X0812]
  2. New Zealand Ministry of Business, Innovation & Employment (MBIE) [C01X0812]

Abstract

Hydrological models are subject to significant sources of uncertainty, including input data, model structure and parameter uncertainty. A key requirement for an operational flow forecasting model is therefore to give accurate estimates of model uncertainty. This estimate is often presented in terms of confidence bounds. The quality and quantity of the observed rainfall and flow data available for calibration have a great influence on the identification of hydrological model parameters, and hence on the model error distribution and the width of the confidence bounds. The information contained in the observed time series is not uniformly distributed, and may not represent all types of behaviour or activation of flow pathways that could occur in the catchment. A model calibrated with data from a given time period could therefore perform well or poorly when evaluated over a new time period, depending on the information content and variability of the calibration data relative to the validation period. Our hypothesis is that we can improve the estimate of hydrological predictive uncertainty based on our knowledge of the range of data available for calibration. If the characteristics of the validation data are similar in information content and variability to those of the calibration period, we term this an interpolation case, and expect the model errors during calibration to be similar to those in validation. Otherwise, it is an extrapolation case, where we may expect model errors to be greater. In this study, we developed an algorithm to differentiate cases of 'interpolation' versus 'extrapolation' in the prediction time period. The algorithm is based on the concept of 'data depth', i.e. the location of new data in relation to the convex hull of the calibration data set. Using a case study, we calculated uncertainty bounds for the predictive time period using methods with and without differentiation of interpolation and extrapolation cases.
The performance of the resulting confidence bounds in accurately representing model error was evaluated using both visual inspection for specific events, and rank histograms and range statistics for the simulated model error quantiles. We show that the proposed algorithm enables us to give differentiated predictive uncertainty bounds that represent model error more realistically. (c) 2012 Elsevier B.V. All rights reserved.
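The abstract's core idea is that a new data point counts as 'interpolation' if it falls inside the convex hull of the calibration data, and 'extrapolation' otherwise. The sketch below illustrates that membership test in two dimensions only; it is not the paper's DIE/ICE algorithm (which uses a data depth function over a higher-dimensional input space), and the names `convex_hull` and `is_interpolation` are illustrative, not from the paper.

```python
def _cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means B lies left of OA (a CCW turn)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])


def convex_hull(points):
    """Convex hull of 2-D points via Andrew's monotone chain; vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:  # build lower hull left to right
        while len(lower) >= 2 and _cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build upper hull right to left
        while len(upper) >= 2 and _cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]  # drop duplicated endpoints


def is_interpolation(point, calibration_points):
    """True if `point` lies inside or on the convex hull of the calibration data
    (an interpolation case); False means the point requires extrapolation."""
    hull = convex_hull(calibration_points)
    n = len(hull)
    for i in range(n):
        # Point must lie on the left of (or on) every CCW hull edge.
        if _cross(hull[i], hull[(i + 1) % n], point) < 0:
            return False
    return True
```

For example, with calibration points at the corners of a square `[(0, 0), (4, 0), (4, 4), (0, 4)]`, the point `(2, 2)` is classified as interpolation and `(5, 2)` as extrapolation. In practice the calibration "points" would be feature vectors characterising hydrological conditions (e.g. rainfall and antecedent flow), and the paper's approach grades depth within the hull rather than returning a binary answer.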


