In this work, we provide suggestions for designing experiments that require calibration of many models, along with guidance for identifying problematic calibrations. Calibrating many conceptual models with different representations of the physical processes in a system, as is done in cross-validation studies and multi-model analyses, often relies on computationally frugal inversion techniques to keep execution times tractable. However, because these frugal methods are usually local methods and the inverse problem is almost always nonlinear, there is no guarantee that the optimal solution will be found. Furthermore, evaluating each inverse model's performance to identify poor calibrations can be tedious. Results of this study show that if poorly calibrated models are included in the analysis, simulated predictions and measures of prediction uncertainty can be affected in unexpected ways. Guidelines are provided to help identify problematic regressions and correct them.
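The failure mode described above can be illustrated with a minimal sketch. The model, parameter values, and multi-start check below are hypothetical illustrations (not the paper's actual models or methods): a local optimizer calibrating a nonlinear model can stall in a local minimum of the sum-of-squared-residuals objective, and rerunning the same local search from several starting values is one simple way to flag such a problematic regression.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical nonlinear "model": y = sin(p * x), with true parameter p = 2.0.
# The least-squares objective in p is oscillatory, so it has many local minima.
x = np.linspace(0.0, 3.0, 40)
y_obs = np.sin(2.0 * x)

def sse(p):
    """Sum of squared residuals for a candidate parameter value."""
    return float(np.sum((np.sin(p[0] * x) - y_obs) ** 2))

# A single local (derivative-free) regression from a poor starting value
# can converge to a local minimum far from the true parameter.
local = minimize(sse, x0=[8.0], method="Nelder-Mead")

# Multi-start check: repeat the local search from several initial values
# and keep the best fit; a large spread in final objective values is a
# warning sign that the calibration is sensitive to the starting point.
starts = np.linspace(0.5, 10.0, 12)
fits = [minimize(sse, x0=[s], method="Nelder-Mead") for s in starts]
best = min(fits, key=lambda r: r.fun)

print(f"single start: p = {local.x[0]:.3f}, SSE = {local.fun:.4f}")
print(f"multi-start : p = {best.x[0]:.3f}, SSE = {best.fun:.4f}")
```

If the best multi-start fit recovers the generating parameter with near-zero residual while single starts do not, the single-start calibrations would be exactly the kind of problematic regression the abstract warns against including in a multi-model analysis.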