Article

Bayesian inversion of hierarchical geostatistical models using a parallel-tempering sequential Gibbs MCMC

Journal

Advances in Water Resources
Volume 141

Publisher

Elsevier Sci Ltd
DOI: 10.1016/j.advwatres.2020.103614

Keywords

Bayesian inversion; MCMC; Parallel tempering; Multiple-point statistics; Sequential geostatistical resampling; Training image

Funding

  1. Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) [327154368 (SFB 1313), 359880532 (COMPU-FLOW)]

Abstract

The feasibility of probabilistic Bayesian inversion strongly depends on the dimensionality and complexity of the statistical prior model. Most geostatistical inversion approaches assume multi-Gaussian fields, and some assume (non-Gaussian) categorical fields, e.g., via multiple-point geostatistics. We combine these two into one hierarchical joint problem, which accounts for two (and possibly more) categories as well as heterogeneity inside each category. Recent work developed the conditional probability field method, based on the Ensemble Kalman filter (EnKF), for this scenario. However, EnKF-type approaches rely on implicit linearity and (trans-)Gaussian assumptions, which break down in weakly informative regimes. We therefore develop a tailored Gibbs sampler, a Markov chain Monte Carlo (MCMC) method that performs this inversion without such assumptions. Our algorithm extends an existing Gibbs sampler with parallel tempering for categorical fields to account for multi-Gaussian internal heterogeneity. We present the key idea and derive the algorithm from the detailed balance condition required for MCMC methods. We test the algorithm on a synthetic channelized flow scenario at two levels of data availability: a highly informative setting (transient flow data), in which the synthetic truth can be recovered, and a weakly informative setting (steady-state data only), in which it cannot; there we obtain a multi-modal posterior instead. Convergence is assessed with the scale reduction factor of Gelman and Rubin. Overall, the tests show that our algorithm performs well in both settings.
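
As a reading aid only, here is a minimal sketch of two standard ingredients named in the abstract; it follows common textbook formulations and is not taken from the paper itself. In parallel tempering, each chain k typically samples a tempered posterior, every MCMC update must satisfy detailed balance with respect to its target, and state swaps between chains are accepted with a Metropolis-type ratio (assuming only the likelihood is tempered, so the prior cancels in the swap):

    % tempered posterior of chain k (assumed standard formulation, temperatures T_1 = 1 < T_2 < ...)
    \pi_k(m \mid d) \propto p(d \mid m)^{1/T_k} \, p(m)
    % detailed balance for target \pi_k and transition kernel P_k
    \pi_k(m \mid d) \, P_k(m \to m') = \pi_k(m' \mid d) \, P_k(m' \to m)
    % swap of states m_i, m_j between chains i and j
    \alpha = \min\left(1, \frac{p(d \mid m_j)^{1/T_i} \, p(d \mid m_i)^{1/T_j}}
                              {p(d \mid m_i)^{1/T_i} \, p(d \mid m_j)^{1/T_j}}\right)

The convergence diagnostic mentioned at the end of the abstract, the Gelman and Rubin scale reduction factor, can be computed from several independent chains as in the following Python sketch (function name and example data are illustrative, not from the paper):

    import numpy as np

    def gelman_rubin(chains):
        """Potential scale reduction factor (R-hat) for one scalar quantity.

        chains: array of shape (m, n) with m independent chains of n
        post-burn-in samples each. Values close to 1 indicate that the
        chains have approximately converged to the same distribution.
        """
        chains = np.asarray(chains, dtype=float)
        m, n = chains.shape
        chain_means = chains.mean(axis=1)
        W = chains.var(axis=1, ddof=1).mean()     # mean within-chain variance
        B = n * chain_means.var(ddof=1)           # between-chain variance
        var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
        return np.sqrt(var_hat / W)

    # Two or more well-mixed chains should give a value near 1.
    rng = np.random.default_rng(0)
    print(gelman_rubin(rng.normal(size=(4, 5000))))
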
