Article

Uncertainty quantification in fault detection using convolutional neural networks

Journal

GEOPHYSICS
Volume 86, Issue 3, Pages M41-M48

Publisher

Society of Exploration Geophysicists
DOI: 10.1190/GEO2020-0424.1

Funding

  1. Innovation Fund Denmark [615400011B]

The article discusses the importance of fault segmentation based on seismic images in reservoir characterization and how recent advances in deep-learning methods have made automatic interpretation of seismic faults possible. The study uses the dropout approach to quantify fault model uncertainty and decomposes the variance of the learned model into aleatoric and epistemic components. Results show that as the number of Monte Carlo realizations increases, the model standard deviation decreases, so fault predictions in low-uncertainty regions can be used with quantified confidence.
Segmentation of faults based on seismic images is an important step in reservoir characterization. With the recent developments of deep-learning methods and the availability of massive computing power, automatic interpretation of seismic faults has become possible. The likelihood of occurrence for a fault can be quantified using a sigmoid function. Our goal is to quantify the fault model uncertainty that is generally not captured by deep-learning tools. We have used the dropout approach, a regularization technique to prevent overfitting and coadaptation in hidden units, to approximate Bayesian inference and estimate the principled uncertainty over functions. Particularly, the variance of the learned model has been decomposed into aleatoric and epistemic parts. Our method is applied to a real data set from the Netherlands F3 block with two different dropout ratios in convolutional neural networks. The aleatoric uncertainty is irreducible because it relates to the stochastic dependency within the input observations. As the number of Monte Carlo realizations increases, the epistemic uncertainty asymptotically converges and the model standard deviation decreases because the variability of the model parameters is better simulated or explained with a larger sample size. This analysis can quantify the confidence to use fault predictions with less uncertainty. In addition, the analysis suggests where more training data are needed to reduce the uncertainty in low-confidence regions.
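
To make the general workflow concrete, the following is a minimal sketch of Monte Carlo dropout uncertainty quantification for pixel-wise fault probability. It assumes a small illustrative PyTorch network (FaultCNN), a placeholder seismic patch, and the common decomposition in which the aleatoric term is the mean Bernoulli variance p(1 - p) across realizations and the epistemic term is the variance of the sigmoid outputs across realizations. The architecture, dropout ratio, and input here are stand-ins for illustration, not the network or F3 data used in the paper.

```python
import torch
import torch.nn as nn

# Small 2-D CNN with dropout for pixel-wise fault probability.
# Illustrative stand-in; not the architecture used in the paper.
class FaultCNN(nn.Module):
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p_drop),
            nn.Conv2d(16, 1, 1),  # one logit per pixel (fault / no fault)
        )

    def forward(self, x):
        return torch.sigmoid(self.net(x))  # fault likelihood via sigmoid


def mc_dropout_uncertainty(model, x, n_samples=50):
    """Run n_samples stochastic forward passes with dropout kept active and
    decompose the predictive variance into aleatoric and epistemic parts."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        probs = torch.stack([model(x) for _ in range(n_samples)])  # (T, B, 1, H, W)

    mean_prob = probs.mean(dim=0)
    # Aleatoric: expected Bernoulli variance p(1 - p); irreducible data noise.
    aleatoric = (probs * (1.0 - probs)).mean(dim=0)
    # Epistemic: variance of predicted probabilities across MC realizations;
    # reflects model-parameter uncertainty and shrinks with more training data.
    epistemic = probs.var(dim=0, unbiased=False)
    return mean_prob, aleatoric, epistemic


if __name__ == "__main__":
    seismic_patch = torch.randn(1, 1, 128, 128)  # placeholder amplitude patch
    model = FaultCNN(p_drop=0.5)                 # dropout ratio is a free choice
    mean_prob, aleatoric, epistemic = mc_dropout_uncertainty(model, seismic_patch)
    print(mean_prob.shape, aleatoric.mean().item(), epistemic.mean().item())
```

Increasing n_samples should make the epistemic estimate stabilize as the Monte Carlo average converges, mirroring the behavior described in the abstract.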
