Presented at the Workshop on Maximum Entropy and Bayesian Methods, July 31, 1998, Garching, Germany

Assessing uncertainties in simulation predictions

Kenneth M. Hanson
Los Alamos National Laboratory

Abstract

There is a growing need to assess the uncertainties in the predictions made by simulation codes. This assessment process, which is called validation, requires a complete methodology for characterizing the uncertainties in simulation codes. The analysis of any single experiment yields the uncertainties in model parameters that derive from uncertain measurements and experimental conditions. It is assumed in the present work that the simulation code correctly predicts a physical situation given the correct model parameters, together with the exact initial and boundary conditions.

The present challenge is to analyze numerous experiments that contribute to our cumulative knowledge of the parameters of multiple models. Bayes' law allows information to be accumulated by basing the prior for each analysis on the posterior of the previous analyses. Equivalently, one may conceptualize a grand analysis of all available experiments, in which the total likelihood is the product of the likelihoods of the individual experiments. In this approach, each experiment requires a recalculation of its simulation for each parameter set. Such a grand analysis may be simplified by replacing the recalculation of each experiment with an approximation to its likelihood, e.g., a Gaussian. A graphical display of this cumulative analysis scheme permits one to follow and analyze the logic of the analysis. For drawing valid inferences about the models, it is crucial to keep track of the dependencies of the final results on the individual experiments and to include correlations among the uncertainties in the parameters of the various interacting models.
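In symbols (notation introduced here only for illustration, with theta denoting the model parameters and D_1, ..., D_N the data from the individual experiments), the grand analysis and the sequential accumulation are equivalent:

```latex
p(\theta \mid D_1, \ldots, D_N) \;\propto\; p(\theta) \prod_{n=1}^{N} p(D_n \mid \theta)
\quad\Longleftrightarrow\quad
p(\theta \mid D_1, \ldots, D_n) \;\propto\; p(D_n \mid \theta)\, p(\theta \mid D_1, \ldots, D_{n-1})
```

Replacing each factor p(D_n | theta) by a Gaussian approximation is what allows later analyses to proceed without rerunning the simulation of the n-th experiment.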

The uncertainty in a prediction of a new physical situation is obtained by propagating the uncertainties in the parameters through the simulation code into the prediction to obtain the so-called predictive distribution. If this distribution is not amenable to functional approximation, it may be estimated by using Markov Chain Monte Carlo (MCMC) to draw random model parameters from the cumulative uncertainty distribution and then running the simulation for each parameter set.
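A minimal sketch of this propagation step is given below. The names `simulate` and `log_posterior` are invented here: the first stands in for a run of the simulation code for one parameter set, the second for the cumulative parameter uncertainty (approximated as a Gaussian with an assumed mean and covariance). A simple random-walk Metropolis sampler is used purely as an example of an MCMC method; it is not the specific algorithm of the paper.

```python
import numpy as np

mu = np.zeros(3)                      # assumed posterior mean (illustrative)
cov = np.diag([0.1, 0.2, 0.05])       # assumed posterior covariance (illustrative)

def log_posterior(params):
    # Hypothetical stand-in: cumulative parameter uncertainty approximated
    # as a Gaussian centered on mu with covariance cov.
    diff = params - mu
    return -0.5 * diff @ np.linalg.solve(cov, diff)

def simulate(params):
    # Hypothetical stand-in for running the simulation code and returning
    # the predicted quantity of interest for one parameter set.
    return params.sum()               # placeholder for the real prediction

rng = np.random.default_rng(0)
n_samples, step = 5000, 0.1
current = mu.copy()
current_logp = log_posterior(current)
predictions = []

# Random-walk Metropolis: draw parameter sets from the cumulative
# uncertainty distribution, then push each one through the simulation.
for _ in range(n_samples):
    proposal = current + step * rng.standard_normal(current.size)
    proposal_logp = log_posterior(proposal)
    if np.log(rng.random()) < proposal_logp - current_logp:
        current, current_logp = proposal, proposal_logp
    predictions.append(simulate(current))

# The spread of the collected predictions estimates the predictive distribution.
predictions = np.array(predictions)
print(predictions.mean(), predictions.std())
```

In practice each retained parameter sample would require a full run of the simulation code, so the chain is typically thinned to keep the number of simulation runs manageable.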

Advantages of the proposed approach to uncertainty analysis include 1) identification of the sources of the relevant uncertainties, and 2) optimal design of new experiments to reduce the uncertainties in simulation predictions.

Keywords: validation, probabilistic modeling, simulation code, uncertainty analysis, Bayes' law, Markov Chain Monte Carlo, MCMC
