The goal in validating a physics simulation code is to determine how well it replicates the actual physical behavior in a specified application. Uncertainty quantification (UQ) is the central issue in specifying ‘how well’ the code follows experimental observations. Bayesian analysis provides the essential tools for conducting UQ in a disciplined manner. In the Bayesian approach, uncertainties are quantified in terms of probability density functions and calculations are based on the rules of probability. Uncertainties in simulation outputs are obtained by propagating uncertainties in models through the forward calculation. The reverse process is called inference, in which we learn about models from experiments. The Bayesian method allows us to take into account previous inferences derived from experimental data, theoretical considerations, physical laws, and even expert opinion. Physics-based codes are built from (sub)models, each of which describes a specific physics behavior. The complexity of the UQ task may be reduced by adopting a hierarchical approach to comparing models in the code to progressively more complex experiments. Throughout this process, it is essential to consider uncertainties in the models and their impact on subsequent inferences, uncertainties in the initial conditions of the experiments, and the range of operating conditions. The goal is to develop a probabilistic characterization of the uncertainties in the models that is consistent with all the experiments. I will review some of the standard computational techniques and illustrate them with two examples: neutron cross-section evaluation for criticality calculations and material models used in simulating rapidly deformed metal.
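The workflow the abstract describes — quantifying parameter uncertainty from experimental data via Bayes' rule, then propagating that uncertainty through the forward calculation — can be sketched in miniature. The following is an illustrative toy, not the paper's method: the linear "simulation", the flat prior, the Gaussian noise level, and all numbers are assumptions chosen for clarity.

```python
import math
import random

def forward_model(theta, x):
    # Stand-in for a physics simulation output at operating condition x.
    # A real code would be far more complex; linearity is an assumption here.
    return theta * x

def posterior_on_grid(data, x_obs, sigma, grid):
    # Unnormalized posterior on a parameter grid: flat prior times
    # Gaussian likelihood, then normalized so the weights sum to one.
    post = []
    for theta in grid:
        loglike = sum(-0.5 * ((y - forward_model(theta, x)) / sigma) ** 2
                      for x, y in zip(x_obs, data))
        post.append(math.exp(loglike))
    z = sum(post)
    return [p / z for p in post]

# Synthetic "experiment": true theta = 2.0, observations with known noise.
random.seed(0)
x_obs = [1.0, 2.0, 3.0, 4.0]
sigma = 0.1
data = [2.0 * x + random.gauss(0.0, sigma) for x in x_obs]

# Inference: posterior over theta on a grid spanning [1.5, 2.5].
grid = [1.5 + 0.001 * i for i in range(1001)]
weights = posterior_on_grid(data, x_obs, sigma, grid)
theta_mean = sum(t * w for t, w in zip(grid, weights))

# Forward propagation: push posterior uncertainty through the simulation
# to get mean and standard deviation of a prediction at a new condition.
x_new = 5.0
pred_mean = sum(forward_model(t, x_new) * w for t, w in zip(grid, weights))
pred_var = sum((forward_model(t, x_new) - pred_mean) ** 2 * w
               for t, w in zip(grid, weights))
pred_std = pred_var ** 0.5
```

The same two-step pattern — infer a probability density for model parameters, then propagate samples or weights through the forward calculation — is what scales up, via Markov chain Monte Carlo and surrogate models, to the cross-section and material-model examples discussed in the paper.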
Keywords: validation, uncertainty quantification, Bayesian methods, uncertainty estimation, inference, hierarchical experiments