## Bayesian Inference and Ill-Posed Inverse Problems

### David Schmidt (P-21)

### Abstract

Ill-posed inverse problems, in which many different
solutions could have produced the given data, are common. Consider,
for example, the task an audio CD player must perform. Bayesian
inference is well suited to such problems because it provides a
well-defined way to incorporate prior information that can reduce the
ambiguity or range of likely solutions. The result is a posterior
probability distribution over the space of possible solutions, which
encapsulates all of the available information and can be used to make
probabilistic inferences. Although prior information is sometimes
considered subjective and its use may be controversial, prior
information is essential for ill-posed inverse problems in order to
reduce the range of likely solutions. Indeed, one should tailor the
prior distribution to incorporate all of the pertinent prior
information available for each problem in order to maximize the
specificity of the resulting posterior distribution. Even so, the
posterior distribution may be broad and the most likely solution may
not be representative of the full range of likely solutions. In such
cases it is important to consider the full range of likely solutions
when making inferences. Examples of the use of Bayesian inference
will be presented for a problem in human brain mapping and for the
problem of inferring the continuous distribution from which a finite
data sample has been drawn. Finally, I will discuss implications of
applying this approach to problems in complex modeling and simulation.
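The role of the prior in resolving ambiguity can be illustrated with a minimal sketch (not the author's actual brain-mapping or density-estimation computation): a hypothetical toy problem in which a single noisy measurement constrains two unknowns, so infinitely many solutions fit the data. A Gaussian prior combined with a Gaussian likelihood yields a closed-form Gaussian posterior whose mean and covariance summarize the full range of likely solutions.

```python
import numpy as np

# Hypothetical ill-posed problem: one noisy measurement y = A @ x + noise
# constrains two unknowns x = (x1, x2), so the data alone cannot single
# out a unique solution.
A = np.array([[1.0, 1.0]])   # forward operator: 1 measurement, 2 unknowns
y = np.array([2.0])          # observed data
sigma2 = 0.1**2              # measurement-noise variance (likelihood)
tau2 = 1.0**2                # prior variance: x ~ N(0, tau2 * I)

# Gaussian prior + Gaussian likelihood give a Gaussian posterior:
#   cov  = (A^T A / sigma2 + I / tau2)^(-1)
#   mean = cov @ A^T @ y / sigma2
post_cov = np.linalg.inv(A.T @ A / sigma2 + np.eye(2) / tau2)
post_mean = post_cov @ A.T @ y / sigma2

print("posterior mean:", post_mean)
print("posterior std :", np.sqrt(np.diag(post_cov)))
```

Because the assumed prior is symmetric in the two unknowns, the posterior mean splits the measurement evenly between them, while the posterior standard deviations remain large, reflecting the residual ambiguity the single measurement cannot remove. This is the sense in which the abstract stresses reporting the full posterior rather than only its most likely point.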