To be published in *Medical Imaging: Image Processing*, K.M. Hanson and M. Sonka, eds.,
*Proc. SPIE* **4322** (2001)

## Markov Chain Monte Carlo posterior sampling with the Hamiltonian method

Kenneth M. Hanson

*Los Alamos National Laboratory*
### Abstract

The Markov Chain Monte Carlo (MCMC) technique provides a means for drawing random samples from a target probability density function (pdf). MCMC allows one to assess the uncertainties in a Bayesian analysis described by a numerically calculated posterior distribution. This paper describes the Hamiltonian MCMC technique, in which a momentum variable is introduced for each parameter of the target pdf. In analogy to a physical system, a Hamiltonian H is defined as a kinetic energy involving the momenta plus a potential energy phi, where phi is minus the logarithm of the target pdf. Hamiltonian dynamics allows one to move along trajectories of constant H, taking large jumps in the parameter space with relatively few evaluations of phi and its gradient. The Hamiltonian algorithm alternates between picking a new momentum vector and following such trajectories. The efficiency of the Hamiltonian method for multidimensional isotropic Gaussian pdfs is shown to remain constant at around 7% for up to several hundred dimensions. The Hamiltonian method handles correlations among the variables much better than the standard Metropolis algorithm. A new test, based on the gradient of phi, is proposed to measure the convergence of the MCMC sequence.

**Keywords:** Markov Chain Monte Carlo, Hamiltonian method, hybrid MCMC, Metropolis method, statistical efficiency, Bayesian analysis, posterior distribution, uncertainty estimation, convergence test
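The algorithm outlined in the abstract can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: it assumes an isotropic Gaussian target (so phi(x) = ½·Σx² and the gradient of phi is simply x), a unit-mass kinetic energy ½·Σp², and a leapfrog integrator to approximate the constant-H trajectories. The step size `eps` and trajectory length `n_steps` are illustrative choices; a Metropolis test on the change in H corrects for the integrator's discretization error.

```python
import math
import random

def phi(x):
    """Potential energy: minus the log of the target pdf (standard Gaussian,
    additive constant dropped)."""
    return 0.5 * sum(xi * xi for xi in x)

def grad_phi(x):
    """Gradient of phi; for the isotropic Gaussian it is just x itself."""
    return list(x)

def leapfrog(x, p, eps, n_steps):
    """Follow an approximate constant-H trajectory with leapfrog steps."""
    x, p = list(x), list(p)
    g = grad_phi(x)
    for _ in range(n_steps):
        p = [pi - 0.5 * eps * gi for pi, gi in zip(p, g)]  # half momentum step
        x = [xi + eps * pi for xi, pi in zip(x, p)]        # full position step
        g = grad_phi(x)
        p = [pi - 0.5 * eps * gi for pi, gi in zip(p, g)]  # half momentum step
    return x, p

def hamiltonian_mcmc(n_samples, dim=2, eps=0.2, n_steps=20, seed=1):
    """Alternate between refreshing the momentum vector and following a
    trajectory, accepting each trajectory with a Metropolis test on Delta H."""
    rng = random.Random(seed)
    x = [0.0] * dim
    samples = []
    for _ in range(n_samples):
        p = [rng.gauss(0.0, 1.0) for _ in range(dim)]       # new momentum vector
        h0 = phi(x) + 0.5 * sum(pi * pi for pi in p)        # H at trajectory start
        x_new, p_new = leapfrog(x, p, eps, n_steps)
        h1 = phi(x_new) + 0.5 * sum(pi * pi for pi in p_new)
        if rng.random() < math.exp(min(0.0, h0 - h1)):      # accept on Delta H
            x = x_new
        samples.append(list(x))
    return samples
```

Because leapfrog nearly conserves H, the acceptance rate stays high even though each trajectory moves far across the parameter space, which is the source of the method's advantage over the small random steps of the standard Metropolis algorithm.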
