Based on a presentation given at the recent Valencia 7 meeting, this talk brings together several themes that are important in simulation codes: Markov chain Monte Carlo (MCMC), hybrid MCMC, and adjoint differentiation of computer codes.
Hybrid Markov chain Monte Carlo applies Hamiltonian dynamics to the task of sampling multidimensional probability density functions in situations where the pdf is available only through calculation. Hybrid MCMC affords a robust and highly efficient means of such sampling, provided one can calculate the gradient of phi = minus the logarithm of the probability in a time comparable to that needed to calculate the probability itself. This need is met by the technique of adjoint differentiation of the computer code used to calculate phi. Consider a calculation of the probability as a sequence of operations, which may be called the forward calculation. The adjoint differentiation technique then amounts to carrying out the chain rule for differentiation on that sequence, but in the reverse direction. Adjoint differentiation is especially useful when the forward calculation consists of a sequence of modules that are individually simple but collectively perform a complex calculation, as is often the case in complex modeling situations. See "Operation of the Bayes Inference Engine," K. M. Hanson et al., in Maximum Entropy and Bayesian Methods, pp. 309-318 (Kluwer Academic, 1999).
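As a concrete illustration (my own sketch, not code from the talk), the following Python fragment samples a small Gaussian whose phi(x) = 0.5*||A x - b||^2 is built from two simple modules. The gradient is obtained by an adjoint sweep, i.e., by applying the chain rule to the two modules in reverse order, and then drives a leapfrog hybrid-MCMC update with a Metropolis accept/reject step. The matrix A, vector b, step size, and trajectory length are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "forward calculation" in two modules: y = A x, then phi = 0.5*||y - b||^2.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -0.5])

def phi(x):
    y = A @ x                           # module 1 (forward)
    return 0.5 * np.sum((y - b) ** 2)   # module 2 (forward)

def grad_phi(x):
    # Adjoint (reverse) sweep: the chain rule applied module by module,
    # in the opposite order of the forward calculation.
    y = A @ x
    dphi_dy = y - b                     # adjoint of module 2
    return A.T @ dphi_dy                # adjoint of module 1

def hmc_step(x, eps=0.1, n_leap=20):
    p = rng.standard_normal(x.size)     # fresh Gaussian momentum
    H0 = phi(x) + 0.5 * p @ p           # initial Hamiltonian
    xn, pn = x.copy(), p.copy()
    pn -= 0.5 * eps * grad_phi(xn)      # leapfrog: half kick
    for _ in range(n_leap - 1):
        xn += eps * pn                  # drift
        pn -= eps * grad_phi(xn)        # full kick
    xn += eps * pn
    pn -= 0.5 * eps * grad_phi(xn)      # final half kick
    H1 = phi(xn) + 0.5 * pn @ pn
    # Metropolis test corrects for leapfrog discretization error
    return xn if rng.random() < np.exp(H0 - H1) else x

x = np.zeros(2)
samples = []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
samples = np.array(samples)
# The sample mean approaches the solution of A x = b.
```

Because the momentum refresh and long leapfrog trajectories suppress random-walk behavior, this scheme explores the distribution far more efficiently per gradient evaluation than simple Metropolis proposals, which is the point of pairing it with cheap adjoint gradients.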
The gradient of phi may also be used to form a sensitive statistic for testing the convergence of the MCMC sequence. The statistic is the ratio of two sample estimates of the variance of the distribution: one based on integration by parts and the gradient of phi, the other on the usual second-moment calculation. The efficiency of the hybrid MCMC technique and the usefulness of the convergence test are demonstrated for simple multivariate normal distributions.
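One plausible concrete form of such a statistic (my reading of the description, not necessarily the exact formula used in the talk) rests on the integration-by-parts identity E[(x - mu) . grad phi(x)] = d, which holds for any smooth d-dimensional density proportional to exp(-phi). The corresponding sample average compares a gradient-based variance estimate against the second-moment one: it tends to 1 once the samples cover the distribution, and sits well below 1 for an unconverged chain.

```python
import numpy as np

rng = np.random.default_rng(1)

def convergence_ratio(samples, grad_phi):
    # R = (1/(N d)) * sum_n (x_n - xbar) . grad_phi(x_n);
    # by integration by parts, E[R] = 1 at equilibrium.
    xbar = samples.mean(axis=0)
    g = np.array([grad_phi(x) for x in samples])
    n, d = samples.shape
    return np.sum((samples - xbar) * g) / (n * d)

# Check on exact draws from N(0, diag(sigma^2)), where
# phi = sum(x^2 / (2 sigma^2)) and grad phi = x / sigma^2.
sigma = np.array([1.0, 3.0])
grad_phi = lambda x: x / sigma**2
exact = rng.standard_normal((20000, 2)) * sigma
r_converged = convergence_ratio(exact, grad_phi)   # close to 1

# An unconverged "chain" stuck near the origin scores far below 1.
stuck = 0.1 * rng.standard_normal((20000, 2)) * sigma
r_stuck = convergence_ratio(stuck, grad_phi)       # roughly 0.01
print(r_converged, r_stuck)
```

The appeal of this test in the hybrid-MCMC setting is that the gradients it needs are already being computed, via the adjoint code, at every leapfrog step, so the diagnostic comes essentially for free.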