Journal
STOCHASTIC PROCESSES AND THEIR APPLICATIONS
Volume 118, Issue 12, Pages 2198-2222
Publisher
ELSEVIER
DOI: 10.1016/j.spa.2007.12.005
Keywords
Weak convergence; Optimal scaling; Langevin diffusion; Generator; Markov chain Monte Carlo
Categories
Abstract
Recent optimal scaling theory has produced a condition under which the asymptotically optimal acceptance rate of Metropolis algorithms is the well-known 0.234 when applied to certain multi-dimensional target distributions. These d-dimensional target distributions are formed of independent components, each of which is scaled according to its own function of d. We show that when the condition is not met, the limiting process of the algorithm is altered, yielding an asymptotically optimal acceptance rate which might differ drastically from the usual 0.234. Specifically, we prove that as d -> infinity the sequence of stochastic processes formed by, say, the i*th component of each Markov chain usually converges to a Langevin diffusion process with a new speed measure v, except in particular cases where it converges to a one-dimensional Metropolis algorithm with acceptance rule alpha*. We also discuss the use of inhomogeneous proposals, which might prove to be essential in specific cases. (C) 2008 Elsevier B.V. All rights reserved.
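The 0.234 rule referenced in the abstract can be illustrated numerically. The sketch below (not from the paper; all names and parameter values are illustrative) runs a random-walk Metropolis sampler on a standard d-dimensional Gaussian target with the classical proposal scaling l^2/d, and reports the empirical acceptance rate; optimal scaling theory predicts that l near 2.38 yields an acceptance rate close to 0.234 for such i.i.d. targets.

```python
import numpy as np

def rwm_acceptance_rate(d, ell, n_steps=20000, seed=0):
    """Empirical acceptance rate of random-walk Metropolis on a
    standard d-dimensional Gaussian target (illustrative sketch).

    The proposal is x + (ell / sqrt(d)) * N(0, I_d), the classical
    optimal-scaling regime; ell is a tuning parameter.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    log_pi = lambda z: -0.5 * np.dot(z, z)  # log density up to a constant
    accepted = 0
    for _ in range(n_steps):
        y = x + (ell / np.sqrt(d)) * rng.standard_normal(d)
        # Metropolis accept/reject step for a symmetric proposal
        if np.log(rng.random()) < log_pi(y) - log_pi(x):
            x = y
            accepted += 1
    return accepted / n_steps

rate = rwm_acceptance_rate(d=50, ell=2.38)
print(rate)  # should be in the vicinity of 0.234 for this target
```

Larger values of ell give bolder proposals and hence lower acceptance rates; the paper's point is that when the components' individual scalings violate the theory's condition, this familiar diffusion-limit picture, and the 0.234 target, can fail.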