Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- Preface
- Part I Fundamentals of Bayesian Inference
- Part II Simulation
- 5 Classical Simulation
- 6 Basics of Markov Chains
- 7 Simulation by MCMC Methods
- Part III Applications
- A Probability Distributions and Matrix Theorems
- B Computer Programs for MCMC Calculations
- Bibliography
- Author Index
- Subject Index
7 - Simulation by MCMC Methods
from Part II - Simulation
Published online by Cambridge University Press: 05 June 2012
Summary
THE BASIS OF an MCMC algorithm is the construction of a transition kernel (see Section 6.3), p(x, y), that has an invariant density equal to the target density. Given such a kernel, the process can be started at x0 to yield a draw x1 from p(x0, x1), x2 from p(x1, x2), …, and xG from p(xG−1, xG), where G is the desired number of simulations. After a transient period, the distribution of the xg is approximately equal to the target distribution. The question is how to find a kernel that has the target as its invariant distribution. It is remarkable that there is a general principle for finding such kernels, the Metropolis–Hastings (MH) algorithm. We first discuss a special case – the Gibbs algorithm or Gibbs sampler – and then explain a more general version of the MH algorithm.
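The idea of a kernel whose invariant density equals the target can be illustrated with the Gibbs sampler mentioned above. The following is a minimal sketch (not taken from the book): the target is a bivariate normal with correlation rho, whose full conditionals are N(rho * other, 1 - rho^2), so each Gibbs step draws one coordinate from its conditional given the current value of the other. The function name and the choice rho = 0.6 are illustrative assumptions.

```python
import random

def gibbs_bivariate_normal(G, rho=0.6, seed=0):
    """Gibbs sampler for a bivariate normal target with correlation rho.
    Each full conditional is N(rho * other, 1 - rho**2), so alternating
    draws from the two conditionals form a chain whose invariant density
    is the target (an illustrative sketch, not the book's code)."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    sd = (1.0 - rho * rho) ** 0.5   # conditional standard deviation
    draws = []
    for _ in range(G):
        x = rng.gauss(rho * y, sd)  # draw x | y from its full conditional
        y = rng.gauss(rho * x, sd)  # draw y | x from its full conditional
        draws.append((x, y))
    return draws

draws = gibbs_bivariate_normal(G=20000)
kept = draws[5000:]                 # discard the transient period
# At stationarity both marginals are N(0, 1), so E[xy] = rho
xy_mean = sum(a * b for a, b in kept) / len(kept)
```

After the transient period, averages over the retained draws approximate expectations under the target, so `xy_mean` should be close to rho.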
It is important to distinguish between the number of simulated values G and the number of observations n in the sample of data being analyzed. G can be made very large (the only restriction is computer time and capacity), whereas n is fixed at the time the data are collected. Larger values of G lead to more accurate approximations: MCMC algorithms approximate the exact posterior distribution of the parameters, with the number of observations held fixed at n.
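The role of G can be sketched numerically with a random-walk Metropolis chain; this is an illustrative assumption of a target and proposal, not the book's code. The target here is a standard normal (posterior mean 0), and the average absolute error of the posterior-mean estimate shrinks as G grows, while n plays no role in the code at all.

```python
import math
import random

def mh_mean(G, seed):
    """Estimate the mean of a standard normal target with a random-walk
    Metropolis chain of length G, discarding the first half as the
    transient period (a sketch under assumed target and proposal)."""
    rng = random.Random(seed)
    x, draws = 0.0, []
    for _ in range(G):
        y = x + rng.gauss(0.0, 1.0)               # symmetric proposal
        # Acceptance probability: min(1, target(y) / target(x))
        if rng.random() < min(1.0, math.exp(0.5 * (x * x - y * y))):
            x = y
        draws.append(x)
    kept = draws[G // 2:]
    return sum(kept) / len(kept)

# Average absolute error (true mean is 0) over several seeds:
# the longer chain gives a visibly more accurate approximation.
err_small = sum(abs(mh_mean(500, s)) for s in range(20)) / 20
err_large = sum(abs(mh_mean(50000, s)) for s in range(20)) / 20
```

Averaging over seeds smooths out Monte Carlo noise; the error from the longer chains is roughly an order of magnitude smaller, consistent with the usual square-root scaling of simulation accuracy in G.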
Introduction to Bayesian Econometrics, pp. 90-108. Publisher: Cambridge University Press. Print publication year: 2007.