Book contents
- Frontmatter
- Contents
- List of Figures
- List of Tables
- Preface
- Part I Fundamentals of Bayesian Inference
- Part II Simulation
- 5 Classical Simulation
- 6 Basics of Markov Chains
- 7 Simulation by MCMC Methods
- Part III Applications
- A Probability Distributions and Matrix Theorems
- B Computer Programs for MCMC Calculations
- Bibliography
- Author Index
- Subject Index
5 - Classical Simulation
from Part II - Simulation
Published online by Cambridge University Press: 05 June 2012
Summary
As we mentioned at the end of the previous chapter, simulation has greatly expanded the scope of Bayesian inference. In this chapter, we review methods for generating independent samples from probability distributions. The methods discussed here form the basis for the newer methods discussed in Chapter 7 that are capable of dealing with a greater variety of distributions but do not generate independent samples.
All major statistics packages contain routines for generating random variables from such standard distributions as those summarized in the appendix. The examples presented here are intended to illustrate methods of generating samples. We do not claim that the algorithms are the best that can be designed, and we do not study the methods in great detail. Our goal for the chapter is to present the standard techniques of simulation and explain the kinds of questions that simulated samples can help answer.
Many of the applications discussed later can be regarded as attempts to approximate a quantity such as E[g(X)], where X ~ f, but the necessary integral, ∫ g(x)f(x) dx, cannot be computed analytically. This problem includes the computation of expected values (where g(X) = X) and other moments, as well as P(c1 ≤ X ≤ c2), for which we set g(X) = 1(c1 ≤ X ≤ c2), the indicator function of that interval.
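As a minimal illustration of this idea (our own sketch, not an example from the text), E[g(X)] can be approximated by the sample mean of g over independent simulated draws. Here we take X ~ N(0, 1) and estimate P(0.5 ≤ X ≤ 1.5) by setting g to the indicator function of the interval:

```python
import random

def mc_expectation(g, sampler, n=100_000):
    """Approximate E[g(X)] by averaging g over n independent draws of X."""
    return sum(g(sampler()) for _ in range(n)) / n

random.seed(42)
# P(0.5 <= X <= 1.5) for X ~ N(0, 1): g is the indicator 1(0.5 <= x <= 1.5)
prob = mc_expectation(lambda x: 1.0 if 0.5 <= x <= 1.5 else 0.0,
                      lambda: random.gauss(0.0, 1.0))
```

With n = 100,000 draws the estimate should lie close to the true value Φ(1.5) − Φ(0.5) ≈ 0.2417; the Monte Carlo standard error shrinks at rate 1/√n.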
Probability Integral Transformation Method
The most basic method of generating samples takes advantage of the ability of computers to generate values that can be regarded as drawn independently from a uniform distribution on (0,1), U(0, 1).
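The paragraph above can be sketched in code. The following is a minimal example of the inverse-CDF (probability integral transformation) idea, using the exponential distribution as the target; this particular choice of target and the function names are ours, not the book's. Since F(x) = 1 − e^(−λx) for Exp(λ), inverting gives F⁻¹(u) = −ln(1 − u)/λ, so each uniform draw U maps to an exponential draw X = F⁻¹(U):

```python
import math
import random

def exponential_via_inverse_cdf(lam, n, rng=random):
    """Draw n samples from Exp(lam) by the probability integral transformation:
    F(x) = 1 - exp(-lam * x)  =>  F^{-1}(u) = -ln(1 - u) / lam.
    Each U ~ U(0, 1) is mapped to X = F^{-1}(U)."""
    return [-math.log(1.0 - rng.random()) / lam for _ in range(n)]

random.seed(0)
draws = exponential_via_inverse_cdf(lam=2.0, n=200_000)
sample_mean = sum(draws) / len(draws)  # should be near 1/lam = 0.5
```

The same template works for any target distribution whose CDF can be inverted in closed form or evaluated numerically.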
Introduction to Bayesian Econometrics, pp. 63–75. Publisher: Cambridge University Press. Print publication year: 2007.