Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- 1 Introduction
- 2 Stochastic Convergence
- 3 Delta Method
- 4 Moment Estimators
- 5 M–and Z-Estimators
- 6 Contiguity
- 7 Local Asymptotic Normality
- 8 Efficiency of Estimators
- 9 Limits of Experiments
- 10 Bayes Procedures
- 11 Projections
- 12 U -Statistics
- 13 Rank, Sign, and Permutation Statistics
- 14 Relative Efficiency of Tests
- 15 Efficiency of Tests
- 16 Likelihood Ratio Tests
- 17 Chi-Square Tests
- 18 Stochastic Convergence in Metric Spaces
- 19 Empirical Processes
- 20 Functional Delta Method
- 21 Quantiles and Order Statistics
- 22 L-Statistics
- 23 Bootstrap
- 24 Nonparametric Density Estimation
- 25 Semiparametric Models
- References
- Index
10 - Bayes Procedures
Published online by Cambridge University Press: 05 June 2012
Summary
In this chapter Bayes estimators are studied from a frequentist perspective. Both posterior measures and Bayes point estimators in smooth parametric models are shown to be asymptotically normal.
Introduction
In Bayesian terminology the distribution of an observation X_n under a parameter θ is viewed as the conditional law of X_n given that a random variable Θ_n is equal to θ. The distribution Π_n of the "random parameter" Θ_n is called the prior distribution, and the conditional distribution of Θ_n given X_n is the posterior distribution. If Θ_n possesses a density π and X_n admits a density p_θ (relative to given dominating measures), then the density of the posterior distribution is given by Bayes' formula

θ ↦ p_θ(x) π(θ) / ∫ p_θ(x) π(θ) dθ.
This expression may define a probability density even if π is not a probability density itself. A prior distribution with infinite mass is called improper.
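Bayes' formula can be checked numerically. The following minimal sketch (not from the book; all names and the Binomial example are illustrative) computes a posterior on a grid by multiplying likelihood and prior and normalizing, which is exactly the formula above in discrete form:

```python
import numpy as np

# Illustrative model: x successes out of n Bernoulli(theta) trials,
# with a uniform prior on theta.
n, x = 10, 7
theta = np.linspace(0.001, 0.999, 999)        # grid over the parameter space
prior = np.ones_like(theta)                   # uniform prior density pi(theta)
likelihood = theta**x * (1 - theta)**(n - x)  # p_theta(x) up to a constant in theta

# Bayes' formula: posterior proportional to likelihood times prior,
# normalized over the grid.
unnormalized = likelihood * prior
posterior = unnormalized / unnormalized.sum()

# With a uniform prior the posterior is Beta(x + 1, n - x + 1), so the
# mode should be x/n = 0.7 and the mean (x + 1)/(n + 2) = 8/12.
post_mode = theta[np.argmax(posterior)]
post_mean = np.sum(theta * posterior)
print(post_mode, post_mean)
```

Note that the binomial coefficient is dropped from the likelihood: any factor not depending on θ cancels in the normalization, which is also why an improper prior can still yield a proper posterior.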
The calculation of the posterior measure can be considered the ultimate aim of a Bayesian analysis. Alternatively, one may wish to obtain a “point estimator” for the parameter using the posterior distribution. The posterior mean is often used for this purpose, but other location estimators are also reasonable.
A choice of point estimator may be motivated by a loss function. The Bayes risk of an estimator T_n relative to the loss function ℓ and prior measure Π_n is defined as

∫ E_θ ℓ(T_n, θ) dΠ_n(θ).
Here the expectation E_θ ℓ(T_n, θ) is the risk function of T_n in the usual set-up and is identical to the conditional risk given Θ_n = θ in the Bayesian notation. The corresponding Bayes estimator is the estimator T_n that minimizes the Bayes risk. Because the Bayes risk can be written as the expectation of ℓ(T_n, Θ_n) under the joint distribution of (X_n, Θ_n), the value T_n(x) minimizes, for every fixed x, the "posterior risk"

∫ ℓ(t, θ) dΠ_n(θ | x) = E(ℓ(t, Θ_n) | X_n = x).
Minimizing this expression may again be a well-defined problem even for prior densities of infinite total mass. For the loss function ℓ(t, θ) = (t − θ)², the solution T_n is the posterior mean; for ℓ(t, θ) = |t − θ|, the solution is the posterior median.
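The link between loss function and Bayes estimator can be verified numerically. This sketch (illustrative, not from the book) minimizes the posterior risk over a grid of candidate estimates and confirms that squared-error loss picks out the posterior mean while absolute-error loss picks out the posterior median:

```python
import numpy as np

# Discrete posterior on a grid, proportional to a Beta(8, 4) density.
theta = np.linspace(0, 1, 1001)
weights = theta**7 * (1 - theta)**3
post = weights / weights.sum()           # posterior probabilities on the grid

def posterior_risk(t, loss):
    """Posterior risk of the estimate t: E[loss(t, Theta) | X = x]."""
    return np.sum(loss(t, theta) * post)

squared = lambda t, th: (t - th)**2
absolute = lambda t, th: np.abs(t - th)

# Brute-force minimization of the posterior risk over candidate estimates.
candidates = np.linspace(0, 1, 1001)
best_sq = candidates[np.argmin([posterior_risk(t, squared) for t in candidates])]
best_abs = candidates[np.argmin([posterior_risk(t, absolute) for t in candidates])]

# Closed-form solutions: posterior mean and posterior median.
post_mean = np.sum(theta * post)
post_median = theta[np.searchsorted(np.cumsum(post), 0.5)]
print(best_sq, post_mean)
print(best_abs, post_median)
```

Up to the grid spacing, the squared-error minimizer agrees with the posterior mean (here about 2/3, the Beta(8, 4) mean) and the absolute-error minimizer with the posterior median.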
Chapter in Asymptotic Statistics, pp. 138–152. Publisher: Cambridge University Press. Print publication year: 1998.