Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- 1 Introduction
- 2 Stochastic Convergence
- 3 Delta Method
- 4 Moment Estimators
- 5 M- and Z-Estimators
- 6 Contiguity
- 7 Local Asymptotic Normality
- 8 Efficiency of Estimators
- 9 Limits of Experiments
- 10 Bayes Procedures
- 11 Projections
- 12 U-Statistics
- 13 Rank, Sign, and Permutation Statistics
- 14 Relative Efficiency of Tests
- 15 Efficiency of Tests
- 16 Likelihood Ratio Tests
- 17 Chi-Square Tests
- 18 Stochastic Convergence in Metric Spaces
- 19 Empirical Processes
- 20 Functional Delta Method
- 21 Quantiles and Order Statistics
- 22 L-Statistics
- 23 Bootstrap
- 24 Nonparametric Density Estimation
- 25 Semiparametric Models
- References
- Index
23 - Bootstrap
Published online by Cambridge University Press: 05 June 2012
Summary
This chapter investigates the asymptotic properties of bootstrap estimators for distributions and confidence intervals. The consistency of the bootstrap for the sample mean implies, by the delta method, its consistency for many other statistics. A similar result is valid for the empirical process.
Introduction
In most estimation problems it is important to give an indication of the precision of a given estimate. A simple method is to provide an estimate of the bias and variance of the estimator; more accurate is a confidence interval for the parameter. In this chapter we concentrate on bootstrap confidence intervals and, more generally, discuss the bootstrap as a method of estimating the distribution of a given statistic.
Let $\hat\theta_n$ be an estimator of some parameter $\theta$ attached to the distribution $P$ of the observations. The distribution of the difference $\hat\theta_n - \theta$ contains all the information needed for assessing the precision of $\hat\theta_n$. In particular, if $\xi_\alpha$ is the upper $\alpha$-quantile of the distribution of $(\hat\theta_n - \theta)/\hat\sigma_n$, then
$$P\bigl((\hat\theta_n - \theta)/\hat\sigma_n \le \xi_\alpha\bigr) \ge 1 - \alpha.$$
Here $\hat\sigma_n$ may be arbitrary, but it is typically an estimate of the standard deviation of $\hat\theta_n$. It follows that the interval $[\hat\theta_n - \hat\sigma_n \xi_\alpha,\ \hat\theta_n - \hat\sigma_n \xi_{1-\alpha}]$ is a confidence interval of level $1 - 2\alpha$. Unfortunately, in most situations the quantiles $\xi_\alpha$ and the distribution of $(\hat\theta_n - \theta)/\hat\sigma_n$ depend on the unknown distribution $P$ of the observations and cannot be used to assess the performance of $\hat\theta_n$. They must be replaced by estimators.
If the sequence $(\hat\theta_n - \theta)/\hat\sigma_n$ tends in distribution to a standard normal variable, then the standard normal distribution $N(0,1)$ can be used as an estimator of the distribution of $(\hat\theta_n - \theta)/\hat\sigma_n$, and we can substitute the standard normal quantiles $z_\alpha$ for the quantiles $\xi_\alpha$. The weak convergence implies that the interval $[\hat\theta_n - \hat\sigma_n z_\alpha,\ \hat\theta_n + \hat\sigma_n z_\alpha]$ is a confidence interval of asymptotic level $1 - 2\alpha$.
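As an illustrative sketch (not from the text), the normal-approximation interval above can be computed directly. Here `normal_ci` is a hypothetical helper; taking $\hat\theta_n$ to be the sample mean and $\hat\sigma_n$ its estimated standard error is one common choice for the quantities the text leaves general.

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def normal_ci(sample, alpha=0.05):
    """Normal-approximation confidence interval of asymptotic level 1 - 2*alpha.

    Illustrative choices: theta_hat is the sample mean and sigma_hat its
    estimated standard error; the text allows other estimators.
    """
    n = len(sample)
    theta_hat = mean(sample)
    sigma_hat = stdev(sample) / sqrt(n)   # estimated sd of theta_hat
    z = NormalDist().inv_cdf(1 - alpha)   # upper alpha-quantile z_alpha
    # interval [theta_hat - sigma_hat * z_alpha, theta_hat + sigma_hat * z_alpha]
    return theta_hat - sigma_hat * z, theta_hat + sigma_hat * z
```

For instance, `normal_ci(list(range(100)))` returns an interval of asymptotic level 0.90 centered at the sample mean 49.5.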
Bootstrap procedures yield an alternative. They are based on an estimate $\hat{P}$ of the underlying distribution $P$ of the observations. The distribution of $(\hat\theta_n - \theta)/\hat\sigma_n$ under $P$ can, in principle, be written as a function of $P$. The bootstrap estimator for this distribution is the "plug-in" estimator obtained by substituting $\hat{P}$ for $P$ in this function.
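A minimal sketch of the plug-in idea, assuming the empirical bootstrap without studentization (i.e., taking $\hat{P}$ to be the empirical distribution and $\hat\sigma_n = 1$): under $\hat{P}$ the "true" parameter is the statistic of the observed sample, and draws of $\hat\theta_n^* - \hat\theta_n$ mimic $\hat\theta_n - \theta$. The names `bootstrap_quantiles`, `B`, and `seed` are illustrative, not from the text.

```python
import random
from statistics import mean

def bootstrap_quantiles(sample, statistic, B=2000, alpha=0.05, seed=0):
    """Bootstrap estimates of the alpha- and (1-alpha)-quantiles of
    theta_hat - theta, via resampling from the empirical distribution."""
    rng = random.Random(seed)
    n = len(sample)
    theta_hat = statistic(sample)  # plays the role of theta under P-hat
    # B bootstrap replicates of theta_hat* - theta_hat
    diffs = sorted(
        statistic([rng.choice(sample) for _ in range(n)]) - theta_hat
        for _ in range(B)
    )
    lo = diffs[int(alpha * B)]             # lower alpha-quantile
    hi = diffs[int((1 - alpha) * B) - 1]   # upper alpha-quantile
    return theta_hat, lo, hi
```

The resulting "basic" bootstrap interval $[\hat\theta_n - \text{hi},\ \hat\theta_n - \text{lo}]$ has level approximately $1 - 2\alpha$, in parallel with the exact-quantile interval above.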
Type: Chapter
Information: Asymptotic Statistics, pp. 326–340. Publisher: Cambridge University Press. Print publication year: 1998.