Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- 1 Introduction
- 2 Stochastic Convergence
- 3 Delta Method
- 4 Moment Estimators
- 5 M- and Z-Estimators
- 6 Contiguity
- 7 Local Asymptotic Normality
- 8 Efficiency of Estimators
- 9 Limits of Experiments
- 10 Bayes Procedures
- 11 Projections
- 12 U-Statistics
- 13 Rank, Sign, and Permutation Statistics
- 14 Relative Efficiency of Tests
- 15 Efficiency of Tests
- 16 Likelihood Ratio Tests
- 17 Chi-Square Tests
- 18 Stochastic Convergence in Metric Spaces
- 19 Empirical Processes
- 20 Functional Delta Method
- 21 Quantiles and Order Statistics
- 22 L-Statistics
- 23 Bootstrap
- 24 Nonparametric Density Estimation
- 25 Semiparametric Models
- References
- Index
7 - Local Asymptotic Normality
Published online by Cambridge University Press: 05 June 2012
Summary
A sequence of statistical models is "locally asymptotically normal" if, asymptotically, their likelihood ratio processes are similar to those of a normal location model. Technically, this means that the likelihood ratio processes admit a certain quadratic expansion. An important example in which this arises is repeated sampling from a smooth parametric model. Local asymptotic normality implies convergence of the models to a Gaussian model after a rescaling of the parameter.
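For i.i.d. sampling from a smooth parametric model, the main example treated in the chapter, the quadratic expansion takes the following form (a sketch in standard notation, with $\dot{\ell}_\theta$ the score function and $I_\theta$ the Fisher information matrix):

```latex
\log \prod_{i=1}^{n} \frac{p_{\theta + h/\sqrt{n}}}{p_\theta}(X_i)
  = \frac{1}{\sqrt{n}} \sum_{i=1}^{n} h^{T} \dot{\ell}_\theta(X_i)
    - \frac{1}{2}\, h^{T} I_\theta\, h + o_{P_\theta}(1).
```

The linear term is asymptotically normal by the central limit theorem, and the quadratic term is deterministic, which is exactly the structure of the likelihood ratios of a normal location model.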
Introduction
Suppose we observe a sample $X_1, \ldots, X_n$ from a distribution $P_\theta$ on some measurable space $(\mathcal{X}, \mathcal{A})$ indexed by a parameter $\theta$ that ranges over an open subset $\Theta$ of $\mathbb{R}^k$. Then the full observation is a single observation from the product $P_\theta^n$ of $n$ copies of $P_\theta$, and the statistical model is completely described as the collection of probability measures $\{P_\theta^n : \theta \in \Theta\}$ on the sample space $(\mathcal{X}^n, \mathcal{A}^n)$. In the context of the present chapter we shall speak of a statistical experiment, rather than of a statistical model. In this chapter it is shown that many statistical experiments can be approximated by Gaussian experiments after a suitable reparametrization.
The reparametrization is centered around a fixed parameter $\theta_0$, which should be regarded as known. We define a local parameter $h = \sqrt{n}(\theta - \theta_0)$, rewrite $P_\theta^n$ as $P_{\theta_0 + h/\sqrt{n}}^n$, and thus obtain an experiment with parameter $h$. In this chapter we show that, for large $n$, the experiments

$$\bigl(P_{\theta_0 + h/\sqrt{n}}^n : h \in \mathbb{R}^k\bigr) \qquad \text{and} \qquad \bigl(N(h, I_{\theta_0}^{-1}) : h \in \mathbb{R}^k\bigr)$$
are similar in statistical properties whenever the original experiments are "smooth" in the parameter. The second experiment consists of observing a single observation from a normal distribution with mean $h$ and known covariance matrix $I_{\theta_0}^{-1}$ (the inverse of the Fisher information matrix). This is a simple experiment, which is easy to analyze, whence the approximation yields much information about the asymptotic properties of the original experiments. This information is extracted in several chapters to follow and concerns both asymptotic optimality theory and the behavior of statistical procedures such as the maximum likelihood estimator and the likelihood ratio test.
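As a quick numerical illustration (not from the book), the Poisson family, which is smooth in its mean parameter, can be used to check the local quadratic approximation: the exact log likelihood ratio between $P_{\theta_0 + h/\sqrt{n}}^n$ and $P_{\theta_0}^n$ should nearly equal $h\,\Delta_n - \tfrac{1}{2}h^2 I_{\theta_0}$, where $\Delta_n$ is the normalized score. The parameter values below are arbitrary choices for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)
theta0, h, n = 2.0, 1.5, 100_000

# Sample from the "true" model P_{theta0}: Poisson with mean theta0
x = rng.poisson(theta0, size=n)

# Exact log likelihood ratio of P^n_{theta0 + h/sqrt(n)} versus P^n_{theta0};
# the log x! terms cancel between numerator and denominator
theta_n = theta0 + h / np.sqrt(n)
exact = x.sum() * np.log(theta_n / theta0) - n * (theta_n - theta0)

# LAN quadratic expansion: h * Delta_n - h^2 * I / 2
score = x / theta0 - 1.0            # score function of Poisson(theta) at theta0
delta_n = score.sum() / np.sqrt(n)  # central sequence, approximately N(0, I)
info = 1.0 / theta0                 # Fisher information of Poisson(theta0)
approx = h * delta_n - 0.5 * h**2 * info

print(f"exact = {exact:.4f}, LAN approximation = {approx:.4f}")
```

For $n$ this large, the two quantities agree to a few decimal places; the discrepancy is the $o_{P_{\theta_0}}(1)$ remainder, which shrinks as $n$ grows.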
We have taken the local parameter set equal to $\mathbb{R}^k$, which is not correct if the parameter set $\Theta$ is a true subset of $\mathbb{R}^k$.
- *Asymptotic Statistics*, pp. 92–107. Publisher: Cambridge University Press. Print publication year: 1998.