Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- 1 Introduction
- 2 Stochastic Convergence
- 3 Delta Method
- 4 Moment Estimators
- 5 M- and Z-Estimators
- 6 Contiguity
- 7 Local Asymptotic Normality
- 8 Efficiency of Estimators
- 9 Limits of Experiments
- 10 Bayes Procedures
- 11 Projections
- 12 U-Statistics
- 13 Rank, Sign, and Permutation Statistics
- 14 Relative Efficiency of Tests
- 15 Efficiency of Tests
- 16 Likelihood Ratio Tests
- 17 Chi-Square Tests
- 18 Stochastic Convergence in Metric Spaces
- 19 Empirical Processes
- 20 Functional Delta Method
- 21 Quantiles and Order Statistics
- 22 L-Statistics
- 23 Bootstrap
- 24 Nonparametric Density Estimation
- 25 Semiparametric Models
- References
- Index
9 - Limits of Experiments
Published online by Cambridge University Press: 05 June 2012
Summary
A sequence of experiments is defined to converge to a limit experiment if the sequence of likelihood ratio processes converges marginally in distribution to the likelihood ratio process of the limit experiment. A limit experiment serves as an approximation for the converging sequence of experiments. This generalizes the convergence of locally asymptotically normal sequences of experiments considered in Chapter 7. Several examples of nonnormal limit experiments are discussed.
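The definition in the summary can be sketched in symbols as follows. This is a paraphrase, not a verbatim quotation from the chapter; the notation ($h$, $H$, $h_0$, $\rightsquigarrow$ for weak convergence) is assumed to follow the book's conventions:

```latex
% Sketch: convergence of experiments via likelihood ratio processes.
% All experiments share a common parameter set H; notation is assumed.
\[
  \mathcal{E}_n = (P_{n,h} : h \in H)
  \quad\text{converges to}\quad
  \mathcal{E} = (P_h : h \in H)
\]
if, for every \emph{finite} subset $I \subset H$ and a fixed $h_0 \in H$,
\[
  \Bigl(\frac{dP_{n,h}}{dP_{n,h_0}}\Bigr)_{h \in I}
  \;\rightsquigarrow\;
  \Bigl(\frac{dP_h}{dP_{h_0}}\Bigr)_{h \in I}
  \qquad\text{under } P_{n,h_0}.
\]
```

Only the marginal (finite-dimensional) distributions of the likelihood ratio process are required to converge, which is what "converges marginally in distribution" refers to.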
Introduction
This chapter introduces a notion of convergence of statistical models or "experiments" to a limit experiment. In this notion a sequence of models, rather than just a sequence of estimators or tests, converges to a limit. The limit experiment serves two purposes. First, it provides an absolute standard for what can be achieved asymptotically by a sequence of tests or estimators, in the form of a "lower bound": no sequence of statistical procedures can be asymptotically better than the "best" procedure in the limit experiment. For instance, the best limiting power function is the best power function in the limit experiment, and a best sequence of estimators converges to a best estimator in the limit experiment. Statements of this type are true irrespective of the precise meaning of "best." A second purpose of a limit experiment is to explain the asymptotic behaviour of sequences of statistical procedures, for instance the asymptotic normality or (in)efficiency of maximum likelihood estimators.
Many sequences of experiments converge to normal limit experiments. In particular, the local experiments in a given locally asymptotically normal sequence of experiments, as considered in Chapter 7, converge to a normal location experiment. The asymptotic representation theorem given in the present chapter is therefore a generalization of Theorem 7.10 (for the LAN case) to the general situation. The importance of the general concept is illustrated by several examples of non-Gaussian limit experiments.
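In the locally asymptotically normal case of Chapter 7, the limit experiment can be sketched as follows (the symbols $\theta$, $h$, and the Fisher information $I_\theta$ are assumed to match Chapter 7's conventions):

```latex
% Sketch: the LAN local experiments converge to a normal
% location experiment (notation assumed from Chapter 7).
\[
  \bigl(P^{\,n}_{\theta + h/\sqrt{n}} : h \in \mathbb{R}^k\bigr)
  \;\rightsquigarrow\;
  \bigl(N(h, I_\theta^{-1}) : h \in \mathbb{R}^k\bigr).
\]
```

That is, the limit experiment consists of observing a single draw from a normal distribution with unknown mean $h$ and known covariance $I_\theta^{-1}$, which is why optimality questions for LAN models reduce to optimality in a Gaussian shift model.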
In the present context it is customary to speak of an "experiment" rather than a model, although these terms are interchangeable. Formally, an experiment is a measurable space (the sample space) equipped with a collection of probability measures. The set of probability measures serves as a statistical model for the observation, written as X. In this chapter the parameter is denoted by h (and not θ), because the results are typically applied to "local" parameters (such as h in the reparametrization θ + h/√n).
Asymptotic Statistics, pp. 125–137. Publisher: Cambridge University Press. Print publication year: 1998.