Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- 1 Introduction
- 2 Stochastic Convergence
- 3 Delta Method
- 4 Moment Estimators
- 5 M- and Z-Estimators
- 6 Contiguity
- 7 Local Asymptotic Normality
- 8 Efficiency of Estimators
- 9 Limits of Experiments
- 10 Bayes Procedures
- 11 Projections
- 12 U-Statistics
- 13 Rank, Sign, and Permutation Statistics
- 14 Relative Efficiency of Tests
- 15 Efficiency of Tests
- 16 Likelihood Ratio Tests
- 17 Chi-Square Tests
- 18 Stochastic Convergence in Metric Spaces
- 19 Empirical Processes
- 20 Functional Delta Method
- 21 Quantiles and Order Statistics
- 22 L-Statistics
- 23 Bootstrap
- 24 Nonparametric Density Estimation
- 25 Semiparametric Models
- References
- Index
16 - Likelihood Ratio Tests
Published online by Cambridge University Press: 05 June 2012
Summary
The critical values of the likelihood ratio test are usually based on an asymptotic approximation. We derive the asymptotic distribution of the likelihood ratio statistic and investigate the quality of the approximation through the asymptotic power function and the Bahadur efficiency of the test.
Introduction
Suppose that we observe a sample X₁, …, Xₙ from a density p_θ and wish to test the null hypothesis H₀: θ ∈ Θ₀ versus the alternative H₁: θ ∈ Θ₁. If both the null and the alternative hypotheses consist of single points, then a most powerful test can be based on the log likelihood ratio, by the Neyman-Pearson theory. If the two points are θ₀ and θ₁, respectively, then the optimal test statistic is given by

log ∏ᵢ p_{θ₁}(Xᵢ) / p_{θ₀}(Xᵢ).
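As an illustration (not taken from the book), the simple-versus-simple log likelihood ratio can be computed directly. The sketch below assumes a N(θ, 1) model with hypothetical points θ₀ = 0 and θ₁ = 1; large values of the statistic favor the alternative.

```python
# Sketch: Neyman-Pearson log likelihood ratio for two simple hypotheses
# about a N(theta, 1) density. The model and the points theta0, theta1
# are illustrative assumptions, not the book's example.
import math
import random

def normal_logpdf(x, theta):
    """Log density of N(theta, 1) at x."""
    return -0.5 * math.log(2 * math.pi) - 0.5 * (x - theta) ** 2

def log_lr(xs, theta0, theta1):
    """log prod_i p_theta1(X_i) / p_theta0(X_i); reject H0 for large values."""
    return sum(normal_logpdf(x, theta1) - normal_logpdf(x, theta0) for x in xs)

random.seed(1)
xs = [random.gauss(1.0, 1.0) for _ in range(50)]  # data drawn under theta1 = 1
print(log_lr(xs, theta0=0.0, theta1=1.0))
```

For the normal location model the statistic reduces to (θ₁ − θ₀)·ΣXᵢ − n(θ₁² − θ₀²)/2, so the test is equivalent to rejecting for large values of the sample mean.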
For certain special models and hypotheses, the most powerful test turns out not to depend on θ₁, and the test is uniformly most powerful for a composite alternative Θ₁. Sometimes the null hypothesis can be extended as well, and the testing problem has a fully satisfactory solution. Unfortunately, in many situations there is no single best test, not even in an asymptotic sense (see Chapter 15). A variety of ideas lead to reasonable tests. A sensible extension of the idea behind the Neyman-Pearson theory is to base a test on the log likelihood ratio

Λₙ = log [ sup_{θ∈Θ₁} ∏ᵢ p_θ(Xᵢ) / sup_{θ∈Θ₀} ∏ᵢ p_θ(Xᵢ) ].
The single points are replaced by maxima over the hypotheses. As before, the null hypothesis is rejected for large values of the statistic.
Because the distributional properties of Λₙ can be somewhat complicated, one usually replaces the supremum in the numerator by a supremum over the whole parameter set Θ. This changes the test statistic only if Λₙ < 0, which is inessential, because in most cases the critical value will be positive. We study the asymptotic properties of the (log) likelihood ratio statistic

λₙ = 2 log [ sup_{θ∈Θ} ∏ᵢ p_θ(Xᵢ) / sup_{θ∈Θ₀} ∏ᵢ p_θ(Xᵢ) ].
The most important conclusion of this chapter is that, under the null hypothesis, the sequence λₙ is asymptotically chi square-distributed. The main conditions are that the model is differentiable in θ and that the null hypothesis Θ₀ and the full parameter set Θ are (locally) equal to linear spaces; the number of degrees of freedom is then the difference of the (local) dimensions of Θ and Θ₀.
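The chi-square limit can be checked in the simplest case. The sketch below (an illustration under assumed conditions, not the book's example) tests H₀: μ = 0 for a N(μ, 1) sample, where the MLE of μ is the sample mean and λₙ collapses to n·X̄², a quantity that is asymptotically χ²₁ under H₀.

```python
# Sketch: likelihood ratio test of H0: mu = 0 for a N(mu, 1) sample.
# lambda_n = 2 log [ sup_mu L(mu) / L(0) ] simplifies to n * xbar^2,
# asymptotically chi-square with 1 degree of freedom under H0.
import random

def lr_statistic(xs):
    """lambda_n = 2 * (sup_mu log L(mu) - log L(0)) for N(mu, 1) data."""
    n = len(xs)
    xbar = sum(xs) / n  # the MLE of mu in this model
    # log L(mu) = const - (1/2) * sum (x - mu)^2; the constant cancels
    # in the difference, leaving n * xbar^2.
    loglik_hat = -0.5 * sum((x - xbar) ** 2 for x in xs)
    loglik_null = -0.5 * sum(x ** 2 for x in xs)
    return 2 * (loglik_hat - loglik_null)

random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(200)]  # data generated under H0
lam = lr_statistic(xs)
CHI2_1_95 = 3.8415  # 0.95-quantile of chi-square(1)
print(f"lambda_n = {lam:.3f}, reject H0 at level 0.05: {lam > CHI2_1_95}")
```

Here Θ₀ = {0} has dimension 0 and Θ = ℝ has dimension 1, so the difference of dimensions gives the single degree of freedom of the limit distribution.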
- Asymptotic Statistics, pp. 227-241. Publisher: Cambridge University Press. Print publication year: 1998.