Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 Variation
- 3 Uncertainty
- 4 Likelihood
- 5 Models
- 6 Stochastic Models
- 7 Estimation and Hypothesis Testing
- 8 Linear Regression Models
- 9 Designed Experiments
- 10 Nonlinear Regression Models
- 11 Bayesian Models
- 12 Conditional and Marginal Inference
- Appendix A Practicals
- Bibliography
- Name Index
- Example Index
- Index
7 - Estimation and Hypothesis Testing
Published online by Cambridge University Press: 29 March 2011
Summary
Chapter 4 introduced likelihood and explored associated concepts such as likelihood ratio statistics and maximum likelihood estimators, which were then extensively used for inference in Chapters 5 and 6. In this chapter we turn aside from the central theme of the book and discuss some more theoretical topics. Estimation is a fundamental statistical activity, and in Section 7.1 we consider what properties a good estimator should have, including a brief discussion of nonparametric density estimators and the mathematically appealing topic of minimum variance unbiased estimation. One of the most important approaches to constructing estimators is as solutions to systems of estimating equations. In Section 7.2 we discuss the implications of this, showing how it complements minimum variance unbiased estimation and what it entails for robust estimation and for stochastic processes. We then give an account of some of the main ideas underlying another major statistical activity, the testing of hypotheses, discussing the construction of tests with good properties and making the connection to estimation.
Estimation
Mean squared error
Suppose that we wish to estimate some aspect of a probability model f(y). In principle we might try to estimate almost any feature of f, but we largely confine ourselves to estimation of the unknown parameter θ, or a function ψ(θ) of it, in a parametric model f(y; θ). Suppose that our data Y comprise a random sample Y1, …, Yn from f, and let the statistic T = t(Y) be an estimator of ψ(θ).
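To make the mean squared error concrete, here is a minimal simulation sketch (not from the book; the estimators, sample size, and parameter values are illustrative assumptions). It approximates MSE(T) = E{(T − ψ(θ))²} for two estimators of a normal mean by Monte Carlo and checks the standard decomposition of mean squared error into variance plus squared bias.

```python
# Minimal sketch (illustrative, not the book's code): Monte Carlo approximation of
# MSE(T) = E[{T - psi(theta)}^2] for two estimators of a normal mean, checking the
# decomposition MSE(T) = var(T) + {E(T) - psi(theta)}^2.
import numpy as np

rng = np.random.default_rng(1)
theta, sigma, n, reps = 2.0, 1.0, 25, 10_000   # assumed values chosen for illustration

# Draw 'reps' independent samples of size n from N(theta, sigma^2).
samples = rng.normal(theta, sigma, size=(reps, n))

for name, estimator in [("sample mean", np.mean), ("sample median", np.median)]:
    t = estimator(samples, axis=1)             # one estimate T per simulated sample
    bias = t.mean() - theta                    # empirical bias E(T) - theta
    mse = np.mean((t - theta) ** 2)            # empirical mean squared error
    # With the empirical (ddof=0) variance, mse equals var + bias^2 exactly.
    print(f"{name:13s}  bias={bias:+.4f}  var={t.var():.4f}  mse={mse:.4f}  "
          f"var+bias^2={t.var() + bias**2:.4f}")
```

For normal data the sample mean has the smaller mean squared error; the comparison is only meant to show how the variance and squared-bias contributions can be read off from a simulation.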
- Type: Chapter
- Information: Statistical Models, pp. 300–352. Publisher: Cambridge University Press. Print publication year: 2003