Book contents
- Frontmatter
- Contents
- Editor's foreword
- Preface
- Part I Principles and elementary applications
- 1 Plausible reasoning
- 2 The quantitative rules
- 3 Elementary sampling theory
- 4 Elementary hypothesis testing
- 5 Queer uses for probability theory
- 6 Elementary parameter estimation
- 7 The central, Gaussian or normal distribution
- 8 Sufficiency, ancillarity, and all that
- 9 Repetitive experiments: probability and frequency
- 10 Physics of ‘random experiments’
- Part II Advanced applications
- Appendix A Other approaches to probability theory
- Appendix B Mathematical formalities and style
- Appendix C Convolutions and cumulants
- References
- Bibliography
- Author index
- Subject index
6 - Elementary parameter estimation
from Part I - Principles and elementary applications
Published online by Cambridge University Press: 05 September 2012
Summary
A distinction without a difference has been introduced by certain writers who distinguish ‘Point estimation’, meaning some process of arriving at an estimate without regard to its precision, from ‘Interval estimation’ in which the precision of the estimate is to some extent taken into account.
R. A. Fisher (1956)

Probability theory as logic agrees with Fisher in spirit; that is, it gives us automatically both point and interval estimates from a single calculation. The distinction commonly made between hypothesis testing and parameter estimation is considerably greater than that which concerned Fisher; yet it too is, from our point of view, not a real difference. When we have only a small number of discrete hypotheses {H1, …, Hn} to consider, we usually want to pick out a specific one of them as the most likely in that set, in the light of the prior information and data. The cases n = 2 and n = 3 were examined in some detail in Chapter 4, and larger n is in principle a straightforward and rather obvious generalization.
When the hypotheses become very numerous, however, a different approach seems called for. A set of discrete hypotheses can always be classified by assigning one or more numerical indices which identify them, as in Ht (1 ≤ t ≤ n), and if the hypotheses are very numerous one can hardly avoid doing this.
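The two ideas above can be illustrated together in a short sketch: a set of discrete hypotheses identified by a numerical index, from which a single Bayesian calculation yields both a point estimate and an interval estimate. The grid of values, the uniform prior, and the data (7 successes in 10 trials of a binary experiment) are all illustrative assumptions, not taken from the text.

```python
import numpy as np

# Hypothetical example: hypothesis H_t asserts that the success
# probability of a binary experiment is theta[t].  The index t
# plays the role of the numerical index described in the text.
theta = np.linspace(0.01, 0.99, 99)            # one value per hypothesis
prior = np.full(theta.size, 1.0 / theta.size)  # uniform prior (assumption)

# Assumed data: k = 7 successes in n = 10 trials.
n, k = 10, 7
likelihood = theta**k * (1.0 - theta)**(n - k)

# One calculation: the normalized posterior over the whole set.
posterior = prior * likelihood
posterior /= posterior.sum()

# Point estimate: the single most probable hypothesis in the set.
point = theta[np.argmax(posterior)]

# Interval estimate, from the same posterior: the smallest set of
# hypotheses carrying at least 90% of the posterior probability.
order = np.argsort(posterior)[::-1]            # hypotheses, most probable first
mass = np.cumsum(posterior[order])
top = np.sort(order[: np.searchsorted(mass, 0.90) + 1])
interval = (theta[top[0]], theta[top[-1]])

print(f"point estimate: {point:.2f}")
print(f"90% interval:  ({interval[0]:.2f}, {interval[1]:.2f})")
```

Note that the point and interval estimates are read off the same posterior distribution, which is the sense in which, as the quotation suggests, the distinction between "point" and "interval" estimation is one without a difference.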
- Type: Chapter
- Information: Probability Theory: The Logic of Science, pp. 149–197. Publisher: Cambridge University Press. Print publication year: 2003.