Book contents
- Frontmatter
- Contents
- Editor's foreword
- Preface
- Part I Principles and elementary applications
- Part II Advanced applications
- 11 Discrete prior probabilities: the entropy principle
- 12 Ignorance priors and transformation groups
- 13 Decision theory, historical background
- 14 Simple applications of decision theory
- 15 Paradoxes of probability theory
- 16 Orthodox methods: historical background
- 17 Principles and pathology of orthodox statistics
- 18 The Ap distribution and rule of succession
- 19 Physical measurements
- 20 Model comparison
- 21 Outliers and robustness
- 22 Introduction to communication theory
- Appendix A Other approaches to probability theory
- Appendix B Mathematical formalities and style
- Appendix C Convolutions and cumulants
- References
- Bibliography
- Author index
- Subject index
14 - Simple applications of decision theory
from Part II - Advanced applications
Published online by Cambridge University Press: 05 September 2012
Summary
We now examine in detail two of the simplest applications of the general decision theory just formulated, and compare the first with the older Neyman–Pearson procedure. The problem of detection of signals in noise is really the same as Laplace's old problem of detecting the presence of unknown systematic influences in celestial mechanics, and Shewhart's (1931) more recent problem of detecting a systematic drift in machine characteristics, in industrial quality control. Statisticians would call the procedure a ‘significance test’. It is unfortunate that the basic identity of all these problems was not more widely recognized, for this failure forced workers in several different fields to rediscover the same things, with varying degrees of success, over and over again.
As is clear by now, all we really have to do to solve this problem is to take the principles of inference developed in Chapters 2 and 4, and supplement them with the loss function criterion for converting final probabilities into decisions (and, if needed, the maximum entropy principle for assigning priors). However, the literature of this field has been created largely from the standpoint of the original decision theory before this was realized. The existing literature therefore uses a different sort of vocabulary and set of concepts than we have been using up to now. Since it exists, we have no choice but to learn these terms and viewpoints if we want to read the literature of the field.
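The procedure described above can be sketched in code. The following is a minimal illustration, not taken from the book: it applies Bayes' theorem to obtain final (posterior) probabilities for the hypotheses, then converts them into a decision by minimizing expected loss. All priors, likelihoods, and loss values here are invented for the example.

```python
def posterior(priors, likelihoods):
    """Final probabilities for the hypotheses via Bayes' theorem."""
    joint = [p * l for p, l in zip(priors, likelihoods)]
    total = sum(joint)
    return [j / total for j in joint]

def best_decision(post, loss):
    """Choose the decision with the smallest posterior-expected loss.

    loss[d][h] is the loss incurred by decision d when hypothesis h is true.
    """
    expected = [sum(l_dh * p for l_dh, p in zip(row, post)) for row in loss]
    return min(range(len(loss)), key=lambda d: expected[d])

# Two hypotheses for a signal-detection problem:
#   H0 = noise only, H1 = signal present (illustrative numbers).
priors = [0.9, 0.1]
likelihoods = [0.05, 0.60]   # p(data | H0), p(data | H1) for the observed data

post = posterior(priors, likelihoods)

# Decisions: 0 = report "no signal", 1 = report "signal".
# A false alarm costs 1 unit; a missed signal costs 5.
loss = [[0.0, 5.0],
        [1.0, 0.0]]
decision = best_decision(post, loss)
```

Note that although the prior still strongly favors H0, the posterior (about 0.57 for H1 here) combined with the asymmetric loss makes "report signal" the optimal decision; this separation of inference (posterior) from action (loss) is the point of the procedure.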
Probability Theory: The Logic of Science, pp. 426–450
Publisher: Cambridge University Press
Print publication year: 2003