Book contents
- Frontmatter
- Contents
- Editor's foreword
- Preface
- Part I Principles and elementary applications
- Part II Advanced applications
- 11 Discrete prior probabilities: the entropy principle
- 12 Ignorance priors and transformation groups
- 13 Decision theory, historical background
- 14 Simple applications of decision theory
- 15 Paradoxes of probability theory
- 16 Orthodox methods: historical background
- 17 Principles and pathology of orthodox statistics
- 18 The Ap distribution and rule of succession
- 19 Physical measurements
- 20 Model comparison
- 21 Outliers and robustness
- 22 Introduction to communication theory
- Appendix A Other approaches to probability theory
- Appendix B Mathematical formalities and style
- Appendix C Convolutions and cumulants
- References
- Bibliography
- Author index
- Subject index
11 - Discrete prior probabilities: the entropy principle
from Part II - Advanced applications
Published online by Cambridge University Press: 05 September 2012
Summary
At this point, we return to the job of designing the robot. We have part of its brain designed, and we have seen how it would reason in a few simple problems of hypothesis testing and estimation. In every problem it has solved thus far, the results have either amounted to the same thing as, or been demonstrably superior to, those offered in the ‘orthodox’ statistical literature. But it is still not a very versatile reasoning machine, because it has only one means by which it can translate raw information into numerical values of probabilities: the principle of indifference (2.95). Consistency requires it to recognize the relevance of prior information, and so in almost every problem it is faced at the outset with the problem of assigning initial probabilities, whether they are technically called prior probabilities or sampling probabilities. It can use indifference for this if it can break the situation up into mutually exclusive, exhaustive possibilities in such a way that no one of them is preferred to any other by the evidence. But often there will be prior information that does not change the set of possibilities but does give a reason for preferring one possibility to another. What do we do in this case?
Orthodoxy evades this problem by simply ignoring prior information for fixed parameters, and maintaining the fiction that sampling probabilities are known frequencies.
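The chapter's title names the answer developed in its body: the entropy principle, which selects the distribution of maximum entropy among all those consistent with the stated constraints. As an illustrative sketch only (the `maxent_die` helper, the six-face setup, and the mean-4.5 constraint are assumptions chosen to echo the well-known dice example, not code from the book), the case where prior information fixes the mean of a die can be solved with a single Lagrange multiplier, found here by bisection:

```python
import math

def maxent_die(target_mean, faces=6, tol=1e-12):
    """Maximum-entropy distribution on faces 1..faces with a fixed mean.

    The maxent solution has the exponential form p_i proportional to
    exp(-lam * i); the resulting mean is strictly decreasing in lam,
    so we can solve for lam by bisection.
    """
    def mean(lam):
        w = [math.exp(-lam * i) for i in range(1, faces + 1)]
        z = sum(w)  # partition function
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z

    lo, hi = -50.0, 50.0  # bracket: mean(lo) ~ faces, mean(hi) ~ 1
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mean(mid) > target_mean:
            lo = mid  # mean still too high: need larger lam
        else:
            hi = mid
    lam = (lo + hi) / 2
    w = [math.exp(-lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]

# A mean of 4.5 (above the uniform value 3.5) tilts the weight
# monotonically toward the high faces; a mean of 3.5 recovers
# the uniform assignment of the principle of indifference.
p = maxent_die(4.5)
```

With no constraint beyond normalization the multiplier is zero and the method reduces to indifference, which is the sense in which the entropy principle extends, rather than replaces, the robot's one existing tool.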
- Probability Theory: The Logic of Science, pp. 343–371. Publisher: Cambridge University Press. Print publication year: 2003