8 - Singular statistics
Published online by Cambridge University Press: 10 January 2011
Summary
In this chapter, we study statistical model evaluation and statistical hypothesis tests in singular learning machines. First, we show that no universally optimal learning method exists in general, so model evaluation and hypothesis tests are necessary in statistics. Second, we analyze two evaluation criteria, the stochastic complexity and the generalization error, in singular learning machines. Third, we show a method for constructing a statistical hypothesis test when the null hypothesis is a singularity of the alternative hypothesis. We then introduce methods by which the Bayes a posteriori distribution can be generated, discussing Markov chain Monte Carlo and the variational approximation. In the last part of this chapter, we compare regular and singular learning theories. Regular learning theory is based on the quadratic approximation of the log likelihood ratio function and the central limit theorem on the parameter space, whereas singular learning theory is based on the resolution of singularities and the central limit theorem on the functional space. Mathematically speaking, this book generalizes regular learning theory to singular statistical models.
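As a minimal sketch of how a Bayes a posteriori distribution can be generated by Markov chain Monte Carlo, the following random-walk Metropolis sampler draws from an unnormalized posterior density. The target here is a toy standard normal in place of exp(−nLₙ(w))φ(w); the names `log_posterior`, `metropolis`, `step`, and `n_samples` are illustrative assumptions, not notation from the text.

```python
import numpy as np

def log_posterior(w):
    # Toy unnormalized log-posterior: a standard normal on R^2.
    # In practice this would be log p(data | w) + log prior(w).
    return -0.5 * np.dot(w, w)

def metropolis(log_p, w0, n_samples=5000, step=0.5, seed=0):
    """Random-walk Metropolis: propose w' = w + step * noise and
    accept with probability min(1, p(w') / p(w))."""
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float)
    lp = log_p(w)
    samples = []
    for _ in range(n_samples):
        proposal = w + step * rng.standard_normal(w.shape)
        lp_new = log_p(proposal)
        # Accept or reject in log space to avoid overflow.
        if np.log(rng.uniform()) < lp_new - lp:
            w, lp = proposal, lp_new
        samples.append(w.copy())
    return np.array(samples)

samples = metropolis(log_posterior, w0=[0.0, 0.0])
```

Posterior expectations, such as the predictive distribution or the stochastic complexity via thermodynamic integration, are then approximated by averages over `samples`.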
Universally optimal learning
There are many statistical estimation methods. One might expect that there is a universally optimal method, one that always gives a smaller generalization error than any other method. However, in general, no such method exists.
Assumption. Assume that Φ(w) is a probability density function on ℝᵈ, and that a parameter w is chosen according to Φ(w).
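The claim that no method dominates all others can be illustrated with a hedged toy experiment, not taken from the text: for estimating the mean of N(w, 1), compare the sample mean with a shrinkage estimator that halves it. Which one achieves the smaller squared error depends on the true w, so neither is universally optimal.

```python
import numpy as np

rng = np.random.default_rng(1)

def mse(estimator, w_true, n=10, trials=20000):
    """Monte Carlo estimate of the mean squared error of an
    estimator applied to the sample mean of n draws from N(w_true, 1)."""
    x = rng.normal(w_true, 1.0, size=(trials, n))
    est = estimator(x.mean(axis=1))
    return np.mean((est - w_true) ** 2)

mle = lambda m: m          # sample mean (maximum likelihood)
shrink = lambda m: 0.5 * m  # shrinkage toward the origin

# Near w = 0 the shrinkage estimator wins; far from 0 the
# sample mean wins, so neither dominates for every w.
```

Averaged over a prior Φ(w) concentrated near the origin, shrinkage would be preferred; for a prior spread far from it, the sample mean would be. This is why model evaluation must refer to an assumption such as the one above.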
Algebraic Geometry and Statistical Learning Theory, pp. 249–276. Publisher: Cambridge University Press. Print publication year: 2009.