13 - Summary and conclusions
Published online by Cambridge University Press: 05 June 2012
Summary
I wake to sleep, and take my waking slow.
I feel my fate in what I cannot fear.
I learn by going where I have to go.
Theodore Roethke

Where have we been?
The world of learning machines continues to grow rapidly, expand in novel directions, and extend the search for explanations of its earlier successes. We have seen that previously hidden links between machines are being uncovered, and that alternative versions of machines can markedly improve on earlier ones. The subject is far from a mature technology. Our discussion of learning machines is, therefore, only a narrow snapshot of what was known and available at the time of writing (Summer 2010). Yet some broad conclusions are nonetheless clear. We also discuss several kinds of interesting machines that are of very recent vintage, or that should be invented right away.
Statistical learning machines are computer-intensive schemes that allow the researcher to venture into data realms where most of the classical statistical assumptions no longer apply. They are often massively nonparametric and highly nonlinear, yet they benefit from fine-tuning when applied to specific datasets. Since these machines do not depend on familiar distributional assumptions about the data – which need not be multivariate normal, for example – the theoretical results showing how well they can perform are necessarily deep and nontrivial. This theory could be outlined only briefly in Section 2.11.
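As a small illustration of this nonparametric flavor, the sketch below (plain Python, with hypothetical toy data not drawn from the book) classifies a point by majority vote among its nearest neighbors; at no step is any distributional form, normal or otherwise, assumed for the data:

```python
import math
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Classify point x by majority vote among its k nearest
    training points (Euclidean distance). No distributional
    assumptions about the data are required."""
    dists = sorted((math.dist(p, x), y) for p, y in zip(train, labels))
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical two-class data in the plane.
train = [(0.0, 0.0), (0.1, 0.2), (1.0, 1.0), (0.9, 1.1)]
labels = ["a", "a", "b", "b"]
print(knn_predict(train, labels, (0.05, 0.1)))  # a point near the "a" cluster
```

The only tunable quantity here is k, which echoes the point above: such machines avoid parametric assumptions but still reward fine-tuning on the dataset at hand.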
Statistical Learning for Biomedical Data, pp. 255–262. Publisher: Cambridge University Press. Print publication year: 2011.