Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Executive Summary
- 1 Rudiments of Statistical Learning Theory
- 2 Vapnik–Chervonenkis Dimension
- 3 Learnability for Binary Classification
- 4 Support Vector Machines
- 5 Reproducing Kernel Hilbert Spaces
- 6 Regression and Regularization
- 7 Clustering
- 8 Dimension Reduction
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Appendices
- References
- Index
2 - Vapnik–Chervonenkis Dimension
from Part One - Machine Learning
Published online by Cambridge University Press: 21 April 2022
Summary
In this chapter, the notion of the Vapnik–Chervonenkis dimension is formally defined and illustrated with several examples. The behavior of the related shatter function is elucidated through the Sauer lemma, which is proved with the help of the Pajor lemma.
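As a toy illustration of the concepts the summary mentions (this sketch is not from the book), one can compute the shatter function of a small hypothesis class directly and compare it with the Sauer bound. The class of indicator functions of intervals on the real line has VC dimension 2, so the Sauer lemma bounds its shatter function on n points by the sum of binomial coefficients C(n, 0) + C(n, 1) + C(n, 2):

```python
from math import comb

def shatter_count(points, hypotheses):
    """Number of distinct labelings of `points` realized by `hypotheses`."""
    return len({tuple(h(x) for x in points) for h in hypotheses})

# Toy hypothesis class: indicators of closed intervals [a, b],
# a classical example with VC dimension 2.
def interval(a, b):
    return lambda x: a <= x <= b

points = [1.0, 2.0, 3.0]
# These endpoints suffice to realize every labeling of `points`
# achievable by some interval.
endpoints = [0.5, 1.5, 2.5, 3.5]
hypotheses = [interval(a, b) for a in endpoints for b in endpoints if a <= b]

n, d = len(points), 2  # d = VC dimension of intervals
sauer_bound = sum(comb(n, i) for i in range(d + 1))
print(shatter_count(points, hypotheses), "<=", sauer_bound)  # prints: 7 <= 7
```

On 3 points the labeling {first, third} without the middle point is unachievable by any interval, so only 7 of the 8 possible labelings occur, matching the Sauer bound C(3,0) + C(3,1) + C(3,2) = 7 exactly; on 4 or more points the bound is strict.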
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 10–15. Publisher: Cambridge University Press. Print publication year: 2022