Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Executive Summary
- 24 First Encounter with ReLU Networks
- 25 Expressiveness of Shallow Networks
- 26 Various Advantages of Depth
- 27 Tidbits on Neural Network Training
- Appendices
- References
- Index
25 - Expressiveness of Shallow Networks
from Part Five - Neural Networks
Published online by Cambridge University Press: 21 April 2022
Summary
In this chapter, it is proved that the set of multivariate functions generated by shallow networks is dense in the space of continuous functions on a compact set if and only if the activation function is not a polynomial. For the specific choice of the ReLU activation function, a two-sided estimate of the approximation rate of Lipschitz functions by shallow networks is also provided. The argument for the lower estimate makes use of an upper estimate on the VC-dimension of shallow ReLU networks.
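For orientation, here is the standard form of the objects in question; the notation below is a common convention and an assumption on my part, not necessarily the chapter's own. A shallow (one-hidden-layer) network of width $n$ with activation function $\sigma$ computes a function of the form
$$
x \in \mathbb{R}^d \;\longmapsto\; \sum_{j=1}^{n} a_j \,\sigma\big(\langle w_j, x \rangle + b_j\big),
\qquad a_j, b_j \in \mathbb{R}, \; w_j \in \mathbb{R}^d,
$$
and the density statement asserts that such functions are dense in $C(K)$ for every compact $K \subseteq \mathbb{R}^d$ precisely when $\sigma$ is not a polynomial. The ReLU case corresponds to the choice $\sigma(t) = \max\{t, 0\}$.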
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 216-225
- Publisher: Cambridge University Press
- Print publication year: 2022