Book contents
- Frontmatter
- Dedication
- Contents
- Preface
- Notation
- Part One Machine Learning
- Part Two Optimal Recovery
- Part Three Compressive Sensing
- Part Four Optimization
- Part Five Neural Networks
- Executive Summary
- 24 First Encounter with ReLU Networks
- 25 Expressiveness of Shallow Networks
- 26 Various Advantages of Depth
- 27 Tidbits on Neural Network Training
- Appendices
- References
- Index
24 - First Encounter with ReLU Networks
from Part Five - Neural Networks
Published online by Cambridge University Press: 21 April 2022
Summary
This chapter starts by introducing the key concepts attached to neural networks, such as architecture, weights, biases, and activation function. It proceeds with the specific choice of the rectified linear unit (ReLU) as activation function. In this case, neural networks generate continuous piecewise linear (CPwL) functions. It is then shown that, in the univariate setting, any CPwL function can be generated by a shallow ReLU network. This is no longer true in the multivariate setting, for which it is nonetheless shown that any CPwL function can be generated by a deep ReLU network.
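The univariate claim above can be made concrete with a small sketch (not from the book; the function names and the example "hat" function are illustrative). Any univariate CPwL function with breakpoints t_1 < … < t_n can be written as a sum of shifted ReLUs, where each ReLU unit's output weight equals the jump in slope at its breakpoint — i.e., a shallow (one-hidden-layer) ReLU network:

```python
def relu(x):
    """Rectified linear unit: max(x, 0)."""
    return max(x, 0.0)

def shallow_relu_net(x, weights, biases):
    """Evaluate a one-hidden-layer ReLU network sum_i w_i * relu(x + b_i)."""
    return sum(w * relu(x + b) for w, b in zip(weights, biases))

# Example: the "hat" function, a CPwL function with breakpoints 0, 1, 2
# and slopes 0, +1, -1, 0 on its four pieces. The output weight of each
# unit is the slope change at its breakpoint:
#   f(x) = ReLU(x) - 2*ReLU(x - 1) + ReLU(x - 2)
weights = [1.0, -2.0, 1.0]
biases = [0.0, -1.0, -2.0]
```

For instance, `shallow_relu_net(1.0, weights, biases)` returns `1.0` (the peak of the hat) and the function vanishes outside [0, 2], matching the intended CPwL function piece by piece.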
- Type: Chapter
- Information: Mathematical Pictures at a Data Science Exhibition, pp. 208-215
- Publisher: Cambridge University Press
- Print publication year: 2022