Book contents
- Frontmatter
- Contents
- Preface
- 0 Initialization
- 1 Pretraining
- 2 Neural Networks
- 3 Effective Theory of Deep Linear Networks at Initialization
- 4 RG Flow of Preactivations
- 5 Effective Theory of Preactivations at Initialization
- 6 Bayesian Learning
- 7 Gradient-Based Learning
- 8 RG Flow of the Neural Tangent Kernel
- 9 Effective Theory of the NTK at Initialization
- 10 Kernel Learning
- 11 Representation Learning
- ∞ The End of Training
- ε Epilogue: Model Complexity from the Macroscopic Perspective
- A Information in Deep Learning
- B Residual Learning
- References
- Index
A - Information in Deep Learning
Published online by Cambridge University Press: 05 May 2022
- Type: Chapter
- Book: The Principles of Deep Learning Theory: An Effective Theory Approach to Understanding Neural Networks, pp. 399–424
- Publisher: Cambridge University Press
- Print publication year: 2022