Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 Foundations of Smooth Optimization
- 3 Descent Methods
- 4 Gradient Methods Using Momentum
- 5 Stochastic Gradient
- 6 Coordinate Descent
- 7 First-Order Methods for Constrained Optimization
- 8 Nonsmooth Functions and Subgradients
- 9 Nonsmooth Optimization Methods
- 10 Duality and Algorithms
- 11 Differentiation and Adjoints
- Appendix
- Bibliography
- Index
6 - Coordinate Descent
Published online by Cambridge University Press: 31 March 2022
Summary
This chapter describes the coordinate descent approach, in which a single variable (or a block of variables) is updated at each iteration, usually based on partial derivative information for those variables, while the remainder are left unchanged. We describe two problems in machine learning for which this approach has potential advantages relative to the approaches described in previous chapters (which make use of the full gradient), and present convergence analyses for the randomized and cyclic versions of this approach. We show that convergence rates of block coordinate descent methods can be analyzed in a similar fashion to the basic single-component methods.
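As a concrete illustration of the single-coordinate update described in the summary, the sketch below applies randomized coordinate descent to a least-squares objective. The objective, the coordinate-wise step sizes, and the function name `randomized_coordinate_descent` are illustrative assumptions, not examples drawn from the chapter itself.

```python
# Minimal sketch of randomized coordinate descent on
# f(x) = 0.5 * ||A x - b||^2 (an assumed example objective).
import numpy as np

def randomized_coordinate_descent(A, b, num_iters=1000, seed=0):
    """Minimize 0.5 * ||A x - b||^2, updating one coordinate per iteration."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    # Coordinate-wise Lipschitz constants: L_i = ||A[:, i]||^2.
    L = np.sum(A ** 2, axis=0)
    residual = A @ x - b             # maintained so each step stays cheap
    for _ in range(num_iters):
        i = rng.integers(n)          # choose a coordinate uniformly at random
        grad_i = A[:, i] @ residual  # partial derivative with respect to x_i
        step = grad_i / L[i]         # exact minimization along coordinate i
        x[i] -= step
        residual -= step * A[:, i]   # keep residual consistent with new x_i
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((50, 10))
    b = rng.standard_normal(50)
    x_cd = randomized_coordinate_descent(A, b, num_iters=5000)
    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("distance from least-squares solution:", np.linalg.norm(x_cd - x_ls))
```

Note that each iteration touches only one column of A, which is the kind of per-iteration saving, relative to full-gradient methods, that motivates the approach.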
- Type: Chapter
- Information: Optimization for Data Analysis, pp. 100-117. Publisher: Cambridge University Press. Print publication year: 2022.