Book contents
- Frontmatter
- Contents
- Preface
- 1 Introduction
- 2 Foundations of Smooth Optimization
- 3 Descent Methods
- 4 Gradient Methods Using Momentum
- 5 Stochastic Gradient
- 6 Coordinate Descent
- 7 First-Order Methods for Constrained Optimization
- 8 Nonsmooth Functions and Subgradients
- 9 Nonsmooth Optimization Methods
- 10 Duality and Algorithms
- 11 Differentiation and Adjoints
- Appendix
- Bibliography
- Index
7 - First-Order Methods for Constrained Optimization
Published online by Cambridge University Press: 31 March 2022
Summary
Here, we describe methods for minimizing a smooth function over a closed convex set, using gradient information. We first state results that characterize optimality of points in a way that can be checked, and describe the vital operation of projection onto the feasible set. We then describe the projected gradient algorithm, which is in a sense the extension of the steepest-descent method to the constrained case, analyze its convergence, and describe several extensions. Finally, we analyze the conditional-gradient method (also known as "Frank–Wolfe") for the case in which the feasible set is compact, and demonstrate sublinear convergence of this approach when the objective function is convex.
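A standard checkable optimality condition of the kind mentioned above, for a smooth function f over a closed convex set Ω, is the first-order condition ∇f(x*)ᵀ(x − x*) ≥ 0 for all x ∈ Ω; it is necessary for optimality and also sufficient when f is convex. As a rough illustration of the two algorithms the summary names, the following Python sketch applies projected gradient and the conditional-gradient (Frank–Wolfe) method to a small convex quadratic over the box [0, 1]ⁿ. The problem data (A, b), the choice of feasible set, the step lengths 1/L and 2/(k+2), and all function names are illustrative assumptions, not details taken from the chapter.

```python
import numpy as np

# Hypothetical example problem: minimize f(x) = 0.5 * ||A x - b||^2
# over the box Omega = [0, 1]^n (closed, convex, and compact).
rng = np.random.default_rng(0)
n, m = 20, 50
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

def f(x):
    r = A @ x - b
    return 0.5 * r @ r

def grad_f(x):
    return A.T @ (A @ x - b)

# Lipschitz constant of grad f (largest eigenvalue of A^T A);
# used as the fixed step length 1/L in projected gradient.
L = np.linalg.norm(A, 2) ** 2

def project_box(x):
    """Euclidean projection onto [0, 1]^n (componentwise clipping)."""
    return np.clip(x, 0.0, 1.0)

def projected_gradient(x0, iters=500):
    """Iterate x_{k+1} = P_Omega(x_k - (1/L) grad f(x_k))."""
    x = project_box(x0)
    for _ in range(iters):
        x = project_box(x - grad_f(x) / L)
    return x

def frank_wolfe(x0, iters=500):
    """Conditional gradient: linear minimization over the box,
    then a convex combination with step alpha_k = 2/(k+2)."""
    x = project_box(x0)
    for k in range(iters):
        g = grad_f(x)
        # LMO over [0,1]^n: s_i = 0 where g_i >= 0, s_i = 1 where g_i < 0.
        s = (g < 0).astype(float)
        alpha = 2.0 / (k + 2)
        x = x + alpha * (s - x)
    return x

x_pg = projected_gradient(np.zeros(n))
x_fw = frank_wolfe(np.zeros(n))
print(f(x_pg), f(x_fw))
```

The sketch highlights the structural contrast the chapter draws: projected gradient needs a Euclidean projection onto Ω at every step, whereas Frank–Wolfe only needs a linear minimization oracle over Ω and stays feasible by taking convex combinations, which is why it requires Ω to be compact.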
- Type: Chapter
- Information: Optimization for Data Analysis, pp. 118–131. Publisher: Cambridge University Press. Print publication year: 2022.