Book contents
- Frontmatter
- Contents
- Preface
- I An Introduction to the Techniques
- 1 An Introduction to Approximation Algorithms
- 2 Greedy Algorithms and Local Search
- 3 Rounding Data and Dynamic Programming
- 4 Deterministic Rounding of Linear Programs
- 5 Random Sampling and Randomized Rounding of Linear Programs
- 6 Randomized Rounding of Semidefinite Programs
- 7 The Primal-Dual Method
- 8 Cuts and Metrics
- II Further Uses of the Techniques
- Appendix A Linear Programming
- Appendix B NP-Completeness
- Bibliography
- Author Index
- Subject Index
2 - Greedy Algorithms and Local Search
from I - An Introduction to the Techniques
Published online by Cambridge University Press: 05 June 2012
Summary
In this chapter, we will consider two standard and related techniques for designing algorithms and heuristics, namely, greedy algorithms and local search algorithms. Both techniques work by making a sequence of decisions, each of which optimizes some local choice, though these local choices might not lead to the best overall solution.
In a greedy algorithm, a solution is constructed step by step, and at each step of the algorithm the next part of the solution is constructed by making some decision that is locally the best possible. In Section 1.6, we gave an example of a greedy algorithm for the set cover problem that constructs a set cover by repeatedly choosing the set that minimizes the ratio of its weight to the number of currently uncovered elements it contains.
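The greedy rule described above can be sketched as follows. This is a minimal illustrative implementation, not the book's own pseudocode; the function name and data representation (sets as Python `set` objects, weights as a parallel list) are assumptions made here for concreteness.

```python
def greedy_set_cover(universe, sets, weights):
    """Greedy set cover: repeatedly choose the set minimizing the
    ratio of its weight to the number of currently uncovered
    elements it contains, until every element is covered."""
    uncovered = set(universe)
    cover = []  # indices of chosen sets
    while uncovered:
        # among sets covering at least one uncovered element,
        # pick the one with the smallest weight-to-coverage ratio
        best = min(
            (i for i in range(len(sets)) if sets[i] & uncovered),
            key=lambda i: weights[i] / len(sets[i] & uncovered),
        )
        cover.append(best)
        uncovered -= sets[best]
    return cover
```

For example, with universe {1, ..., 5}, sets {1,2,3}, {2,4}, {3,4}, {4,5} and weights 5, 10, 3, 1, the algorithm first picks {4,5} (ratio 1/2) and then {1,2,3} (ratio 5/3), yielding a cover of total weight 6.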
A local search algorithm starts with an arbitrary feasible solution to the problem, and then checks whether some small, local change to the solution results in an improved objective function value. If so, the change is made. When no further improving change can be made, we have a locally optimal solution, and it is sometimes possible to prove that such locally optimal solutions have value close to that of the optimal solution. Unlike other approximation algorithm design techniques, the most straightforward implementation of a local search algorithm typically does not run in polynomial time. The algorithm usually requires some restriction on the local changes allowed, in order to ensure that each improving step makes enough progress for a locally optimal solution to be found in polynomial time.
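As a concrete sketch of the local search paradigm, consider unweighted maximum cut: start from an arbitrary partition of the vertices and, as long as moving some vertex to the other side increases the number of cut edges, make that move. The choice of problem and the code below are ours, for illustration only; any locally optimal solution here cuts at least half of the edges, since otherwise some vertex would have more same-side than cross neighbors and flipping it would improve the cut.

```python
def local_search_max_cut(n, edges):
    """Local search for unweighted max cut on vertices 0..n-1.
    Start from an arbitrary feasible solution (everything on side 0)
    and flip any vertex whose move strictly increases the cut."""
    side = [0] * n
    improved = True
    while improved:
        improved = False
        for v in range(n):
            # edges incident to v, split by whether they cross the cut
            same = sum(1 for a, b in edges
                       if v in (a, b) and side[a] == side[b])
            cross = sum(1 for a, b in edges
                        if v in (a, b) and side[a] != side[b])
            if same > cross:  # flipping v gains same - cross edges
                side[v] ^= 1
                improved = True
    return side
```

Note that this straightforward version only stops when no single flip helps; as the paragraph above observes, bounding the number of improving steps (e.g., by requiring each flip to improve the cut by some minimum factor) is what yields a polynomial running time in general.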
Chapter information: The Design of Approximation Algorithms, pp. 27-56. Publisher: Cambridge University Press. Print publication year: 2011.