Book contents
- Frontmatter
- Contents
- Contributors
- Introduction
- Part 1 Graphical Structure
- Part 2 Language Restrictions
- Part 3 Algorithms and their Analysis
- 6 Tree-Reweighted Message Passing
- 7 Tractable Optimization in Machine Learning
- 8 Approximation Algorithms
- 9 Kernelization Methods for Fixed-Parameter Tractability
- Part 4 Tractability in Some Specific Areas
- Part 5 Heuristics
9 - Kernelization Methods for Fixed-Parameter Tractability
from Part 3 - Algorithms and their Analysis
Published online by Cambridge University Press: 05 February 2014
Summary
Preprocessing, or data reduction, means reducing a problem to something simpler by solving an easy part of the input. Algorithms of this type are used in almost every application. Despite the wide practical use of preprocessing, a systematic theoretical study of such algorithms has long been elusive. The framework of parameterized complexity offers an approach to analysing preprocessing algorithms. In this framework the algorithms have, in addition to the input, an extra parameter that is likely to be small. This has led to the study of preprocessing algorithms that reduce the size of the input to a function of the parameter alone (independent of the input size). Preprocessing algorithms of this type are called kernelization algorithms. In this survey we give an overview of classical and new techniques in the design of such algorithms.
Introduction
Preprocessing (data reduction or kernelization) as a strategy for coping with hard problems is used in many situations. The history of this approach can be traced back to the 1950s [34], when truth functions were simplified using reduction rules. A natural question arises: how can we measure the quality of the preprocessing rules proposed for a specific problem? For a long time the mathematical analysis of polynomial-time preprocessing algorithms was neglected. The basic reason for this oversight was the following impossibility result: if, starting with an instance I of an NP-hard problem, we could compute in polynomial time an equivalent instance I′ with |I′| < |I|, then repeating this step would shrink any instance down to constant size after polynomially many rounds, yielding a polynomial-time algorithm for the problem; it would follow that P = NP, contradicting classical complexity assumptions.
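To make the notion of a kernel described in the summary concrete, here is a minimal sketch (not taken from the chapter) of the classical Buss kernelization for Vertex Cover parameterized by the solution size k. The function name buss_kernel and the adjacency-dict representation are illustrative choices; the two reduction rules and the k² edge bound are the standard ones.

```python
# Sketch of Buss's kernelization for VERTEX COVER parameterized by k.
# The graph is a dict mapping each vertex to the set of its neighbours.

def buss_kernel(adj, k):
    """Return (kernel_adj, k', feasible): a reduced instance equivalent to
    the original, or feasible=False if no vertex cover of size <= k exists."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy

    changed = True
    while changed and k >= 0:
        changed = False
        for v in list(adj):
            if len(adj[v]) > k:
                # Rule 1: a vertex of degree > k must be in every cover of
                # size <= k, so take it into the cover and decrease k.
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                k -= 1
                changed = True
            elif len(adj[v]) == 0:
                # Rule 2: isolated vertices cover nothing; drop them.
                del adj[v]
                changed = True

    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    if k < 0 or edges > k * k:
        # Every remaining vertex has degree <= k, so a cover of size k
        # covers at most k^2 edges; more edges means a NO-instance.
        return {}, k, False
    return adj, k, True  # kernel: at most k^2 edges, hence O(k^2) vertices


if __name__ == "__main__":
    # A 4-star plus a triangle: with k = 3 the star centre must be picked,
    # leaving a triangle kernel with parameter 2.
    g = {0: {1, 2, 3, 4}, 1: {0}, 2: {0}, 3: {0}, 4: {0},
         5: {6, 7}, 6: {5, 7}, 7: {5, 6}}
    print(buss_kernel(g, 3))
```

The reduced instance depends only on k (at most k² edges survive), which is exactly the guarantee a kernelization algorithm is required to give.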
- Type: Chapter
- Information: Tractability: Practical Approaches to Hard Problems, pp. 260-282
- Publisher: Cambridge University Press
- Print publication year: 2014