Parameterized complexity

In computer science, parameterized complexity is a branch of computational complexity theory that focuses on classifying computational problems according to their inherent difficulty with respect to multiple parameters of the input or output. The complexity of a problem is then measured as a function of those parameters. This allows the classification of NP-hard problems on a finer scale than in the classical setting, where the complexity of a problem is only measured as a function of the number of bits in the input. This appears to have been first demonstrated in Gurevich, Stockmeyer & Vishkin (1984). The first systematic work on parameterized complexity was done by Downey & Fellows (1999).

Under the assumption that P ≠ NP, there exist many natural problems that require superpolynomial running time when complexity is measured in terms of the input size only but that are computable in a time that is polynomial in the input size and exponential or worse in a parameter k. Hence, if k is fixed at a small value and the growth of the function over k is relatively small then such problems can still be considered "tractable" despite their traditional classification as "intractable".

The existence of efficient, exact, and deterministic solving algorithms for NP-complete, or otherwise NP-hard, problems is considered unlikely if input parameters are not fixed; all known solving algorithms for these problems require time that is exponential (so in particular superpolynomial) in the total size of the input. However, some problems can be solved by algorithms that are exponential only in the size of a fixed parameter while polynomial in the size of the input. Such an algorithm is called a fixed-parameter tractable (FPT) algorithm, because the problem can be solved efficiently (i.e., in polynomial time) for constant values of the fixed parameter.

Problems in which some parameter k is fixed are called parameterized problems. A parameterized problem that allows for such an FPT algorithm is said to be a fixed-parameter tractable problem and belongs to the class FPT, and the early name of the theory of parameterized complexity was fixed-parameter tractability.

Many problems have the following form: given an object x and a nonnegative integer k, does x have some property that depends on k? For instance, for the vertex cover problem, the parameter can be the number of vertices in the cover. In many applications, for example when modelling error correction, one can assume the parameter to be "small" compared to the total input size. Then it is challenging to find an algorithm that is exponential only in k, and not in the input size.

In this way, parameterized complexity can be seen as two-dimensional complexity theory. This concept is formalized as follows:

A parameterized problem is a language $L \subseteq \Sigma^* \times \mathbb{N}$, where $\Sigma$ is a finite alphabet. The second component is called the parameter of the problem.
A parameterized problem $L$ is fixed-parameter tractable if the question "$(x, k) \in L$?" can be decided in running time $f(k) \cdot |x|^{O(1)}$, where $f$ is an arbitrary function depending only on $k$. The corresponding complexity class is called FPT.
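To make the definition concrete, the comparison below contrasts a running time of the FPT form with one that is not; the specific bounds $2^k \cdot n^3$ and $n^k$ are illustrative examples only, not results quoted from the literature.

```latex
% Illustrative comparison (example bounds only; requires amsmath for \text).
% A bound of the form f(k) * n^{O(1)} witnesses fixed-parameter tractability;
% a bound in which the exponent of n grows with k is not of this form.
\[
  \underbrace{2^{k} \cdot n^{3}}_{\text{FPT form: } f(k)\cdot n^{O(1)} \text{ with } f(k)=2^{k}}
  \qquad\text{versus}\qquad
  \underbrace{n^{k}}_{\text{not of FPT form: the exponent of } n \text{ depends on } k}
\]
```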

For example, there is an algorithm that solves the vertex cover problem in time $O(kn + 1.274^k)$,[1] where n is the number of vertices and k is the size of the vertex cover. This means that vertex cover is fixed-parameter tractable with the size of the solution as the parameter.
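The algorithm behind that bound is involved; as a self-contained illustration of fixed-parameter tractability, the following sketch implements the much simpler classic bounded-search-tree algorithm, which runs in roughly $O(2^k \cdot m)$ time on a graph with $m$ edges. The edge-list representation and the function name are choices made for this example only.

```python
from typing import List, Optional, Set, Tuple


def vertex_cover_branching(edges: List[Tuple[int, int]], k: int) -> Optional[Set[int]]:
    """Return a vertex cover of size at most k, or None if none exists.

    Bounded search tree: every cover must contain an endpoint of each edge,
    so pick any remaining edge (u, v) and branch on taking u or taking v.
    The recursion depth is at most k, giving roughly O(2^k * m) time:
    exponential only in the parameter k, polynomial in the input size.
    """
    if not edges:
        return set()          # nothing left to cover
    if k == 0:
        return None           # edges remain, but the budget is exhausted
    u, v = edges[0]
    # Branch 1: take u into the cover and delete all edges incident to u.
    cover = vertex_cover_branching([(a, b) for (a, b) in edges if u not in (a, b)], k - 1)
    if cover is not None:
        return cover | {u}
    # Branch 2: take v into the cover and delete all edges incident to v.
    cover = vertex_cover_branching([(a, b) for (a, b) in edges if v not in (a, b)], k - 1)
    if cover is not None:
        return cover | {v}
    return None


if __name__ == "__main__":
    # Path 1-2-3-4: a cover of size 2 exists, but no cover of size 1 does.
    path = [(1, 2), (2, 3), (3, 4)]
    print(vertex_cover_branching(path, 2))  # {1, 3}
    print(vertex_cover_branching(path, 1))  # None
```

For fixed k the running time is linear in the number of edges, which is exactly the sense in which the problem is "tractable" for small parameter values even though it is NP-complete in general.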