On the Infeasibility of
Obtaining Polynomial Kernels




        why some problems are incompressible
PROBLEM KERNELS
what · why · how

when: no polynomial kernels

Examples: satisfiability · vertex-disjoint cycles · disjoint factors

Workarounds: many kernels
ere are  people at a professor’s party.


e professor would like to locate a group of six people who are popular.


                i.e, everyone is a friend of at least one of the six.


  Being a busy man, he asks two of his students to find such a group.

                                                         (25)
        e first grimaces and starts making a list of       6    possibilities.


e second knows that no one in the party has more than three friends .



   
       Academicians tend to be lonely.
ere are  people at a professor’s party.


e professor would like to locate a group of six people who are popular.


                i.e, everyone is a friend of at least one of the six.


  Being a busy man, he asks two of his students to find such a group.

                                                         (25)
        e first grimaces and starts making a list of       6    possibilities.


e second knows that no one in the party has more than three friends .



   
       Academicians tend to be lonely.
ere is a graph on n vertices.


e professor would like to locate a group of six people who are popular.


                i.e, everyone is a friend of at least one of the six.


  Being a busy man, he asks two of his students to find such a group.

                                                         (25)
        e first grimaces and starts making a list of       6    possibilities.


e second knows that no one in the party has more than three friends .



   
       Academicians tend to be lonely.
ere is a graph on n vertices.


                   Is there a dominating set of size at most k?


               i.e, everyone is a friend of at least one of the six.


  Being a busy man, he asks two of his students to find such a group.

                                                        (25)
       e first grimaces and starts making a list of       6    possibilities.


e second knows that no one in the party has more than three friends .



  
      Academicians tend to be lonely.
ere is a graph on n vertices.


                   Is there a dominating set of size at most k?


         (Every vertex is a neighbor of at least one of the k vertices.)


  Being a busy man, he asks two of his students to find such a group.

                                                      (25)
       e first grimaces and starts making a list of     6    possibilities.


e second knows that no one in the party has more than three friends .



  
      Academicians tend to be lonely.
ere is a graph on n vertices.


                   Is there a dominating set of size at most k?


         (Every vertex is a neighbor of at least one of the k vertices.)


                          e problem is NP–complete,

                                                      (25)
       e first grimaces and starts making a list of     6    possibilities.


e second knows that no one in the party has more than three friends .



  
      Academicians tend to be lonely.
ere is a graph on n vertices.


                   Is there a dominating set of size at most k?


         (Every vertex is a neighbor of at least one of the k vertices.)


                          e problem is NP–complete,


               but is trivial on “large” graphs of bounded degree,


e second knows that no one in the party has more than three friends .



  
      Academicians tend to be lonely.
There is a graph on n vertices.

Is there a dominating set of size at most k?

(Every vertex is a neighbor of at least one of the k vertices.)

The problem is NP-complete,

but is trivial on “large” graphs of bounded degree b,

as you can say NO whenever n > k(b + 1).
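As a concrete illustration, here is a minimal sketch (not from the slides) of this trivial kernelization for Dominating Set on graphs of maximum degree b; the function name and input format are illustrative.

```python
def dominating_set_kernel(adj, k, b):
    """Trivial kernel for Dominating Set on graphs of maximum degree b.

    adj: dict mapping each vertex to the set of its neighbours.
    Returns either the string "NO" or an equivalent instance (adj, k)
    whose size is bounded by a function of k and b alone.
    """
    n = len(adj)
    # k vertices, each of degree at most b, dominate at most k * (b + 1) vertices.
    if n > k * (b + 1):
        return "NO"
    # Otherwise the input itself is a kernel: its size depends only on k and b.
    return adj, k
```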
Pre-processing is a humble strategy for coping with hard problems,
                    almost universally employed.

                          Mike Fellows
We would like to talk about the “preprocessing complexity” of a problem.


To what extent may we simplify/compress a problem before we begin to
                              solve it?


Note that any compression routine has to run efficiently to look attractive.
This makes it unreasonable for any NP-complete problem to admit
compression algorithms (when the output size is measured against the input size alone).


  However, (some) NP–complete problems can be compressed.


    We extend our notion (and notation) to understand them.
Notation

We denote a parameterized problem as a pair (Q, κ) consisting of a
classical problem Q ⊆ {0, 1}∗ and a parameterization κ : {0, 1}∗ → N.

Data reduction

involves pruning down a large input

into an equivalent, significantly smaller object (the kernel),

quickly.

The size of this smaller object is the size of the kernel.
A kernelization procedure

is a function f : {0, 1}∗ × N → {0, 1}∗ × N such that, for all (x, k) with |x| = n,

f(x, k) = (x′, k′) ∈ L if and only if (x, k) ∈ L,

|x′| + k′ ≤ g(k)   (the size of the kernel),

and f is polytime computable.
Theorem

A problem admits a kernel if, and only if,
it is fixed-parameter tractable.


Definition

A parameterized problem L is fixed-parameter tractable if there exists an
algorithm that decides in f(k) · n^{O(1)} time whether (x, k) ∈ L, where
n := |x|, k := κ(x), and f is a computable function that does not depend
on n.
Given a kernel, an FPT algorithm is immediate (even brute force on the
kernel leads to such an algorithm).

On the other hand, an FPT running time of f(k) · n^c gives us an f(k)-sized
kernel: run the algorithm for n^{c+1} steps. If it stops, output a trivial
kernel (a constant-size yes- or no-instance). If it does not stop, then

n^{c+1} < f(k) · n^c, i.e., n < f(k),

so the instance itself is already a kernel of size less than f(k).
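The reverse direction can be phrased operationally. Below is a minimal sketch (not from the slides); fpt_algorithm and its step-budget interface are illustrative placeholders, not a real API.

```python
def kernel_from_fpt(instance, k, n, c, fpt_algorithm):
    """Turn an f(k) * n^c time FPT algorithm into an f(k)-sized kernel.

    fpt_algorithm(instance, k, budget) is assumed to run for at most
    `budget` steps and to return True/False if it finishes, or None if
    the budget is exhausted (an illustrative interface only).
    """
    budget = n ** (c + 1)
    answer = fpt_algorithm(instance, k, budget)
    if answer is not None:
        # The algorithm finished within n^(c+1) steps: output a trivial,
        # constant-size equivalent instance.
        return "YES" if answer else "NO"
    # Otherwise f(k) * n^c > n^(c+1), hence n < f(k): the instance itself
    # is already a kernel of size bounded by f(k).
    return instance, k
```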
“Efficient” Kernelization

What is a reasonable notion of efficiency for kernelization?
The smaller, the better.


In particular,

polynomial-sized kernels are better than exponential-sized kernels.
The problem of finding a Dominating Set of size k on graphs where the
degree is bounded by b, parameterized by k, has a linear kernel. This is an
example of a polynomial-sized kernel.
A composition algorithm A for a problem is designed to act as a fast
Boolean OR of multiple problem instances.


It receives as input a sequence of instances
x = (x1, . . . , xt) with xi ∈ {0, 1}∗ for i ∈ [t], such that
k1 = k2 = · · · = kt = k.


It produces as output a yes-instance with a small parameter if and only if
at least one of the instances in the sequence is a yes-instance:

κ(A(x)) = k^{O(1)},

with running time polynomial in Σ_{i∈[t]} |xi|.
The Recipe for Hardness


Composition Algorithm + Polynomial Kernel

⇓

Distillation Algorithm

⇓

PH = Σ_3^p



Theorem. Let (P, κ) be a compositional parameterized problem such that
P is NP-complete. If P has a polynomial kernel, then P also has a
distillation algorithm.
Transformations

Let (P, κ) and (Q, γ) be parameterized problems.

We say that there is a polynomial parameter transformation from P to Q if
there exists a polynomial time computable function f : {0, 1}∗ → {0, 1}∗
and a polynomial p : N → N such that, if f(x) = y, we have:


x ∈ P if and only if y ∈ Q,

and

γ(y) ≤ p(κ(x)).
Theorem: Suppose P is NP-complete and Q ∈ NP. If f is a polynomial
time and parameter transformation from P to Q and Q has a polynomial
kernel, then P has a polynomial kernel.


x (instance of P)
   → PPT reduction →   f(x) = y (instance of Q)
   → kernelization →   K(y)
   → NP-completeness reduction →   z ∈ P
Recall


A composition for a parameterized language (Q, κ) is required to
“merge” instances

x1, x2, . . . , xt

into a single instance x in polynomial time, such that κ(x) is polynomial
in k := κ(xi) for any i.


The output of the algorithm belongs to (Q, κ)
if, and only if,
there exists at least one i ∈ [t] for which xi ∈ (Q, κ).
A Composition Tree

The inputs x1, x2, . . . , xt sit at the leaves of a complete binary tree.
Each internal node stores ρ applied to its two children, starting with
ρ(x1, x2), ρ(x3, x4), . . . , ρ(xt−1, xt) just above the leaves, up to the
root ρ(a, b), where a and b are the results for the two halves.

General Framework For Composition
Most composition algorithms can be stated
in terms of a single operation, ρ,
that describes the dynamic programming
over this complete binary tree on t leaves.
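A minimal sketch (not from the slides) of this general framework: fold the instances pairwise up a complete binary tree using a problem-specific merge operation ρ. The merge callback rho is a placeholder.

```python
def compose(instances, rho):
    """Generic composition framework: combine t instances into one by
    repeatedly applying a problem-specific merge operation `rho` to pairs,
    i.e. a bottom-up pass over a complete binary tree.

    rho(a, b) must return an instance that is a yes-instance iff a or b is
    a yes-instance (that is the whole point of composition).
    """
    level = list(instances)
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(rho(level[i], level[i + 1]))
        if len(level) % 2 == 1:        # an odd instance out moves up unchanged
            nxt.append(level[-1])
        level = nxt
    return level[0]
```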
Weighted Satisfiability

Given a CNF formula φ,
is there a satisfying assignment of weight at most k?

Parameter: b + k.

When the length of the longest clause is bounded by b,
there is an easy branching algorithm with runtime O(b^k · p(n)).
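A minimal sketch (not from the slides) of such a branching algorithm, assuming clauses are given as lists of signed integers (positive for a variable, negative for its negation). Depth at most k and branching factor at most b give the O(b^k · p(n)) bound.

```python
def weighted_sat(clauses, k, assignment=frozenset()):
    """Branching algorithm for Weighted Satisfiability.

    clauses: list of clauses, each a list of ints (v means variable v,
             -v means its negation). `assignment` is the set of variables
             currently set to True; every other variable is False.
    Returns True iff some assignment of weight at most k, extending the
    current one, satisfies every clause.
    """
    def satisfied(clause):
        return any((lit > 0 and lit in assignment) or
                   (lit < 0 and -lit not in assignment) for lit in clause)

    falsified = [c for c in clauses if not satisfied(c)]
    if not falsified:
        return True
    if k == 0:
        return False
    # In a falsified clause every positive literal has its variable still
    # False and every negative literal is already violated, so one of its
    # at most b positive variables must be set to True.  Branch over them.
    clause = falsified[0]
    return any(weighted_sat(clauses, k - 1, assignment | {lit})
               for lit in clause if lit > 0)
```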
Input: α1, α2, . . . , αt, with κ(αi) = b + k and n := max_{i∈[t]} ni.

If t > b^k, then solve every αi individually.

Total time = t · b^k · p(n) < t · t · p(n), which is polynomial in the total input size.

If not, t ≤ b^k; this gives us a bound on the number of instances.
Level 1

(β1 ∨ b0) ∧ . . . ∧ (βn ∨ b0)        (β1 ∨ ¬b0) ∧ . . . ∧ (βm ∨ ¬b0)

This is the scene at the leaves, where the βj's on the left are the clauses
of αi for some i, and the βj's on the right are the clauses of αi+1.


Level j

(β1 ∨ b) ∧ . . . ∧ (βn ∨ b)          (β1 ∨ ¬b) ∧ . . . ∧ (βm ∨ ¬b)

The same picture at an internal level j, where the βj's are now the
(already augmented) clauses stored at the two child nodes.
(β1 ∨ b) ∧ (β2 ∨ b) ∧ . . . ∧ (βn ∨ b) ∧
(β1 ∨ ¬b) ∧ (β2 ∨ ¬b) ∧ . . . ∧ (βm ∨ ¬b)

Take the conjunction of the formulas stored at the child nodes.
(β1 ∨ b ∨ bj) ∧ (β2 ∨ b ∨ bj) ∧ . . . ∧ (βn ∨ b ∨ bj) ∧
(β1 ∨ ¬b ∨ bj) ∧ (β2 ∨ ¬b ∨ bj) ∧ . . . ∧ (βm ∨ ¬b ∨ bj)

Then add the literal bj to every clause of the conjunction
if the parent is a “left child”,
(β1 ∨ b ∨ ¬bj) ∧ (β2 ∨ b ∨ ¬bj) ∧ . . . ∧ (βn ∨ b ∨ ¬bj) ∧
(β1 ∨ ¬b ∨ ¬bj) ∧ (β2 ∨ ¬b ∨ ¬bj) ∧ . . . ∧ (βm ∨ ¬b ∨ ¬bj)

and the literal ¬bj if the parent is a “right child”.
Adding a suffix to “control the weight”:

α
∧ (c0 ∨ b0) ∧ (¬c0 ∨ ¬b0)
∧ (c1 ∨ b1) ∧ (¬c1 ∨ ¬b1)
∧ . . .
∧ (ci ∨ bi) ∧ (¬ci ∨ ¬bi)
∧ . . .
∧ (cl−1 ∨ bl−1) ∧ (¬cl−1 ∨ ¬bl−1)
Claim

The composed instance α has a satisfying assignment of weight 2k

⇐⇒

at least one of the input instances admits a satisfying assignment of
weight k.
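A minimal sketch (not from the slides) of one reading of this construction: each input formula sits at a leaf of a complete binary tree, every one of its clauses picks up the selection literals encoding that leaf's binary address, and the weight-control suffix is appended. The exact bookkeeping behind the target weight 2k is omitted here; the variable names bj and cj follow the slides.

```python
from math import ceil, log2

def compose_weighted_sat(formulas):
    """Compose CNF formulas (lists of clauses; clauses are lists of string
    literals, negation written with a leading '~') into one formula that is
    satisfiable with weight k plus the number of selection levels iff some
    input formula is satisfiable with weight k.  An illustrative reading of
    the slides' construction, not a verbatim transcription of it.
    """
    t = len(formulas)
    l = max(1, ceil(log2(t)))          # selection variables b_0 .. b_{l-1}
    composed = []
    for i, formula in enumerate(formulas):
        # Literals encoding the binary address of leaf i: a clause of
        # instance i is "switched off" unless the b-assignment selects i.
        address = [f"b{j}" if (i >> j) & 1 == 0 else f"~b{j}" for j in range(l)]
        for clause in formula:
            composed.append(list(clause) + address)
    # Weight-control suffix: exactly one of {b_j, c_j} is true in any model,
    # so the selection variables contribute weight exactly l.
    for j in range(l):
        composed.append([f"c{j}", f"b{j}"])
        composed.append([f"~c{j}", f"~b{j}"])
    return composed
```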
Proof of Correctness

A clause α of an input instance picks up one selection literal per level on
its way to the root:

. . . α ∨ b0 . . .   →   . . . α ∨ b0 ∨ b1 . . .   →   . . . ∧ (α ∨ b0 ∨ b1 ∨ ¬b2) ∧ . . .
Disjoint Factors

Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}.

A factor of a word w1 · · · wr ∈ Lk∗ is a substring wi · · · wj ∈ Lk∗, with
1 ≤ i < j ≤ r, which starts and ends with the same letter.

Example: 123235443513

Disjoint factors do not overlap in the word.

Does the word have all the k factors, mutually disjoint?

Parameter: k
A k! · O(n) algorithm is immediate.
A 2^k · p(n) algorithm can be obtained by dynamic programming.

Let t be the number of instances input to the composition algorithm.
Again, the non-trivial case is when t < 2^k.

Let w1, w2, . . . , wt be words over Lk∗.
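A minimal sketch (not from the slides) of such a dynamic program: the state is (position, set of letters still needing a factor), and a factor may always be shrunk so that it ends at the next occurrence of its letter after its start.

```python
from functools import lru_cache

def disjoint_factors(word, k):
    """word: sequence of letters from {1, ..., k} (e.g. a list of ints).
    Decide whether it contains k mutually disjoint factors, one per letter,
    in O(2^k * n * k) time via memoization over (position, remaining letters).
    """
    word = tuple(word)
    n = len(word)

    @lru_cache(maxsize=None)
    def solve(i, remaining):
        if not remaining:
            return True
        if i >= n:
            return False
        if solve(i + 1, remaining):          # do not start a factor at position i
            return True
        a = word[i]
        if a in remaining:
            # A factor for `a` starting at i can always be shrunk to end at
            # the next occurrence of `a`, so only that choice is needed.
            for j in range(i + 1, n):
                if word[j] == a:
                    return solve(j + 1, remaining - frozenset([a]))
        return False

    return solve(0, frozenset(range(1, k + 1)))
```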
Leaf Nodes

b0 w1 b0     b0 w2 b0     . . .     b0 wt−1 b0     b0 wt b0


Level j

bj bj−1 u bj−1 v bj−1 bj

built from the children bj−1 u bj−1 and bj−1 v bj−1.
Claim

The composed word has all the 2k disjoint factors

⇐⇒

at least one of the input instances has all the k disjoint factors.
Proof of Correctness

Input words: p, q, r, s, w, x, y, z.

Leaves:   b0 p b0,  b0 q b0,  b0 r b0,  b0 s b0,  b0 w b0,  b0 x b0,  b0 y b0,  b0 z b0

Level 1:  b1 b0 p b0 q b0 b1,   b1 b0 r b0 s b0 b1,   b1 b0 w b0 x b0 b1,   b1 b0 y b0 z b0 b1

Level 2:  b2 b1 b0 p b0 q b0 b1 b0 r b0 s b0 b1 b2,   b2 b1 b0 w b0 x b0 b1 b0 y b0 z b0 b1 b2

Root:     b2 b1 b0 p b0 q b0 b1 b0 r b0 s b0 b1 b2 b1 b0 w b0 x b0 b1 b0 y b0 z b0 b1 b2
Vertex-Disjoint Cycles
Input: G = (V, E)
Question: Are there k vertex-disjoint cycles?
Parameter: k.

Related problems:
Feedback Vertex Set has an O(k²) kernel.
Edge-Disjoint Cycles has an O(k² log² k) kernel.


In contrast, Disjoint Factors transforms into Vertex-Disjoint Cycles in
polynomial time.
w = 1123343422

The word is laid out as a path v1, v2, . . . , v10, one vertex per position,
and an extra vertex is added for each letter 1, 2, 3, 4, adjacent to the
positions where that letter occurs.

Disjoint Factors ≤ppt Vertex-Disjoint Cycles
                                 Claim
    w has all k disjoint factors ⇐⇒ Gw has k vertex–disjoint cycles.
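A minimal sketch (not from the slides) of the graph Gw suggested by the figure: a path through the positions of w, plus one vertex per letter joined to every position carrying that letter.

```python
def word_to_graph(word):
    """Build G_w for a word over the letters {1, ..., k}: a path on the
    positions of the word, plus a vertex for each letter adjacent to every
    position where that letter occurs.  Returned as an adjacency dict.
    """
    adj = {}

    def add_edge(u, v):
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    # Path v_1 - v_2 - ... - v_n through the positions of the word.
    for i in range(1, len(word)):
        add_edge(("pos", i), ("pos", i + 1))
    # One vertex per letter, adjacent to all of its occurrences.
    for i, a in enumerate(word, start=1):
        add_edge(("letter", a), ("pos", i))
    return adj
```

A factor of letter a running from position i to position j yields the cycle through the letter vertex of a and the path segment vi, . . . , vj; disjoint factors give vertex-disjoint cycles, and each of k vertex-disjoint cycles must use a distinct letter vertex.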
No polynomial kernel?

Look for the best exponential or subexponential kernels...

... or build many polynomial kernels.
Many polynomial kernels give us better algorithms:

(k² choose k) ≈ 2^{k log k}

versus

p(n) · c^k

p(n) · (ck choose k)
We say that a subdigraph T of a digraph D is an out-tree if T is an
oriented tree with only one vertex r of in-degree zero (called the root);
it is an out-branching if, in addition, it is spanning.

The Directed Maximum Leaf Out-Branching problem is to find an
out-branching in a given digraph with the maximum number of leaves;
its decision version, Directed k-Leaf Out-Branching, asks for an
out-branching with at least k leaves.

Parameter: k
R k–L O–B admits a kernel of size O(k3 ).

k–L O–B does not admit a polynomial kernel (proof
deferred).

However, it clearly admits n polynomial kernels (try all possible choices
for root, and apply the kernel for R k–L O-B.
R k–L O–B admits a kernel of size O(k3 ).

k–L O–B does not admit a polynomial kernel (proof
deferred).

However, it clearly admits n polynomial kernels (try all possible choices
for root, and apply the kernel for R k–L O-B.
R k–L O–B admits a kernel of size O(k3 ).

k–L O–B does not admit a polynomial kernel (proof
deferred).

However, it clearly admits n polynomial kernels (try all possible choices
for root, and apply the kernel for R k–L O–B.
R k–L O–T admits a kernel of size O(k3 ).

k–L O–B does not admit a polynomial kernel (proof
deferred).

However, it clearly admits n polynomial kernels (try all possible choices
for root, and apply the kernel for R k–L O-B.
R k–L O–T admits a kernel of size O(k3 ).

k–L O–T does not admit a polynomial kernel (composition via
disjoint union).

However, it clearly admits n polynomial kernels (try all possible choices
for root, and apply the kernel for R k–L O-B.
R k–L O–T admits a kernel of size O(k3 ).

k–L O–T does not admit a polynomial kernel (composition via
disjoint union).

However, it clearly admits n polynomial kernels (try all possible choices
for root, and apply the kernel for R k–L O–T.
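A minimal sketch (not from the slides) of the “many kernels” idea for the out-tree problem; rooted_kernel and solve_rooted stand in for the known O(k³) kernel and any exact solver for the rooted problem, and are placeholders only.

```python
def k_leaf_out_tree(digraph, k, rooted_kernel, solve_rooted):
    """Decide k-Leaf Out-Tree via n polynomial kernels.

    digraph: dict mapping each vertex to the set of its out-neighbours.
    rooted_kernel(digraph, root, k) is assumed to return an equivalent
    rooted instance of size O(k^3); solve_rooted decides such an instance.
    """
    for root in digraph:
        # One polynomial kernel per choice of root ...
        small_instance = rooted_kernel(digraph, root, k)
        # ... each of which can be solved in time depending only on k.
        if solve_rooted(small_instance):
            return True
    return False
```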
(D, k)

Decompose k-Leaf Out-Tree into n
instances of Rooted k-Leaf Out-Tree:

(D1, v1, k)   (D2, v2, k)   (D3, v3, k)   · · ·   (Dn, vn, k)

Apply kernelization
(known for Rooted k-Leaf Out-Tree):

(D′1, v1, k)   (D′2, v2, k)   (D′3, v3, k)   · · ·   (D′n, vn, k)

Reduce to instances of k-Leaf Out-Branching on nice willow graphs
(using NP-completeness):

(W1, bmax)   (W2, bmax)   (W3, bmax)   · · ·   (Wn, bmax)
Composition
(produces an instance of k-Leaf Out-Branching):

(D′, bmax + 1)


Kernelization
(given by assumption):

(D′′, k′′)


NP-completeness reduction from k-Leaf Out-Branching
to k-Leaf Out-Tree:

(D∗, k∗)
Concluding Remarks

A polynomial pseudo-kernelization K produces kernels whose size can be
bounded by

h(k) · n^{1−ε},

where k is the parameter of the problem and h is a polynomial.
Analogous to the composition framework, there are algorithms (called
Linear OR) whose existence helps us rule out polynomial pseudo-kernels.

Most compositional algorithms can be extended to fit the definition of
Linear OR.
A strong kernel is one where the parameter of the kernelized instance is
at most the parameter of the original instance.

Mechanisms for ruling out strong kernels are simpler and rely on
self-reductions and the hypothesis that P ≠ NP.

Example: k-Path is not likely to admit a strong kernel.
Open Problems

There are no known ways of inferring that a problem is unlikely to have a
kernel of size k^c for some specific c.
Can lower bound theorems be proved under some “well-believed”
conjectures of parameterized complexity, for instance FPT ≠ XP, or
FPT ≠ W[t] for some t ∈ N+?




A lower bound framework for ruling out p(k) · f(l)–sized kernels for
       problems with two parameters (k, l) would be useful.




“Many polynomial kernels” have been found only for the directed
outbranching problem. It would be interesting to apply the technique to
 other problems that are not expected to have polynomial–sized kernels.




FIN

Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxGaneshChakor2
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 

Último (20)

Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Hybridoma Technology ( Production , Purification , and Application )
Hybridoma Technology  ( Production , Purification , and Application  ) Hybridoma Technology  ( Production , Purification , and Application  )
Hybridoma Technology ( Production , Purification , and Application )
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Paris 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activityParis 2024 Olympic Geographies - an activity
Paris 2024 Olympic Geographies - an activity
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptxContemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
Contemporary philippine arts from the regions_PPT_Module_12 [Autosaved] (1).pptx
 
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Bikash Puri  Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Bikash Puri Delhi reach out to us at 🔝9953056974🔝
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 
CARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptxCARE OF CHILD IN INCUBATOR..........pptx
CARE OF CHILD IN INCUBATOR..........pptx
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 

On the Infeasibility of Obtaining Polynomial Kernels for Some Problems

  • 1. On the Infeasibility of Obtaining Polynomial Kernels: why some problems are incompressible
  • 2. PROBLEM KERNELS (how, why, what) | when: no polynomial kernels | Examples: satisfiability, vertex-disjoint cycles, disjoint factors | Workarounds: many kernels
  • 3. PROBLEM KERNELS (how, why, what) | when: no polynomial kernels | Examples: satisfiability, vertex-disjoint cycles, disjoint factors | Workarounds: many kernels
  • 4. There are 25 people at a professor's party. The professor would like to locate a group of six people who are popular, i.e., everyone is a friend of at least one of the six. Being a busy man, he asks two of his students to find such a group. The first grimaces and starts making a list of the (25 choose 6) possibilities. The second knows that no one in the party has more than three friends (academicians tend to be lonely).
  • 5. There are 25 people at a professor's party. The professor would like to locate a group of six people who are popular, i.e., everyone is a friend of at least one of the six. Being a busy man, he asks two of his students to find such a group. The first grimaces and starts making a list of the (25 choose 6) possibilities. The second knows that no one in the party has more than three friends (academicians tend to be lonely).
  • 6. There is a graph on n vertices. The professor would like to locate a group of six people who are popular, i.e., everyone is a friend of at least one of the six. Being a busy man, he asks two of his students to find such a group. The first grimaces and starts making a list of the (25 choose 6) possibilities. The second knows that no one in the party has more than three friends (academicians tend to be lonely).
  • 7. There is a graph on n vertices. Is there a dominating set of size at most k? (Everyone is a friend of at least one of the six.) Being a busy man, he asks two of his students to find such a group. The first grimaces and starts making a list of the (25 choose 6) possibilities. The second knows that no one in the party has more than three friends (academicians tend to be lonely).
  • 8. There is a graph on n vertices. Is there a dominating set of size at most k? (Every vertex is a neighbor of at least one of the k vertices.) Being a busy man, he asks two of his students to find such a group. The first grimaces and starts making a list of the (25 choose 6) possibilities. The second knows that no one in the party has more than three friends (academicians tend to be lonely).
  • 9. There is a graph on n vertices. Is there a dominating set of size at most k? (Every vertex is a neighbor of at least one of the k vertices.) The problem is NP–complete. The first grimaces and starts making a list of the (25 choose 6) possibilities. The second knows that no one in the party has more than three friends (academicians tend to be lonely).
  • 10. There is a graph on n vertices. Is there a dominating set of size at most k? (Every vertex is a neighbor of at least one of the k vertices.) The problem is NP–complete, but is trivial on “large” graphs of bounded degree. The second knows that no one in the party has more than three friends (academicians tend to be lonely).
  • 11. There is a graph on n vertices. Is there a dominating set of size at most k? (Every vertex is a neighbor of at least one of the k vertices.) The problem is NP–complete, but is trivial on “large” graphs of bounded degree, as you can say NO whenever n > k(b + 1).
  • 12. “Pre-processing is a humble strategy for coping with hard problems, almost universally employed.” (Mike Fellows)
  • 13. We would like to talk about the “preprocessing complexity” of a problem. To what extent may we simplify/compress a problem before we begin to solve it? Note that any compression routine has to run efficiently to look attractive.
  • 14. This makes it unreasonable for any NP–complete problem to admit compression algorithms. However, (some) NP–complete problems can be compressed. We extend our notion (and notation) to understand them.
  • 15. Notation. We denote a parameterized problem as a pair (Q, κ) consisting of a classical problem Q ⊆ {0, 1}∗ and a parameterization κ : {0, 1}∗ → N.
  • 16. Data reduction involves pruning down a large input into an equivalent, significantly smaller object (the kernel), quickly.
  • 17. Data reduction is a function f : {0, 1}∗ × N → {0, 1}∗ × N that prunes a large input into an equivalent, significantly smaller object (the kernel), quickly.
  • 18. Data reduction is a function f : {0, 1}∗ × N → {0, 1}∗ × N, such that for all (x, k) with |x| = n, the input is pruned into an equivalent, significantly smaller object (the kernel), quickly.
  • 19. Data reduction is a function f : {0, 1}∗ × N → {0, 1}∗ × N, such that for all (x, k) with |x| = n: f(x, k) ∈ L iff (x, k) ∈ L (the kernel is equivalent), and it is significantly smaller, obtained quickly.
  • 20. Data reduction is a function f : {0, 1}∗ × N → {0, 1}∗ × N, such that for all (x, k) with |x| = n: f(x, k) ∈ L iff (x, k) ∈ L, and |f(x, k)| = g(k) (the size of the kernel), obtained quickly.
  • 21. Data reduction is a function f : {0, 1}∗ × N → {0, 1}∗ × N, such that for all (x, k) with |x| = n: f(x, k) ∈ L iff (x, k) ∈ L, |f(x, k)| = g(k) (the size of the kernel), and f is polytime computable.
  • 22. A kernelization procedure is a function f : {0, 1}∗ × N → {0, 1}∗ × N, such that for all (x, k) with |x| = n: f(x, k) ∈ L iff (x, k) ∈ L, |f(x, k)| = g(k), and f is polytime computable.
  • 23. A kernelization procedure is a function f : {0, 1}∗ × N → {0, 1}∗ × N, such that for all (x, k) with |x| = n: f(x, k) ∈ L iff (x, k) ∈ L, |f(x, k)| = g(k), and f is polytime computable.
  • 24. A kernelization procedure is a function f : {0, 1}∗ × N → {0, 1}∗ × N, such that for all (x, k) with |x| = n: f(x, k) ∈ L iff (x, k) ∈ L, |f(x, k)| = g(k), and f is polytime computable.
  • 25. Theorem. Having a kernelization procedure implies, and is implied by, parameterized tractability. Definition. A parameterized problem L is fixed-parameter tractable if there exists an algorithm that decides in f(k) · n^O(1) time whether (x, k) ∈ L, where n := |x|, k := κ(x), and f is a computable function that does not depend on n.
  • 26. Theorem. Having a kernelization procedure implies, and is implied by, parameterized tractability. Definition. A parameterized problem L is fixed-parameter tractable if there exists an algorithm that decides in f(k) · n^O(1) time whether (x, k) ∈ L, where n := |x|, k := κ(x), and f is a computable function that does not depend on n.
  • 27. Theorem. A problem admits a kernel if, and only if, it is fixed–parameter tractable. Definition. A parameterized problem L is fixed-parameter tractable if there exists an algorithm that decides in f(k) · n^O(1) time whether (x, k) ∈ L, where n := |x|, k := κ(x), and f is a computable function that does not depend on n.
  • 28. Theorem. A problem admits a kernel if, and only if, it is fixed–parameter tractable. Given a kernel, an FPT algorithm is immediate (even brute force on the kernel will lead to such an algorithm).
  • 29. Theorem. A problem admits a kernel if, and only if, it is fixed–parameter tractable. On the other hand, an FPT runtime of f(k) · n^c gives us an f(k)-sized kernel. We run the algorithm for n^{c+1} steps and either have a trivial kernel if the algorithm stops, or else: n^{c+1} < f(k) · n^c.
  • 30. Theorem. A problem admits a kernel if, and only if, it is fixed–parameter tractable. On the other hand, an FPT runtime of f(k) · n^c gives us an f(k)-sized kernel. We run the algorithm for n^{c+1} steps and either have a trivial kernel if the algorithm stops, or else: n < f(k), so the instance itself is already a kernel of size less than f(k).
  • 31. “Efficient” Kernelization What is a reasonable notion of efficiency for kernelization?
  • 32. “Efficient” Kernelization. What is a reasonable notion of efficiency for kernelization? The smaller, the better.
  • 33. “Efficient” Kernelization. What is a reasonable notion of efficiency for kernelization? The smaller, the better. In particular, polynomial–sized kernels are better than exponential–sized kernels.
  • 34. The problem of finding a Dominating Set of size k on graphs where the degree is bounded by b, parameterized by k, has a linear kernel. This is an example of a polynomial–sized kernel.
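To see how this size bound becomes an actual preprocessing routine, here is a minimal sketch (not from the slides) of the rule behind it, assuming the graph is given as an adjacency dictionary. Since each of the k chosen vertices dominates itself and at most b neighbours, any yes-instance has at most k(b + 1) vertices; larger graphs can be rejected outright, and smaller ones already have size linear in k for fixed b.

    # A minimal sketch of the kernelization rule for Dominating Set on graphs of
    # maximum degree b, parameterized by k. Assumption: a simple undirected graph,
    # given as a dict mapping each vertex to the set of its neighbours.
    def kernelize_dominating_set(adj, k):
        b = max((len(nbrs) for nbrs in adj.values()), default=0)  # maximum degree
        if len(adj) > k * (b + 1):
            # No k vertices can dominate this many vertices; return a canonical
            # NO-instance: k + 1 isolated vertices each need their own dominator.
            return {("iso", i): set() for i in range(k + 1)}, k
        # Otherwise the instance itself is the kernel: it has at most k(b + 1) vertices.
        return adj, k

    example = {1: {2}, 2: {1, 3}, 3: {2}, 4: set()}
    print(kernelize_dominating_set(example, 1))  # 4 > 1 * (2 + 1): a NO-instance is returned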
  • 35. PROBLEM KERNELS (how, why, what) | when: no polynomial kernels | Examples: satisfiability, vertex-disjoint cycles, disjoint factors | Workarounds: many kernels
  • 36. A composition algorithm A for a problem is designed to act as a fast Boolean OR of multiple problem instances. It receives as input a sequence of instances x = (x1, . . . , xt) with xi ∈ {0, 1}∗ for i ∈ [t], such that k1 = k2 = · · · = kt = k. It produces as output a yes-instance with a small parameter if and only if at least one of the instances in the sequence is also a yes-instance: κ(A(x)) = k^O(1). Running time: polynomial in Σ_{i ∈ [t]} |xi|.
  • 37. A composition algorithm A for a problem is designed to act as a fast Boolean OR of multiple problem instances. It receives as input a sequence of instances x = (x1, . . . , xt) with xi ∈ {0, 1}∗ for i ∈ [t], such that k1 = k2 = · · · = kt = k. It produces as output a yes-instance with a small parameter if and only if at least one of the instances in the sequence is also a yes-instance: κ(A(x)) = k^O(1). Running time: polynomial in Σ_{i ∈ [t]} |xi|.
  • 38. A composition algorithm A for a problem is designed to act as a fast Boolean OR of multiple problem instances. It receives as input a sequence of instances x = (x1, . . . , xt) with xi ∈ {0, 1}∗ for i ∈ [t], such that k1 = k2 = · · · = kt = k. It produces as output a yes-instance with a small parameter if and only if at least one of the instances in the sequence is also a yes-instance: k′ = k^O(1). Running time: polynomial in Σ_{i ∈ [t]} |xi|.
  • 39. A composition algorithm A for a problem is designed to act as a fast Boolean OR of multiple problem instances. It receives as input a sequence of instances x = (x1, . . . , xt) with xi ∈ {0, 1}∗ for i ∈ [t], such that k1 = k2 = · · · = kt = k. It produces as output a yes-instance with a small parameter if and only if at least one of the instances in the sequence is also a yes-instance: k′ = k^O(1). Running time: polynomial in Σ_{i ∈ [t]} |xi|.
  • 40. The Recipe for Hardness: Composition Algorithm + Polynomial Kernel ⇒ Distillation Algorithm ⇒ PH = Σ^p_3.
  • 41. The Recipe for Hardness: Composition Algorithm + Polynomial Kernel ⇒ Distillation Algorithm ⇒ PH = Σ^p_3. Theorem. Let (P, k) be a compositional parameterized problem such that P is NP-complete. If P has a polynomial kernel, then P also has a distillation algorithm.
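Before the recipe is applied to concrete problems, it may help to see the simplest shape a composition can take. For problems whose solutions always live inside a single (weakly) connected component, such as the k-Leaf Out-Tree problem discussed later, a disjoint union of the inputs is already a composition. A minimal sketch, assuming each instance is an edge list over its own vertex labels and that all instances share the same parameter k (the function name is illustrative, not from the slides):

    # Disjoint-union composition: valid as an OR-composition only for problems in
    # which any witness is confined to one connected component, so that the union
    # is a yes-instance exactly when at least one input instance is.
    def disjoint_union_composition(instances):
        parameters = {k for _, k in instances}
        assert len(parameters) == 1, "a composition receives instances with a common parameter"
        k = parameters.pop()
        edges = []
        for index, (instance_edges, _) in enumerate(instances):
            # Tag every vertex with the index of its instance so that the vertex
            # sets of different instances remain disjoint in the union.
            edges.extend(((index, u), (index, v)) for (u, v) in instance_edges)
        # The parameter is unchanged, hence trivially polynomial in k, and the
        # running time is linear in the total input size, as the definition demands.
        return edges, k

    print(disjoint_union_composition([([(1, 2), (1, 3)], 2), ([("a", "b")], 2)]))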
  • 42. Transformations Let (P, κ) and (Q, γ) be parameterized problems. We say that there is a polynomial parameter transformation from P to Q if there exists a polynomial time computable function f : {0, 1}∗ −→ {0, 1}∗ , and a polynomial p : N → N, such that, if f(x) = y, we have: x ∈ P if and only if y ∈ Q,
  • 43. Transformations. Let (P, κ) and (Q, γ) be parameterized problems. We say that there is a polynomial parameter transformation from P to Q if there exists a polynomial time computable function f : {0, 1}∗ −→ {0, 1}∗, and a polynomial p : N → N, such that, if f(x) = y, we have: x ∈ P if and only if y ∈ Q, and γ(y) ≤ p(κ(x)).
  • 44. Theorem: Suppose P is NP-complete, and Q ∈ NP. If f is a polynomial time and parameter transformation from P to Q and Q has a polynomial kernel, then P has a polynomial kernel.
  • 45. Theorem: Suppose P is NP-complete, and Q ∈ NP. If f is a polynomial time and parameter transformation from P to Q and Q has a polynomial kernel, then P has a polynomial kernel. [Pipeline: an instance x of P is mapped by the PPT reduction to f(x) = y, an instance of Q; kernelization of Q gives K(y); the NP-completeness reduction from Q back to P turns K(y) into an instance z of P whose size is polynomial in κ(x).]
  • 46. PROBLEM KERNELS (how, why, what) | when: no polynomial kernels | Examples: satisfiability, vertex-disjoint cycles, disjoint factors | Workarounds: many kernels
  • 47. Recall: a composition for a parameterized language (Q, κ) is required to “merge” instances x1, x2, . . . , xt into a single instance x in polynomial time, such that κ(x) is polynomial in k := κ(xi) for any i. The output of the algorithm belongs to Q if, and only if, there exists at least one i ∈ [t] for which xi ∈ Q.
  • 48. A Composition Tree (general framework for composition): the instances x1, x2, . . . , xt sit at the leaves of a complete binary tree; each internal node combines the contents a and b of its two children into ρ(a, b), up to the root.
  • 49. (the same composition tree)
  • 50. (the same composition tree)
  • 51. Most composition algorithms can be stated in terms of a single operation, ρ, that describes the dynamic programming over this complete binary tree on t leaves.
  • 52. Weighted Satisfiability. Given a CNF formula φ, is there a satisfying assignment of weight at most k? Parameter: b + k. When the length of the longest clause is bounded by b, there is an easy branching algorithm with runtime O(b^k · p(n)).
  • 53. Weighted Satisfiability. Given a CNF formula φ, is there a satisfying assignment of weight at most k? Parameter: b + k. When the length of the longest clause is bounded by b, there is an easy branching algorithm with runtime O(b^k · p(n)).
  • 54. Weighted Satisfiability. Given a CNF formula φ, is there a satisfying assignment of weight at most k? Parameter: b + k. When the length of the longest clause is bounded by b, there is an easy branching algorithm with runtime O(b^k · p(n)).
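A minimal sketch (not from the slides) of such a branching algorithm. It assumes DIMACS-style clauses (the positive integer v stands for the variable x_v, the negative integer -v for its negation) and grows a set of variables forced to true; whenever the current assignment falsifies a clause, only that clause's positive literals are branched on, so the recursion depth is at most k and the branching factor at most b.

    def weighted_sat_at_most_k(clauses, k, true_vars=frozenset()):
        # Search for a satisfying assignment of weight at most k (at most k variables
        # set to true). Runs in O(b^k * poly) time when every clause has at most b literals.
        for clause in clauses:
            satisfied = any((lit > 0 and lit in true_vars) or
                            (lit < 0 and -lit not in true_vars)
                            for lit in clause)
            if satisfied:
                continue
            if len(true_vars) == k:
                return None  # no budget left to make another variable true
            # Any satisfying assignment of weight <= k extending this branch must make
            # one of the clause's positive literals true: its negative literals are all
            # over variables we have already set to true.
            for lit in clause:
                if lit > 0 and lit not in true_vars:
                    answer = weighted_sat_at_most_k(clauses, k, true_vars | {lit})
                    if answer is not None:
                        return answer
            return None
        return set(true_vars)  # every clause is satisfied by "true_vars true, rest false"

    print(weighted_sat_at_most_k([[1, 2], [-1, 2]], 1))  # {2}: a weight-1 satisfying assignment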
  • 55. Input: α1, α2, . . . , αt.
  • 56. Input: α1, α2, . . . , αt. κ(αi) = b + k.
  • 57. Input: α1, α2, . . . , αt. κ(αi) = b + k. n := max_{i ∈ [t]} ni.
  • 58. Input: α1, α2, . . . , αt. κ(αi) = b + k. n := max_{i ∈ [t]} ni. If t > b^k, then solve every αi individually. Total time = t · b^k · p(n). If not, t ≤ b^k – this gives us a bound on the number of instances.
  • 59. Input: α1, α2, . . . , αt. κ(αi) = b + k. n := max_{i ∈ [t]} ni. If t > b^k, then solve every αi individually. Total time = t · b^k · p(n). If not, t ≤ b^k – this gives us a bound on the number of instances.
  • 60. Input: α1, α2, . . . , αt. κ(αi) = b + k. n := max_{i ∈ [t]} ni. If t > b^k, then solve every αi individually. Total time = t · b^k · p(n) < t · t · p(n). If not, t ≤ b^k – this gives us a bound on the number of instances.
  • 61. Input: α1, α2, . . . , αt. κ(αi) = b + k. n := max_{i ∈ [t]} ni. If t > b^k, then solve every αi individually. Total time = t · b^k · p(n) < t · t · p(n), which is polynomial in the total input size, so in this case the composition may simply solve all the instances and output a trivial answer. If not, t ≤ b^k – this gives us a bound on the number of instances.
  • 62. Level 1: (β1 ∨ b0) ∧ . . . ∧ (βn ∨ b0) and (β′1 ∨ ¬b0) ∧ . . . ∧ (β′m ∨ ¬b0). This is the scene at the leaves, where the βj's are the clauses in αi for some i and the β′j's are the clauses of αi+1.
  • 63. Level j: (β1 ∨ b) ∧ . . . ∧ (βn ∨ b) and (β′1 ∨ ¬b) ∧ . . . ∧ (β′m ∨ ¬b), where the βj's are the clauses stored at one child and the β′j's are the clauses stored at the other.
  • 64. (β1 ∨ b) ∧ (β2 ∨ b) ∧ . . . ∧ (βn ∨ b) ∧ (β′1 ∨ ¬b) ∧ (β′2 ∨ ¬b) ∧ . . . ∧ (β′m ∨ ¬b): take the conjunction of the formulas stored at the child nodes.
  • 65. (β1 ∨ b ∨ bj) ∧ (β2 ∨ b ∨ bj) ∧ . . . ∧ (βn ∨ b ∨ bj) ∧ (β′1 ∨ ¬b ∨ bj) ∧ (β′2 ∨ ¬b ∨ bj) ∧ . . . ∧ (β′m ∨ ¬b ∨ bj): the new variable bj is added to every clause, if the parent is a “left child”.
  • 66. (β1 ∨ b ∨ ¬bj) ∧ (β2 ∨ b ∨ ¬bj) ∧ . . . ∧ (βn ∨ b ∨ ¬bj) ∧ (β′1 ∨ ¬b ∨ ¬bj) ∧ (β′2 ∨ ¬b ∨ ¬bj) ∧ . . . ∧ (β′m ∨ ¬b ∨ ¬bj): the new variable bj is added negated to every clause, if the parent is a “right child”.
  • 67. Adding a suffix to “control the weight”: α ∧ (c0 ∨ b0) ∧ (¬c0 ∨ ¬b0) ∧ (c1 ∨ b1) ∧ (¬c1 ∨ ¬b1) ∧ . . . ∧ (ci ∨ bi) ∧ (¬ci ∨ ¬bi) ∧ . . . ∧ (c_{l−1} ∨ b_{l−1}) ∧ (¬c_{l−1} ∨ ¬b_{l−1}).
  • 68. Claim e composed instance α has a satisfying assignment of weight 2k ⇐⇒ at least one of the input instances admit a satisfying assignment of weight k.
  • 69. Proof of Correctness: going down the tree, each original clause α accumulates one control literal per level, e.g. α ∨ b0, then α ∨ b0 ∨ b1, then α ∨ b0 ∨ b1 ∨ b2, and so on (with each bj appearing negated or unnegated according to the branch taken).
  • 70. Disjoint Factors. Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}.
  • 71. Disjoint Factors. Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}. A factor of a word w1 · · · wr ∈ Lk∗ is a substring wi · · · wj ∈ Lk∗, with 1 ≤ i < j ≤ r, which starts and ends with the same letter. Example: 123235443513.
  • 72. Disjoint Factors. Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}. A factor of a word w1 · · · wr ∈ Lk∗ is a substring wi · · · wj ∈ Lk∗, with 1 ≤ i < j ≤ r, which starts and ends with the same letter. Example: 123235443513. Disjoint factors do not overlap in the word. Are there k disjoint factors? Does the word have all the k factors, mutually disjoint?
  • 73.–78. (the same definition and example, repeated)
  • 79. Disjoint Factors. Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}. A factor of a word w1 · · · wr ∈ Lk∗ is a substring wi · · · wj ∈ Lk∗, with 1 ≤ i < j ≤ r, which starts and ends with the same letter. Example: 123235443513. Disjoint factors do not overlap in the word. Does the word have all the k factors, mutually disjoint?
  • 80. Disjoint Factors. Let Lk be the alphabet consisting of the letters {1, 2, . . . , k}. A factor of a word w1 · · · wr ∈ Lk∗ is a substring wi · · · wj ∈ Lk∗, with 1 ≤ i < j ≤ r, which starts and ends with the same letter. Example: 123235443513. Disjoint factors do not overlap in the word. Does the word have all the k factors, mutually disjoint? Parameter: k.
  • 81. A k! · O(n) algorithm is immediate.
  • 82. A k! · O(n) algorithm is immediate. A 2^k · p(n) algorithm can be obtained by Dynamic Programming.
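A minimal sketch (not from the slides) of the 2^k · p(n) dynamic program, assuming the word is a string over the digit letters '1' … 'k' (so k ≤ 9 in this simple encoding). The table dp is indexed by subsets of letters; dp[S] is the length of the shortest prefix inside which all letters of S can be realised as mutually disjoint factors.

    def disjoint_factors(word, k):
        letters = [str(i) for i in range(1, k + 1)]
        INF = float("inf")
        dp = [INF] * (1 << k)   # dp[S]: shortest prefix accommodating the letters in S
        dp[0] = 0
        for S in range(1, 1 << k):
            for idx in range(k):
                if not S & (1 << idx):
                    continue
                prev = dp[S ^ (1 << idx)]
                if prev == INF:
                    continue
                # Place the factor for letters[idx] entirely after the prefix of length
                # prev, as early as possible: it ends at the second occurrence of the
                # letter at or after position prev.
                first = word.find(letters[idx], prev)
                second = word.find(letters[idx], first + 1) if first != -1 else -1
                if second != -1:
                    dp[S] = min(dp[S], second + 1)
        return dp[(1 << k) - 1] < INF

    print(disjoint_factors("112233", 3))   # True: factors 11, 22, 33
    print(disjoint_factors("123123", 3))   # False: any two of the required factors overlap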
  • 83. A k! · O(n) algorithm is immediate. A 2^k · p(n) algorithm can be obtained by Dynamic Programming. Let t be the number of instances input to the composition algorithm. Again, the non-trivial case is when t < 2^k.
  • 84. A k! · O(n) algorithm is immediate. A 2^k · p(n) algorithm can be obtained by Dynamic Programming. Let t be the number of instances input to the composition algorithm. Again, the non-trivial case is when t < 2^k. Let w1, w2, . . . , wt be words over Lk∗.
  • 85. Leaf Nodes: each input word wi is stored at a leaf as b0 wi b0 (the bj's are new letters added to the alphabet).
  • 86. Level j: from the two children bj−1 u bj−1 and bj−1 v bj−1, the node stores bj bj−1 u bj−1 v bj−1 bj.
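The merging can be written out directly. A minimal sketch, under the assumption that the rule of slide 86 is applied uniformly at every level (the worked example on slide 88 may treat the topmost merge slightly differently) and that the number of input words is a power of two; words are represented as lists of symbols so that the fresh letters b0, b1, … stay distinct from the original alphabet.

    def compose_disjoint_factors(words):
        # words: list of lists of symbols over the original alphabet; len(words) is
        # assumed to be a power of two (pad with empty words otherwise).
        level = 0
        current = [["b0"] + list(w) + ["b0"] for w in words]   # leaves: b0 w b0
        while len(current) > 1:
            level += 1
            fresh = "b%d" % level
            merged = []
            for i in range(0, len(current), 2):
                left, right = current[i], current[i + 1]
                # Slide 86: from b_{j-1} u b_{j-1} and b_{j-1} v b_{j-1}, build
                # b_j b_{j-1} u b_{j-1} v b_{j-1} b_j (one shared b_{j-1} in the middle).
                merged.append([fresh] + left + right[1:] + [fresh])
            current = merged
        return current[0]

    print(" ".join(compose_disjoint_factors([["1", "2"], ["2", "1"], ["1", "1"], ["2", "2"]])))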
  • 87. Claim e composed word has all the 2k disjoint factors ⇐⇒ at least one of the input instances has all the k disjoint factors.
  • 88. Proof of Correctness (input words: p, q, r, s, w, x, y, z). The tree, from the leaves up: b0 p b0, b0 q b0, b0 r b0, b0 s b0, b0 w b0, b0 x b0, b0 y b0, b0 z b0; then b1 b0 p b0 q b0 b1, b1 b0 r b0 s b0 b1, b1 b0 w b0 x b0 b1, b1 b0 y b0 z b0 b1; then b2 b1 b0 p b0 q b0 b1 b0 r b0 s b0 b1 b2 and b2 b1 b0 w b0 x b0 b1 b0 y b0 z b0 b1 b2; and at the top b2 b1 b0 p b0 q b0 b1 b0 r b0 s b0 b1 b2 b1 b0 w b0 x b0 b1 b0 y b0 z b0 b1 b2.
  • 89.–91. (the same composition tree, repeated)
  • 93. Vertex-Disjoint Cycles Input: G = (V, E) Question: Are there k vertex–disjoint cycles?
  • 94. Vertex-Disjoint Cycles Input: G = (V, E) Question: Are there k vertex–disjoint cycles? Parameter: k.
  • 95. Vertex-Disjoint Cycles. Input: G = (V, E). Question: Are there k vertex–disjoint cycles? Parameter: k. Related problems: FVS has an O(k^2) kernel; Edge–Disjoint Cycles has an O(k^2 log^2 k) kernel.
  • 96. Vertex-Disjoint Cycles. Input: G = (V, E). Question: Are there k vertex–disjoint cycles? Parameter: k. Related problems: FVS has an O(k^2) kernel; Edge–Disjoint Cycles has an O(k^2 log^2 k) kernel. In contrast, Disjoint Factors transforms into Vertex–Disjoint Cycles in polynomial time.
  • 97. w = 1123343422. The graph Gw: vertices v1, . . . , v10, one per position of w (carrying the letters 1, 1, 2, 3, 3, 4, 3, 4, 2, 2), together with one extra vertex for each letter 1, 2, 3, 4. (Disjoint Factors ppt-reduces to Disjoint Cycles.)
  • 98. w = 1123343422, with Gw as above. Claim: w has all k disjoint factors ⇐⇒ Gw has k vertex–disjoint cycles.
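One natural way to realise the graph Gw suggested by the picture: a path on the positions of w, plus one extra vertex per letter joined to every position carrying that letter, so that a cycle through a letter vertex uses two occurrences of the letter together with the stretch of the path between them, i.e. exactly a factor. A minimal sketch of this assumed construction (not a verbatim transcription of the slide):

    def word_to_cycles_instance(word, k):
        # Hypothetical construction of G_w for a word over the letters '1'..'k'.
        # Vertices: ('v', i) for each position i, and ('u', c) for each letter c.
        n = len(word)
        edges = [(("v", i), ("v", i + 1)) for i in range(n - 1)]          # the path
        for c in (str(i) for i in range(1, k + 1)):
            edges += [(("u", c), ("v", i)) for i in range(n) if word[i] == c]
        return edges, k

    edges, k = word_to_cycles_instance("1123343422", 4)   # the word from the slide
    print(len(edges), k)

Under this construction, each of k vertex-disjoint cycles must pass through a distinct letter vertex, and the path segment it uses is precisely a factor for that letter, which is what the claim asserts.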
  • 99. PROBLEM KERNELS (how, why, what) | when: no polynomial kernels | Examples: satisfiability, vertex-disjoint cycles, disjoint factors | Workarounds: many kernels
  • 101. No polynomial kernel? Look for the best exponential or subexponential kernels...
  • 102. No polynomial kernel? Look for the best exponential or subexponential kernels... ... or build many polynomial kernels.
  • 103. Many polynomial kernels give us better algorithms: C(k^2, k) ≤ 2^{k log k} versus p(n) · c^k and p(n) · C(c^k, k).
  • 104. We say that a subdigraph T of a digraph D is an out–tree if T is an oriented tree with only one vertex r of in-degree zero (called the root). The Directed Maximum Leaf Out-Branching problem is to find an out-branching in a given digraph with the maximum number of leaves. Parameter: k.
  • 105. We say that a subdigraph T of a digraph D is an out-branching if T is an oriented spanning tree with only one vertex r of in-degree zero. The Directed Maximum Leaf Out-Branching problem is to find an out-branching in a given digraph with the maximum number of leaves. Parameter: k.
  • 106. We say that a subdigraph T of a digraph D is an out-branching if T is an oriented spanning tree with only one vertex r of in-degree zero. The Directed Maximum Leaf Out-Branching problem is to find an out-branching in a given digraph with the maximum number of leaves. Parameter: k.
  • 107. We say that a subdigraph T of a digraph D is an out-branching if T is an oriented spanning tree with only one vertex r of in-degree zero. The Directed k-Leaf Out-Branching problem is to find an out-branching in a given digraph with at least k leaves. Parameter: k.
  • 108. We say that a subdigraph T of a digraph D is an out-branching if T is an oriented spanning tree with only one vertex r of in-degree zero. The Directed k-Leaf Out-Branching problem is to find an out-branching in a given digraph with at least k leaves. Parameter: k.
  • 109. Rooted k-Leaf Out-Branching admits a kernel of size O(k^3). k-Leaf Out-Branching does not admit a polynomial kernel (proof deferred). However, it clearly admits n polynomial kernels (try all possible choices for the root, and apply the kernel for Rooted k-Leaf Out-Branching).
  • 110. Rooted k-Leaf Out-Branching admits a kernel of size O(k^3). k-Leaf Out-Branching does not admit a polynomial kernel (proof deferred). However, it clearly admits n polynomial kernels (try all possible choices for the root, and apply the kernel for Rooted k-Leaf Out-Branching).
  • 111. Rooted k-Leaf Out-Branching admits a kernel of size O(k^3). k-Leaf Out-Branching does not admit a polynomial kernel (proof deferred). However, it clearly admits n polynomial kernels (try all possible choices for the root, and apply the kernel for Rooted k-Leaf Out-Branching).
  • 112. Rooted k-Leaf Out-Tree admits a kernel of size O(k^3). k-Leaf Out-Branching does not admit a polynomial kernel (proof deferred). However, it clearly admits n polynomial kernels (try all possible choices for the root, and apply the kernel for Rooted k-Leaf Out-Branching).
  • 113. Rooted k-Leaf Out-Tree admits a kernel of size O(k^3). k-Leaf Out-Tree does not admit a polynomial kernel (composition via disjoint union). However, it clearly admits n polynomial kernels (try all possible choices for the root, and apply the kernel for Rooted k-Leaf Out-Branching).
  • 114. Rooted k-Leaf Out-Tree admits a kernel of size O(k^3). k-Leaf Out-Tree does not admit a polynomial kernel (composition via disjoint union). However, it clearly admits n polynomial kernels (try all possible choices for the root, and apply the kernel for Rooted k-Leaf Out-Tree).
  • 115. (D, k): decompose k–Leaf Out–Tree into n instances of Rooted k–Leaf Out–Tree, (D1, v1, k), (D2, v2, k), (D3, v3, k), · · ·, (Dn, vn, k). Apply kernelization (known for Rooted k–Leaf Out–Tree) to obtain (D′1, v1, k), (D′2, v2, k), (D′3, v3, k), · · ·, (D′n, vn, k). Reduce to instances of k–Leaf Out–Branching on Nice Willow Graphs (using NP–completeness): (W1, b1), (W2, b2), (W3, b3), · · ·, (Wn, bn).
  • 116. The same pipeline, with every willow instance now carrying the common parameter bmax: (W1, bmax), (W2, bmax), (W3, bmax), · · ·, (Wn, bmax).
  • 117. Composition (produces an instance of k–Leaf Out–Branching): (D′, bmax + 1). Kernelization (given by assumption): (D′′, k′′). NP–completeness reduction from k–Leaf Out–Branching back to k–Leaf Out–Tree: (D∗, k∗).
  • 118. A polynomial pseudo–kernelization K produces kernels whose size can be bounded by h(k) · n^{1−ε}, where k is the parameter of the problem, and h is polynomial. concluding remarks
  • 119. Analogous to the composition framework, there are algorithms (called Linear OR) whose existence helps us rule out polynomial pseudo–kernels. concluding remarks
  • 120. Analogous to the composition framework, there are algorithms (called Linear OR) whose existence helps us rule out polynomial pseudo–kernels. Most compositional algorithms can be extended to fit the definition of Linear OR. concluding remarks
  • 121. A strong kernel is one where the parameter of the kernelized instance is at most the parameter of the original instance. concluding remarks
  • 122. A strong kernel is one where the parameter of the kernelized instance is at most the parameter of the original instance. Mechanisms for ruling out strong kernels are simpler and rely on self–reductions and the hypothesis that P ≠ NP. concluding remarks
  • 123. A strong kernel is one where the parameter of the kernelized instance is at most the parameter of the original instance. Mechanisms for ruling out strong kernels are simpler and rely on self–reductions and the hypothesis that P ≠ NP. Example: k–Path is not likely to admit a strong kernel. concluding remarks
  • 124. There are no known ways of inferring that a problem is unlikely to have a kernel of size k^c for some specific c. Open Problems
  • 125. Can lower bound theorems be proved under some “well-believed” conjectures of parameterized complexity, for instance FPT ≠ XP, or FPT ≠ W[t] for some t ∈ N+? Open Problems
  • 126. A lower bound framework for ruling out p(k) · f(l)–sized kernels for problems with two parameters (k, l) would be useful. Open Problems
  • 127. “Many polynomial kernels” have been found only for the directed outbranching problem. It would be interesting to apply the technique to other problems that are not expected to have polynomial–sized kernels. Open Problems
  • 128. FIN