Monday, November 2nd, 2015

9:30 am – 10:30 am

We discuss lower and upper bounds on exact algorithms for different graph problems. 

11:00 am – 11:30 am
We give two recent examples of algorithms that are the fastest known for their respective problem and parameterization when there are few solutions. However, as the number of solutions grows, the algorithms get slower and other algorithms give the best bounds. Is this evidence that there are hard problems where uniqueness helps, or does it just mean that we haven't yet found the best algorithms for these problems? The two problems considered are Directed Hamiltonicity parameterized by the number of vertices, and Vertex Coloring in bounded-degree graphs parameterized by pathwidth. These results are in stark contrast to CNF satisfiability, where we know that uniqueness doesn't help: a faster exponential-time algorithm for Unique SAT implies a faster algorithm for general SAT.
11:30 am – 12:00 pm

In this talk we discuss parameterized and promised streaming.

2:00 pm – 3:00 pm
We introduce the Nondeterministic Strong Exponential Time Hypothesis (NSETH) as a natural extension of the Strong Exponential Time Hypothesis (SETH). We show that both refuting and proving NSETH would have interesting consequences.
 
In particular, we show that disproving NSETH would give new nontrivial circuit lower bounds. On the other hand, NSETH implies non-reducibility results, i.e., the absence of (deterministic) fine-grained reductions from SAT to a number of problems. As a consequence, we conclude that unless this hypothesis fails, problems such as 3-Sum, APSP and model checking of a large class of first-order graph properties cannot be shown to be SETH-hard using deterministic or zero-error probabilistic reductions.
 
Joint work with Marco L. Carmosino, Jiawei Gao, Russell Impagliazzo, Ivan Mikhailin and Ramamohan Paturi.
3:30 pm – 4:00 pm
Finite automata (FA) are among the basic structures every first-year student in Computer Science gets to know. Somehow, the impression is that many decision problems concerning FA are easy to solve. However, there are quite a number of questions that are indeed computationally difficult. We show various examples of how the (Strong) Exponential Time Hypothesis helps establish the limits of exponential-time algorithms for these cases.
4:00 pm – 4:30 pm

The first part of the talk focuses on algorithms for Max 2-CSP. A new lower bound for the algorithm by Scott and Sorkin (2007) shows that the existing running time analysis is tight. The lower bound seems to apply to a broad class of branching algorithms, but can be overcome by exploiting global properties of an instance. In particular, the running time upper bound has been improved (Gaspers, Sorkin, 2015) by using graph separators to guide the branching.

The second part of the talk discusses the time complexity of some enumeration problems. It surveys results for enumerating extremal vertex sets in graphs and presents new results for the number of minimal separators in graphs (Gaspers, Mackenzie, 2015) and the number of minimal transversals of rank-k hypergraphs (Cochefert, Couturier, Gaspers, Kratsch, 2015).

Tuesday, November 3rd, 2015

9:30 am – 10:30 am

This talk offers an introduction to the fine-grained complexity of counting problems. We review some known results from the polynomial time world, and we survey some known exact algorithms as well as hardness results under the counting version of the exponential time hypothesis (#ETH). Some problems we discuss are the permanent, computing the number of perfect matchings, counting satisfying assignments of k-CNF formulas, evaluating the chromatic and the Tutte polynomial, and counting solutions in constraint satisfaction problems (CSPs). On the hardness side, we delineate the block interpolation framework due to Curticapean (2015), which allows us to use polynomial interpolation to prove #ETH-hardness results.
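
As a concrete reference for the first two problems on this list, here is a minimal Python sketch (our own code, not from the surveyed works) of Ryser's inclusion-exclusion formula: it evaluates the permanent in O(2^n n^2) arithmetic operations, and the permanent of the biadjacency matrix of a bipartite graph counts its perfect matchings.

```python
from itertools import combinations

def permanent_ryser(a):
    """Permanent of an n x n matrix via Ryser's inclusion-exclusion
    formula: per(A) = (-1)^n * sum over column subsets S of
    (-1)^|S| * prod_i sum_{j in S} a[i][j]."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):          # all non-empty column subsets
        for cols in combinations(range(n), r):
            prod = 1
            for i in range(n):
                prod *= sum(a[i][j] for j in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total

# K_{2,2} has exactly 2 perfect matchings.
print(permanent_ryser([[1, 1], [1, 1]]))  # -> 2
```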

11:00 am – 11:30 am

The Subset Sum problem asks whether a given set of n positive integers contains a subset of elements that sum up to a given target t. It is an outstanding open question whether the O^*(2^{n/2})-time algorithm for Subset Sum by Horowitz and Sahni [J. ACM 1974] can be beaten in the worst-case setting by a "truly faster", O^*(2^{(0.5-\delta)n})-time algorithm, with some constant \delta > 0. We report some partial progress on this question as outlined in http://arxiv.org/pdf/1508.06019v1.pdf.
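
For reference, the meet-in-the-middle idea behind the Horowitz-Sahni bound fits in a few lines; a minimal Python sketch (details simplified, names are ours):

```python
from bisect import bisect_left

def subset_sums(xs):
    """All 2^|xs| subset sums of the list xs."""
    sums = [0]
    for x in xs:
        sums += [s + x for s in sums]
    return sums

def subset_sum(nums, t):
    """Meet in the middle: O*(2^{n/2}) time and space. Split the input,
    enumerate each half's subset sums, sort one side, and look for
    complementary pairs by binary search."""
    half = len(nums) // 2
    left = subset_sums(nums[:half])
    right = sorted(subset_sums(nums[half:]))
    for s in left:
        i = bisect_left(right, t - s)
        if i < len(right) and right[i] == t - s:
            return True
    return False

print(subset_sum([3, 34, 4, 12, 5, 2], 9))   # -> True (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))  # -> False
```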

Specifically, we study Subset Sum parameterized by the maximum bin size \beta, defined as the largest number of subsets of the n input integers that yield the same sum. For every \epsilon > 0 we give a truly faster algorithm for instances with \beta \leq 2^{(0.5-\epsilon)n}, as well as instances with \beta \geq 2^{0.661n}. Consequently, we also obtain a characterization in terms of the popular density parameter n/\log_2 t: if all instances of density at least 1.003 admit a truly faster algorithm, then so does every instance.

Joint work with Per Austrin, Petteri Kaski and Mikko Koivisto.

11:30 am – 12:00 pm

Given a collection of n subsets over a finite set of elements {1,2,…, m}, the goal of the Maximum k Subset Intersection problem is to select k subsets whose intersection has maximum cardinality. In this talk, we show that, under ETH, this problem has no f(k) \cdot n^{o(\sqrt{k})}-time approximation algorithm within ratio n^{o(1/\sqrt{k})} for any computable function f.
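
For scale, the trivial exact algorithm enumerates all k-subsets of the collection, i.e., n^{O(k)} candidates; the result above says that, under ETH, one cannot do substantially better even approximately. A minimal Python sketch of this baseline (our own code):

```python
from itertools import combinations

def max_k_subset_intersection(sets, k):
    """Brute force: check all C(n, k) choices of k subsets and return
    the indices maximizing the size of the common intersection."""
    best, best_size = None, -1
    for idx in combinations(range(len(sets)), k):
        common = set.intersection(*(sets[i] for i in idx))
        if len(common) > best_size:
            best, best_size = idx, len(common)
    return best, best_size

sets = [{1, 2, 3}, {2, 3, 4}, {3, 4, 5}, {1, 3, 5}]
print(max_k_subset_intersection(sets, 3))  # -> ((0, 1, 2), 1)
```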

2:00 pm – 3:00 pm

We discuss several challenges concerning (missing) lower bounds for fixed-parameter tractability results for problems arising in voting and related areas.

3:30 pm – 4:00 pm
We will present several lower bounds on the running times for both exact and approximation algorithms based on the exponential time hypothesis (ETH), mainly for scheduling and packing problems (including some open problems).
4:00 pm – 4:30 pm

An instance of the constraint satisfaction problem with $n$ variables over a domain of $d$ values can be solved by brute-force in $d^n$ steps (omitting a polynomial factor). In this talk, we investigate the existence of subexponential-time algorithms, that is, algorithms running in $d^{o(n)}$ steps, for various natural restrictions of the constraint satisfaction problem. We discuss both the constraint satisfaction problem in which all the constraints are given extensionally as tables, and that in which all the constraints are given intensionally in the form of global constraints.  We provide tight characterizations of the subexponential-time complexity of the aforementioned problems with respect to several natural structural parameters, which allows us to draw a detailed landscape of the subexponential-time complexity of the constraint satisfaction problem. Our analysis provides results indicating whether and when one can significantly improve on the brute-force search approach for solving the constraint satisfaction problem.
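
As a reference point, the $d^n$ brute force for the extensional (table) representation takes only a few lines; a minimal Python sketch with an illustrative interface of our own:

```python
from itertools import product

def solve_csp(n, domain, constraints):
    """d^n brute force for extensional CSP. Each constraint is a pair
    (scope, table): the tuple of variables it constrains and the set
    of value tuples it allows on that scope."""
    for assignment in product(domain, repeat=n):
        if all(tuple(assignment[v] for v in scope) in table
               for scope, table in constraints):
            return assignment
    return None

# x0 != x1 and x1 != x2 over the domain {0, 1, 2}, given as tables.
neq = {(a, b) for a in range(3) for b in range(3) if a != b}
print(solve_csp(3, range(3), [((0, 1), neq), ((1, 2), neq)]))
# -> (0, 1, 0)
```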

Wednesday, November 4th, 2015

9:30 am – 10:30 am

In a vertex subset problem we are given as input a universe U of size n, and a family F of subsets of the universe defined implicitly from the input. The task is to find a subset S in F of smallest possible size. For example, the Vertex Cover problem is a vertex subset problem where the input is a graph G, the universe is the vertex set of G, and the family F is the family of all vertex covers of G. Here a vertex set S is a vertex cover of G if every edge of G has at least one endpoint in S. Many problems, such as Vertex Cover, Feedback Vertex Set, Hitting Set and Minimum Weight Satisfiability, can be formalized as vertex subset problems. The trivial algorithm for such problems runs in time 2^n. We show that (essentially) any vertex subset problem that admits an FPT algorithm with running time c^k n^{O(1)}, where c is a constant and k is the size of the optimal solution, also admits an algorithm with running time (2-1/c)^n. In one stroke, this theorem improves the best known exact exponential-time algorithms for a number of problems, and gives tighter combinatorial bounds for several well-studied objects. The most natural variant of our algorithm is randomized; we also show how to get a deterministic algorithm with the same running time bounds, up to a sub-exponential factor. Our de-randomization relies on a new pseudo-random construction that may be of independent interest.
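
As a concrete example of the kind of c^k n^{O(1)} algorithm the theorem takes as input, here is the textbook 2^k branching algorithm for Vertex Cover (so c = 2, and the theorem then yields a 1.5^n exact algorithm); a minimal Python sketch, not the construction from the talk itself:

```python
def vertex_cover(edges, k):
    """Bounded search tree for Vertex Cover: any uncovered edge (u, v)
    forces u or v into the cover, giving at most 2^k leaves."""
    if not edges:
        return True   # every edge is covered
    if k == 0:
        return False  # budget exhausted but edges remain
    u, v = edges[0]
    # Branch 1: take u into the cover; branch 2: take v.
    rest_u = [e for e in edges if u not in e]
    rest_v = [e for e in edges if v not in e]
    return vertex_cover(rest_u, k - 1) or vertex_cover(rest_v, k - 1)

# A triangle has no vertex cover of size 1, but one of size 2.
triangle = [(0, 1), (1, 2), (0, 2)]
print(vertex_cover(triangle, 1))  # -> False
print(vertex_cover(triangle, 2))  # -> True
```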

(Joint work with Serge Gaspers, Fedor Fomin and Saket Saurabh)
 

11:00 am – 11:30 am
We present a special case of the Directed Steiner Network problem with maximum demand p that can be solved in time n^{O(p)}. We use the Exponential Time Hypothesis to show that this problem cannot be solved in time f(p) * n^{o(p)} for any function f.
 
Then we consider approximation algorithms. As an example, consider the Directed Steiner Tree problem. We show that it can be approximated within \ln n/2 in time 2^{O(\sqrt{n})} but not in time 2^{o(\sqrt{n})}. This gives a tight (up to lower-order factors in the exponent) running time for this kind of approximation. In fact, we give a tight running time for c \ln n approximation for the Directed Steiner Tree problem for any c < 1.
 
The ETH can be useful in different scenarios. We study a well-known problem in network design that admits an O(log n) approximation and a log log n lower bound under P ≠ NP. We show that, assuming the ETH, we can give a tight hardness of Omega(log n).
11:30 am – 12:00 pm
In 2010 Bjorklund [FOCS'10] published a surprising result, which states that Hamiltonian cycles in undirected graphs can be found in 1.66^n time, breaking the 2^n barrier. It was soon extended by Bjorklund, Husfeldt, Kaski and Koivisto to an algorithm that finds a path of length k in 1.66^k n^{O(1)} running time. 
 
Raible et al. [Algorithmica'13] posed an open problem: does the approach of Bjorklund et al. extend to finding trees of size k? In this work we show that the answer to this question is positive, provided that we restrict ourselves to trees with few leaves; namely, we show an algorithm running in time 1.66^k * 2^{l/2} n^{O(1)}, where l is the number of leaves. This has further consequences for the k-Internal Spanning Tree problem, including breaking the 2^n running time barrier.
Moreover, the Hamiltonicity problem was known to be easier when the input graph has low maximum degree (Eppstein [JGAA'07], Gebauer [ANALCO'08]). It is natural to ask whether the approach of Bjorklund et al. can also be made more efficient in these restricted cases. We give a positive answer, through the more general setting of vertex-colored graphs. It turns out that, apart from ordinary proper vertex coloring, applying fractional coloring or vector coloring also gives improved running time bounds in some graph classes.
 
Joint work with Andreas Bjorklund, Vikram Kamat and Meirav Zehavi.
2:00 pm – 3:00 pm

The study of counting problems has become a classical subfield of computational complexity theory since Valiant's seminal paper introduced the complexity class #P and proved that counting perfect matchings is complete for this class. Since then, counting problems have been refined along various dimensions, including approximate counting, counting on restricted graph classes, and counting modulo a fixed number. In this talk, we consider some of the most recent refinements, namely, the parameterized complexity and the exponential-time complexity of counting problems.

First, we will consider various parameterizations of the problem of counting perfect matchings, together with lower bounds under the counting exponential-time hypotheses #ETH and #SETH.

Then we turn our attention to counting general subgraphs H in a host graph G, parameterized by the size of H. This gives rise to the problems #Sub(C) for fixed graph classes C: For inputs H and G, where H is from C, we wish to count H-copies in G. We show that #Sub(C) is either polynomial-time solvable or #W[1]-complete.

Finally, we present block interpolation, a general framework for proving lower bounds under #ETH. This gives tight lower bounds for counting (perfect) matchings, vertex-covers, evaluating the Tutte polynomial, and various other problems.

3:30 pm – 4:00 pm
Let P be a fixed hereditary graph class. In the P-Completion problem, given a graph G and an integer k, we ask whether it is possible to add at most k edges to G to obtain a member of P. In recent years, completion problems have received significant attention from the perspective of parameterized complexity, with the standard parameterization by k.
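
For concreteness, the trivial n^{O(k)} baseline for P-Completion just tries every set of at most k non-edges, with membership in P supplied as a black-box predicate; a minimal Python sketch (the interface is ours):

```python
from itertools import combinations

def completion(n, edges, k, in_class):
    """Brute-force P-Completion: edges are pairs (u, v) with u < v.
    Returns a set of at most k edges whose addition puts the graph in
    the class (possibly the empty set), or None if none exists."""
    non_edges = [(u, v) for u in range(n) for v in range(u + 1, n)
                 if (u, v) not in edges]
    for r in range(k + 1):
        for extra in combinations(non_edges, r):
            if in_class(n, edges | set(extra)):
                return set(extra)
    return None

# Toy example: completing a path on 3 vertices into a clique.
is_clique = lambda n, e: len(e) == n * (n - 1) // 2
print(completion(3, {(0, 1), (1, 2)}, 1, is_clique))  # -> {(0, 2)}
```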
 
In our talk we first survey the history of the study of parameterized complexity of completion problems, including the breakthrough paper of Villanger, Heggernes, Paul, and Telle (SIAM J. Comp. 38(5):2007-2020, 2009) that settles fixed-parameter tractability of Interval Completion, as well as recent advancements on polynomial kernelization. Then, we move to the main topic, namely subexponential parameterized algorithms.
 
The first fixed-parameter algorithms for completion problems focused mostly on the 'forbidden induced subgraphs' definition of the graph class P in question. In 2012, Fomin and Villanger (SIAM J. Comput. 42(6):2197-2216, 2013) came up with a novel idea: instead focus on some structural definition of the class P, trying to build the modified output graph by dynamic programming. Slightly simplifying, we may say that the main technical contribution of their work is a bound, subexponential in k, on the number of reasonable 'partial chordal graphs' for an input instance (G,k) of Chordal Completion. Consequently, Chordal Completion can be solved in subexponential FPT time.
 
Following the approach of Fomin and Villanger, in the past three years subexponential parameterized algorithms were shown for the classes of chain, split, threshold, trivially perfect, pseudosplit, and, most recently, proper interval and interval graphs. Moreover, a few lower bounds for related graph classes were found.
 
In the talk I will survey the above results, and sketch the main ideas behind them.
4:00 pm – 4:30 pm
In 2011, Fomin and Villanger presented a surprising parameterized algorithm for the Minimum Fill-in problem (add at most k edges to a graph in order to make it chordal) that works in subexponential parameterized time; more precisely, in time 2^{O(sqrt{k} log k)} * poly(n). Following their approach, several algorithms with similar running times were designed for related graph modification problems. In all these cases, the tackled problem is completion to some subclass Pi of chordal graphs, i.e., to add as few edges as possible in order to make the given graph belong to Pi. Numerous examples show that diverging from this template even slightly makes it possible to refute the existence of such efficient parameterized algorithms under ETH; this makes the considered completion problems curious singularity points on the complexity landscape. Also, it is currently unknown whether even faster algorithms, e.g. with running time 2^{o(sqrt{k})} * poly(n), are possible for Minimum Fill-in and related problems.
 
In this talk we will briefly survey the known algorithmic results on subexponential parameterized algorithms for completion problems to subclasses of chordal graphs. Then we will sketch a new work that attempts to answer the question of finding even faster algorithms for these problems by (a) augmenting the classic reductions from ETH by tools related to the PCP theorem and (b) basing the lower bounds on a certain conjecture regarding inapproximability of the Minimum Bisection problem.
 
The talk will be based on a joint work with Ivan Bliznets, Marek Cygan, Pawel Komosa, and Lukas Mach.

Thursday, November 5th, 2015

9:30 am – 10:30 am

In this talk we propose a new framework for analyzing the performance of preprocessing algorithms. Our framework builds on the notion of kernelization from parameterized complexity. However, as opposed to the original notion of kernelization, our definitions combine nicely with approximation algorithms and heuristics. The key new definition is that of a polynomial-size α-approximate kernel.

During the talk we will set up the framework and exemplify its utility via several examples.
 
 
11:00 am – 11:30 am

Narrow sieves, representative sets and divide-and-color are three breakthrough techniques related to color coding, which led to the design of extremely fast parameterized algorithms. In this talk, I will discuss the power and limitations of these techniques. I will also briefly address some recent developments related to these techniques, including general schemes for mixing them.
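
To make the setting concrete, here is a minimal Python sketch (all implementation details ours) of classic color coding itself, as introduced by Alon, Yuster and Zwick, applied to k-Path: randomly k-color the vertices, then search for a 'colorful' path by dynamic programming over color subsets.

```python
import random

def colorful_k_path(adj, k, tries=200):
    """Color coding for k-Path. A fixed k-path becomes colorful
    (all colors distinct) with probability k!/k^k > e^{-k}, so
    O(e^k) random colorings succeed with constant probability."""
    vertices = list(adj)
    for _ in range(tries):
        color = {v: random.randrange(k) for v in vertices}
        # reach[v]: bitmasks of color sets of colorful paths ending at v.
        reach = {v: {1 << color[v]} for v in vertices}
        for _ in range(k - 1):  # extend every path by one vertex
            reach = {v: {m | (1 << color[v])
                         for u in adj[v] for m in reach[u]
                         if not m & (1 << color[v])}
                     for v in vertices}
        if any(reach.values()):  # some colorful k-vertex path exists
            return True
    return False

# A path on 5 vertices certainly contains a path on 4 vertices.
path5 = {i: [j for j in (i - 1, i + 1) if 0 <= j < 5] for i in range(5)}
print(colorful_k_path(path5, 4))  # -> True (with high probability)
```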

11:30 am – 12:00 pm

Dynamic programming is often useful to speed up FPT algorithms. However, it often uses a lot of memory. We show that the high space complexity is unavoidable, by proving tight lower bounds for several problems.

2:30 pm – 3:00 pm

In the Graph Motif problem, we are given as input a vertex-colored graph $H$ (the host graph) and a multiset of colors $M$ (the motif). Our task is to decide whether $H$ has a connected set of vertices whose multiset of colors exactly agrees with $M$.
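
To fix the definition, a tiny exponential-time reference implementation (our own code, in no way reflecting the fast algorithm below): enumerate all $|M|$-subsets of vertices, keep the connected ones, and compare color multisets.

```python
from collections import Counter
from itertools import combinations

def has_motif(adj, colors, motif):
    """Naive Graph Motif check, exponential in the size of the host."""
    target = Counter(motif)
    for subset in combinations(adj, len(motif)):
        if Counter(colors[v] for v in subset) != target:
            continue
        # Depth-first search restricted to the chosen vertices.
        chosen, seen, stack = set(subset), set(), [subset[0]]
        while stack:
            v = stack.pop()
            if v not in seen:
                seen.add(v)
                stack += [u for u in adj[v] if u in chosen]
        if seen == chosen:  # the subset is connected
            return True
    return False

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}  # the path 0-1-2-3
colors = {0: 'r', 1: 'g', 2: 'b', 3: 'r'}
print(has_motif(adj, colors, ['r', 'g']))  # -> True  (vertices {0, 1})
print(has_motif(adj, colors, ['r', 'r']))  # -> False (0, 3 not adjacent)
```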

The worst-case time complexity of the Graph Motif problem is well-understood in a fairly tight sense:

1. Assuming that the host graph has $m$ edges and that the motif has size $k$, there exists a randomized algorithm that solves the problem in time $O(2^k k^2 (\log k)^2 m)$.

2. Unless there is a breakthrough in the time complexity of the Set Cover problem, the Graph Motif problem does not admit an algorithm that runs in time $O^*((2-\epsilon)^k)$ for any constant $\epsilon>0$.

This talk reviews theory and engineering effort that resulted in an open-source algorithm implementation that scales in practice (as long as $k$ remains small) to graphs with hundreds of millions of edges on a single compute node.

Joint work with Andreas Björklund, Łukasz Kowalik, and Juho Lauri.

3:30 pm – 4:00 pm
Most interesting (Max)-CSPs are APX-hard. In fact, very often the (trivial) approximation ratio achievable in polynomial time cannot be improved even if we allow slightly sub-exponential time, unless the ETH is false. Nevertheless, it has been known since the '90s that density helps in approximating Max-CSPs, and in particular, the work of Arora, Karger and Karpinski established that Max-k-CSP admits a PTAS for instances with \Omega(n^k) constraints.
 
Given that sub-exponential time does not help us approximate Max-CSP in general, the topic of this talk is to examine whether it can help us relax the (strict) density requirement needed for obtaining an approximation scheme. We give an answer that is partly positive and partly negative. On the algorithmic side, we show that by adapting the known exhaustive sampling technique (and using an appropriately larger sample) we can obtain an approximation scheme for Max-k-CSP instances with \Omega(n^{k-1+\delta}) constraints running in (sub-exponential) time 2^{n^{1-\delta}}. For CSPs with arity 2, such as Max CUT, this implies a sub-exponential approximation scheme even for almost sparse instances. On the other hand, we show that this trade-off cannot be extended to instances with fewer than n^{k-1} constraints, and that the running time we obtain is essentially best possible (under the ETH).
 
This is joint work with Dimitris Fotakis and Vangelis Paschos.

Friday, November 6th, 2015

9:30 am – 10:30 am
Most of the classical NP-hard problems remain NP-hard when restricted to planar graphs, and only exponential-time algorithms are known for the exact solution of these planar problems. However, in many cases, the exponential-time algorithms on planar graphs are significantly faster than the algorithms for general graphs: for example, 3-Coloring can be solved in time 2^O(sqrt(n)) in an n-vertex planar graph, whereas only 2^O(n)-time algorithms are known for general graphs. For various planar problems, we often see a square root appearing in the running time of the best algorithms, e.g., the running time is often of the form 2^O(sqrt(n)), n^O(sqrt(k)), or 2^O(sqrt(k)) * n. By now, we have a good understanding of why this square root appears. On the algorithmic side, most of these algorithms rely on the notion of treewidth and its relation to grid minors in planar graphs (but sometimes this connection is not obvious and takes some work to exploit). On the lower bound side, under a complexity assumption called the Exponential Time Hypothesis (ETH), we can show that these algorithms are essentially best possible, and therefore the square root has to appear in the running time.
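
The algorithmic half of this picture can be summarized in one line (a sketch, with constants suppressed): by the excluded grid theorem for planar graphs, an n-vertex planar graph has treewidth O(sqrt(n)), and standard dynamic programming over a tree decomposition of width w solves 3-Coloring in time 3^w * n^O(1), so the combination runs in 3^O(sqrt(n)) * n^O(1) = 2^O(sqrt(n)).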
 
In the talk, I will present a survey of the basic algorithmic and complexity results, and discuss some of the very recent developments in the area. 
11:00 am – 11:30 am

We give improved deterministic algorithms solving MAX-SAT on instances with cn clauses, achieving running time 2^{n-\Omega(n/c)} with a polynomial-space algorithm, and running time 2^{n-\Omega(n/\sqrt{c})} with an exponential-space algorithm.

11:30 am – 12:00 pm

We discuss applications of the ETH to obtain tight hardness results for a range of hard graph problems, such as vertex-partitioning problems.

2:00 pm – 2:30 pm

In this talk, I will explain the miniaturization mapping between parameterized problems and classical problems. I will show that it establishes an equivalence not only between parameterized time complexity and exponential time complexity, but also between parameterized space complexity and the so-called linear space complexity. Therefore, many lower bounds in parameterized complexity can be translated to classical complexity, and vice versa.

2:30 pm – 3:00 pm
Substantial work has been done on tight algorithmic lower and upper bounds for various graph problems parameterized by the treewidth of the input graph, but, in comparison, a great deal less is known about another important graph parameter, clique-width, introduced by Courcelle and Olariu.
 
By the celebrated meta-theorem of Courcelle, Makowsky, and Rotics, all problems expressible in MS_1-logic are FPT when parameterized by the clique-width. On the other hand, if we restrict ourselves to graphs of clique-width at most $w$, then there are many natural problems for which the running time of the best known algorithms is of the form $n^{f(w)}$, where $n$ is the input length and $f$ is some function. We survey algorithmic bounds for problems of this type. In particular, we discuss the asymptotically tight bounds for the Max-Cut and Edge Dominating Set problems, which, on inputs of size $n$ and clique-width at most $w$, i) can be solved in time $n^{O(w)}$; and ii) cannot be solved in time $f(w) n^{o(w)}$ for any function $f$ of $w$, unless the Exponential Time Hypothesis (ETH) fails.
 
Further we consider tight algorithmic lower and upper bounds for some FPT-problems when the clique-width of the input graph is one of the parameters. We show that  for $n$-vertex graphs of clique-width at most $w$, i) $d$-Regular Induced Subgraph and Minimum Subgraph of Minimum Degree at least $d$ can be solved in time $d^{O(w)}\cdot n$, but they cannot be solved in time $2^{o(w\log d)}n^{O(1)}$ unless ETH fails; ii) the $k$-Dense (Sparse) Subgraph problem can be solved in time $k^{O(w)}n$, but it cannot be solved in time $2^{o(w\log k)}n^{O(1)}$ unless ETH fails.
 
The talk is based on the joint work with Fedor Fomin, Daniel Lokshtanov,  Saket Saurabh, Hajo Broersma and Viresh Patel.
3:30 pm – 4:00 pm

In this talk, I will report satisfiability algorithms for (1) systems of degree-k polynomial equations over finite fields, (2) systems of polynomial equations over GF[2] and (3) bounded-depth unbounded-fan-in circuits with AND, OR, NOT, MOD p gates (ACC^0[p] circuits). (1), (2) and (3) are generalizations of the satisfiability problems of k-CNF, CNF and AC^0 respectively and the running time of each algorithm is still almost the same as that of the current best algorithm for each corresponding problem.

Joint work with Daniel Lokshtanov, Ramamohan Paturi and Ryan Williams.

4:00 pm – 4:30 pm

Channel Assignment is the problem of assigning frequencies (integer numbers) to radio emitters (vertices of a graph) such that the emitters do not interfere with each other (weights on edges represent minimum allowed frequency differences) and the span of assigned frequencies is minimum.

Subgraph Isomorphism is a well-known problem of checking if one graph is isomorphic to a subgraph of another.

Both of them have brute-force O*(n!) algorithms. In both cases this is optimal. It turns out that the proofs of the lower bounds for these quite unrelated problems can share some techniques.
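
For concreteness, the O*(n!) brute force for the second problem just tries every injective mapping of pattern vertices into host vertices; a minimal Python sketch (our own code):

```python
from itertools import permutations

def subgraph_isomorphic(pattern_edges, p, host_edges, n):
    """O*(n!) brute force for Subgraph Isomorphism: try every injection
    of the p pattern vertices into the n host vertices and check that
    each pattern edge lands on a host edge."""
    host = {frozenset(e) for e in host_edges}
    for f in permutations(range(n), p):
        if all(frozenset((f[u], f[v])) in host for u, v in pattern_edges):
            return True
    return False

# A triangle embeds into K4 but not into a 4-cycle.
k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
triangle = [(0, 1), (1, 2), (2, 0)]
print(subgraph_isomorphic(triangle, 3, k4, 4))  # -> True
print(subgraph_isomorphic(triangle, 3, c4, 4))  # -> False
```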