Monday, November 30th, 2015

9:15 am – 10:00 am

The 3SUM Conjecture has proven to be a valuable tool for establishing conditional lower bounds on dynamic data structures and graph problems. This line of work was initiated by Patrascu (STOC 2010), who reduced 3SUM to an offline SetDisjointness problem. However, the reduction introduced by Patrascu suffers from several inefficiencies, making it difficult to obtain tight conditional lower bounds from the 3SUM conjecture.

In this talk I'll discuss the deficiencies of Patrascu's framework, then give new and efficient reductions from 3SUM to offline SetDisjointness and offline SetIntersection (the reporting version of SetDisjointness), which lead to polynomially higher lower bounds on several problems.
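For context (not part of the talk itself), here is a minimal sketch of the 3SUM problem underlying the conjecture, together with the classic sort-plus-two-pointers algorithm: given n integers, decide whether some three of them sum to zero. The conjecture asserts that no algorithm runs in O(n^{2-\epsilon}) time for any \epsilon > 0.

```python
# Classic O(n^2) algorithm for 3SUM: sort, then for each choice of the first
# element scan the remainder with two pointers.
def has_3sum(nums):
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1      # sum too small: move the lower pointer up
            else:
                hi -= 1      # sum too large: move the upper pointer down
    return False

print(has_3sum([5, -9, 1, 4, -2, 8]))  # True: 5 + (-9) + 4 = 0
print(has_3sum([1, 2, 3, 4]))          # False
```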

2:00 pm – 2:45 pm

Patrascu reduced 3SUM to listing triangles. We give a reduction that goes in the opposite direction, and discuss some consequences. Joint work with Zahra Jafargholi.

3:15 pm – 3:40 pm

The class P attempts to capture the efficiently solvable computational tasks. It is full of practically relevant problems, with varied and fascinating combinatorial structure. A line of work that has been gaining a lot of attention attempts to obtain a better understanding of the time complexity of the problems in P by proving reductions between them.

In this talk, I will present the most current landscape of P, summarizing the exciting progress of recent years (most of which will be covered in this workshop). Then, I will highlight directions for future work that are yet to be explored and discuss the challenges that are involved.

Tuesday, December 1st, 2015

9:15 am – 10:00 am

Given a context-free language and an input string, language edit distance measures the minimum number of insertions, deletions, and substitutions required to map the string to any valid member of the language. In this talk, I will tell you what I have learnt playing with this problem: upper bounds, lower bounds, and their implications.

10:00 am – 10:45 am

I will briefly discuss some of the main techniques involved in showing time lower bounds for streaming problems, focusing on one in particular, the information transfer method. I will then go on to present some of the barriers to making further progress, presenting a simple looking mathematical conjecture with a probabilistic combinatorics flavour that derives from this work. The conjecture is related to the classic "coin weighing with a spring scale" puzzle but has so far resisted our best efforts at resolution.

11:15 am – 12:00 pm

We show how a "worst case to random case" reduction for high-dimensional pointsets leads to optimal hashing for approximate nearest neighbor search. 

Joint work with Ilya Razenshteyn.

2:00 pm – 2:45 pm

We will discuss a SETH-based lower bound for the longest common subsequence problem that matches the simple quadratic-time dynamic programming solution. Beyond the simple algorithm, longest common subsequence has a rich literature on special cases, more precisely, on algorithms that are fast when certain input parameters are small. We show how to extend the conditional lower bounds to this parameterized setting. This is joint work with Marvin Künnemann.
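As a point of reference, here is a minimal sketch of the simple quadratic-time dynamic program mentioned above (a textbook implementation, not code from the talk).

```python
# O(n*m) dynamic program for the longest common subsequence (LCS).
def lcs_length(a, b):
    n, m = len(a), len(b)
    # dp[i][j] = length of an LCS of the prefixes a[:i] and b[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. the subsequence "BCBA"
```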

2:45 pm – 3:30 pm

We prove conditional hardness results for a number of string problems. Amongst others, this includes hardness for regular expression pattern matching, dynamic exact matching with wildcards, and two pattern indexing. The results are based on ongoing work with Clifford and Grønlund, and previous results with Munro, Nielsen and Thankachan.

4:00 pm – 4:25 pm

Regular expressions are a fundamental notion in formal language theory, and are frequently used in computer science to define search patterns. In particular, regular expression matching is a widely used computational primitive, employed in many programming languages and text processing utilities. A classic algorithm for regular expression matching constructs and simulates a non-deterministic finite automaton corresponding to the expression, resulting in an $O(m n)$ running time (where $m$ is the length of the pattern and $n$ is the length of the text). This running time can be improved slightly (by a logarithmic factor), but no significantly faster solutions are known. At the same time, much faster algorithms exist for various special cases of regular expressions, including dictionary matching, wild-card matching, subset matching, etc.
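To make the classic $O(m n)$ approach concrete, here is a minimal sketch (not from the paper) of the state-set simulation of an NFA. The tiny automaton below is hand-built for the illustrative pattern (a|b)*c rather than constructed from an arbitrary expression, and epsilon transitions are assumed to have already been eliminated.

```python
# State-set simulation of an NFA: each of the n text characters is processed
# by advancing a set of at most m states, giving the O(m n) bound.  The check
# is whether the whole text matches the expression.
def match(text, nfa, start, accept):
    """nfa maps (state, char) -> set of next states (no epsilon moves)."""
    current = {start}
    for ch in text:
        nxt = set()
        for state in current:
            nxt |= nfa.get((state, ch), set())
        current = nxt
    return bool(current & accept)

# Hand-built 2-state NFA for (a|b)*c: stay in state 0 on 'a' or 'b',
# move to the accepting state 1 on 'c'.
NFA = {
    (0, 'a'): {0},
    (0, 'b'): {0},
    (0, 'c'): {1},
}

print(match("ababc", NFA, start=0, accept={1}))  # True
print(match("abca", NFA, start=0, accept={1}))   # False
```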

In this paper we show that the complexity of regular expression matching can be characterized based on its *depth* (when interpreted as a formula). Very roughly, our results state that for expressions involving concatenation, or, and "Kleene plus", the following dichotomy holds:

(a) Matching regular expressions of depth two (involving any combination of the above operators) can be solved in near-linear time. In particular, this case covers the aforementioned variants of regular expression matching amenable to fast algorithms, and introduces new ones.

(b) Matching regular expressions of depth three (involving any combination of the above operators) that are not reducible to some depth-two expression cannot be solved in sub-quadratic time unless the Strong Exponential Time Hypothesis is false.

Wednesday, December 2nd, 2015

9:15 am – 10:00 am

I will describe some subcubic reductions between APSP, Diameter, and some graph centrality problems such as Radius, Median, and (Approximate) Betweenness Centrality. This is joint work with Amir Abboud and Virginia Vassilevska Williams (SODA'15).

10:00 am – 10:45 am

Game graphs are used in model checking to verify the existence of desirable properties of a system. We will give an introduction to the graph algorithmic problems that arise on game graphs and present the state of the art in upper and (conditional) lower bounds on the running time for solving these problems. This is joint work with Krishnendu Chatterjee, Wolfgang Dvorak and Veronika Loitzenbauer.

11:15 am – 12:00 pm

A recent and very active line of work achieves tight lower bounds for fundamental problems under the Strong Exponential Time Hypothesis (SETH). A celebrated result of Backurs and Indyk (STOC'15) proves that the Edit Distance of two sequences of equal length cannot be computed in strongly subquadratic time under SETH. The result was extended by follow-up works to simpler looking problems like finding the Longest Common Subsequence (LCS).
 
SETH is a very strong assumption: it asserts that even linear size CNF formulas cannot be analyzed for satisfiability with an exponential speedup over exhaustive search. We consider much safer assumptions, e.g. that such a speedup is impossible for SAT on much more expressive representations, like subexponential-size NC circuits. Intuitively, this seems much more plausible: NC circuits can implement complex cryptographic primitives, while CNFs cannot even approximately compute an XOR of bits.
 
Our main result is a surprising reduction from SAT on Branching Programs to fundamental problems in P like Edit Distance, LCS, and many others. Truly subquadratic algorithms for these problems therefore have consequences that we consider to be far more remarkable than merely faster CNF SAT algorithms. 
For example, SAT on arbitrary o(n)-depth bounded fan-in circuits (and therefore also NC-Circuit-SAT) can be solved in (2-eps)^n time.
 
An interesting feature of our work is that we can prove major consequences even from mildly subquadratic algorithms for Edit Distance or LCS.
For example, we show that if we can shave an arbitrarily large polylog factor from n^2 for Edit Distance then E^NP does not have non-uniform NC^1 circuits.

2:00 pm – 2:45 pm

In joint work with Timothy Chan (to appear in SODA'16), we give deterministic algorithms for APSP, Orthogonal Vectors, Batch Partial Match, etc., that essentially match the randomized running times. The talk will give algorithms researchers even more reason to know complexity theory.

2:45 pm3:30 pm

Breaking the worst-case update time of Frederickson [STOC 1983] and Eppstein et al. [FOCS 1992] for the dynamic minimum spanning tree (dynamic MST) problem has long been an elusive goal. In this talk I will discuss some of the barriers to this goal in the emergency planning setting as formulated by Patrascu and Thorup [FOCS 2007].

4:00 pm – 4:25 pm

Despite the recent success of conditional lower bounds, sometimes the objection is voiced that such lower bounds are only as useful as the underlying hypotheses are justified. Researchers disbelieving in the Strong Exponential Time Hypothesis (SETH) might feel that lower bounds conditional on SETH are irrelevant to them. However, fine-grained reductions yield much more insight and have surprising uses. One particularly interesting example is the recurring phenomenon that conditional lower bounds can inspire algorithmic improvements. In this talk we will discuss and illustrate this phenomenon, and as a case in point show how to obtain a faster algorithm for approximating the Fréchet distance on realistic curves by examining why the SETH-based lower bound of [Bringmann, FOCS'14] could not be strengthened.

Thursday, December 3rd, 2015

9:15 am – 10:00 am

We present a deterministic algorithm that computes the edge connectivity of a simple undirected unweighted graph G with n vertices and m edges in near-linear time. This is the first o(mn)-time deterministic algorithm for the problem. Our algorithm is easily extended to find a concrete minimum edge-cut. In fact, we can construct the classic cactus representation of all minimum cuts in near-linear time.

The previous fastest deterministic algorithm, by Gabow from STOC'91, took ~O(m + k^2 n) time, where k is the edge connectivity, but k could be as big as n-1.

At STOC'96 Karger presented a randomized near linear time Monte Carlo algorithm for the minimum cut problem. As he points out, there is no better way of certifying the minimality of the returned cut than to use Gabow's slower deterministic algorithm and compare sizes.

Our main technical contribution is a near-linear time algorithm that contracts vertex sets of a simple input graph G with minimum degree d, producing a multigraph with ~O(m/d) edges that preserves all minimum cuts of G with at least 2 vertices on each side.

In our deterministic near-linear time algorithm, we will decompose the problem via low-conductance cuts found using PageRank a la Brin and Page (1998), as analyzed by Andersen, Chung, and Lang at FOCS'06. Normally such algorithms for low-conductance cuts are randomized Monte Carlo algorithms, because they rely on guessing a good start vertex. However, in our case, we have so much structure that no guessing is needed.
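As an illustration of the contraction primitive this relies on (not the algorithm that finds the vertex sets), here is a minimal sketch of contracting a vertex set in a multigraph while keeping parallel edges; every cut that does not split the contracted set is preserved.

```python
# Contract a vertex set into a single node of a multigraph, keeping parallel
# edges and discarding the self-loops created inside the contracted set.
def contract(edges, vertex_set, new_name):
    """edges: list of (u, v) pairs of a multigraph."""
    out = []
    for u, v in edges:
        u2 = new_name if u in vertex_set else u
        v2 = new_name if v in vertex_set else v
        if u2 != v2:                 # drop self-loops inside the set
            out.append((u2, v2))
    return out

edges = [(1, 2), (2, 3), (1, 3), (3, 4)]
print(contract(edges, {1, 2, 3}, "C"))  # [('C', 4)]
```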

10:00 am – 10:45 am

The problem of finding multiple simple shortest paths in a weighted directed graph G=(V,E) has many applications, and is considerably more difficult than the corresponding problem when cycles are allowed in the paths. Even for a single source-sink pair, it is known that two simple shortest paths cannot be found in time polynomially smaller than n^3 (where n=|V|) unless the All-Pairs Shortest Paths problem can be solved in a similar time bound. We consider the all-pairs version of the problem, and we give a new algorithm to find k simple shortest paths for all pairs of vertices. For k=2, our algorithm runs in O(mn + n^2 log n) time (where m=|E|), which is almost the same bound as for the single pair case, and for k=3 we improve earlier bounds. Our results are based on forming suitable path extensions to find simple shortest paths; this method is different from the `detour finding' technique used in most of the prior work on simple shortest paths, replacement paths, and distance sensitivity oracles. We also present algorithms to find $k$ simple cycles through a vertex and $k$ simple cycles through all vertices.
 
Enumerating simple cycles is a classical, well-studied problem. We present a new ~O(mn) time algorithm to enumerate each successive simple shortest cycle in non-decreasing order of weight.
 
We show that all of the above problems are at least as hard as finding a minimum weight cycle in an O(n)-node, O(m)-edge graph. In contrast to this result, we give a new algorithm, again through suitable path extensions, that enumerates each successive simple path in nondecreasing order of weight in near linear time.
 
Joint work with Udit Agarwal.
 
11:15 am – 12:00 pm

No abstract available.

2:00 pm – 2:45 pm

Despite the success in understanding the low-polynomial-time complexity of combinatorial problems, little is known about the exact polynomial-time complexity of linear algebra problems. I will discuss a few variants of the low rank approximation problem, including Frobenius, robust, weighted, and spectral low rank approximation. I will show how sketching techniques can be used to obtain optimal input sparsity time algorithms for Frobenius and robust versions of low rank approximation. I will also discuss several lower bound techniques used for robust and weighted low rank approximation. The main open question is whether there is an input sparsity time algorithm for spectral low rank approximation. I will describe what is known about this problem.
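As a hedged illustration of the sketch-and-solve paradigm, here is a minimal randomized routine for Frobenius-norm low-rank approximation. For clarity it uses a dense Gaussian sketch rather than the sparse, input-sparsity-time sketches (such as CountSketch) discussed in the talk; all names and parameters are illustrative.

```python
import numpy as np

def low_rank_approx(A, k, oversample=10):
    """Return a rank-k matrix that is close (with high probability) to the
    best rank-k Frobenius-norm approximation of A."""
    m, n = A.shape
    S = np.random.randn(n, k + oversample)     # random test matrix
    Q, _ = np.linalg.qr(A @ S)                 # orthonormal basis for the sketched column space
    B = Q.T @ A                                # small (k + oversample) x n problem
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return Q @ (U[:, :k] * s[:k]) @ Vt[:k, :]  # lift the truncated SVD back up

A = np.random.randn(400, 50) @ np.random.randn(50, 300)   # a (roughly) rank-50 input
err = np.linalg.norm(A - low_rank_approx(A, k=50)) / np.linalg.norm(A)
print(err)  # close to 0
```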

2:45 pm – 3:10 pm

We show that, assuming the (deterministic) Exponential Time Hypothesis, distinguishing between a graph with an induced k-clique and a graph in which all k-subgraphs have density at most 1-\epsilon requires n^{\tilde{\Omega}(\log n)} time. Our result essentially matches the quasi-polynomial algorithms of Feige and Seltser [FS97] and Barman [Bar15] for this problem, and is the first one to rule out an additive PTAS for Densest k-Subgraph. We further strengthen this result by showing that our lower bound continues to hold when, in the soundness case, even subgraphs smaller by a near-polynomial factor (k' = k \cdot 2^{-\tilde{\Omega}(\log n)}) are assumed to be at most (1-\epsilon)-dense.

Our reduction is very simple and inspired by recent applications of the "birthday repetition" technique [AIM14, BKW15]. Our analysis relies on information-theoretic machinery and is similar in spirit to analyzing a parallel repetition of two-prover games in which the provers may choose to answer some challenges multiple times, while completely ignoring other challenges.

3:10 pm – 3:35 pm

We consider the stable matching problem when the preference lists are not given explicitly but are represented in a succinct way, and ask whether the problem becomes computationally easier. We give subquadratic algorithms for finding a stable matching in special cases of two very natural succinct representations of the problem, the d-attribute and d-list models. We also give algorithms for verifying a stable matching in the same models. We further show that for d = \omega(\log n), both finding and verifying a stable matching in the d-attribute model require quadratic time assuming the Strong Exponential Time Hypothesis. The d-attribute model is therefore as hard as the general case for large enough d.
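As a small illustration (our notation, not the talk's) of how preferences can be represented succinctly in the d-attribute model: each participant has d numeric attributes and a weight vector and ranks the other side by inner products, so an instance is described with O(nd) numbers instead of explicit length-n preference lists.

```python
import numpy as np

d, n = 3, 4
rng = np.random.default_rng(0)
women_attrs = rng.random((n, d))   # attributes of one side
men_weights = rng.random((n, d))   # each man's weight vector over those attributes
# (the other direction is symmetric)

def prefers(weights, attrs_a, attrs_b):
    """Does a participant with this weight vector prefer candidate a to b?"""
    return weights @ attrs_a > weights @ attrs_b

# Man 0's implicit preference list over the women, best first,
# recovered from O(d)-size data rather than stored explicitly.
scores = women_attrs @ men_weights[0]
print(np.argsort(-scores))
print(prefers(men_weights[0], women_attrs[1], women_attrs[2]))
```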

Joint work with Stefan Schneider and Ramamohan Paturi.

4:05 pm – 4:30 pm

The radius and diameter are fundamental graph parameters, with several natural definitions for directed graphs. All versions can be solved via reduction to all-pairs shortest paths (APSP). 

However, solving APSP on $n$-node graphs requires $\Omega(n^2)$ time, even in sparse graphs. When can radius and diameter in sparse graphs be solved in truly subquadratic time, and when is such an algorithm unlikely?

Using the Orthogonal Vectors conjecture, and a new differently-quantified variant that we call the Hitting Set conjecture, we give inapproximability results for truly subquadratic algorithms. Motivated by these conditional lower bounds, we also find truly subquadratic algorithms with matching approximation guarantees for most versions. For example, there is a $2$-approximation algorithm for directed Radius with one-way distances that runs in $\tilde{O}(m\sqrt{n})$ time, while a $(2-\delta)$-approximation algorithm in $O(n^{2-\epsilon})$ time is unlikely.

Using these conjectures, we can also rule out a $(3/2-\delta)$-approximation algorithm for any version that runs on graphs of treewidth $k$ in $2^{o(k)}n^{2-\epsilon}$ time, while all versions can be solved in $2^{O(k\log{k})}n^{1+o(1)}$ time. This shows an interesting tradeoff between the dependence on $k$ and the polynomial factor of an FPT algorithm, and serves as a first result for our Fixed Parameter Tractability in P framework.
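For intuition about why a factor-2 guarantee can be cheap, here is the folklore single-BFS 2-approximation of the diameter of an undirected unweighted graph, a much simpler relative of (and not to be confused with) the $\tilde{O}(m\sqrt{n})$ algorithms in the talk.

```python
from collections import deque

def eccentricity(adj, source):
    """Distance from source to the farthest vertex; adj maps vertex -> neighbours."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return max(dist.values())

# Path 0-1-2-3-4 has diameter 4; one BFS from vertex 2 reports eccentricity 2,
# and in general ecc(v) <= diameter <= 2 * ecc(v) for any vertex v.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(eccentricity(path, 2))  # 2
```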

4:30 pm – 4:55 pm

The k-Orthogonal Vectors problem defined on sparse structures is a special case of model checking for (k + 1)-quantifier first-order sentences. Under the Strong Exponential Time Hypothesis (SETH), it requires ~Ω(m^k) time. This paper shows that k-Orthogonal Vectors on sparse structures is complete, under probabilistic reductions, for the class of model-checking problems for sentences with k existential quantifiers and one universal quantifier. Moreover, if any one of the 2-Orthogonal Vectors, Sperner Family, and 2-Set Covering problems is decidable in subquadratic time, then model checking of any first-order sentence with (k + 1) ≥ 3 quantifiers has an O(m^{k−ε})-time probabilistic algorithm for some ε > 0.
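For concreteness, here is a minimal sketch of the 2-Orthogonal Vectors problem referred to above, with the brute-force O(n^2 d) algorithm that SETH conjecturally forbids improving to n^{2-ε} poly(d) time.

```python
# Brute force over all pairs of 0/1 vectors: O(|A| * |B| * d) time.
def has_orthogonal_pair(A, B):
    return any(all(x * y == 0 for x, y in zip(a, b)) for a in A for b in B)

A = [(1, 0, 1), (0, 1, 1)]
B = [(1, 1, 0), (0, 1, 0)]
print(has_orthogonal_pair(A, B))  # True: (1, 0, 1) and (0, 1, 0) share no 1-coordinate
```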

Joint work with Russell Impagliazzo.