Monday, December 1st, 2014

9:30 am – 10:30 am

We present a fast construction algorithm for the hierarchical tree decompositions that lie at the heart of oblivious routing strategies and that form the basis for approximation and online algorithms for various cut problems in graphs.

Given an undirected graph $G=(V,E,c)$ with edge capacities, we compute a single tree $T=(V_T,E_T,c_T)$, whose leaf nodes correspond to the nodes of $G$, such that the tree approximates the cut structure of $G$ up to a factor of $O(\log^4 n)$. The best existing construction, due to Harrelson, Hildrum, and Rao [HHR03], only guarantees polynomial running time, but it offers a better approximation guarantee of $O(\log^2 n\log\log n)$.

Phrasing our results in terms of vertex sparsifiers, we obtain the following. For a graph $G=(V,E)$ with a subset $S$ of terminals, we compute a tree $T$ with at most $2|S|$ vertices (whose leaves correspond to the nodes of $S$) such that $T$ is a flow sparsifier for $S$ in $G$ with quality $O(\log^2 n \log^2 k)$, where $|V|=n$ and $|S|=k$.

The running time of our algorithm is $O(\mathrm{polylog}(n)\cdot T(m,1/\log^3 n))$, where $T(m,\varepsilon)$ is the time for computing an approximate max flow. The latter is almost linear due to the recent results of Sherman [She13] and Kelner et al. [KLOS13].

This is joint work with Hanjo Täubig and Chintan Shah.

11:00 am – 12:00 pm

In this talk, I will describe a framework for approximately solving flow problems in capacitated, undirected graphs, and I will apply it to find approximately maximum s-t flows in almost-linear time, improving on the best previous bound of Õ(mn^{1/3}).

In describing the algorithm, I will discuss several technical tools that may be of independent interest:

  • a non-Euclidean generalization of gradient descent, bounds on its performance, and a way to use this to reduce approximate maximum flow and maximum concurrent flow to oblivious routing (a toy instance of such a descent is sketched after this list);
  • the definition and efficient construction of a new type of flow sparsifier that ameliorates the longstanding gap between the algorithmic efficacy of sparsification on flow and cut problems; and
  • the first almost-linear-time construction of an O(m^{o(1)})-competitive oblivious routing scheme.
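To make the first tool above concrete, here is a toy sketch (an illustrative instance only, not the algorithm from the talk) of gradient descent generalized to a non-Euclidean norm: steepest descent in the l_infinity norm, where the update direction -sign(g) minimizes <g, d> over all directions with ||d||_inf <= 1. The objective grad and the step size are placeholders.

    import numpy as np

    def linf_steepest_descent(grad, x0, step, iters):
        # Steepest descent with respect to the l_infinity norm: the
        # direction -sign(g) minimizes <g, d> over ||d||_inf <= 1,
        # which is how gradient descent generalizes beyond the
        # Euclidean setting.
        x = x0.astype(float)
        for _ in range(iters):
            x = x - step * np.sign(grad(x))
        return x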

This framework is quite flexible, and it readily generalizes to a broader range of graph problems. If time permits, I will briefly discuss how it can be applied, without any substantial modification, to obtain similar speedups for multicommodity flow problems as well.

This is joint work with Lorenzo Orecchia, Aaron Sidford, and Yin Tat Lee.

1:30 pm – 2:30 pm

In recent years, a new type of fast algorithm has emerged for various fundamental graph problems. A key object employed in these algorithms is the electrical flow, which corresponds to the solution of a Laplacian linear system.
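As a minimal illustration of this object (a dense stand-in assuming unit resistances; the talk concerns near-linear-time solvers, which would replace the pseudo-inverse below), a unit s-t electrical flow can be computed by solving the Laplacian system L phi = e_s - e_t for the vertex potentials and reading off the potential drop across each edge:

    import numpy as np

    def electrical_flow(A, s, t):
        # A: symmetric 0/1 adjacency matrix of a connected graph with
        # unit resistances. Solve L phi = e_s - e_t; the electrical
        # flow on edge (u, v) is the potential difference phi_u - phi_v.
        n = A.shape[0]
        L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
        b = np.zeros(n)
        b[s], b[t] = 1.0, -1.0                  # inject one unit at s, extract at t
        phi = np.linalg.pinv(L) @ b             # dense stand-in for a fast solver
        return {(u, v): phi[u] - phi[v]
                for u in range(n) for v in range(u + 1, n) if A[u, v]}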

In this talk, I will discuss this central role of electrical flows in all these developments, as well as sketch their potential further applications in graph algorithms. 

3:00 pm – 4:00 pm

We present an algorithm for recovering planted solutions in two well-known models, the stochastic block model and planted constraint satisfaction problems, via a common generalization in terms of random bipartite graphs. The algorithm matches, up to a constant factor, the best-known bounds on the number of edges (or constraints) needed for perfect recovery, and its running time is linear in the number of edges used. The time complexity is significantly better than that of both spectral and SDP-based approaches. For unequal sizes of the bipartition (corresponding to odd-sized clauses), the algorithm succeeds at a significantly lower density than the spectral approaches, surpassing a barrier based on the spectral norm of a random matrix.

Joint work with Vitaly Feldman and Will Perkins.

Tuesday, December 2nd, 2014

9:30 am – 10:30 am

I'll highlight recent advances in algorithms for numerical linear algebra that have come from the technique of linear sketching, whereby given an n x d matrix A, one first compresses A to an m x d matrix S*A, where S is a certain m x n random matrix with m much less than n. Much of the expensive computation is then performed on S*A, thereby accelerating the solution of the original problem involving A. I'll discuss recent advances in least squares as well as robust regression, including least absolute deviation and M-estimators. I'll also discuss low-rank approximation and a number of variants of these problems. Finally, I'll mention limitations of the method.
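As a minimal sketch of this sketch-and-solve paradigm for least squares (a dense Gaussian sketch is used purely for illustration; the fast versions rely on structured sketches such as CountSketch or subsampled randomized Hadamard transforms):

    import numpy as np

    def sketched_least_squares(A, b, m, rng=None):
        # Approximately solve min_x ||Ax - b||_2: compress the n x d
        # problem to an m x d one with a random sketch S, then solve
        # the small problem exactly. Roughly m ~ d/eps^2 Gaussian rows
        # suffice for a (1+eps)-approximate minimizer w.h.p.
        rng = np.random.default_rng() if rng is None else rng
        n, d = A.shape
        S = rng.standard_normal((m, n)) / np.sqrt(m)
        x_hat, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
        return x_hat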

11:00 am – 12:00 pm

We study the problem of sketching an input graph so that, given the sketch, one can estimate the value (capacity) of any cut in the graph up to a small approximation factor, 1+epsilon. Classical constructions imply sketches of size essentially proportional to n and to the square of 1/epsilon [Benczur, Karger 1996].

We construct sketches whose size is only linear in 1/epsilon (and in n, up to polylog factors). The improved size comes at the cost of a slightly weaker guarantee: our construction is a randomized scheme which, given accuracy epsilon and an n-vertex graph G=(V,E), produces a sketch from which the capacity of any cut (S,V\S) can be reported, with high probability, within approximation factor (1+epsilon).
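For intuition, here is a classical Karger-style toy version of such a sketch (uniform edge sampling; not the construction from the talk, and it preserves all cuts only under the additional assumption that the minimum cut of G is large enough relative to (log n)/(p epsilon^2)):

    import numpy as np

    def sample_sketch(edges, p, rng=None):
        # Keep each edge independently with probability p; the rescaled
        # weight 1/p makes every cut value correct in expectation.
        rng = np.random.default_rng() if rng is None else rng
        return [(u, v, 1.0 / p) for (u, v) in edges if rng.random() < p]

    def cut_capacity(sketch, S):
        # Estimate the capacity of the cut (S, V \ S) from the sketch alone.
        S = set(S)
        return sum(w for (u, v, w) in sketch if (u in S) != (v in S))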

We also show that the improved dependence on epsilon is possible only under the weaker guarantee: namely, if a sketch succeeds in estimating the capacity of all cuts (S,V\S) in the graph simultaneously, it must have size Omega(n/epsilon^2) bits.

1:30 pm – 2:30 pm

We give a simple algorithm to efficiently sample the rows of a matrix while preserving the L_1 norm of its product with vectors. Given an n x d matrix A, we find, with high probability and in input-sparsity time, an A' consisting of about d log d rescaled rows of A such that |Ax|_1 is close to |A'x|_1 for all vectors x. We also show similar results giving nearly optimal sample bounds for all L_p in input-sparsity time. Our results are based on sampling by Lewis weights, which can be viewed as statistical leverage scores of a reweighted matrix. We also give an elementary proof of the guarantees of this sampling process for L_1.
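A minimal dense transcription of this sampling process (illustrative only; the input-sparsity-time guarantees from the talk require faster primitives). At the fixed point, the L_1 Lewis weights satisfy w_i = (a_i^T (A^T W^{-1} A)^{-1} a_i)^{1/2}, and the simple iteration below converges to them:

    import numpy as np

    def lewis_weights_l1(A, iters=30):
        # Fixed-point iteration for L_1 Lewis weights:
        #   w_i <- ( a_i^T (A^T W^{-1} A)^{-1} a_i )^{1/2}
        n, d = A.shape
        w = np.ones(n)
        for _ in range(iters):
            M = A.T @ (A / w[:, None])                   # A^T W^{-1} A
            Minv = np.linalg.inv(M)
            quad = np.einsum('ij,jk,ik->i', A, Minv, A)  # a_i^T M^{-1} a_i
            w = np.sqrt(np.maximum(quad, 0.0))
        return w

    def l1_row_sample(A, s, rng=None):
        # Sample about s rows with probability proportional to the Lewis
        # weights; rescaling each kept row by 1/p_i preserves |Ax|_1
        # in expectation.
        rng = np.random.default_rng() if rng is None else rng
        w = lewis_weights_l1(A)
        p = np.minimum(1.0, s * w / w.sum())
        keep = rng.random(len(p)) < p
        return A[keep] / p[keep, None]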

Joint work with Michael Cohen.

3:00 pm – 3:30 pm

The Laplacian Solvers World Championships are coming! Outperform the competition and become a world champion!

We are in the process of organizing the first Laplacian World Championships, where solvers for Laplacian linear systems of equations will compete against one another. Our aim is to promote implementation efforts, engineering of high-performance codes, and meaningful comparisons of different algorithms and codes.

The talk will justify the need for this and will describe our proposed structure of the championships. I am hoping for a lively discussion with feedback and suggestions as to how to structure this effort.

3:30 pm – 4:00 pm

We continue in the theme of the Laplacian World Championships. This interactive session will invite discussion of the structure and ground rules for the Championships, as well as more general discussion of implementations and empirical measurements of Laplacian linear solvers.

Joint work with Erik Boman (Sandia National Labs) and Kevin Deweese (UC Santa Barbara).

Wednesday, December 3rd, 2014

9:30 am – 10:30 am

We will see that the cover time of a graph (in fact, of any finite-state reversible Markov chain) can be computed in near-linear time using Laplacian solvers. In fact, the algorithm is simple enough to be contained in this abstract: calculate $|E| \cdot \bigl(\max_v (\sqrt{L^+}\, g)_v\bigr)^2$, where $L^+$ is the pseudo-inverse of the combinatorial Laplacian, $g = (g_1, \ldots, g_n)$ is a vector of i.i.d. $N(0,1)$ random variables, and $|E|$ is the number of edges.
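A direct dense transcription of this one-line algorithm (O(n^3) here; the near-linear-time version replaces the eigendecomposition with a fast Laplacian solver):

    import numpy as np

    def cover_time_estimate(A, rng=None):
        # A: symmetric 0/1 adjacency matrix of a connected graph.
        # Returns |E| * (max_v (sqrt(L^+) g)_v)^2 for a Gaussian vector g.
        rng = np.random.default_rng() if rng is None else rng
        n = A.shape[0]
        L = np.diag(A.sum(axis=1)) - A        # combinatorial Laplacian
        w, V = np.linalg.eigh(L)
        inv_sqrt = np.zeros_like(w)
        pos = w > 1e-9                        # skip the zero eigenvalue
        inv_sqrt[pos] = 1.0 / np.sqrt(w[pos])
        eta = V @ (inv_sqrt * (V.T @ rng.standard_normal(n)))  # sqrt(L^+) g
        num_edges = A.sum() / 2.0             # |E|
        return num_edges * eta.max() ** 2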

In joint work with Jian Ding and Yuval Peres, we discovered this connection and proved that the estimate is accurate up to an O(1) factor. Recent work of Alex Zhai (a graduate student at Stanford) shows that the approximation is accurate up to a 1+o(1) factor and completes a beautiful picture relating the set of uncovered vertices to the extrema of the underlying Gaussian free field.

11:00 am – 12:00 pm

Many randomized algorithms employ Markov chains, and the mixing time of the chain is often the main term in the running time. A sequence of Markov chains is said to exhibit cutoff if the total variation distance to stationarity decreases from near 1 to near 0 within a short time window (negligible compared to the mixing time). For example, the lazy walk on the hypercube exhibits cutoff, but the lazy walk on the cycle does not. A necessary condition for cutoff is that the product of the spectral gap and the mixing time tends to infinity along the sequence; we prove that this condition is also sufficient for a lazy reversible chain, provided that the hitting times of certain large sets are concentrated. The proof is based on a maximal inequality that also yields sharp bounds for surprise probabilities in reversible chains.
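A small experiment that makes the definition concrete (my illustrative choice, not from the talk): evolve the exact distribution of each chain and track the total variation distance to uniform; it drops abruptly for the hypercube but decays gradually for the cycle.

    import numpy as np

    def tv_to_uniform(P, t):
        # Exact total variation distance to the uniform stationary
        # distribution after t steps, starting from state 0.
        n = P.shape[0]
        dist = np.linalg.matrix_power(P, t)[0]       # law of X_t
        return 0.5 * np.abs(dist - 1.0 / n).sum()

    def lazy_cycle(n):
        # Lazy simple random walk on the n-cycle: no cutoff.
        P = 0.5 * np.eye(n)
        for i in range(n):
            P[i, (i - 1) % n] += 0.25
            P[i, (i + 1) % n] += 0.25
        return P

    def lazy_hypercube(d):
        # Lazy walk on {0,1}^d: hold with probability 1/2, otherwise
        # flip a uniformly random coordinate. This chain exhibits cutoff.
        n = 1 << d
        P = 0.5 * np.eye(n)
        for x in range(n):
            for j in range(d):
                P[x, x ^ (1 << j)] += 0.5 / d
        return P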

Talk based on joint works with Riddhi Basu and Jonathan Hermon http://arxiv.org/abs/1409.3250 and with James Norris and Alex Zhai http://arxiv.org/abs/1408.0822, both to appear in SODA 2015.

1:30 pm – 2:30 pm

I will talk about ideas and techniques from approximation theory for approximating functions such as $x^s$, $x^{-1}$, and $e^{-x}$, and demonstrate how these results play a crucial role in the design of fast algorithms for problems which are increasingly relevant. The key lies in the fact that such results imply faster ways to compute primitives such as $A^s v$, $A^{-1}v$, $\exp(-A)v$, eigenvalues, and eigenvectors, which are fundamental to many spectral algorithms. Indeed, many fast algorithms reduce to the computation of such primitives, which have proved useful for speeding up several fundamental computations such as random walk simulation, graph partitioning, and solving systems of linear equations.
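As a toy instance of this paradigm (an illustrative sketch, not taken from the monograph): approximate $e^{-x}$ on $[0,\lambda_{\max}]$ by a low-degree Chebyshev interpolant and apply it to a symmetric positive semidefinite matrix $A$ using only matrix-vector products, which is exactly how such primitives avoid forming matrix functions explicitly.

    import numpy as np
    from numpy.polynomial import chebyshev as cheb

    def expm_neg_action(A, v, degree=20):
        # Approximate exp(-A) v for symmetric PSD A via a Chebyshev
        # approximation of e^{-x} on [0, lam_max]; only matvecs with A
        # are needed once the coefficients are fixed.
        lam_max = np.linalg.eigvalsh(A).max()    # any cheap upper bound works
        nodes = np.cos(np.pi * (np.arange(degree + 1) + 0.5) / (degree + 1))
        vals = np.exp(-(nodes + 1.0) * lam_max / 2.0)
        coeffs = cheb.chebfit(nodes, vals, degree)
        # Three-term recurrence for T_k(B) v, where B = (2/lam_max) A - I
        # maps the spectrum [0, lam_max] onto [-1, 1].
        Bv = lambda u: (2.0 / lam_max) * (A @ u) - u
        t_prev, t_curr = v, Bv(v)
        out = coeffs[0] * t_prev + coeffs[1] * t_curr
        for k in range(2, degree + 1):
            t_prev, t_curr = t_curr, 2.0 * Bv(t_curr) - t_prev
            out = out + coeffs[k] * t_curr
        return out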

Based on a recent monograph with Sushant Sachdeva.

3:00 pm – 4:00 pm

In the generalized sparsest cut problem, given two undirected graphs G and H sharing the same set of vertices V, the goal is to find a cut (S,V\S) that minimizes the ratio between the G-edges that are cut and the H-edges that are cut. It is known that Cheeger-like approximations to the cut are not possible assuming the Unique Games Conjecture. Despite that, we claim that generalized eigenvectors of the pair can still provide valuable information about the cut. We discuss appropriate generalizations of the Cheeger inequality that lend theoretical support to this claim. We also present experimental evidence: we formulate 'constrained clustering' problems as generalized cut problems and solve them through their spectral relaxations. The resulting algorithms provide large gains in speed and output quality relative to known methods.

This is joint work with Mihai Cucuringu and Sanjay Chawla.

Thursday, December 4th, 2014

9:30 am – 10:30 am

We present an efficient algorithm for solving a linear system arising from the 1-Laplacian of a collapsible simplicial complex with a known collapsing sequence. When combined with a result of Chillingworth, our algorithm is applicable to convex simplicial complexes embedded in R^3. The running time of our algorithm is nearly linear in the size of the complex and logarithmic in its numerical properties.

11:00 am – 12:00 pm

Fast iterative methods developed in Convex Optimization play a crucial role in a number of recent breakthroughs in the design of nearly-linear-time algorithms for fundamental combinatorial problems, such as maximum flow and graph partitioning. Multiplicative Weight Updates, Gradient Descent, and Nesterov’s Method have become mainstays in the construction and analysis of fast algorithms. The power of these approaches raises a number of questions: What is the exact relation between Multiplicative Weight Updates and Gradient Descent? Why do Multiplicative Weight Updates show up in so many settings? What is the intuition behind Nesterov’s iterative algorithm that achieves the asymptotically optimal iteration count for smooth convex functions?

In this survey talk, I will provide some answers by presenting a unified framework that reveals the power and limitations of each method and provides much needed geometric intuition. Among other insights, I will explain how Multiplicative Weight Updates are a particular instance of a dual iterative method, known as Mirror Descent, and how Nesterov’s algorithm can be naturally derived as a combination of Mirror and Gradient Descent.
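A compact way to see the first correspondence (a standard textbook sketch): mirror descent over the probability simplex with the entropy mirror map is exactly the multiplicative-weights update.

    import numpy as np

    def entropic_mirror_descent(grad, x0, eta, steps):
        # Mirror descent with the negative-entropy regularizer on the
        # simplex: x <- x * exp(-eta * g) followed by renormalization,
        # which is precisely the Multiplicative Weight Updates rule.
        x = x0 / x0.sum()
        for _ in range(steps):
            x = x * np.exp(-eta * grad(x))
            x = x / x.sum()
        return x

    # Example: minimizing a linear loss <c, x> over the simplex drives
    # the weight onto the coordinate with the smallest cost.
    c = np.array([0.3, 0.1, 0.7])
    x = entropic_mirror_descent(lambda x: c, np.ones(3), eta=0.5, steps=200)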

As an example of the application of these insights, I will also discuss their crucial role in recent progress in the nearly-linear-time solution of packing linear programs and in the construction of linear-sized sparsifiers.

The material in this talk is joint work with Zeyuan Allen-Zhu (MIT) and Zhenyu Liao (BU).

1:30 pm – 2:30 pm

In this talk, I will present a new algorithm for solving linear programs. Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(sqrt(n) L) iterations, each consisting of solving Õ(1) linear systems and performing additional nearly-linear-time computation. Our method improves upon the convergence rate of previous state-of-the-art linear programming methods, which either required solving Õ(sqrt(m) L) linear systems [R88] or consisted of Õ((mn)^(1/4)) steps of more expensive linear algebra [VA93].

Interestingly, our algorithm not only nearly matches the convergence rate of the universal barrier of Nesterov and Nemirovskii [NN94], but in the special case of the linear programming formulation of various flow problems our method converges at a rate faster than that predicted by any self-concordant barrier. In particular, we achieve a running time of Õ(|E| sqrt(|V|) log^2 U) for solving the maximum flow problem on a directed graph with |E| edges, |V| vertices, and capacity ratio U, thereby improving upon the previous fastest running time for solving this problem whenever |E| = Ω(|V|^(1+epsilon)) for some constant epsilon > 0.
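For orientation, here is a bare-bones log-barrier interior point sketch (a textbook scheme with arbitrary parameters, assuming A has full column rank and x0 is strictly feasible — not the weighted-path algorithm from the talk): take damped Newton steps on c^T x - mu * sum_i log(b - Ax)_i while shrinking mu to follow the central path.

    import numpy as np

    def barrier_lp(A, b, c, x0, mu=1.0, shrink=0.9, steps=100):
        # Minimize c^T x subject to Ax <= b, starting from a strictly
        # feasible x0 (A x0 < b componentwise); A must have full column
        # rank so that the Hessian below is invertible.
        x = x0.astype(float)
        for _ in range(steps):
            s = b - A @ x                               # slacks, kept positive
            grad = c + mu * A.T @ (1.0 / s)
            H = mu * A.T @ ((1.0 / s**2)[:, None] * A)
            dx = np.linalg.solve(H, -grad)              # Newton direction
            t = 1.0
            while np.any(b - A @ (x + t * dx) <= 0):    # damp to stay feasible
                t *= 0.5
            x = x + t * dx
            mu *= shrink                                # advance along the path
        return x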

This talk will assume little exposure to linear programming algorithms, convex optimization, or graph theory.

This talk reflects joint work with Aaron Sidford.
See http://arxiv.org/abs/1312.6677 and http://arxiv.org/abs/1312.6713.

3:00 pm – 4:00 pm

In this talk, I will present a new algorithm for solving linear programs. Given a linear program with n variables, m > n constraints, and bit complexity L, our algorithm runs in Õ(sqrt(n) L) iterations, each consisting of solving Õ(1) linear systems and performing additional nearly-linear-time computation. Our method improves upon the convergence rate of previous state-of-the-art linear programming methods, which either required solving Õ(sqrt(m) L) linear systems [R88] or consisted of Õ((mn)^(1/4)) steps of more expensive linear algebra [VA93].

Interestingly, our algorithm not only nearly matches the convergence rate of the universal barrier of Nesterov and Nemirovskii [NN94], but in the special case of the linear programming formulation of various flow problems our method converges at a rate faster than that predicted by any self-concordant barrier. In particular, we achieve a running time of Õ(|E| sqrt(|V|) log^2 U) for solving the maximum flow problem on a directed graph with |E| edges, |V| vertices, and capacity ratio U, thereby improving upon the previous fastest running time for solving this problem whenever |E| = Ω(|V|^(1+epsilon)) for some constant epsilon > 0.

This talk will assume little exposure to linear programming algorithms, convex optimization, or graph theory.

This talk reflects joint work with Yin Tat Lee.
See http://arxiv.org/abs/1312.6677 and http://arxiv.org/abs/1312.6713.

Friday, December 5th, 2014

9:30 am – 10:30 am

The maximum flow problem, and its dual, the minimum cut problem, are fundamental to combinatorial optimization and operations research, with applications to scheduling, VLSI layout, network routing, and transportation. In this talk I will discuss a recently discovered general technique to convert weakly approximate cut-finding algorithms into (1+epsilon)-approximate flow algorithms with only nearly-linear overhead.

11:00 am – 12:00 pm

We show a closer algorithmic connection between constructing cut-approximating hierarchical tree decompositions and computing approximate maximum flows in undirected graphs. Our approach is based on invoking known algorithms for these problems recursively, while reducing problem sizes using ultra-sparsifiers. This leads to the first O(m polylog(n)) time algorithms for both problems.