Monday, February 22nd, 2016

9:30 am – 10:15 am
One of the most important recent developments in the complexity of approximate counting is the classification of the complexity of approximating the partition functions of antiferromagnetic 2-spin systems on bounded-degree graphs. This classification is based on a beautiful connection to the so-called uniqueness phase transition from statistical physics on the infinite Delta-regular tree.  In this talk we study the impact of this classification on unweighted 2-spin models on k-uniform hypergraphs. As has already been indicated by Yin and Zhao, the connection between the uniqueness phase transition  and the complexity of approximate counting  breaks down in the hypergraph setting.
 
Nevertheless, we show that for every non-trivial symmetric k-ary Boolean function  f there exists a degree bound Delta_0 so that for all Delta >= Delta_0 the following problem is NP-hard: given a k-uniform hypergraph with maximum degree at most Delta,  approximate the partition function of the hypergraph 2-spin model associated with f. It is NP-hard to approximate this partition function even within an exponential factor. By contrast, if f is a trivial symmetric Boolean function (i.e., any function f that is excluded from our result), then the partition function of the corresponding hypergraph 2-spin model can be computed exactly in polynomial time. (joint work with Andreas Galanis)
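For concreteness, the partition function referred to here can be written (in one standard formulation, with notation chosen for illustration) as
\[
Z_f(H) \;=\; \sum_{\sigma : V(H) \to \{0,1\}} \;\prod_{e \in E(H)} f\big(\sigma|_e\big),
\]
where H is a k-uniform hypergraph and, since f is symmetric, f(\sigma|_e) depends only on how many of the k vertices of the hyperedge e receive the value 1. For example, taking f to be the NAND function (equal to 0 exactly when all k arguments are 1) makes Z_f(H) the number of independent sets of H.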
10:25 am – 11:10 am

The random-cluster model has been widely studied as a unifying framework for random graphs, spin systems and electrical networks, but its dynamics have so far largely resisted analysis. In this talk we present recent results concerning the mixing behavior of natural Markov chains for the random-cluster model in two canonical cases: the mean-field model and the two-dimensional lattice graph Z^2. In the mean-field case, we identify a critical regime of the model parameter p in which several natural dynamics undergo an exponential slowdown. In Z^2, we provide tight mixing time bounds for the heat-bath dynamics for all non-critical values of p. These results hold for all values of the second model parameter q > 1.
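For reference, the random-cluster measure on a finite graph G = (V, E) with parameters p and q assigns to each edge subset A ⊆ E probability
\[
\pi_{p,q}(A) \;\propto\; p^{|A|}\,(1-p)^{|E|-|A|}\,q^{c(A)},
\]
where c(A) denotes the number of connected components of (V, A); the heat-bath dynamics mentioned above resamples the state of one edge at a time from its conditional distribution given the rest.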

Based on joint works with Alistair Sinclair.

11:40 am – 12:25 pm

 

I will consider two types of problems on large locally tree-like graphs:

(1) Finding a small subset of vertices that are better connected than the others; (2) Finding a small bisection. In the first case, I will prove that no local algorithm can solve the problem optimally; instead, a large gap exists between the optimum and the best solution that can be constructed locally. In the second case, I will provide evidence that, surprisingly, this gap vanishes. In particular, Glauber dynamics appears to approach the optimum with an arbitrarily small error. Proving that this is indeed the case is an open problem.

Based on joint work with Song Mei.

2:00 pm – 2:45 pm
Recently, the ``information percolation'' framework was introduced as a way to obtain sharp estimates on mixing for spin systems at high temperatures, and in particular, to establish cutoff for the Ising model in three dimensions up to criticality from a worst starting state. I will describe how this method can be used to understand the effect of different initial states on the mixing time, both random (``warm start'') and deterministic.
Joint work with Allan Sly.
2:55 pm – 3:40 pm
Abstract: Lubotzky, Phillips, and Sarnak (1988)  defined a connected $d$-regular graph $G$  (with $d>2$) to be Ramanujan iff for every eigenvalue  of its adjacency matrix, its absolute value is either $d$, or at most $2(d-1)^{1/2}$. We show that on every Ramanujan graph $G$ with $n$ vertices, simple random walk exhibits cutoff:  the mixing time in total-variation is asymptotic to $[d/(d-2)] \log_{d-1} n$. Furthermore, for any $p$ which is at least 1,  the $d$-regular Ramanujan graphs minimize the asymptotic $L^p$-mixing time among all  $d$-regular graphs. In particular, this gives the first examples of transitive expanders with cutoff. Our proof also shows that, for every vertex in an $n$-vertex Ramanujan graph $G$, its distance from all but a negligible fraction of the other vertices in $G$ is asymptotically $\log_{d-1} n$. 
The key to our proofs is a precise spectral analysis of the non-backtracking walk. (Joint work with Eyal Lubetzky, NYU; http://arxiv.org/abs/1507.04725)
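As a quick illustration of the definition above, the following sketch (using numpy and networkx; an illustration of ours, not material from the talk) checks the Ramanujan condition for a given d-regular graph:

    import numpy as np
    import networkx as nx

    def is_ramanujan(G, d, tol=1e-9):
        # Ramanujan condition: every adjacency eigenvalue has absolute value
        # either d or at most 2*sqrt(d-1)
        A = nx.to_numpy_array(G)
        eigs = np.linalg.eigvalsh(A)          # adjacency matrix is symmetric
        bound = 2.0 * np.sqrt(d - 1)
        return all(abs(l) <= bound + tol or abs(abs(l) - d) <= tol for l in eigs)

    # example: the complete graph K_6 is 5-regular and Ramanujan
    print(is_ramanujan(nx.complete_graph(6), d=5))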

Tuesday, February 23rd, 2016

9:30 am – 10:15 am
We study the problem of approximately counting independent sets in hypergraphs with maximum degree Delta whose hyperedges have arity at least k>=3. In the graphical case, where the arity of every edge is 2, it is known that for Delta>=6 the problem is hard and for Delta<=5 there is an FPTAS. This approximation algorithm uses correlation decay techniques that rely on the fact that strong spatial mixing occurs.
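In the graphical case, the correlation decay approach rests on the standard tree recursion for independent sets (recalled here for orientation, with notation chosen for illustration): writing R_v for the ratio, at the root v of a tree, of the number of independent sets containing v to the number avoiding v, one has
\[
R_v \;=\; \prod_{i=1}^{m} \frac{1}{1 + R_{u_i}},
\]
where u_1, ..., u_m are the children of v. Strong spatial mixing corresponds to this recursion being contracting, so that the influence of a distant boundary condition on R_v decays with distance.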
 
Surprisingly, in the hypergraph case we give a correlation-decay-based algorithm even though strong spatial mixing fails. Our main idea is to introduce amortization into the correlation decay proof: we track not just the decay but also combinatorial properties of the intermediate instances. This enables us to give an FPTAS even when strong spatial mixing fails, for example, when Delta=6 and k>=3, and also for all Delta and all sufficiently large k>=1.66*Delta. We further demonstrate that in the hypergraph independent set model, approximating the partition function is NP-hard even within the uniqueness regime.
 
Joint work with Ivona Bezakova, Leslie Ann Goldberg, Heng Guo, and Daniel Stefankovic.
10:25 am – 11:10 am

How difficult is it to sample configurations of a spin model on Delta-regular graphs? Do Markov chains mix rapidly on random Delta-regular graphs?

Mean-field models provide a simplification that can lead to (qualitative) insights on the above questions. We study the q-state ferromagnetic Potts model on the n-vertex complete graph, known as the mean-field (Curie-Weiss) model, and analyze the Swendsen-Wang algorithm, a Markov chain that uses the random-cluster representation of the ferromagnetic Potts model to recolor large sets of vertices in a single step, potentially overcoming the obstacles that inhibit single-site Glauber dynamics.
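A minimal sketch of one Swendsen-Wang step on the complete graph may help fix ideas (illustrative only; in particular, the edge probability p = 1 - exp(-beta/n) used below is one common mean-field convention and is our choice, not necessarily the parametrization used in the talk):

    import math
    import random
    import networkx as nx

    def swendsen_wang_step(colors, q, p):
        # one Swendsen-Wang update for the ferromagnetic Potts model on K_n:
        # keep each monochromatic edge independently with probability p,
        # then recolor every resulting connected component uniformly at random
        n = len(colors)
        new_colors = list(colors)
        for c in range(q):
            block = [v for v in range(n) if colors[v] == c]
            # percolation inside a color class of size m is just G(m, p)
            H = nx.gnp_random_graph(len(block), p)
            for comp in nx.connected_components(H):
                fresh = random.randrange(q)
                for i in comp:
                    new_colors[block[i]] = fresh
        return new_colors

    # usage: n vertices, q colors, inverse temperature beta (mean-field scaling)
    n, q, beta = 200, 3, 1.0
    p = 1 - math.exp(-beta / n)
    colors = [random.randrange(q) for _ in range(n)]
    for _ in range(10):
        colors = swendsen_wang_step(colors, q, p)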

Joint work with Andreas Galanis and Eric Vigoda.

11:40 am – 12:25 pm

A finite Markov chain exhibits cutoff if its distance from equilibrium remains close to the initial value over a certain number of iterations and then abruptly drops to near zero on a much shorter time scale. Originally discovered in the context of card shuffling (Aldous-Diaconis, 1986), this remarkable phenomenon is now rigorously established for many reversible chains. In this talk we consider the non-reversible case of random walks on sparse random directed graphs, for which even the equilibrium measure is far from being understood. We establish the cutoff phenomenon, determine its precise window and prove that the cutoff profile approaches a simple, universal shape. This is joint work with Charles Bordenave and Pietro Caputo.
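Formally, a sequence of chains exhibits cutoff if for every fixed epsilon in (0,1)
\[
\frac{t^{(n)}_{\mathrm{mix}}(\varepsilon)}{t^{(n)}_{\mathrm{mix}}(1-\varepsilon)} \;\longrightarrow\; 1 \qquad (n \to \infty),
\]
where t^{(n)}_{mix}(epsilon) is the first time the total-variation distance from equilibrium falls below epsilon; the cutoff window is the time scale on which this drop takes place.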

2:00 pm – 2:45 pm
I will give an introduction to random lattice triangulations. These are triangulations of the integer points inside the square [0,n] x [0,n] where each triangulation T is obtained with probability proportional to \lambda^|T|, where \lambda is a positive real parameter and |T| is the total length of the edges in T. I will discuss structural and dynamical properties of such triangulations, such as phase transitions with respect to \lambda, decay of correlations, local limits, and mixing time of Glauber dynamics. This is based on joint works with Pietro Caputo, Fabio Martinelli and Alistair Sinclair.
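For orientation, the edge-flip Glauber (heat-bath) dynamics can be described as follows (one standard formulation): pick a uniformly random edge of the current triangulation T; if flipping it yields a valid triangulation T', move to T' with probability
\[
\frac{\lambda^{|T'|}}{\lambda^{|T|} + \lambda^{|T'|}},
\]
and otherwise stay at T. This chain is reversible with respect to the distribution proportional to \lambda^{|T|}.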
2:55 pm – 3:40 pm
We consider random lattice triangulations of n×k rectangular regions with weight λ^{|σ|}, where λ > 0 is a parameter and |σ| denotes the total edge length of the triangulation. When λ ∈ (0, 1) and k is fixed, we prove a tight upper bound of order n^2 for the mixing time of the edge-flip Glauber dynamics. Combined with the previously known lower bound of order exp(Ω(n^2)) for λ > 1, this establishes the existence of a dynamical phase transition for thin rectangles with critical point at λ = 1.
 
Work in collaboration with P. Caputo, A. Sinclair and A. Stauffer.
4:10 pm – 4:55 pm
Counting the number of satisfying truth assignments of a given Boolean formula or sampling such assignments uniformly at random are fundamental computational problems in computer science with numerous applications. While the theory of these problems was thoroughly investigated in the 1980s, approximation algorithms developed by theoreticians do not scale up to industrial-sized instances. Algorithms used by the industry offer better scalability, but give up certain correctness guarantees to achieve it. We describe a novel approach, based on universal hashing and SAT solving, that scales to formulas with hundreds of thousands of variables without giving up correctness guarantees.
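To give a flavor of the approach, here is a highly simplified sketch of hashing-based counting (a caricature for illustration, not the authors' actual algorithm or tool; enumerate_up_to is a hypothetical routine that one would implement with a SAT solver, and the pivot value is arbitrary):

    import random

    def random_xor(num_vars):
        # a random parity constraint: each variable included with prob. 1/2,
        # together with a random right-hand-side bit
        vars_in = [v for v in range(1, num_vars + 1) if random.random() < 0.5]
        return (vars_in, random.randrange(2))

    def approx_count(formula, num_vars, enumerate_up_to, pivot=64):
        # add random XOR constraints one at a time; each constraint cuts the
        # solution set roughly in half.  Once at most `pivot` solutions of the
        # constrained formula remain, scale back up by 2^(#constraints)
        xors = []
        while True:
            sols = enumerate_up_to(formula, xors, pivot + 1)
            if len(sols) <= pivot:
                return len(sols) * (2 ** len(xors))
            xors.append(random_xor(num_vars))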
 
Joint work with Supratik Chakraborty, Daniel Fremont, Kuldeep Meel, and Sanjit Seshia.

Wednesday, February 24th, 2016

9:30 am – 10:15 am

Counting the number of independent sets of a bipartite graph (#BIS) plays a crucial role in the study of approximate counting. It has been conjectured that there is no fully polynomial-time (randomized) approximation scheme (FPTAS/FPRAS) for #BIS, and it is known that the problem restricted to instances with maximum degree 6 is already as hard as the general problem. In this work, we obtain a surprising tractability result for a family of #BIS instances: we design a very simple deterministic fully polynomial-time approximation scheme (FPTAS) for #BIS when the maximum degree on one side is at most 5. There is no restriction on the degrees of the other side, which do not even have to be bounded by a constant. Previously, an FPTAS was only known for instances with maximum degree 5 on both sides.

10:25 am – 11:10 am
A substantial body of work has been devoted to finding analogs of positive Ricci curvature in discrete spaces.  Earlier, the MCMC community discovered that path coupling is a powerful technique in establishing rapid mixing.  In an ideal world, these two research threads would eventually unite, enriching both areas.  A conjecture of Peres and Tetali offers a path to this resolution, but remains open.
 
Based on joint work with R. Eldan and J. Lehec, I will show how entropic interpolation can be used to establish an entropy-transport inequality in chains that admit a contracting coupling.  By work of Bobkov and Gotze, this is a necessary condition for establishing a log-Sobolev inequality (and thus resolving the Peres-Tetali conjecture).
11:40 am – 12:25 pm
The multi-diffusion limited aggregation model (MDLA) was introduced in the physics literature in the mid-seventies as a model for dendrite growth. Its "simplification", the diffusion limited aggregation (DLA) model of Witten and Sander, became a paradigm model in the field.
 
MDLA is a stochastic aggregation model on Z^d, d >= 1. Start with an infinite collection of particles located at vertices of the lattice, with at most one particle per vertex, initially distributed according to the product Bernoulli measure with parameter 0 < p < 1. In addition, there is an aggregate, which initially consists of a single special particle placed at the origin, called the seed. The system evolves in continuous time. Non-aggregated particles move independently as simple symmetric random walks obeying the exclusion rule: when a particle attempts to jump onto a vertex that already contains another non-aggregated particle, the jump is suppressed and the particle waits for its next attempt. The seed particle (aggregate) does not move. Whenever a non-aggregated particle attempts to jump onto a vertex occupied by the aggregate, the jump is suppressed, the particle is declared to be part of the aggregate, and it stops moving forever.
 
Thus the aggregate grows by attaching particles to its surface whenever a particle attempts to jump onto it. This evolution is referred to as multi-diffusion limited aggregation (MDLA).
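The following discrete-time, finite-box caricature of the dynamics (the model above runs in continuous time, with each particle carrying its own Poisson clock; the boundary convention below is our own simplification) may help make the rules concrete:

    import random

    def mdla_step(free, aggregate, L):
        # one attempted jump: pick a free particle and a random neighbor;
        # a jump onto another free particle (or off the box) is suppressed,
        # a jump attempted onto the aggregate freezes the particle where it is,
        # and otherwise the particle simply moves
        if not free:
            return
        x, y = random.choice(list(free))
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        tx, ty = x + dx, y + dy
        if abs(tx) > L or abs(ty) > L or (tx, ty) in free:
            return
        if (tx, ty) in aggregate:
            free.remove((x, y))
            aggregate.add((x, y))
        else:
            free.remove((x, y))
            free.add((tx, ty))

    # seed at the origin, free particles with density p elsewhere
    L, p = 50, 0.3
    aggregate = {(0, 0)}
    free = {(i, j) for i in range(-L, L + 1) for j in range(-L, L + 1)
            if (i, j) != (0, 0) and random.random() < p}
    for _ in range(100000):
        mdla_step(free, aggregate, L)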
 
The growth of the aggregate exhibits very rich behavior, in particular a transition from sub-linear to linear growth depending on the initial density of the particles. I will present a recent proof (joint with A. Stauffer, Univ. of Bath) of the linear phase, and discuss various conjectures and open problems related to the speed and shape of growth.

Thursday, February 25th, 2016

9:30 am – 10:15 am

Often practitioners run a Markov chain because they are interested in some feature of the chain. This feature might become suitably random much faster than "all features". In this talk, I collect together some examples and methods. One striking example, drawn from work with Bob Hough: for simple random walk on k x k uni-upper-triangular matrices with entries mod p, the entries just above the diagonal take order p^2 steps to become random, entries on the 2nd diagonal take order p steps, and entries on the kth diagonal take order p^{2/k} steps.
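A direct simulation of such a walk is easy to write down; the sketch below uses one standard choice of generators (multiplication by I + eps*E_{i,i+1}), which is our illustration rather than necessarily the setup of the talk:

    import random
    import numpy as np

    def unipotent_walk(k, p, steps):
        # random walk on k x k upper unitriangular matrices mod p: at each step
        # multiply by I + eps*E_{i,i+1} for a uniform super-diagonal index i and
        # a uniform sign eps
        M = np.eye(k, dtype=np.int64)
        for _ in range(steps):
            i = random.randrange(k - 1)
            E = np.eye(k, dtype=np.int64)
            E[i, i + 1] = random.choice([1, p - 1])   # +1 or -1 mod p
            M = (M @ E) % p
        return M

    print(unipotent_walk(k=4, p=101, steps=10000))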

10:25 am – 11:10 am

How can we color the vertices of a graph by a local rule based on i.i.d. vertex labels? More precisely, suppose that the color of vertex v is determined by examining the labels within a finite (but perhaps random and unbounded) distance R of v, with the same rule applied at each vertex. (The coloring is then said to be a finitary factor of the i.i.d. labels). Focusing on Z^d, we investigate what can be said about the random variable R if the coloring is required to be proper, i.e. if adjacent vertices must have different colors. Depending on the dimension and the number of colors, the optimal tail decay is either a power law, or a tower of exponentials. I will briefly discuss generalizations to shifts of finite type and finitely dependent processes.

Based on joint work with Oded Schramm and David B Wilson.

11:40 am – 12:25 pm

An important problem in the implementation of Markov Chain Monte Carlo algorithms is to determine the convergence time, or the number of iterations before the chain is close to stationarity. For many Markov chains used in practice this time is not known. Even in cases where the convergence time is known to be polynomial, the theoretical bounds are often too crude to be practical. Thus, practitioners like to carry out some form of statistical analysis in order to assess convergence. This has led to the development of a number of methods known as convergence diagnostics which attempt to diagnose whether the Markov chain is far from stationarity. We study the problem of testing convergence in a number of settings and prove that the problem is hard in a computational sense. This is joint work with Andrej Bogdanov and Elchanan Mossel.

2:00 pm – 2:45 pm

No abstract available.

2:55 pm – 3:40 pm
Speaker: Guy Bresler, MIT
We study the problem of learning a tree graphical model from samples such that low-order marginals are accurate. We define a distance (``small set TV'' or ssTV) between distributions P and Q by taking the maximum, over all subsets S of a given size, of the total variation between the marginals of P and Q on S. Approximating a distribution to within small ssTV allows making predictions based on partial observations. Focusing on pairwise marginals and tree-structured Ising models on p nodes, we give an algorithm that produces a distribution (from the same class) with ssTV at most eta using log p / eta^2 IID samples. We also prove a matching lower bound showing that this sample complexity is optimal. In contrast to the problem of learning the tree structure itself, there is no dependence on the strength of edge interactions. Thus, even when there are far too few samples to recover the correct tree, it is possible to learn an incorrect tree that is useful. The time complexity of the proposed algorithm is on the order of p^2, dominated by the computation of pairwise correlations.
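One natural algorithm with this complexity profile is the classical Chow-Liu procedure over empirical pairwise correlations, sketched below for orientation (the algorithm analyzed in the talk may differ in its details; `samples` is assumed to be an (n, p) array of +/-1 spins):

    import numpy as np
    import networkx as nx

    def fit_tree_from_correlations(samples):
        # empirical pairwise correlations, then a maximum-weight spanning tree
        # with |correlation| edge weights (Chow-Liu flavour)
        n, p = samples.shape
        corr = samples.T @ samples / n
        G = nx.Graph()
        for i in range(p):
            for j in range(i + 1, p):
                G.add_edge(i, j, weight=abs(corr[i, j]))
        tree = nx.maximum_spanning_tree(G)
        return tree, corr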
 
Joint work with Mina Karzand.
4:10 pm – 4:55 pm
We consider the problem of designing a simple, oblivious scheme to generate (almost) random permutations. We use the concept of switching networks and show that almost every switching network of logarithmic depth can be used to almost randomly permute any set of (1-epsilon)n elements with any epsilon>0 (that is, gives an almost (1-epsilon)n-wise independent permutation). Furthermore, we show that the result still holds for every switching network of logarithmic depth that has some special expansion properties, leading to an explicit construction of such networks. Our result can be also extended to an explicit construction of a switching network of depth O(log^2 n) and with O(n log n) switches that almost randomly permutes any set of n elements.
 
Our results are obtained using a non-trivial coupling approach to study mixing times of Markov chains, which allows us to reduce the problem to a random-walk-like problem on expanders.
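To make the notion of a switching network concrete, here is a butterfly-style network of depth log2(n) in which every switch swaps its pair independently with probability 1/2 (purely illustrative; a single pass through this particular network cannot produce an exactly uniform permutation and is shown only to illustrate the model):

    import math
    import random

    def apply_butterfly(items):
        # one pass through a butterfly switching network on n = 2^k items:
        # at level l, items whose indices differ only in bit l are paired,
        # and each switch independently swaps its pair with probability 1/2
        items = list(items)
        n = len(items)
        levels = int(math.log2(n))
        assert 1 << levels == n
        for l in range(levels):
            for i in range(n):
                j = i ^ (1 << l)
                if i < j and random.random() < 0.5:
                    items[i], items[j] = items[j], items[i]
        return items

    print(apply_butterfly(range(8)))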

Friday, February 26th, 2016

9:30 am – 10:15 am
We prove a new rigorous lower bound on the critical density $\rho_c$ of the hard disk model. Specifically, we show that up to a certain density, a natural Markov chain that moves one disk at a time mixes rapidly, and that, moreover, positional correlations decay exponentially with distance. Our main tool is an optimized metric on neighboring pairs of configurations, i.e., configurations that differ in the position of a single disk: we define a metric that depends on the difference in these positions, and which approaches zero continuously as they coincide. This improves the previous rigorous bound on rapid mixing from $\rho_c \ge 1/8$ to $\rho_c \ge 0.154$, and shows that this is a rigorous lower bound on the transition to the solid phase.
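As a concrete picture of a Markov chain that moves one disk at a time, here is a single-disk update with a uniform small displacement and periodic boundary conditions (an illustrative sketch; the proposal distribution actually analyzed in the talk may differ):

    import math
    import random

    def single_disk_move(centers, r, delta, L):
        # pick a disk, propose a uniform displacement of length at most delta,
        # and accept iff no overlap with any other disk is created
        i = random.randrange(len(centers))
        x, y = centers[i]
        theta = random.uniform(0, 2 * math.pi)
        rho = delta * math.sqrt(random.random())     # uniform point in a disk of radius delta
        px, py = (x + rho * math.cos(theta)) % L, (y + rho * math.sin(theta)) % L
        for j, (a, b) in enumerate(centers):
            if j == i:
                continue
            dx = min(abs(px - a), L - abs(px - a))   # torus distance
            dy = min(abs(py - b), L - abs(py - b))
            if dx * dx + dy * dy < (2 * r) ** 2:
                return                               # overlap: reject the move
        centers[i] = (px, py)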
 
This is joint work with Tom Hayes. 
10:25 am – 11:10 am

The hard-core model from statistical physics is a randomly chosen independent set from a graph G (often the d-dimensional integer lattice), and the hard sphere model is a random sphere packing in Euclidean space.  I will describe a new method for bounding the partition function in each of these models based on analyzing the occupancy fraction: the expected fraction of vertices of G that appear in the random independent set or the expected density of the sphere packing.  Some of the applications of this method include new theorems in combinatorics and a surprising fact about spheres in dimension 24. I will also describe a possible approach to understanding the phase transition in hard-core systems via the occupancy fraction.
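Concretely, in the standard weighted formulation of the hard-core model on a graph G at fugacity lambda, with partition function Z_G(\lambda) = \sum_{I} \lambda^{|I|} (the sum running over independent sets I of G), the occupancy fraction is
\[
\alpha_G(\lambda) \;=\; \frac{\mathbb{E}\,|I|}{|V(G)|} \;=\; \frac{\lambda\, Z_G'(\lambda)}{|V(G)|\, Z_G(\lambda)},
\]
so pointwise bounds on the occupancy fraction can be integrated over the fugacity to yield bounds on the partition function itself.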

11:40 am – 12:25 pm
We study equitable rectangular dissections of an n x n lattice region into n rectangles of area n, where n=2^k for an even integer k.  There is a natural edge-flipping Markov chain that connects the state space.  A similar edge-flipping chain is also known to connect the state space when restricted to dyadic tilings, where each rectangle has edges that are dyadic intervals.  We consider a weighted version of these Markov chains, where the weight of each rectangular dissection (or dyadic tiling) is exponential in the total edge length.  We show there is a phase transition in the dyadic setting, with the mixing time understood on both sides of the critical point and unknown only at the critical point itself.  The behavior for general rectangular dissections is more subtle: the chain requires exponential time above and below a unique critical point, but for two different reasons, and the behavior at the critical point remains open.