Monday, February 22nd, 2016
The random-cluster model has been widely studied as a unifying framework for random graphs, spin systems and electrical networks, but its dynamics have so far largely resisted analysis. In this talk we present recent results concerning the mixing behavior of natural Markov chains for the random-cluster model in two canonical cases: the mean-field model and the two-dimensional lattice Z^2. In the mean-field case, we identify a critical regime of the model parameter p in which several natural dynamics undergo an exponential slowdown. In Z^2, we provide tight mixing time bounds for the heat-bath dynamics for all non-critical values of p. These results hold for all values of the second model parameter q > 1.
Based on joint works with Alistair Sinclair.
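For readers unfamiliar with the chain, here is a minimal Python sketch of one step of the single-edge heat-bath dynamics mentioned above, using the standard random-cluster conditional probabilities; the function and variable names (heat_bath_step, open_edges, etc.) are illustrative and not taken from the talk.

    import random

    def connected_off_edge(vertices, open_edges, e):
        """Are the endpoints of e joined by open edges other than e? (depth-first search)"""
        u, v = e
        adj = {w: set() for w in vertices}
        for a, b in open_edges:
            if {a, b} != {u, v}:
                adj[a].add(b)
                adj[b].add(a)
        stack, seen = [u], {u}
        while stack:
            w = stack.pop()
            if w == v:
                return True
            for x in adj[w] - seen:
                seen.add(x)
                stack.append(x)
        return False

    def heat_bath_step(vertices, edges, open_edges, p, q):
        """Resample a uniformly random edge conditional on the rest of the
        configuration: if its endpoints are already connected by other open
        edges, the edge is open with probability p; otherwise with
        probability p / (p + q*(1-p))."""
        e = random.choice(edges)
        prob_open = p if connected_off_edge(vertices, open_edges, e) else p / (p + q * (1 - p))
        open_edges.discard(e)
        if random.random() < prob_open:
            open_edges.add(e)
        return open_edges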
I will consider two types of problems on large locally tree-like graphs:
(1) Finding a small subset of vertices that are better connected than the others; (2) Finding a small bisection. In the first case, I will prove that no local algorithm can solve the problem optimally; instead, a large gap exists between the optimum and the best solution that can be constructed locally. In the second case, I will provide evidence that, surprisingly, this gap vanishes. In particular, Glauber dynamics appears to approach the optimum with an arbitrarily small error. Proving that this is indeed the case is an open problem.
Based on joint work with Song Mei.
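As a purely illustrative reference point for the bisection dynamics discussed above, here is a Metropolis-style swap chain in Python that preserves the balance of the two sides; this is one natural way to instantiate a Glauber-type dynamics for minimum bisection, not necessarily the exact chain considered in the work.

    import math
    import random

    def cut_size(edges, side):
        """Number of edges crossing the current bisection (side maps vertex -> 0 or 1)."""
        return sum(1 for u, v in edges if side[u] != side[v])

    def swap_step(edges, side, beta):
        """Propose swapping one vertex from each side (so the bisection stays
        balanced) and accept with probability min(1, exp(-beta * increase in cut))."""
        left = [v for v in side if side[v] == 0]
        right = [v for v in side if side[v] == 1]
        u, w = random.choice(left), random.choice(right)
        old = cut_size(edges, side)
        side[u], side[w] = 1, 0
        if random.random() >= math.exp(-beta * max(0, cut_size(edges, side) - old)):
            side[u], side[w] = 0, 1   # reject the proposal and revert the swap
        return side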
Tuesday, February 23rd, 2016
How difficult is it to sample configurations of a spin model on Delta-regular graphs? Do Markov chains mix rapidly on random Delta-regular graphs?
Mean-field models provide a simplification that can lead to (qualitative) insights on the above questions. We study the q-state ferromagnetic Potts model on the n-vertex complete graph, known as the mean-field (Curie-Weiss) model, and analyze the Swendsen-Wang algorithm: a Markov chain that uses the random-cluster representation of the ferromagnetic Potts model to recolor large sets of vertices in a single step, potentially overcoming obstacles that inhibit single-site Glauber dynamics.
Joint work with Andreas Galanis and Eric Vigoda.
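For concreteness, here is a short Python sketch of one Swendsen-Wang step on the complete graph, as described above; taking the edge-activation probability to be p = 1 - exp(-beta) is a standard convention and is assumed here rather than taken from the talk.

    import random

    def swendsen_wang_step(colors, q, p):
        """One Swendsen-Wang step for the mean-field Potts model on the complete
        graph: activate each monochromatic edge independently with probability p,
        then recolor every connected component of activated edges uniformly."""
        n = len(colors)
        parent = list(range(n))           # union-find over the n vertices

        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        for u in range(n):
            for v in range(u + 1, n):
                if colors[u] == colors[v] and random.random() < p:
                    parent[find(u)] = find(v)

        new_color = {}                    # one fresh uniform color per component
        return [new_color.setdefault(find(v), random.randrange(q)) for v in range(n)]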
A finite Markov chain exhibits cutoff if its distance from equilibrium remains close to the initial value over a certain number of iterations and then abruptly drops to near zero on a much shorter time scale. Originally discovered in the context of card shuffling (Aldous-Diaconis, 1986), this remarkable phenomenon is now rigorously established for many reversible chains. In this talk we consider the non-reversible case of random walks on sparse random directed graphs, for which even the equilibrium measure is far from being understood. We establish the cutoff phenomenon, determine its precise window and prove that the cutoff profile approaches a simple, universal shape. This is joint work with Charles Bordenave and Pietro Caputo.
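For reference, the cutoff phenomenon is commonly formalized as follows (a standard definition, stated here for orientation; the notation is not taken from the talk). Writing the worst-case total-variation distance from equilibrium after t steps as

    d_n(t) = \max_x \bigl\| P_n^t(x,\cdot) - \pi_n \bigr\|_{\mathrm{TV}},

the sequence of chains exhibits cutoff at times t_n with window w_n = o(t_n) if

    \lim_{c\to\infty} \liminf_{n\to\infty} d_n(t_n - c\,w_n) = 1
    \qquad\text{and}\qquad
    \lim_{c\to\infty} \limsup_{n\to\infty} d_n(t_n + c\,w_n) = 0.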
Wednesday, February 24th, 2016
Counting the number of independent sets of a bipartite graph (#BIS) plays a crucial role in the study of approximate counting. It has been conjectured that there is no fully polynomial-time (randomized) approximation scheme (FPTAS/FPRAS) for #BIS, and it has been proved that the problem restricted to instances with maximum degree 6 is already as hard as the general problem. In this work, we obtain a surprising tractability result for a family of #BIS instances: we design a very simple deterministic fully polynomial-time approximation scheme (FPTAS) for #BIS when the maximum degree of one side is no larger than 5. There is no restriction on the degrees of the other side, which do not even have to be bounded by a constant. Previously, an FPTAS was known only for instances with maximum degree 5 on both sides.
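To make the quantity being approximated concrete, here is a brute-force (exponential-time) Python count of independent sets in a bipartite graph; it uses only the elementary fact that, since neither side has internal edges, every subset S of the left side extends by an arbitrary subset of the right-hand vertices with no neighbor in S. This is purely illustrative and is unrelated to the FPTAS of the paper.

    from itertools import combinations

    def count_bis(left, right, edges):
        """Exact count of independent sets of a bipartite graph with parts
        `left` and `right` and edge list `edges` (pairs (l, r)):
            #IS = sum over subsets S of left of
                  2^(number of right vertices with no neighbor in S)."""
        nbrs = {r: {l for (l, rr) in edges if rr == r} for r in right}
        total = 0
        for k in range(len(left) + 1):
            for S in combinations(left, k):
                S = set(S)
                total += 2 ** sum(1 for r in right if not (nbrs[r] & S))
        return total

    # Example: the path l1 - r1 - l2 has 5 independent sets.
    print(count_bis(["l1", "l2"], ["r1"], [("l1", "r1"), ("l2", "r1")]))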
No abstract available.
Thursday, February 25th, 2016
Often practitioners run a Markov chain because they are interested in some feature of the chain. This feature might become suitably random much faster than "all features". In this talk, I collect together some examples and methods. One striking example, drawn from work with Bob Hough: for simple random walk on k x k (uni-upper-triangular) matrices with entries mod p, the entries just above the diagonal take order p^2 steps to get random, the entries on the second diagonal take order p steps, and the entries on the kth diagonal take order p^(2/k) steps.
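A minimal simulation of one natural version of this walk (multiplying by the elementary generators I +/- E_{i,i+1} mod p) is sketched below in Python; the talk's exact step distribution may differ.

    import random

    def walk_step(X, p):
        """One step: pick a row i uniformly and add +1 or -1 times row i+1 to
        row i mod p, i.e. multiply on the left by I +/- E_{i,i+1}.  The matrix
        stays uni-upper-triangular with entries mod p."""
        k = len(X)
        i = random.randrange(k - 1)
        s = random.choice((1, -1))
        for j in range(k):
            X[i][j] = (X[i][j] + s * X[i + 1][j]) % p
        return X

    # Start at the identity and watch how fast the different diagonals randomize.
    k, p = 5, 101
    X = [[int(i == j) for j in range(k)] for i in range(k)]
    for _ in range(100000):
        walk_step(X, p)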
How can we color the vertices of a graph by a local rule based on i.i.d. vertex labels? More precisely, suppose that the color of vertex v is determined by examining the labels within a finite (but perhaps random and unbounded) distance R of v, with the same rule applied at each vertex. (The coloring is then said to be a finitary factor of the i.i.d. labels). Focusing on Z^d, we investigate what can be said about the random variable R if the coloring is required to be proper, i.e. if adjacent vertices must have different colors. Depending on the dimension and the number of colors, the optimal tail decay is either a power law, or a tower of exponentials. I will briefly discuss generalizations to shifts of finite type and finitely dependent processes.
Based on joint work with Oded Schramm and David B. Wilson.
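As a toy illustration (not the optimal-tail construction from this work), the classical "greedy along decreasing labels" rule gives a finitary-factor proper coloring: each vertex takes the smallest color unused by its lower-labelled neighbors, so its color is determined by the labels along the finitely many decreasing-label paths leaving it. A Python sketch on a finite patch of Z^2:

    import random
    import sys

    n, d = 20, 2
    random.seed(0)
    label = {(x, y): random.random() for x in range(n) for y in range(n)}

    def neighbors(v):
        x, y = v
        return [w for w in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] if w in label]

    color_of = {}
    sys.setrecursionlimit(10**5)

    def color(v):
        """Smallest color not used by neighbors with strictly smaller labels
        (those neighbors are colored first, recursively); uses at most 2d+1 colors."""
        if v not in color_of:
            used = {color(w) for w in neighbors(v) if label[w] < label[v]}
            color_of[v] = min(set(range(2 * d + 1)) - used)
        return color_of[v]

    for v in label:
        color(v)
    assert all(color_of[v] != color_of[w] for v in label for w in neighbors(v))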
An important problem in the implementation of Markov Chain Monte Carlo algorithms is to determine the convergence time, or the number of iterations before the chain is close to stationarity. For many Markov chains used in practice this time is not known. Even in cases where the convergence time is known to be polynomial, the theoretical bounds are often too crude to be practical. Thus, practitioners like to carry out some form of statistical analysis in order to assess convergence. This has led to the development of a number of methods known as convergence diagnostics which attempt to diagnose whether the Markov chain is far from stationarity. We study the problem of testing convergence in a number of settings and prove that the problem is hard in a computational sense. This is joint work with Andrej Bogdanov and Elchanan Mossel.
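For readers unfamiliar with convergence diagnostics, here is a sketch of one classical heuristic of this type: a crude Gelman-Rubin-style statistic in Python comparing within-chain and between-chain variability of several independent runs. It is given only to illustrate the style of such diagnostics, not the specific tests studied in this work.

    import statistics

    def psrf(chains):
        """Potential scale reduction factor for a scalar summary statistic.
        chains: list of sample lists, one per independent run from a different
        start.  Values well above 1 suggest non-convergence; values near 1 do
        not certify convergence, which is the gap such diagnostics leave open."""
        n = min(len(c) for c in chains)
        chains = [c[:n] for c in chains]
        means = [statistics.fmean(c) for c in chains]
        W = statistics.fmean(statistics.variance(c) for c in chains)   # within-chain variance
        B = n * statistics.variance(means)                             # between-chain variance
        return ((((n - 1) / n) * W + B / n) / W) ** 0.5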
Friday, February 26th, 2016
The hard-core model from statistical physics is a randomly chosen independent set from a graph G (often the d-dimensional integer lattice), and the hard-sphere model is a random sphere packing in Euclidean space. I will describe a new method for bounding the partition function in each of these models based on analyzing the occupancy fraction: the expected fraction of vertices of G that appear in the random independent set, or the expected density of the sphere packing. Some of the applications of this method include new theorems in combinatorics and a surprising fact about spheres in dimension 24. I will also describe a possible approach to understanding the phase transition in hard-core systems via the occupancy fraction.
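For orientation, in the standard fugacity parameterization (with fugacity lambda; the talk may use different notation) the hard-core partition function and the occupancy fraction described above can be written in LaTeX as

    Z_G(\lambda) = \sum_{I \in \mathcal{I}(G)} \lambda^{|I|},
    \qquad
    \alpha_G(\lambda) = \frac{\mathbb{E}_\lambda |I|}{|V(G)|}
                      = \frac{\lambda}{|V(G)|} \, \frac{\partial}{\partial \lambda} \log Z_G(\lambda),

where \mathcal{I}(G) is the set of independent sets of G and the expectation is over an independent set drawn with probability proportional to \lambda^{|I|}.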