Monday, April 10th, 2017

9:30 am – 10:00 am
Green and Tao ([GT04]) used the existence of a dense subset of the integers that is indistinguishable from the primes by tests from a certain class to prove the existence of arbitrarily long arithmetic progressions of primes. Tao and Ziegler ([TZ06]) gave general conditions under which such a dense model exists. In [RTTV08], a quantitatively improved characterization was obtained via an argument based on Nisan's proof of the Impagliazzo hard-core set theorem ([I95]) from computational complexity. Gowers ([Gow08]) independently obtained a similar improvement.
 
We show that the existence of dense models can be reduced directly to the improved hardcore distribution results of Holenstein ([H05]). Using Holenstein's uniform proof of an optimal-density hardcore set theorem, we show that the dense models one derives have a canonical form, with models being (sampleable from) functions defined in terms of tests from the original class. We give several applications, including generalizations of weak regularity lemmas ([FK99, K97, COCF]). For example, we show that any graph G with ∆n² edges has a γ-cut-approximator of rank 2^{poly(1/γ, 1/log(1/∆))}, whereas direct application of [FK99] gives rank 2^{O(1/(γ²∆²))}.
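In schematic form (stated loosely here; the precise quantifiers are as in [RTTV08, Gow08]), a dense model theorem asserts:

\[
R \ \delta\text{-dense in } D, \quad D \ \epsilon\text{-indistinguishable from uniform by tests } \mathcal{F}
\;\Longrightarrow\;
\exists\, M \text{ of density } \delta \text{ with } \big|\mathbb{E}_{R}[f] - \mathbb{E}_{M}[f]\big| \le \epsilon' \ \text{ for all } f \in \mathcal{F}',
\]

where $\mathcal{F}'$ is a mildly weaker test class and, in the improved versions, $\epsilon'$ degrades only polynomially in $\epsilon$ and $\delta$.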
10:00 am – 10:30 am
We consider random (or sufficiently pseudorandom) instances of a constraint satisfaction problem with n variables, m = m(n) constraints, and constraint predicate F. Given a certain running time n^r for the "SOS algorithm", how strong an upper bound 1 - delta can be placed on the maximum fraction of simultaneously satisfiable constraints?
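As a toy illustration of the setting (my sketch, not from the talk; the predicate here is 3-XOR as a stand-in for a general F), a random instance and its satisfaction fraction might be sampled as follows:

```python
import random

def random_3xor_instance(n, m, seed=0):
    """Sample m random constraints over n Boolean variables; each constraint
    requires the XOR of 3 distinct variables to equal a random bit."""
    rng = random.Random(seed)
    return [(rng.sample(range(n), 3), rng.randint(0, 1)) for _ in range(m)]

def frac_satisfied(assignment, constraints):
    """Fraction of XOR constraints satisfied by a 0/1 assignment."""
    ok = 0
    for (i, j, k), rhs in constraints:
        if assignment[i] ^ assignment[j] ^ assignment[k] == rhs:
            ok += 1
    return ok / len(constraints)

# With m >> n, every assignment satisfies roughly half the constraints;
# the question is how small an upper bound 1 - delta SOS can certify in time n^r.
inst = random_3xor_instance(n=50, m=2000)
print(frac_satisfied([0] * 50, inst))
```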
 
We establish the optimal 3-way tradeoff between m, r, and delta, in terms of F, generalizing all previously known lower bounds in the area.  
 
Perhaps the most interesting contribution is a new and improved definition for the "closure" of a set of constraints in a CSP, a graph-theoretic concept heavily used in previous work on proof complexity.
 
Joint work with Pravesh Kothari, Ryuhei Mori, and David Witmer.
11:00 am – 12:00 pm

What structure/randomness dichotomies can be found across families of infinite structures? How precisely can we pinpoint the interplay of randomness and freedom versus structure and control inside a structure? Model theory has long investigated these questions. It turns out that model-theoretic theorems about infinite objects can have strong consequences for finite objects. The talk will expand on these questions and some motivating consequences from the speaker's work (joint with S. Shelah), including a characterization of the existence of irregular pairs in Szemeredi's regularity lemma and proofs of instances of the Erdos-Hajnal conjecture.

2:00 pm – 3:00 pm
We determine the asymptotics of the log-probability that the number of k-term arithmetic progressions in a random subset of integers exceeds its expectation by a constant factor. This is the arithmetic analog of subgraph counts in a random graph. The solution uses nonlinear large deviation principles developed by Chatterjee and Dembo, and, more recently, by Eldan. 
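Concretely (notation mine): if $X$ is the number of $k$-term arithmetic progressions in a random subset of $\{1,\ldots,n\}$ in which each element is included independently with probability $p$, the quantity determined is the asymptotic order of

\[
-\log \Pr\big[\, X \ge (1+\delta)\,\mathbb{E}[X] \,\big]
\]

for a fixed constant $\delta > 0$.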
 
I will highlight some open problems in additive combinatorics that we encountered in our work, namely concerning the "complexity" of the dual functions of AP-counts. These problems seem to extend beyond the current reach of higher-order Fourier analysis.
3:30 pm – 4:00 pm
We consider the standard two-party communication model. The central problem studied in this talk is how much one can save in information complexity by allowing an error of epsilon. 
 
We answer a question of Braverman by establishing a gap between the two different notions of the prior-free information cost. This implies that amortized randomized communication complexity is not necessarily equal to the amortized distributional communication complexity with respect to the hardest distribution. 
 
This is based on joint work with Yuval Dagan, Yuval Filmus, and Yaqiao Li.
4:00 pm – 4:30 pm
We address the question of when randomly perturbed graphs are sparse (in the sense of bounded expansion and nowhere dense classes). This relates to special coloring problems. Joint work with P. Ossona de Mendez.

Tuesday, April 11th, 2017

9:30 am – 10:30 am
Speaker: Terry Tao, UCLA

We discuss a variant of the density and energy increment arguments that we call an "entropy decrement method", which can be used to locate a scale at which two relevant random variables share very little mutual information, and thus behave somewhat like independent random variables. We were able to use this method to obtain a new correlation estimate for multiplicative functions, which in turn was used to establish the Erdos discrepancy conjecture that any sequence taking values in {-1,+1} has unbounded sums on homogeneous arithmetic progressions.
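In symbols, the discrepancy statement is that every $f\colon \mathbb{N} \to \{-1,+1\}$ satisfies

\[
\sup_{n,\, d \ge 1} \Big| \sum_{j=1}^{n} f(jd) \Big| = \infty.
\]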

11:00 am – 12:00 pm
Vinogradov showed in 1937 that every large enough odd integer can be represented as a sum of three primes. One may ask what happens if these primes are restricted to some subset of the primes. In general, if the set is badly distributed in congruence classes or Bohr sets, the result does not necessarily hold. Examples of such badly distributed sets include the set of primes that are 1 mod 5, and the set of primes $p$ for which $\| \sqrt{2} p \| < 1/10$, where $\|x\|$ denotes the distance from $x$ to the nearest integer.
 
In the talk I discuss joint works with X. Shao, and with J. Maynard and X. Shao, on Vinogradov's three primes theorem for subsets of primes. In particular I will describe two ``transference type'' results aimed at showing that the obstructions described above are the only sort of obstructions. As applications of the transference principles we obtain that Vinogradov's three primes theorem holds for Chen's primes (i.e., primes $p$ for which $p+2$ has at most two prime factors) and for almost equal primes (i.e., for any $\theta > 0.55$, every large enough odd integer $N$ can be represented as $N = p_1 + p_2 + p_3$, where $|p_i - N/3| < N^\theta$ for each $i$).
2:00 pm – 3:00 pm
The collection of large Fourier coefficients of a function, whether it is called the 'major arcs' or the 'large spectrum', is one way of representing the linearly structured component of the function, and as such it plays an important role in many problems in additive combinatorics, analytic number theory, theoretical computer science, and beyond. In this talk I will discuss some results concerning what kind of additive structure such sets can have, and how this structure can be exploited to improve density increment arguments.
3:30 pm – 4:00 pm
A graph G has threshold rank k if the normalized adjacency matrix of G has at most k ``large'' eigenvalues. In this talk we discuss structural properties of graphs of small threshold rank. We show that these graphs can be viewed as a union of at most k expander graphs. We also discuss a natural extension of the weak regularity lemma to (sparse) low threshold rank graphs.
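To fix notation (one standard variant; some definitions count absolute values of eigenvalues): if $\lambda_1 \ge \cdots \ge \lambda_n$ are the eigenvalues of the normalized adjacency matrix of $G$, the $\tau$-threshold rank of $G$ is

\[
\mathrm{rank}_{\tau}(G) \;=\; \#\{\, i : \lambda_i \ge \tau \,\},
\]

so "threshold rank $k$" means that at most $k$ eigenvalues exceed the threshold $\tau$.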
 
Based on joint works with Luca Trevisan.

Wednesday, April 12th, 2017

9:30 am – 10:30 am
The symmetry of a circle is easily destroyed: just ``individualize'' two non-opposite points -- color one of them red, the other blue -- and all the symmetry is gone.  In fact, the resulting structure is completely irregular: every point is uniquely identified by a pair of numerical invariants, namely, its pair of distances to the two individualized points. We shall say that the circle has a high degree of hidden irregularity.
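A toy discrete version of this computation (my illustration, not from the talk): individualize two non-opposite vertices of a regular n-gon and check that each vertex is identified by its pair of circular distances to them.

```python
def circular_dist(i, j, n):
    """Graph distance between vertices i and j on the n-cycle."""
    return min((i - j) % n, (j - i) % n)

n = 12
a, b = 0, 5   # two individualized vertices; b != n // 2, so they are not opposite

# Each vertex's invariant: its pair of distances to the individualized vertices.
invariants = [(circular_dist(v, a, n), circular_dist(v, b, n)) for v in range(n)]

# All pairs are distinct: every vertex is uniquely identified, the symmetry is gone.
assert len(set(invariants)) == n
```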
 
In contrast, Johnson graphs are objects with robust symmetry: individualizing a small number of vertices of a Johnson graph hardly makes a dent in its symmetry.
 
It turns out that Johnson graphs are unique in this regard: Every finite relational structure of small arity either has a measurable (say 10%) hidden irregularity (exposed by individualizing a small number of elements) or has a high degree of hidden structure, manifested in a canonically embedded Johnson graph on more than 90% of the underlying set.
 
This dichotomy provides the combinatorial Divide-and-Conquer tool to recent progress on the complexity of the Graph Isomorphism problem.
11:00 am – 12:00 pm
Continuation of the preceding talk.
2:00 pm – 3:00 pm

This is part of our ongoing efforts to understand high-dimensional combinatorial objects. These include simplicial complexes (a 1-dimensional simplicial complex is a graph), high-dimensional permutations (e.g., a 2-dimensional permutation is synonymous with a Latin square), hypertrees, and more. For every type of combinatorial object that we wish to study, it is crucial to understand its typical behavior, namely the properties of random objects of the type in question. In this talk I describe recent work with my student Maya Dotan which allows us to efficiently generate random one-factorizations. I will also mention ongoing work with my student Michael Simkin on random generation of Latin squares. In addition I will speak about joint work with Zur Luria on low-discrepancy high-dimensional permutations.

Thursday, April 13th, 2017

9:30 am – 10:30 am

Positional games are perfect information games between two players with no chance moves. As such, they do not seem to allow much room for randomness and random considerations. Yet, as convincingly shown by intensive research in the last decades, randomness is omnipresent in positional games and manifests itself in a variety of ways, including:
- probabilistic(-looking) winning criteria (a classical example appears after this abstract);
- guessing the threshold bias of biased games;
- winning using random play;
- playing on random boards;
- introducing randomness into game rules.

In this survey talk I will discuss and illustrate the rich interplay between positional games and randomness, concentrating mostly on Maker-Breaker games. Essentially no prior experience with positional games will be assumed.
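One classical instance of such a probabilistic-looking criterion is the Erdos-Selfridge theorem: in a Maker-Breaker game with family of winning sets $\mathcal{H}$, if

\[
\sum_{A \in \mathcal{H}} 2^{-|A|} < \frac{1}{2},
\]

then Breaker, playing second, has a winning strategy. The left-hand side is precisely the expected number of winning sets fully claimed by Maker if both players play uniformly at random.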

11:00 am – 12:00 pm
I will describe a journey that starts at error correcting codes for distributed storage, and leads to graph labeling, the study of the Birkhoff polytope graph and the representation theory of the symmetric group.
 
On the technical side, we prove tight bounds on the independence number of the Birkhoff polytope graph, and use them to characterize the alphabet size needed for maximally recoverable codes in grid topologies.
 
Joint work with Daniel Kane (UCSD) and Sankeerth Rao (UCSD).
2:00 pm – 2:30 pm

Let f:F_2^n --> {0,1} be a function and suppose F_2^n x F_2^n is partitioned into k sets A_i x B_i such that f is constant on each sumset A_i + B_i. We show that this implies a partitioning of F_2^n into quasipoly(k) affine subspaces such that f is constant on each. In other words, up to polynomial factors, the deterministic communication complexity and the parity decision tree complexity of f are equal. This relies on a novel technique of entropy decrement combined with Sanders' Bogolyubov-Ruzsa lemma.

Joint work with Hamed Hatami and Shachar Lovett.

2:30 pm – 3:00 pm

A random subset with density alpha of an n-dimensional vector space over a finite field on p elements likely contains, for each nonzero common difference, about an alpha^3 fraction of the three-term arithmetic progressions with that common difference. Note that, for a subset of density alpha, the density of three-term arithmetic progressions can be much smaller than alpha^3. However, Green proved that for each epsilon>0 there is n_p(epsilon) such that if the dimension n of the vector space is at least n_p(epsilon), then there always exists a nonzero common difference d for which the three-term arithmetic progression density is at least alpha^3-epsilon, which is what we expect from the random bound. The proof uses the arithmetic regularity lemma and gives a bound on n_p(epsilon) which is an exponential tower of height polynomial in 1/epsilon.
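In notation (mine), for $A \subseteq \mathbb{F}_p^n$ of density $\alpha$ and a common difference $d \neq 0$, write

\[
\Lambda_d(A) \;=\; \Pr_{x \in \mathbb{F}_p^n}\big[\, x \in A,\ x+d \in A,\ x+2d \in A \,\big];
\]

Green's theorem then says that once $n \ge n_p(\epsilon)$, some $d \neq 0$ satisfies $\Lambda_d(A) \ge \alpha^3 - \epsilon$.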

We provide a construction which shows that n_p(epsilon) is at least a tower with height logarithmic in 1/epsilon when epsilon is small, and prove a matching upper bound up to constant factor in the tower height. This gives the first natural application of a regularity lemma where the tower type bound is really necessary. Tight bounds are also obtained for other ranges of epsilon (in terms of the density alpha).

This is joint work with Jacob Fox and Yufei Zhao.

3:30 pm – 4:00 pm

Locally decodable codes (LDCs) are error correcting codes that allow for decoding of a single message bit using a small number of queries to a corrupted encoding. In this work, we give a new characterization of LDCs using distributions over smooth Boolean functions whose expectation is hard to approximate (in the $L_\infty$ norm) with a small number of samples. We give several candidates for such distributions coming from finite field incidence geometry and from hypergraph (non)expanders. We also prove a useful lemma showing that (smooth) LDCs which are only required to work on average over a random message and a random message index can be turned into true LDCs at the cost of only constant factors in the parameters.
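For reference, the standard definition (parameters mine): $C\colon \{0,1\}^k \to \{0,1\}^n$ is a $(q,\delta,\epsilon)$-LDC if there is a randomized decoder that, for every message index $i$ and every word $y$ within Hamming distance $\delta n$ of $C(x)$, reads at most $q$ positions of $y$ and satisfies

\[
\Pr\big[\mathrm{Dec}^{\,y}(i) = x_i\big] \;\ge\; \tfrac{1}{2} + \epsilon.
\]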

4:00 pm – 4:30 pm
A basic combinatorial interpretation of Shannon's entropy function is via the ``20 questions'' game. This cooperative game is played by two players, Alice and Bob: Alice picks a distribution $\pi$ over the numbers $\{1,\ldots,n\}$ and announces it to Bob. She then chooses a number $x$ according to $\pi$, and Bob attempts to identify $x$ using as few Yes/No queries as possible, on average.
 
An optimal strategy for the ``20 questions'' game is given by a Huffman code for $\pi$: Bob's questions reveal the codeword for $x$ bit by bit. This strategy finds $x$ using fewer than $H(\pi)+1$ questions on average. However, the questions asked by Bob could be arbitrary. In this paper, we investigate the following question: \emph{Are there restricted sets of questions that match the performance of Huffman codes, either exactly or approximately?}
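A quick empirical check of the Huffman benchmark (my sketch, not the paper's code; it uses the fact that the expected depth of a Huffman tree equals the sum of the weights created by the merges):

```python
import heapq
from math import log2

def entropy(pi):
    return -sum(p * log2(p) for p in pi if p > 0)

def huffman_expected_length(pi):
    """Expected codeword length of a Huffman code for the distribution pi."""
    heap = list(pi)
    heapq.heapify(heap)
    total = 0.0
    while len(heap) > 1:
        a, b = heapq.heappop(heap), heapq.heappop(heap)
        total += a + b            # each merge adds its weight to the expected depth
        heapq.heappush(heap, a + b)
    return total

pi = [0.4, 0.3, 0.2, 0.1]
L, H = huffman_expected_length(pi), entropy(pi)
assert H <= L < H + 1   # Bob finds x in fewer than H(pi) + 1 questions on average
```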
 
Our first main result shows that for every distribution $\pi$, Bob has a strategy that uses only questions of the form ``$x < c$?'' and ``$x = c$?'', and uncovers $x$ using at most $H(\pi)+1$ questions on average, matching the performance of Huffman codes in this sense. We also give a natural set of $O(rn^{1/r})$ questions that achieves a performance of at most $H(\pi)+r$, and show that $\Omega(rn^{1/r})$ questions are required to achieve such a guarantee.
 
Our second main result gives a set $\mathcal{Q}$ of $1.25^{n+o(n)}$ questions such that for every distribution $\pi$, Bob can implement an \emph{optimal} strategy for $\pi$ using only questions from $\mathcal{Q}$. We also show that $1.25^{n-o(n)}$ questions are needed, for infinitely many $n$. Here, the set $\mathcal{Q}$ is derived via a random construction. If time allows, we will discuss the challenge of derandomizing this construction, and a connection with hitting maximal antichains.
 
Based on joint work with Yuval Dagan, Yuval Filmus, and Ariel Gabizon.

Friday, April 14th, 2017

9:30 am – 10:30 am
A word w is said to contain the pattern P if there is a way to substitute a nonempty word for each letter in P so that the resulting word is a subword of w. Bean, Ehrenfeucht and McNulty and, independently, Zimin characterised the patterns P which are unavoidable, in the sense that any sufficiently long word over a fixed alphabet contains P. Zimin's characterisation says that a pattern is unavoidable if and only if it is contained in a Zimin word, where the Zimin words are defined by Z_1 = x_1 and Z_n=Z_{n-1} x_n Z_{n-1}.  We study the quantitative aspects of this theorem, obtaining essentially tight tower-type bounds for the function f(n,q), the least integer such that any word of length f(n, q) over an alphabet of size q contains Z_n.
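The recursion unwinds easily (a minimal sketch, writing a, b, c, ... for the letters $x_1, x_2, x_3, \ldots$):

```python
from string import ascii_lowercase

def zimin(n):
    """Zimin word Z_n, built via Z_1 = x_1 and Z_n = Z_{n-1} x_n Z_{n-1}."""
    word = ""
    for letter in ascii_lowercase[:n]:
        word = word + letter + word
    return word

assert zimin(3) == "abacaba"
assert all(len(zimin(n)) == 2**n - 1 for n in range(1, 10))  # |Z_n| = 2^n - 1
```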
 
Joint work with Jacob Fox and Benny Sudakov.
11:00 am – 12:00 pm
Suppose we have a set of integers S, and we wish to count solutions to a certain system of linear equations in S; in particular, we want to know when we get the expected "random" answer. A by-now standard approach is to show that the count is controlled by a certain uniformity norm $U^s$ for some $s$, and to use tools from higher order Fourier analysis to understand when S is pseudorandom by that measure.
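For example (a standard instance, not specific to this talk): the density of three-term progressions in $S \subseteq \mathbb{Z}/N\mathbb{Z}$ of density $\alpha$,

\[
\Lambda_3(S) \;=\; \mathbb{E}_{x,d}\big[\,1_S(x)\,1_S(x+d)\,1_S(x+2d)\,\big],
\]

satisfies $|\Lambda_3(S) - \alpha^3| \lesssim \|1_S - \alpha\|_{U^2}$, so ordinary Fourier analysis ($s = 1$) suffices for three-term progressions, while four-term progressions already require the $U^3$ norm.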
 
One question is: what is the smallest s you can get away with? In other words, how high up in the higher-order Fourier hierarchy do you have to go to get the right answer? When can we get away with looking just for linear discrepancies?
 
This quantity s is called the "true complexity" of the system, and these questions were addressed in work of Gowers--Wolf and Green--Tao. A striking feature, though, is the significant gap between what elementary methods can show (the "Cauchy--Schwarz complexity") and the truth, which is currently only accessible using very heavy machinery and which gives ineffective bounds.
 
We will explore some questions in this area.