Monday, October 15th, 2018

9:30 am – 10:30 am
Speaker: Alexander Sherstov (UCLA)

We study the infinity-norm approximation of halfspaces h: {0,1}^n -> {0,1} by polynomials and rational functions. We determine the minimum error to which approximants of any given degree can approximate the worst-case halfspace, and construct this "hardest" halfspace explicitly. As an application, we construct a communication problem with the largest possible gap between sign-rank and discrepancy (or equivalently, communication complexity with unbounded versus weakly bounded error), improving quadratically on previous constructions. The proof features elements of approximation theory, random walks, and number theory. Our starting point is the construction, due to Ajtai et al. (1990), of a set of O(log m) integers that appear random modulo m in the sense of the discrete Fourier transform on Z/mZ.
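Since a halfspace's value depends only on its weighted sum, the minimum uniform error achievable by degree-d polynomials can be computed exactly by linear programming in small symmetric cases such as majority. A minimal Python sketch (my illustration, not the talk's construction; parameter values are arbitrary):

    import numpy as np
    from scipy.optimize import linprog

    n, d = 20, 4
    ks = np.arange(n + 1)                          # majority depends only on |x|
    h = (ks >= (n + 1) / 2).astype(float)          # the halfspace as a step function
    V = np.vander(ks / n, d + 1, increasing=True)  # degree-<=d monomials on [0,1]
    # Minimize e subject to |V c - h| <= e pointwise (variables: c_0..c_d, e).
    A_ub = np.block([[V, -np.ones((n + 1, 1))], [-V, -np.ones((n + 1, 1))]])
    b_ub = np.concatenate([h, -h])
    cost = np.zeros(d + 2); cost[-1] = 1.0
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * (d + 1) + [(0, None)])
    print(f"best degree-{d} uniform error for majority, n={n}: {res.fun:.4f}")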

11:00 am – 11:30 am
Speaker: Michal Koucky (Charles University)

We consider the problem of elimination in communication complexity, which was first raised by Ambainis et al. (2001) and later studied by Beimel et al. (2014) for its connection to the famous direct sum question. In this problem, let f: {0,1}^{2n} -> {0,1} be any Boolean function. Alice and Bob get k inputs x_1,...,x_k and y_1,...,y_k respectively, with x_i, y_i in {0,1}^n. They want to output a k-bit vector v such that there exists one index i for which v_i is not equal to f(x_i, y_i). We prove a general result lower bounding the randomized communication complexity of the elimination problem for f using its discrepancy. Consequently, we obtain strong lower bounds for the functions Inner-Product and Greater-Than, that work for exponentially larger values of k than the best previous bounds. To prove our result, we use a pseudo-random notion called regularity that was first used by Raz and Wigderson (1999). We show that functions with small discrepancy are regular. We also observe that a weaker notion, which we call weak-regularity, already implies hardness of elimination. Finally, we give a different proof, borrowing ideas from Viola (2015), to show that Greater-Than is weakly regular.
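For intuition, here is a brute-force rendering of the task (a toy definition of mine, not from the paper). Every k-bit vector except the true answer vector is a valid output, so 2^k - 1 of the 2^k vectors succeed; the difficulty is pinning down even one wrong coordinate with little communication.

    import itertools

    def valid_outputs(f, xs, ys):
        """All k-bit vectors v with v_i != f(x_i, y_i) for at least one i."""
        truth = tuple(f(x, y) for x, y in zip(xs, ys))
        k = len(truth)
        # v is valid iff it differs from the truth vector in some coordinate,
        # i.e., iff v is anything other than the truth vector itself.
        return [v for v in itertools.product((0, 1), repeat=k) if v != truth]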

11:30 am – 12:00 pm
Speaker: Amir Yehudayoff (Technion Israel Institute of Technology)

The plan is to see how to use triangular discrimination to prove stronger round elimination results.

2:00 pm – 2:30 pm
Speaker: Justin Thaler (Georgetown University)

Quantum query complexity is known to behave multiplicatively under block composition: for any two functions f, g with quantum query complexities Q(f) and Q(g), the block composition F obtained by applying f to the results of g on disjoint sets of variables satisfies Q(F) = Theta(Q(f) * Q(g)). Meanwhile, "shared-input" compositions, where the copies of the inner function g may share a common pool of n inputs in an arbitrary fashion, are not well understood. I will describe a general setting in which the overlapping structure of the inputs to copies of g can be exploited algorithmically. Specifically, for any function f, I will give an algorithm for evaluating any shared-input composition of f and g = AND using ~O((Q(f) * n)^{1/2}) quantum queries. As long as Q(f) << n, this improves on the bound of O(Q(f) * n^{1/2}) that follows by treating each conjunction independently, and is tight for worst-case choices of f. I will describe consequences in circuit complexity and learning theory, and highlight several tantalizing open questions. Joint work with Mark Bun and Robin Kothari.
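To fix notation, a minimal Python sketch (my illustration; helper names are hypothetical) contrasting block composition, where each copy of g reads its own disjoint variables, with shared-input composition, where the copies read overlapping subsets of a common pool:

    def block_compose(f, g, n, m):
        """F on n*m bits: copy i of g reads its own disjoint block of m bits."""
        def F(bits):
            return f([g(bits[i * m:(i + 1) * m]) for i in range(n)])
        return F

    def shared_compose(f, g, index_sets):
        """Copies of g read (possibly overlapping) subsets of n shared bits."""
        def F(bits):
            return f([g([bits[j] for j in S]) for S in index_sets])
        return F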

2:30 pm – 3:00 pm
Speaker: Toni Pitassi (University of Toronto)

In this talk we discuss both new and old connections between classic problems in Number-on-Forehead Communication Complexity and Additive Combinatorics and Ramsey Theory. We also give a new, direct protocol for the Exact-n problem of determining whether the sum of the players' inputs is exactly n. This is joint work with Adi Shraibman and Nati Linial.

3:30 pm – 4:00 pm
Speaker: Emanuele Viola (Northeastern University)

We show that if we add a little noise to any distribution with bounded independence then we fool tests given by the product of bounded functions. This result has applications to pseudorandomness and communication complexity. In the talk we shall consider decoding noisy codewords of length m split among k parties. For Reed-Solomon codes of dimension m/k where k = O(1), communication Omega(pm) is required to decode one message symbol from a codeword with pm errors, and this is nearly tight. This result gives a formal sense in which decoding is harder than encoding. Based on work with Elad Haramaty and Chin Ho Lee.
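A toy experiment (mine, not the paper's) shows the flavor of the first statement: the bits (a, b, a XOR b) are pairwise independent, yet a product test distinguishes them from uniform perfectly; independent noise damps the test's advantage by a factor of (1 - 2*eps) per coordinate. The actual theorem needs higher independence to drive the bias exponentially low.

    import random

    def pairwise3():
        """Three bits, pairwise independent but not 3-wise independent."""
        a, b = random.getrandbits(1), random.getrandbits(1)
        return [a, b, a ^ b]

    def add_noise(x, eps):
        return [bit ^ (random.random() < eps) for bit in x]

    def product_test(samples):
        """Estimate E[prod_i (-1)^{x_i}]: 0 for uniform bits, 1 for pairwise3."""
        return sum((-1) ** sum(x) for x in samples) / len(samples)

    N = 100_000
    print(product_test([pairwise3() for _ in range(N)]))                  # = 1.0
    print(product_test([add_noise(pairwise3(), 0.1) for _ in range(N)])) # ~ 0.8^3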

4:00 pm – 4:30 pm
Speaker: Sumegha Garg (Princeton University)

While it is known that using network coding can significantly improve the throughput of directed networks, it is a notorious open problem whether coding yields any advantage over the multicommodity flow (MCF) rate in undirected networks. It was conjectured by Li and Li (2004) that the answer is "no". In this talk, we show that even a small advantage over MCF can be amplified to yield a near-maximum possible gap. We prove that any undirected network with k source-sink pairs that exhibits a (1+eps) gap between its MCF rate and its network coding rate can be used to construct a family of graphs G whose gap is (log |G|)^c for some constant c < 1. The resulting gap is close to the best currently known upper bound, log |G|, which follows from the connection between MCF and sparsest cuts.

Joint work with Mark Braverman and Ariel Schvartzman.

Tuesday, October 16th, 2018

9:30 am – 10:30 am
Speaker: Grigory Yaroslavtsev (Indiana University, Bloomington)

I will describe recent advances in linear sketching over finite fields. For a function f: Z_p^n -> R, a linear sketch modulo p is a distribution A over k x n matrices with entries in Z_p which for every x \in Z_p^n allows one to compute f(x) given access only to Ax. An important example is binary sketching (corresponding to p = 2). I will describe recent structural results about the smallest dimension k that enables linear sketching, using the tools of Fourier analysis and communication complexity. In particular, if f is an XOR-function, linear sketching over Z_2 turns out to be the optimal tool for the design of two-player one-way communication protocols (under the uniform distribution of x) as well as multi-player one-way broadcasting protocols with Omega(n) players (for any distribution of x).

Based on joint works with Kannan, Mossel and Sanyal (CCC'18) and Hosseini and Lovett.
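A minimal numpy sketch of the object in question (my illustration): a random k x n matrix over Z_2, applied as x -> Ax mod 2. The linearity that makes these sketches the natural tool for XOR-functions is visible directly: the sketch of x XOR y is determined by the sketches of x and y alone.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 32, 8
    A = rng.integers(0, 2, size=(k, n), dtype=np.uint8)  # sketch matrix over Z_2

    def sketch(x):
        return (A @ x) % 2

    x = rng.integers(0, 2, n, dtype=np.uint8)
    y = rng.integers(0, 2, n, dtype=np.uint8)
    # Linearity over Z_2: sketch(x XOR y) = sketch(x) + sketch(y) mod 2.
    assert np.array_equal(sketch(x ^ y), (sketch(x) + sketch(y)) % 2)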

11:00 am – 11:30 am
Speaker: Joshua Brody (Swarthmore College)

In this talk, we consider randomized query complexity in the low-error regime, along with the direct sum problem for randomized query complexity. First, we give a total function whose \eps-error query complexity scales with log(1/\eps), even when the query algorithm is allowed to abort with constant probability. Asymptotic separations between zero-error and (1/3)-error query complexity for total functions weren't even known until very recently. We then give a strong direct sum theorem for randomized query complexity. Specifically, we show that computing k copies of any function f with error \eps requires k times the query complexity of computing one copy with low error (~\eps/k) and constant abort probability. Taken together, these results give a total function f such that computing k copies of f requires an Omega(k log k) blowup in query complexity, partially answering a question of Drucker (2012). As a technical tool, we introduce the notion of a query resistant code, which is a probabilistic encoding ENC of an alphabet \Sigma such that at least half of the bits of ENC(x) must be queried before learning anything about x \in \Sigma.

This is joint work with Eric Blais.
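As a toy instance of the definition (mine, not the paper's construction, and with a stronger parameter than the definition requires): XOR secret sharing encodes a bit as m bits whose parity equals the bit, so any m - 1 of the output bits are jointly uniform and reveal nothing.

    import random

    def enc(bit, m=8):
        """Encode `bit` as m bits whose XOR equals `bit`. Any m-1 of the
        output bits are uniformly random, so nothing is learned about `bit`
        until all m bits have been queried."""
        shares = [random.getrandbits(1) for _ in range(m - 1)]
        shares.append(bit ^ (sum(shares) % 2))  # last share fixes the parity
        return shares

    assert sum(enc(0)) % 2 == 0 and sum(enc(1)) % 2 == 1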

11:30 am – 12:00 pm
Speaker: Troy Lee (Nanyang Technological University)

A natural way to build up functions is to compose them together. For Boolean functions f: {0,1}^n -> {0,1} and g: {0,1}^m -> {0,1} we can create a composed function f o g^n : {0,1}^{mn} -> {0,1}, where f o g^n (x_1, ..., x_n) = f(g(x_1), g(x_2), ..., g(x_n)). For most reasonable complexity measures, the complexity of f o g^n is at most the complexity of f times the complexity of g. For deterministic query complexity and quantum query complexity this has been shown to be a lower bound as well. For randomized query complexity, however, it is not known if the randomized complexity of f times the randomized complexity of g is a lower bound on the randomized complexity of the composition of f and g. We give an example, with f a relation and g a partial function, where R(f o g^n) <= R(f) * sqrt(R(g)). Thus in this case the perfect composition lower bound cannot hold. For this general setting we show a matching lower bound: R(f o g^n) >= Omega(R(f) * sqrt(R(g))). This generalizes (and slightly improves) the lower bound of Ben-David and Kothari, who showed a similar result when f is a partial function and g is total. Our lower bound goes through a new complexity measure we introduce called conflict complexity. This is joint work with Dmitry Gavinsky, Miklos Santha, and Swagato Sanyal.

2:00 pm – 2:30 pm
Speaker: Rotem Oshman (Tel Aviv University)

Interactive proof systems allow a resource-bounded verifier to decide an intractable language (or compute a hard function) by communicating with a powerful but untrusted prover. Such systems guarantee that the prover can only convince the verifier of true statements. In the context of centralized computation, a celebrated result shows that interactive proofs are extremely powerful, allowing polynomial-time verifiers to decide any language in PSPACE. In this work we initiate the study of interactive distributed proofs: a network of nodes interacts with a single untrusted prover, who sees the entire network graph, to decide whether the graph satisfies some property. We focus on the communication cost of the protocol: the number of bits the nodes must exchange with the prover and with each other. Our model can also be viewed as a generalization of the various models of "distributed NP" (proof labeling schemes, etc.) which have received significant attention recently: while these models only allow the prover to present each network node with a string of advice, our model allows for back-and-forth interaction. We prove both upper and lower bounds for the new model. We show that for some problems, interaction can exponentially decrease the communication cost compared to a non-interactive prover, but on the other hand, some problems retain non-trivial cost even with interaction.

2:30 pm – 3:00 pm
Speaker: Kasper Green Larsen (Aarhus University)

An Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky [JACM'96], is a (possibly randomized) RAM for which the memory access pattern reveals no information about the operations performed. The main performance metric of an ORAM is the bandwidth overhead, i.e., the multiplicative factor of extra memory blocks that must be accessed to hide the operation sequence. In their seminal paper introducing the ORAM, Goldreich and Ostrovsky proved an amortized Omega(log n) bandwidth overhead lower bound for ORAMs with memory size n. Their lower bound is very strong in the sense that it applies to the "offline" setting, in which the ORAM knows the entire sequence of operations ahead of time. However, as pointed out by Boyle and Naor [ITCS'16] in the paper "Is there an oblivious RAM lower bound?", there are two caveats with the lower bound of Goldreich and Ostrovsky: (1) it only applies to "balls in bins" algorithms, i.e., algorithms where the ORAM may only shuffle blocks around and not apply any sophisticated encoding of the data, and (2) it only applies to statistically secure constructions. Boyle and Naor showed that removing the "balls in bins" assumption would result in super-linear lower bounds for sorting circuits, a long-standing open problem in circuit complexity. As a way of circumventing this barrier, they also proposed the notion of an "online" ORAM, which is an ORAM that remains secure even if the operations arrive in an online manner. They argued that most known ORAM constructions work in the online setting as well. In this talk, we present an Omega(log n) lower bound on the bandwidth overhead of any online ORAM, even if we require only computational security and allow arbitrary representations of data, thus greatly strengthening the lower bound of Goldreich and Ostrovsky in the online setting. The lower bound applies to ORAMs with memory size n and any word size w >= log n. The bound therefore asymptotically matches the known upper bounds when w = Omega(log^2 n) or when m = Omega(n^eps) for any constant eps > 0. This work received the CRYPTO'18 Best Paper Award and is joint with Jesper Buus Nielsen from Aarhus University.
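To make the metric concrete: the trivial way to hide the access pattern is to touch every block on every operation, which costs bandwidth overhead n; the talk's result says no online ORAM can do better than overhead Omega(log n). A minimal Python sketch of the trivial scheme (my illustration):

    class LinearScanORAM:
        """Trivial ORAM: every access touches all n blocks, so the physical
        access pattern is independent of the logical address. Bandwidth
        overhead is n, far above the Theta(log n) of tree-based ORAMs."""
        def __init__(self, n):
            self.mem = [0] * n
        def access(self, op, addr, val=None):
            out = None
            for i in range(len(self.mem)):  # touch every block, whatever addr is
                if i == addr:
                    out = self.mem[i]
                    if op == 'write':
                        self.mem[i] = val
            return out

    oram = LinearScanORAM(8)
    oram.access('write', 3, 42)
    print(oram.access('read', 3))  # 42; the scan pattern reveals nothing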

3:30 pm – 4:00 pm
Speaker: Amit Chakrabarti (Dartmouth College)

A single-player Memory Game starts with n pairs of identical cards facing down, with a picture drawn on the face of each card. There are n distinct pictures: the two cards in each pair have the same picture. Each operation in the game reveals two cards. If these cards match, i.e., they have the same picture, then the player takes them away; otherwise the cards are turned back to face down. The object of the game is to clear all cards as quickly as possible. Past works have thoroughly studied the expected number of moves required, assuming optimal play, but these works assume that the player has perfect memory. Let T be the number of operations required to clear all cards when the player has only S bits of memory. A little thought shows that a tradeoff upper bound of ST = O(n^2 log n) is achievable. In our work, we study corresponding lower bounds. We prove that ST = Omega(n^2) in a restricted model where the player may only compare cards for equality and cannot perform any computations on the pictures. We further prove that ST^2 = Omega(n^3) without any such restrictions, assuming only that each picture can be represented as an integer in {1,2,...,R}. We do this by studying the problem in an R-way branching program model, extending an old counting argument due to Borodin and Cook, but with a new twist involving negatively associated random variables. All our lower bounds hold for both deterministic and randomized algorithms. We view these results as preliminary steps towards understanding the complexity of finding good matchings in graphs in space-limited settings. This is joint work with Yining Chen (Dartmouth).
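At the low-memory end of that upper bound, a player who only remembers a constant number of positions and one picture, roughly O(log n + log R) bits, can clear the board in O(n^2) operations, giving ST = O(n^2 log n). A Python sketch of one such strategy (my illustration, one simple way to realize the bound):

    def clear_all(cards):
        """cards has 2n entries, each picture appearing exactly twice. For each
        uncleared position i, scan rightwards until its match is found. The
        player stores only i, j, and cards[i]: O(log n + log R) bits."""
        cleared = [False] * len(cards)  # table state, not the player's memory
        ops = 0
        for i in range(len(cards)):
            if cleared[i]:
                continue
            for j in range(i + 1, len(cards)):
                if cleared[j]:          # removed cards are visible as empty slots
                    continue
                ops += 1                # one operation reveals cards[i], cards[j]
                if cards[j] == cards[i]:
                    cleared[i] = cleared[j] = True
                    break
        return ops                      # at most ~ (2n)^2 / 2 operations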

4:00 pm – 4:30 pm
Speaker: Andrew McGregor (University of Massachusetts, Amherst)

Alice has a length-n binary string x. She transmits it to Bob over a deletion channel that deletes each bit with probability 1/2. How many transmissions of x are necessary before Bob can reconstruct x with high probability? This problem is known as the trace reconstruction problem. The problem has attracted significant attention over the last fourteen years, but there is still a big gap between the best known lower and upper bounds. The goal of our work is to understand the complexity of the problem in terms of parameters such as the sparsity and run lengths of the transmitted string.
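The channel itself is easy to simulate, which is how the problem is usually explored empirically. A minimal Python sketch (my illustration):

    import random

    def trace(x, delete_prob=0.5):
        """One transmission of x: each bit is independently deleted."""
        return [b for b in x if random.random() >= delete_prob]

    x = [random.getrandbits(1) for _ in range(50)]
    traces = [trace(x) for _ in range(5)]
    print([len(t) for t in traces])  # each trace retains ~25 bits, misaligned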

Wednesday, October 17th, 2018

9:30 am – 10:30 am
Speaker: Omri Weinstein (Columbia University)

We show that static data structure lower bounds in the group (linear) model imply semi-explicit lower bounds on matrix rigidity. In particular, we show that an explicit lower bound of t = \omega(log^2 n) on the query time of data structures in the restricted group model, even against arbitrarily small linear space (s = (1+eps)*n), would already imply a semi-explicit (P^NP) construction of rigid matrices with significantly better parameters than the current state of the art [Alon, Panigrahy, and Yekhanin, 2009]. Our result also proves that stronger data structure lower bounds (against *polynomial* space) would imply super-linear circuit lower bounds for log-depth linear circuits (which would close a four-decade open question). In the succinct space regime (s = n+o(n)), we show that any improvement on current cell-probe lower bounds in the linear model would also imply new rigidity bounds. Our main result relies on a new connection between the "inner" and "outer" dimensions of a matrix (Pudlak and Paturi, 2006). Based on joint work with Zeev Dvir and Alexander Golovnev.
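For reference (the standard definition, not restated in the abstract), the rigidity of a matrix M at rank r is the number of entries that must be changed to bring its rank down to r:

\[ \mathcal{R}_M(r) \;=\; \min\bigl\{\, \#\{(i,j) : M_{ij} \neq M'_{ij}\} \;:\; \operatorname{rank}(M') \le r \,\bigr\}. \]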

11:00 am – 11:30 am
Speaker: Siva Natarajan Ramamoorthy (University of Washington)

We prove new cell-probe lower bounds for dynamic data structures that maintain a subset of {1,2,...,n}, and compute various statistics of the set. The data structure is said to handle insertions non-adaptively if the locations of memory accessed depend only on the element being inserted, and not on the contents of the memory. For any such data structure that can compute the median of the set, we prove that: t_{med} >= Omega(n^{1/(t_{ins}+1)}/(w^2 * t_{ins}^2)), where t_{ins} is the number of memory locations accessed during insertions, t_{med} is the number of memory locations accessed to compute the median, and w is the number of bits stored in each memory location. When the data structure is able to perform deletions non-adaptively and compute the minimum non-adaptively, we prove t_{min} + t_{del} >= Omega(log n /(log w + log log n)), where t_{min} is the number of locations accessed to compute the minimum, and t_{del} is the number of locations accessed to perform deletions. For the predecessor search problem, where the data structure is required to compute the predecessor of any element in the set, we prove that if computing the predecessors can be done non-adaptively, then either t_{pred} >= Omega(log n/(log log n + log w)), or t_{ins} >= Omega(n^{1/(2(t_{pred}+1))}), where t_{pred} is the number of locations accessed to compute predecessors. These bounds are nearly matched by Binary Search Trees in some range of parameters. Our results follow from using the Sunflower Lemma of Erdős and Rado (1960) together with several kinds of encoding arguments.

This is joint work with Anup Rao.
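For reference, the classical Erdős–Rado bound invoked here (standard statement, my addition): any family \mathcal{F} of sets, each of size s, with

\[ |\mathcal{F}| \;>\; s!\,(p-1)^s \]

contains p sets whose pairwise intersections all equal a common core, i.e., a sunflower with p petals.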

11:30 am – 12:00 pm
Speaker: Arkadev Chattopadhyay (Tata Institute of Fundamental Research)

We develop a technique for proving lower bounds in the setting of asymmetric communication, a model that was introduced in the famous works of Miltersen (STOC'94) and Miltersen, Nisan, Safra and Wigderson (STOC'95). At the core of our technique is a new simulation theorem: Alice gets a p x n matrix X over F_2 and Bob gets a vector y in F_2^n. Alice and Bob need to evaluate f(X·y) for a Boolean function f: {0,1}^p -> {0,1}. Our simulation theorem shows that a deterministic/randomized communication protocol exists for this problem, with cost C·n for Alice and C for Bob, if and only if there exists a deterministic/randomized parity decision tree of cost about C for evaluating f.

We provide applications of this theorem to obtain new lower bounds for static data-structure problems like Vector-Matrix-Vector product and Orthogonal Vector counting. We also show there are problems for which the simulation theorem yields significantly better bounds than what the richness technique of Miltersen et al. provides.

Joint work with Michal Koucky, Bruno Loff and Sagnik Mukhopadhyay.
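The identity that powers the simulation is elementary: a parity query a to the hidden input z = X·y is itself a single parity of Bob's input, since <a, Xy> = <X^T a, y> over F_2. A small numpy check (my illustration; f here is an arbitrary stand-in):

    import numpy as np

    rng = np.random.default_rng(1)
    p, n = 4, 16
    X = rng.integers(0, 2, (p, n), dtype=np.uint8)  # Alice's matrix over F_2
    y = rng.integers(0, 2, n, dtype=np.uint8)       # Bob's vector

    z = (X @ y) % 2                 # the hidden p-bit input to f
    f = lambda z: int(z.sum() % 2)  # stand-in outer function

    # A parity decision tree query <a, z> equals the single parity <a^T X, y>
    # of Bob's input -- the correspondence the simulation theorem formalizes.
    a = rng.integers(0, 2, p, dtype=np.uint8)
    assert (a @ z) % 2 == (((a @ X) % 2) @ y) % 2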

2:00 pm – 2:30 pm
Speaker: Faith Ellen (University of Toronto)

It is impossible to solve consensus without using locks in an asynchronous system where two processes communicate by reading from and writing to shared registers. The proof of this result, described in my boot camp lecture, builds an infinite execution in which no process decides a value, by repeatedly extending a finite execution. In contrast, all known proofs of the impossibility of solving (n-1)-set agreement among n >= 3 processes rely on complex topological arguments, either directly or indirectly. We define a class of extension-based proofs and show that no such proof can prove the unsolvability of (n-1)-set agreement among n >= 3 processes. To do so, we view a proof as an interaction between a prover, who is trying to construct a bad execution in which n values are decided or some process takes an infinite number of steps without deciding a value, and an adversarial algorithm which is constructed adaptively. This is joint work with Dan Alistarh, James Aspnes, Rati Gelashvili, and Leqi Zhu.

2:30 pm – 3:00 pm
Speaker: Diptarka Chakraborty (Charles University)

The conjectured hardness of Boolean matrix-vector multiplication has been used with great success to prove conditional lower bounds for numerous important data structure problems, see Henzinger et al. [STOC'15]. In this talk, we discuss the cell probe complexity (that is, we only charge for memory accesses, while computation is free) of Boolean matrix-vector multiplication. In a recent work, Larsen and Williams [SODA'17] attacked the problem from the upper bound side and gave a surprising cell probe data structure. Their cell probe data structure answers queries in O(n^{7/4}) time and is succinct in the sense that it stores the input matrix in read-only memory, plus an additional O(n^{7/4}) bits on the side. In this talk we present a lower bound showing that any data structure storing r bits on the side, with n < r < n^2, must have query time t satisfying tr = Omega(n^3). For r <= n, any data structure must have t = Omega(n^2). Since lower bounds in the cell probe model also apply to classic word-RAM data structures, the lower bounds naturally carry over. We complement the lower bound with a new succinct cell probe data structure with query time O(n^{3/2}) storing just O(n^{3/2}) bits on the side.
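In particular, plugging the side storage of the new structure into the tradeoff shows that it sits exactly on the lower bound curve:

\[ t \cdot r = \Omega(n^3) \quad\text{and}\quad r = \Theta(n^{3/2}) \;\Longrightarrow\; t = \Omega(n^{3/2}), \]

matching the O(n^{3/2}) query time up to constant factors.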

Thursday, October 18th, 2018

9:30 am – 10:30 am
Speaker: Shachar Lovett (UC San Diego)

An MDS matrix is a k x n matrix such that any k columns are linearly independent. A standard construction of such matrices is a Vandermonde matrix, which is equivalent to the generating matrix of a Reed-Solomon code.
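The Vandermonde construction is easy to verify directly for small parameters. A Python sketch (my illustration; brute-force over all k-subsets of columns, so only sensible for tiny k and n):

    import itertools
    import numpy as np

    def is_mds(M, p):
        """Check that every k x k minor of the k x n matrix M is invertible mod p.
        (Float determinants are exact at these tiny sizes.)"""
        k, n = M.shape
        return all(int(round(np.linalg.det(M[:, cols]))) % p != 0
                   for cols in itertools.combinations(range(n), k))

    p, k, n = 13, 3, 6
    alphas = np.arange(1, n + 1)                        # distinct elements of F_13
    V = np.vstack([alphas ** i for i in range(k)]) % p  # Vandermonde rows 1, a, a^2
    print(is_mds(V, p))  # True: any k columns of a Vandermonde matrix are independent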

There has been interest in the coding community in *sparse* MDS matrices, with specific coordinates set to zero based on the underlying code topology. A natural question arises: when do these exist? It turns out that there is a simple combinatorial condition which is both necessary and sufficient over large enough fields, of size > {n \choose k}.

Dau, Song and Yuen (ISIT 2014) conjectured that the same combinatorial condition suffices also over much smaller fields, of size n+k-1. The reason is that they gave an algebraic construction for such sparse MDS matrices, based on an algebraic conjecture. In this work, we prove this algebraic conjecture, and hence resolve the GM-MDS conjecture.

In the talk, I will describe the background, the combinatorial condition, the algebraic construction of Dau et al and their algebraic conjecture. If time permits, I will sketch the proof of the algebraic conjecture.

11:00 am – 11:30 am
Speaker: T.S. Jayram (IBM Almaden)

I will revisit the problem of determining the space complexity of arguably the oldest problem in streaming: approximately counting the number of ones in a data stream. This work is joint with Miklos Ajtai.
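For context, the classical starting point for this problem is Morris's approximate counter, which stores only the exponent of the count, roughly log log n bits (a standard algorithm, not necessarily the one analyzed in the talk):

    import random

    def morris_count(stream_len):
        """Morris's counter: increment X with probability 2^-X;
        2^X - 1 is an unbiased estimate of the number of items seen."""
        X = 0
        for _ in range(stream_len):
            if random.random() < 2.0 ** -X:
                X += 1
        return 2 ** X - 1

    estimates = [morris_count(10_000) for _ in range(100)]
    print(sum(estimates) / len(estimates))  # concentrates around 10000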

11:30 am – 12:00 pm
Speaker: Huacheng Yu (Harvard University)

Graph sketching is a powerful technique to probabilistically "summarize" a graph. It may be used to obtain low-space streaming algorithms for maintaining the connectivity of a graph and computing an approximate minimum spanning forest, and to obtain communication-efficient protocols for spanning forest [AGM'12], etc. In this talk, I will discuss the optimality of two of its applications: the space complexity of maintaining a spanning forest of a graph under edge insertions and deletions, and the simultaneous communication complexity of spanning forest. The method of graph sketching solves the first problem with O(n log^3 n) bits of space, and the latter with O(log^3 n) bits of communication per player. We show that neither bound can be improved by any streaming algorithm or communication protocol. Joint work with Jelani Nelson.
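The linearity trick at the heart of graph sketching is easy to see without any compression (my illustration; real sketches compress these incidence vectors with l_0-samplers): summing the edge-incidence vectors of a vertex set over F_2 cancels internal edges and leaves exactly the boundary edges, which is what lets a sketch find an edge leaving a growing connected component.

    import itertools
    import numpy as np

    n = 5
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4), (1, 3)]
    slot = {e: i for i, e in enumerate(itertools.combinations(range(n), 2))}

    x = np.zeros((n, len(slot)), dtype=np.uint8)  # one incidence vector per vertex
    for u, v in edges:
        x[u, slot[(u, v)]] = 1
        x[v, slot[(u, v)]] = 1

    S = [0, 1, 2]
    boundary = x[S].sum(axis=0) % 2               # internal edges cancel mod 2
    print({e for e, i in slot.items() if boundary[i]})  # {(0, 4), (2, 3), (1, 3)}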

2:00 pm – 2:30 pm
Speaker: Michael Kapralov (École Polytechnique Fédérale de Lausanne)

Subgraph counting is a fundamental primitive in graph processing, with applications in social network analysis (e.g., estimating the clustering coefficient of a graph), database processing and other areas. The space complexity of subgraph counting has been studied extensively in the literature, but many natural settings are still not well understood. In this paper we revisit the subgraph (and hypergraph) counting problem in the sketching model, where the algorithm's state as it processes a stream of updates to the graph is a linear function of the stream. This model has recently received a lot of attention in the literature, and has become a standard model for solving dynamic graph streaming problems. In this paper we give a tight bound on the sketching complexity of counting the number of occurrences of a small subgraph H in a bounded degree graph G presented as a stream of edge updates. Specifically, we show that the space complexity of the problem is governed by the fractional vertex cover number of the graph H. Our subgraph counting algorithm implements a natural vertex sampling approach, with sampling probabilities governed by the vertex cover of H. Our main technical contribution lies in a new set of Fourier analytic tools that we develop to analyze multiplayer communication protocols in the simultaneous communication model, allowing us to prove a tight lower bound. We believe that our techniques are likely to find applications in other settings. Besides giving tight bounds for all graphs H, both our algorithm and lower bounds extend to the hypergraph setting, albeit with some loss in space complexity.
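As a toy version of the vertex sampling approach (my illustration; the paper's sampling probabilities are tuned to the fractional vertex cover of H, not uniform as here), one can estimate triangle counts by keeping each vertex with probability q and rescaling, since a fixed triangle survives with probability q^3:

    import itertools, random

    def triangles(adj):
        return sum(1 for a, b, c in itertools.combinations(range(len(adj)), 3)
                   if adj[a][b] and adj[b][c] and adj[a][c])

    def sampled_estimate(adj, q, reps=2000):
        total = 0.0
        for _ in range(reps):
            S = [v for v in range(len(adj)) if random.random() < q]
            sub = [[adj[u][v] for v in S] for u in S]
            total += triangles(sub) / q ** 3  # each triangle survives w.p. q^3
        return total / reps                   # unbiased estimate of triangles(adj)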

2:30 pm – 3:00 pm
Speaker: Qin Zhang (Indiana University, Bloomington)

We consider statistical estimations of a matrix product over the integers in a distributed setting, where we have two parties Alice and Bob; Alice holds a matrix A and Bob holds a matrix B, and they want to estimate statistics of AB. We focus on the well-studied l_p-norm, distinct elements (p = 0), and heavy hitter problems. The goal is to minimize both the communication cost and the number of rounds of communication.

This problem is closely related to the fundamental set-intersection join problem in databases: when p = 0 the problem corresponds to the size of the set-intersection join. When p = \infty the output is simply the pair of sets with the maximum intersection size. When p = 1 the problem corresponds to the size of the corresponding natural join. We also consider the heavy hitters problem which corresponds to finding the pairs of sets with intersection size above a certain threshold.

Joint work with David Woodruff.
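The dictionary between the two views is plain matrix arithmetic: with Alice's sets as indicator rows of A and Bob's sets as indicator columns of B, entry (i, j) of AB is |S_i intersect T_j|, so the l_p statistics of AB are exactly the join statistics above. A small numpy check (my illustration):

    import numpy as np

    A = np.array([[1, 1, 0, 1, 0, 0],      # Alice's sets S_1, S_2 as indicator rows
                  [0, 1, 1, 0, 0, 1]])
    B = np.array([[1, 0], [1, 1], [0, 1],  # Bob's sets T_1, T_2 as indicator columns
                  [0, 0], [1, 0], [0, 1]])
    P = A @ B                    # P[i, j] = |S_i intersect T_j|
    print(np.count_nonzero(P))   # p = 0: size of the set-intersection join
    print(P.sum())               # p = 1: size of the natural join
    print(P.max())               # p = infinity: the largest intersection size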

3:00 pm – 3:30 pm
Speaker: Lior Kamma (Aarhus University)

Feature hashing, also known as the hashing trick, introduced by Weinberger et al. (2009), is one of the key techniques used in scaling-up machine learning algorithms. Loosely speaking, feature hashing uses a random sparse projection matrix $A : \mathbb{R}^n \to \mathbb{R}^m$ (where $m \ll n$) in order to reduce the dimension of the data from $n$ to $m$ while approximately preserving the Euclidean norm. Every column of $A$ contains exactly one non-zero entry, equals to either $-1$ or $1$. Weinberger et al. showed tail bounds on $\|Ax\|_2^2$. Specifically they showed that for every $\varepsilon, \delta$, if $\|x\|_{\infty} / \|x\|_2$ is sufficiently small, and $m$ is sufficiently large, then \begin{equation*}\Pr[ \; | \;\|Ax\|_2^2 - \|x\|_2^2\; | < \varepsilon \|x\|_2^2 \;] \ge 1 - \delta \;.\end{equation*} These bounds were later extended by Dasgupta et al. (2010) and most recently refined by Dahlgaard et al. (2017), however, the true nature of the performance of this key technique, and specifically the correct tradeoff between the pivotal parameters $\|x\|_{\infty} / \|x\|_2, m, \varepsilon, \delta$ remained an open question. We settle this question by giving tight asymptotic bounds on the exact tradeoff between the central parameters, thus providing a complete understanding of the performance of feature hashing. We complement the asymptotic bound with empirical data, which shows that the constants "hiding" in the asymptotic notation are, in fact, very close to $1$, thus further illustrating the tightness of the presented bounds in practice.