Monday, September 28th, 2015

9:30 am–10:15 am
Does there exist a family H of hash functions h with the property that
 
1.  Any h in H is humanly computable: a human can select a random h∈H, then
 - (preprocessing time) learn to compute h in at most a few (≤ 3?) hours, and thereafter
 - (processing time) take at most 1 minute per input x_i to compute h(x_i),
 
but
 
2.  NO  ADVERSARY -- BE IT HUMAN, COMPUTER, OR COMBINATION OF THE TWO -- that 
 - knows the family H but not which particular h was chosen, and 
 - observes (just) enough pairs (x1, h(x1)), (x2, h(x2)), (x3, h(x3)), ... (for randomly chosen x1, x2, x3, ...) to completely specify h, can (with nontrivial probability) compute h(x) on a new randomly chosen x. 
 
This talk will deal with this question for humans that compute h in their heads, matched against a powerful human + supercomputer (10¹⁸ instructions per second) adversary. The adversary wins at the first round, Q, at which it correctly guesses h(x) for a new randomly chosen x.
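
For concreteness, here is a minimal Python sketch of the challenge/response game described above; the family H used here (a secret digit permutation) is a toy assumption chosen only to make the interface explicit, not a humanly advantageous or secure scheme:

    # Toy instance of the game: the challenger picks a secret h from a family H;
    # the adversary observes (x, h(x)) pairs and must predict h on a fresh x.
    import random

    ALPHABET = list(range(10))              # inputs/outputs are decimal digits

    def sample_h():
        """Pick a random member of a tiny toy family H: a secret digit map."""
        key = ALPHABET[:]
        random.shuffle(key)
        return lambda x: key[x]

    challenger_h = sample_h()
    observed = [(x, challenger_h(x)) for x in random.sample(ALPHABET, 6)]

    # A naive adversary: memorize the observed pairs, guess uniformly otherwise.
    table = dict(observed)
    def adversary(x):
        return table.get(x, random.choice(ALPHABET))

    challenge = random.choice(ALPHABET)
    print("adversary wins:", adversary(challenge) == challenger_h(challenge))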
 
Human computability has numerous applications including CAPTCHAs and humanly computable passwords.
10:45 am–11:30 am

We explicitly construct an extractor for two independent sources on n bits, each with min-entropy at least log^C n for a large enough constant C. Our extractor outputs one bit and has error n^{-\Omega(1)}. The best previous extractor, by Bourgain, required each source to have min-entropy 0.499n.

A key ingredient in our construction is an explicit construction of a monotone, almost-balanced Boolean function on n bits that is resilient to coalitions of size n^{1-\delta}, for any \delta>0. In fact, our construction is stronger in that it gives an explicit extractor for a generalization of non-oblivious bit-fixing sources on n bits, where some unknown n-q bits are chosen almost polylog(n)-wise independently, and the remaining q=n^{1-\delta} bits are chosen by an adversary as an arbitrary function of the n-q bits. The best previous construction, by Viola, achieved q=n^{1/2 - \delta}.
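
A toy Monte Carlo illustration of resilience (mine; it assumes uniform honest bits rather than the almost polylog(n)-wise independence above): how often can a coalition of q bits, chosen after seeing the other n-q bits, flip the value of MAJORITY?

    import random

    def coalition_flips_majority(n, q, trials=2000):
        flips = 0
        for _ in range(trials):
            s = sum(random.randint(0, 1) for _ in range(n - q))   # honest bits
            # The coalition can realize any total weight in [s, s+q], so it
            # controls the outcome exactly when the majority threshold n/2
            # falls inside that range.
            flips += (s <= n / 2 < s + q)
        return flips / trials

    for q in (1, 5, 20):                     # note sqrt(400) = 20
        print(q, coalition_flips_majority(n=400, q=q))
    # The flip probability grows roughly like q/sqrt(n), so MAJORITY resists
    # coalitions of size o(sqrt(n)); the talk's function resists q = n^{1-\delta}.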

Our explicit two-source extractor directly implies an explicit construction of a 2^{(log log N)^{O(1)}}-Ramsey graph over N vertices, improving bounds obtained by Barak et al. and matching independent work by Cohen.

11:45 am–12:30 pm
Pseudorandom number generators (PRGs) that fool certain statistical tests unconditionally are well studied in complexity theory as a way to understand the power of randomness in computation. Such generators have since found many applications in algorithm design, especially in settings like hashing or streaming where the input is large and true randomness is prohibitively costly.
 
In this talk, we describe a template for PRG construction based on combining a few strings where the level of independence in a string gradually increases. This paradigm has recently led to optimal PRGs for several classes of tests including halfspaces, combinatorial rectangles/shapes, and read-once DNFs, and to optimal constructions of hash function families for load-balancing, min-wise hashing, and dimension reduction. Moreover, the PRGs/hash functions given by this paradigm are typically quite simple: they are variants on the XOR/interleaving of a few strings with limited independence. We speculate about other problems where this paradigm could apply.
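
A minimal sketch of the template (toy parameters mine): XOR together a few strings, each exactly k-wise independent via the standard polynomial construction, with k increasing from one string to the next.

    import random

    P = 2**13 - 1                      # a prime; positions index field elements

    def kwise_bits(k, n):
        """n bits that are k-wise independent: evaluate a random degree-(k-1)
        polynomial over GF(P) at n points and keep the low bit of each value
        (each bit is biased only by 1/P)."""
        coeffs = [random.randrange(P) for _ in range(k)]
        def poly(x):
            y = 0
            for c in reversed(coeffs):     # Horner evaluation mod P
                y = (y * x + c) % P
            return y
        return [poly(i) & 1 for i in range(1, n + 1)]

    def prg_output(n, levels=(2, 4, 8)):
        """XOR of strings whose level of independence gradually increases."""
        out = [0] * n
        for k in levels:
            out = [a ^ b for a, b in zip(out, kwise_bits(k, n))]
        return out

    print(prg_output(20))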
 
(Based on joint work with coauthors, and disjoint work by other researchers).
2:45 pm–3:00 pm
Speaker: Raghu Meka, UCLA
We prove anti-concentration results for polynomials of independent random variables with arbitrary degree. Our results extend the classical Littlewood-Offord result for linear polynomials, and improve several earlier estimates. 
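
For reference, the classical linear-case bound being extended here is the Littlewood-Offord inequality (in Erdos's sharp form): if $a_1,\dots,a_n$ are nonzero reals and $\varepsilon_1,\dots,\varepsilon_n$ are independent Rademacher ($\pm 1$) variables, then

\[ \sup_{v \in \mathbb{R}} \Pr\Big[\sum_{i=1}^{n} a_i \varepsilon_i = v\Big] \;\le\; \binom{n}{\lfloor n/2 \rfloor} 2^{-n} \;=\; O(1/\sqrt{n}). \]

The results of the talk bound such small-ball probabilities for polynomials of arbitrary degree in the $\varepsilon_i$.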
 
We discuss applications in two different areas. In complexity theory, we prove near-optimal lower bounds for computing the parity function, addressing a challenge posed by Razborov and Viola, and we also address a problem concerning OR functions. In random graph theory, we derive a general anti-concentration result on the number of copies of a fixed graph in a random graph.
 
Based on joint work with Oanh Nguyen and Van Vu. 
3:00 pm–3:15 pm
In this paper we give lower bounds for the representation of real univariate polynomials as sums of powers of degree 1 polynomials. We present two families of polynomials of degree d such that the number of powers that are required in such a representation must be at least of order d. This is clearly optimal up to a constant factor. Previous lower bounds for this problem were only of order $\sqrt{d}$, and were obtained from arguments based on Wronskian determinants and "shifted derivatives." We obtain this improvement thanks to a new lower bound method based on Birkhoff interpolation (also known as "lacunary polynomial interpolation"). It is currently not known how to extend this result to the field of complex numbers.
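
In symbols, the model studied here represents a degree-$d$ polynomial as

\[ f(x) \;=\; \sum_{i=1}^{r} \alpha_i \,(a_i x + b_i)^{d}, \]

and the result exhibits explicit families for which any such representation over the reals requires $r = \Omega(d)$, matching the trivial $O(d)$ upper bound up to a constant factor.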
 
This is joint work with Ignacio Garcia-Marco.
3:15 pm–3:30 pm
Space-efficient algorithms have been actively studied (again), motivated by requirements in practical applications.  In particular, we consider here sublinear working space and polynomial-time algorithms w.r.t. an input size parameter specified by each problem. Investigating the possibility of having such algorithms is also an important and interesting topic in computational complexity theory, in particular for understanding the nature of "computation." We survey some recent results and discuss some open questions from the viewpoint of computational complexity theory.

Tuesday, September 29th, 2015

9:30 am–10:15 am

No abstract available.

10:45 am–11:30 am
We reduce non-deterministic time T > 2^n to a 3SAT instance phi of size |phi| = T polylog(T) such that there is an explicit circuit C that, on input an index i of log |phi| bits, outputs the i-th clause, and each output bit of C depends on O(1) input bits. The previous best result was C in NC^1. Even in the simpler setting of |phi| = poly(T) the previous best result was C in AC^0. We also somewhat optimize the complexity of PCP reductions. As an application, we tighten Williams' connection between satisfiability (or derandomization) and circuit lower bounds. The original connection employed previous reductions as black boxes. If one instead employs the reductions above as black boxes then the connection is more direct.
 
Based on joint works with Hamid Jahanjou and Eric Miles, and with Eli Ben-Sasson.
11:45 am–12:00 pm

I will briefly describe a couple of wide open research directions in Cryptography that are closely related to fine-grained Complexity. First, we will demonstrate the importance of exact hardness bounds through the notion of “proof of work.” Then we will wonder how Cryptographic hardness can arise.

12:00 pm–12:15 pm
The hardness-versus-randomness paradigm shows that, in sufficiently general models of computation, constructing explicit functions hard for the model is equivalent to constructing pseudorandom generators fooling that model of computation.  This paradigm is even known to hold for algebraic circuits, the most natural computational model for computing (multivariate) polynomials. However, most algebraic circuit classes where explicit hard functions are known are unfortunately too weak to instantiate this connection.  Fooling such circuit classes (deterministically deciding whether such circuits compute the zero polynomial, aka the polynomial identity testing problem) is typically only achieved through further non-trivial study of the lower bound technique.
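
For contrast, with randomness polynomial identity testing is easy by the Schwartz-Zippel lemma; the deterministic algorithms discussed in this talk aim to remove exactly this use of random bits. A minimal sketch, treating the circuit as a black-box evaluator (names mine):

    import random

    def pit_randomized(evaluate, num_vars, degree_bound, trials=20):
        """Schwartz-Zippel: a nonzero polynomial of degree d vanishes at a
        random point of S^n with probability at most d/|S|."""
        S = range(100 * degree_bound)       # |S| >> degree => small error
        for _ in range(trials):
            point = [random.choice(S) for _ in range(num_vars)]
            if evaluate(point) != 0:
                return False                # witness that the polynomial is nonzero
        return True                         # probably identically zero

    print(pit_randomized(lambda p: p[0]*p[1] - p[1]*p[0], 2, 2))   # True
    print(pit_randomized(lambda p: p[0]*p[1] - p[0], 2, 2))        # False (w.h.p.)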
 
We will discuss one recent method of turning lower bounds into such derandomization.  This method requires a "robust" lower bound, which is one that proves not only that a polynomial f(x) is hard, but also that f(x)+g(x) is hard whenever g(x) is a "lower order term".  Important lower bound techniques (such as Nisan's partial derivative matrix, the partial derivative method of Nisan-Wigderson, and the recent shifted partial derivative method of Kayal/Gupta-Kayal-Kamath-Saptharishi) can all be shown to be robust in certain cases, thus yielding various derandomizations of polynomial identity testing.
 
In particular, the robustness of the shifted partial derivative method (new to this work) can be pushed to yield the first efficient deterministic algorithm for deciding whether a constant-degree polynomial divides a sparse polynomial.
12:15 pm–12:30 pm
The Minimum Circuit Size Problem (MCSP) is a well-studied example of a problem in NP that is believed to be intractable, but is not widely believed to be NP-complete.  Until now, all reductions of seemingly-hard problems to MCSP have made use of the [HILL] pseudorandom generator, which shows that any polynomial-time computable function can be inverted with relatively high probability by a probabilistic oracle machine with access to MCSP.  This approach is not suitable for presenting a ZPP-reduction to MCSP from a problem that is not known to be in NP intersect coNP.
 
Our main contribution is to present a very different reduction from Graph Isomorphism to MCSP; the reduction does not make use of [HILL].  (The reduction does require some new graph-theoretic algorithms.)  We use this to show that the Graph Automorphism problem is in ZPP relative to MCSP.
 
This is joint work with Josh Grochow and Cris Moore.

Wednesday, September 30th, 2015

9:30 am–10:15 am
An *optimal meta algorithm* for a class C of computational problems is an algorithm A that can solve any problem P in C in some time T_P(n), such that no algorithm can solve P significantly faster. A natural optimal algorithm for a class C in some sense provides the most satisfactory explanation of what makes certain problems in C hard and other problems easy.
 
In this high level and informal talk I will discuss for what classes of problems we might hope to get natural optimal algorithms, the works on the Sum-of-Squares SDP as a candidate optimal algorithm, and how such optimal algorithms/proof systems give rise to a computational analog of Bayesian probability.
10:45 am–11:30 am
A recent work of Chen, Kabanets, Kolokolova, Shaltiel and Zuckerman (CCC 2014, Computational Complexity 2015) introduced the notion of the Compression problem for a class C of circuits, defined as follows. Given as input the truth table of a Boolean function $f:\{0,1\}^n \rightarrow \{0,1\}$ that has a small (say size s) circuit from C, find in time 2^{O(n)} any Boolean circuit for f of size less than trivial, i.e. much smaller than 2^n/n.
 
The work of Chen et al. gave compression algorithms for many classes of circuits including AC^0 (the class of constant-depth unbounded fan-in circuits made up of AND, OR, and NOT gates) and Boolean formulas of size nearly n^2. They asked if similar results can be obtained for the circuit class AC^0[p], the class of circuits obtained by augmenting AC^0 with unbounded fan-in MOD_p gates.
 
We answer the question positively, using techniques from work of Kopparty and the author (FSTTCS 2012).
11:45 am–12:30 pm
We begin by describing the ideas behind the state-of-the-art bounds on omega, the exponent of matrix multiplication.
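
As background, the first nontrivial bound, omega <= log_2 7 ≈ 2.81, comes from Strassen's identity, which multiplies 2x2 blocks with 7 recursive multiplications instead of 8; a compact sketch for power-of-two sizes:

    def strassen(A, B):
        """Multiply square matrices (lists of rows, dimension a power of 2)
        using 7 recursive products per level: time O(n^{log2 7})."""
        n = len(A)
        if n == 1:
            return [[A[0][0] * B[0][0]]]
        h = n // 2
        def quad(M, r, c): return [row[c:c+h] for row in M[r:r+h]]
        def add(X, Y): return [[x + y for x, y in zip(r, s)] for r, s in zip(X, Y)]
        def sub(X, Y): return [[x - y for x, y in zip(r, s)] for r, s in zip(X, Y)]
        A11, A12, A21, A22 = quad(A,0,0), quad(A,0,h), quad(A,h,0), quad(A,h,h)
        B11, B12, B21, B22 = quad(B,0,0), quad(B,0,h), quad(B,h,0), quad(B,h,h)
        M1 = strassen(add(A11, A22), add(B11, B22))
        M2 = strassen(add(A21, A22), B11)
        M3 = strassen(A11, sub(B12, B22))
        M4 = strassen(A22, sub(B21, B11))
        M5 = strassen(add(A11, A12), B22)
        M6 = strassen(sub(A21, A11), add(B11, B12))
        M7 = strassen(sub(A12, A22), add(B21, B22))
        C11 = add(sub(add(M1, M4), M5), M7)
        C12 = add(M3, M5)
        C21 = add(M2, M4)
        C22 = add(add(sub(M1, M2), M3), M6)
        return [C11[i] + C12[i] for i in range(h)] + \
               [C21[i] + C22[i] for i in range(h)]

    print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]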
 
We then present the "group-theoretic" approach of Cohn and Umans as an alternative to these methods, and we generalize this approach from group algebras to general algebras. We identify adjacency algebras of coherent configurations as a promising family of algebras in the generalized framework. We prove a closure property involving symmetric powers of adjacency algebras, which enables us to prove nontrivial bounds on omega using commutative coherent configurations, and suggests that commutative coherent configurations may be sufficient to prove omega = 2.
 
Along the way, we introduce a relaxation of the notion of tensor rank, called s-rank, and show that upper bounds on the s-rank of the matrix multiplication tensor imply upper bounds on the ordinary rank. In particular, if the "s-rank exponent of matrix multiplication" equals 2, then the (ordinary) exponent of matrix multiplication, omega, equals 2.
 
Finally, we will mention connections between several conjectures implying omega=2, and variants of the classical sunflower conjecture of Erdos and Rado.
 
No special background is assumed.
 
Based on joint works with Noga Alon, Henry Cohn, Bobby Kleinberg, Amir Shpilka, and Balazs Szegedy.

Thursday, October 1st, 2015

9:30 am–10:15 am
I will discuss recent work on beating exhaustive search for solving satisfiability of quantified Boolean formulas with a small number of quantifier blocks. The talk will include (i) an algorithmic result: satisfiability of QBF formulas of size poly(n) on n variables and with q quantifier blocks can be solved in time 2^{n - n^{\Omega(1/q)}}, and (ii) a connection with lower bounds: even slight improvements of our algorithmic results would imply NEXP not in NC^1.
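
The exhaustive-search baseline being beaten runs in time about 2^n regardless of the number of quantifier blocks; a minimal Python rendering (representation mine: a quantifier prefix plus a predicate on complete assignments):

    def eval_qbf(prefix, matrix, assignment=()):
        """Evaluate a QBF by branching on both values of each variable."""
        if not prefix:
            return matrix(assignment)
        branches = (eval_qbf(prefix[1:], matrix, assignment + (b,)) for b in (0, 1))
        return all(branches) if prefix[0] == 'forall' else any(branches)

    # forall x exists y: x XOR y   -- true (take y = 1 - x)
    print(eval_qbf(('forall', 'exists'), lambda a: a[0] ^ a[1]))   # True
    # exists x forall y: x XOR y   -- false
    print(eval_qbf(('exists', 'forall'), lambda a: a[0] ^ a[1]))   # False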
 
This is based partly on joint works with Ruiwen Chen and Ryan Williams.
10:45 am–11:30 am

In this talk, I will report moderately exponential time satisfiability algorithms for (i) SYM ∘ AND circuits with a slightly superpolynomial number of gates, (ii) THR ∘ THR circuits with a subquadratic number of gates, and (iii) XOR ∘ AND ∘ XOR ∘ AND ∘ XOR circuits with a nearly exponential number of gates. The first algorithm is based on backtracking and dynamic programming. The second and third algorithms are based on the polynomial method in Boolean circuit complexity.

11:45 am–12:30 pm
We show how to compute any symmetric Boolean function on n variables over any field (as well as the integers) with a probabilistic polynomial of degree O(sqrt(n log(1/eps))) and error at most eps. The degree dependence on n and eps is optimal, matching a lower bound of Razborov (1987) and Smolensky (1987) for the MAJORITY function. The proof is constructive: a low-degree polynomial can be efficiently sampled from the distribution.
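
This is not the construction of the talk, but the classical probabilistic polynomial for OR over GF(2) illustrates the notion: degree t suffices for one-sided error 2^{-t}.

    import random

    def sample_or_polynomial(n, t):
        """Sample p = OR of t random parities; p has degree t over GF(2) and
        agrees with OR(x) except with probability 2^{-t} (only when x != 0)."""
        subsets = [[i for i in range(n) if random.random() < 0.5]
                   for _ in range(t)]
        def p(x):
            # p(x) = 1 - prod_j (1 - <parity of x on S_j>)   over GF(2)
            prod = 1
            for S in subsets:
                prod *= 1 - sum(x[i] for i in S) % 2
            return 1 - prod
        return p

    p = sample_or_polynomial(100, t=10)
    print(p([0]*99 + [1]), p([0]*100))      # 1 (w.p. 1 - 2^-10), 0 (always)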
 
This polynomial construction is combined with other algebraic ideas to give the first subquadratic time algorithm for computing a (worst-case) batch of Hamming distances in superlogarithmic dimensions, exactly. To illustrate, let c(n):N→N. Suppose we are given a database D of n vectors in {0,1}^(c(n)*log n) and a collection of n query vectors Q in the same dimension. For all u∈Q, we wish to compute a v∈D with minimum Hamming distance from u. We solve this problem in n^(2−1/O(c(n)*log^2 c(n))) randomized time. Hence, the problem is in "truly subquadratic" time for O(log n) dimensions, and in subquadratic time for d=o((log^2 n)/(loglog n)^2). We apply the algorithm to computing pairs with maximum inner product, closest pair in l1 for vectors with bounded integer entries, and pairs with maximum Jaccard coefficients.
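
For scale, the quadratic baseline this improves on simply scans the database for each query; a sketch:

    def nearest_hamming(D, Q):
        """For each u in Q, return a v in D minimizing the Hamming distance;
        this brute force costs about |Q| * |D| * d bit operations."""
        def ham(u, v): return sum(a != b for a, b in zip(u, v))
        return [min(D, key=lambda v: ham(u, v)) for u in Q]

    D = [(0,0,1,1), (1,1,1,1), (0,1,0,1)]
    Q = [(0,0,1,0), (1,1,0,1)]
    print(nearest_hamming(D, Q))            # [(0, 0, 1, 1), (1, 1, 1, 1)]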
2:45 pm–3:00 pm
It is known that a random Boolean function on $n$ variables requires circuits of size $\Theta(2^n/n)$. At the same time, the strongest known lower bound $3n-o(n)$ for an explicit Boolean function (that is, a function from NP) was given by Blum more than 30 years ago. This bound, as well as most of the previous bounds, is based on a simple method called gate elimination. Its idea is straightforward: roughly, to prove a lower bound $cn$ for a function from a certain class, one shows that in any circuit computing a function from the class one can find a substitution of a constant for an input variable that eliminates at least $c$ gates from the circuit and leaves a subfunction from the same class; the bound then follows by induction.
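
Schematically, if every function in the class admits such a substitution, the induction gives

\[ s(f) \ \ge\ c + s\big(f|_{x_i=b}\big) \ \ge\ \cdots \ \ge\ c \cdot n - O(1), \]

where $s(\cdot)$ is the minimum circuit size and each step picks a variable $x_i$ and constant $b$ whose substitution removes at least $c$ gates.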
 
In this talk, we will discuss ways of generalizing the gate elimination method for proving circuit lower bounds. First, we will present a slightly stronger than $3n$ lower bound for affine dispersers. These are functions that are not constant on any affine subspace of sufficiently large dimension. Equivalently, such functions do not degenerate under sufficiently many linear substitutions. We will then show that an explicit construction of dispersers for quadratic varieties (currently unknown) implies stronger lower bounds. Informally, these are functions that survive even under quadratic (but not only linear) substitutions.
 
3:00 pm–3:15 pm
Let $U_{k,N}$ denote the Boolean function which takes as input $k$  strings of $N$ bits each, representing $k$ numbers $a^{(1)},\dots,a^{(k)}$ in $\{0,1,\dots,2^{N}-1\}$, and outputs 1 if and only if $a^{(1)} + \cdots + a^{(k)} \geq 2^N.$   Let THR$_{t,n}$ denote a monotone unweighted threshold gate, i.e., the Boolean function which takes as input a single string $x \in \{0,1\}^n$ and outputs $1$ if and only if $x_1 + \cdots + x_n \geq t$. The function $U_{k,N}$ may be viewed as a monotone function that performs addition, and THR$_{t,n}$ may be viewed as a monotone function that performs counting. We refer to circuits that are composed of THR gates as monotone majority circuits.
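
For concreteness, here are direct Python transcriptions of the two functions just defined (representing bit strings as big-endian tuples is my own convention):

    def U(k, N, strings):
        """1 iff a^(1) + ... + a^(k) >= 2^N, each a^(j) given as N bits."""
        total = sum(int(''.join(map(str, s)), 2) for s in strings)
        return int(total >= 2**N)

    def THR(t, x):
        """Monotone unweighted threshold: 1 iff x_1 + ... + x_n >= t."""
        return int(sum(x) >= t)

    print(U(2, 3, [(1,0,0), (1,0,0)]))      # 4 + 4 >= 8  -> 1
    print(U(2, 3, [(0,1,1), (1,0,0)]))      # 3 + 4 <  8  -> 0
    print(THR(2, (1,0,1,0)))                # two ones >= 2 -> 1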
 
The main result of this paper is an exponential lower bound on the size of bounded-depth monotone majority circuits that compute $U_{k,N}$.   More precisely, we show that for any constant $d \geq 2$, any depth-$d$ monotone majority circuit computing $U_{d,N}$ must have size $2^{\Omega(N^{1/d})}$.  Since $U_{k,N}$ can be computed by a single monotone weighted threshold gate (that uses exponentially large weights), our lower bound implies that constant-depth monotone majority circuits require exponential size to simulate monotone weighted threshold gates. This answers a question posed by Goldmann and Karpinski (STOC'93) and recently restated by Hastad (2010, 2014).  We also show that our lower bound is essentially best possible, by constructing a depth-$d$, size-$2^{O(N^{1/d})}$ monotone majority circuit for $U_{d,N}$.
 
As a corollary of our lower bound, we significantly strengthen a classical theorem in circuit complexity due to Ajtai and Gurevich (JACM'87). They exhibited a monotone function that is in AC$^0$ but requires super-polynomial size for any constant-depth monotone circuit composed of unbounded fan-in AND and OR gates.  We describe a monotone function that is in depth-$3$ AC$^0$ but requires exponential size monotone circuits of any constant depth, even if the circuits are composed of THR gates.
 
Joint work with Xi Chen (Columbia) and Rocco Servedio (Columbia).
3:15 pm–3:30 pm

The random restriction approach was originally used to prove formula size lower bounds, where it was shown that formulas shrink on average under random restrictions. In several recent works, this result was improved to a concentrated version where formulas shrink with high probability. This concentrated shrinkage property leads to nontrivial algorithms for Formula-SAT, MAX-SAT, etc. In this talk, we will survey techniques for showing concentrated shrinkage and its applications in designing efficient algorithms.
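
A toy experiment (construction mine) showing restriction-and-simplification in action on a random read-once AND/OR formula; the shrinkage theorems discussed in the talk concern arbitrary formulas and give size roughly p^2 times the original with high probability:

    import random

    def random_formula(vars_):
        """A random read-once formula over the given variables."""
        if len(vars_) == 1:
            return ('leaf', vars_[0])
        mid = len(vars_) // 2
        gate = random.choice(('and', 'or'))
        return (gate, random_formula(vars_[:mid]), random_formula(vars_[mid:]))

    def restrict(f, rho):                   # rho maps var -> 0, 1, or '*' (free)
        if f[0] == 'leaf':
            return f if rho[f[1]] == '*' else rho[f[1]]
        g, l, r = f[0], restrict(f[1], rho), restrict(f[2], rho)
        absorb, ident = (0, 1) if g == 'and' else (1, 0)
        if l == absorb or r == absorb: return absorb   # constant absorbs gate
        if l == ident: return r                        # identity input drops out
        if r == ident: return l
        return (g, l, r)

    def size(f):
        if not isinstance(f, tuple): return 0          # collapsed to a constant
        return 1 if f[0] == 'leaf' else size(f[1]) + size(f[2])

    n, p = 1024, 0.1
    f = random_formula(list(range(n)))
    rho = {i: '*' if random.random() < p else random.randint(0, 1)
           for i in range(n)}
    print(size(f), '->', size(restrict(f, rho)))       # the formula shrinks sharply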