Theory at the Institute and Beyond, October 2022
by Venkatesan Guruswami (Simons Institute)
Research and collaborations at the Simons Institute proceeded apace throughout the pandemic, with a diverse slate of programs and a large concentration of postdoctoral fellows. But even by the Institute’s busy standards, this past summer saw an exceptional level of activity, with four summer clusters, a full summer program, and an extended reunion, each running for at least a month, as well as another reunion workshop. This fall, the open spaces at the Institute continue to buzz with activity that matches or even surpasses the pre-pandemic heyday, with two energetic interdisciplinary programs.
It was a busy time for theory events beyond the Institute as well. Most of the summer theory conferences returned to convening in person. I heard that STOC in Rome, the first major in-person theory conference since 2019, was a blast. Too bad I missed it, because in addition to the exciting program (as is typical for STOC), there were many excellent workshops, including one on my beloved topic of coding theory that featured great talks and many friends.
A mega event for the mathematical community this summer was the 2022 International Congress of Mathematicians (ICM), which had to be held virtually. With plenary lectures by Umesh Vazirani and Craig Gentry, as well as the Abel lecture by Avi Wigderson, the theory of computing was well represented on center stage. As usual, there was also a substantive section on mathematical aspects of computer science, with some talks in other sections, such as statistics and data analysis, combinatorics, and probability, having strong ties to computer science as well.
One of the much-anticipated highlights of the ICM is the announcement and award of major mathematical prizes. Hearty congratulations to Mark Braverman for receiving the Abacus Medal. Mark’s research portfolio is astonishingly broad, ranging from pseudorandomness to communication complexity to optimization to mechanism design and seemingly everything in between. It includes information and coding theory, where a central theme of his work is the study of information complexity and error correction in interactive settings. In its basic form, interactive coding seeks to protect a communication protocol between two parties from errors, say an arbitrary constant fraction of bit flips. The naive approach of using error-correcting codes to individually protect each message doesn’t work, as the channel can completely corrupt a single message, causing a misunderstanding that renders the rest of the protocol incorrect.
In a seminal work from 1992, which is being fittingly recognized with a 30-year Test of Time Award at FOCS 2022, Leonard Schulman defined and proved the existence of “tree codes” and used them to show how one can tolerate a constant fraction of errors in an interactive protocol while incurring only a constant factor blowup in communication — a highly surprising result! The fraction of errors tolerated in Schulman’s simulation was small (1/240) — the main point of his work was to show that it could be bounded away from 0. How large a fraction of bit flips can one tolerate? Determining such thresholds is fundamental to coding theory. In the noninteractive setting, the answer is well known to be ¼. In a beautiful work from 2011, Braverman and Anup Rao revisited the interactive coding problem with a renewed perspective and gave a simulation resilient to a fraction of bit flips approaching 1/8. It is also known that the error fraction cannot exceed 1/6. Finally, this summer, PhD students Meghal Gupta and Rachel Zhang posted a paper settling this question by giving a simulation resilient to a fraction of bit flips approaching 1/6 (and thus optimal). The last decade saw a lot of work culminating in this result — Ran Gelles’s survey is an excellent resource on this progress, though, being a few years old, it misses the most recent developments.
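To recap the error-tolerance thresholds just mentioned, all measured as the fraction of adversarially flipped bits that a simulation can withstand:

\[
\text{one-way coding: } \tfrac{1}{4}, \qquad \text{interactive (Braverman-Rao, 2011): } \tfrac{1}{8}, \qquad \text{interactive, optimal (Gupta-Zhang, 2022): } \tfrac{1}{6}.
\]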
The ICM is also where the legendary Fields Medals are announced. Congratulations to Maryna Viazovska, James Maynard, June Huh, and Hugo Duminil-Copin for their breakthrough results and 2022 Fields Medals. Quite amazingly, the works of all four of these mathematicians include results that can be appreciated (and sometimes even understood!) by theoretical computer scientists and discrete mathematicians. (See, for example, this post by Gil Kalai.) The sphere-packing results of Viazovska are of much interest to combinatorial coding theory, and the log-concavity of the coefficients of characteristic polynomials of matroids, established by Adiprasito, Huh, and Katz, has close connections to the Anari-Liu-Oveis Gharan-Vinzant algorithm for estimating the number of bases of a matroid. Andrei Okounkov, my colleague in Math, has written incredible popular science expositions (accessible, informative, enlightening, and entertaining) of the works of all four medalists. These can be found on the IMU Fields Medalists page (some articles are also on arXiv) and are highly recommended!
In the past few months, the works of some renowned theoretical computer scientists were recognized with some of the highest accolades in mathematics. Noga Alon and Ehud Hrushovski received the 2022 Shaw Prize in Mathematical Sciences for "their remarkable contributions to discrete mathematics and model theory with interaction notably with algebraic geometry, topology and computer sciences." Noga is also the recipient of the Knuth Prize this year, and I look forward to his Knuth Prize lecture at FOCS in Denver. Just last month, Dan Spielman received the 2023 Breakthrough Prize in Mathematics for "breakthrough contributions to theoretical computer science and mathematics, including to spectral graph theory, the Kadison-Singer problem, numerical linear algebra, optimization, and coding theory." Ronen Eldan, whose stochastic localization work was featured in an earlier edition of this column, received one of the New Horizons prizes, and the work of two of the winners of the Maryam Mirzakhani New Frontiers Prize relates strongly to TCS — Vera Traub works on approximation algorithms, and Jinyoung Park’s work resolved the Kahn-Kalai expectation threshold conjecture. And so as not to leave out physics honors, the 2023 Breakthrough Prize in Fundamental Physics was shared by Bennett, Brassard, Deutsch, and Shor for foundational work on quantum information — again a central topic of increasing importance in the theory of computing.
So there was a lot to celebrate for theoretical computer scientists and the field of theory of computing this summer, even in circles outside computer science! That doesn’t leave me much room to discuss specific recent favorite results in any depth, but let me mention a couple briefly, with hopefully more to come in the next edition of this column.
I began this column with a mention of the plethora of recent goings-on at the Institute. Rather than even attempt to outline major themes, I will describe two fun results I saw at the reunion workshops this summer. The first concerns a variant of computer scientists’ ever favorite CNF satisfiability. The NP-complete 3-SAT problem asks if an input 3CNF formula is satisfiable — i.e., if a random assignment satisfies it with positive probability. What if we instead wanted to determine whether a random assignment satisfies the formula with probability at least 50% (the Maj-3-SAT problem)? If, instead of 3-SAT, we had SAT (with unbounded clause width), this would be the canonical PP-complete problem, and thus extremely intractable (unlikely to be solvable at any finite level of the polynomial hierarchy). One might feel that a similar PP-completeness ought to hold for Maj-3-SAT, but no such reduction was known (the introduction of dummy variables in reducing SAT to 3-SAT messes up the probability space). Well, there was a good reason such a result wasn’t known — as Shyan Akmal and Ryan Williams showed, the Maj-kSAT problem is polynomial-time solvable for every fixed k! The algorithm for Maj-2-SAT is very simple and cute (it would make a good algorithms/discrete math exercise; a sketch in code follows this paragraph) — if there are many (even three suffice) pairwise disjoint clauses, the probability of their simultaneous satisfaction is smaller than ½. Otherwise, there is a small number of variables that hit every clause, and one can try all assignments to this hitting set and, for each, exactly compute the probability that the resulting 1-CNF on the remaining variables is satisfied by a random assignment. The algorithm for Maj-3-SAT applies this idea inductively, and in addition to many disjoint clauses, a large sunflower is also used as a witness that the satisfaction probability is less than ½. If, instead of probability at least 50%, we ask about probability greater than 50%, the resulting StrictMaj-3-SAT is also in P. However, StrictMaj-4-SAT is NP-hard (via a reduction that makes for an easy exercise)! There’s some theory-of-computing trick or treat for you.
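To make the Maj-2-SAT case concrete, here is a minimal sketch in Python of the idea just described. The function name and clause encoding are mine, not from the Akmal-Williams paper; this is meant to illustrate the argument, not to reproduce their implementation.

```python
from itertools import product
from fractions import Fraction

def maj_2sat(clauses):
    """Decide whether a uniformly random assignment satisfies the given 2-CNF
    with probability >= 1/2.  A clause is a tuple of nonzero ints: literal +v
    means variable v, and -v means its negation.  Variables not appearing in
    any clause do not affect the probability."""
    # Tautological clauses (containing both v and -v) are always satisfied; drop them.
    clauses = [c for c in clauses if not any(-lit in c for lit in c)]

    # Greedily collect a maximal family of pairwise variable-disjoint clauses.
    used_vars, num_disjoint = set(), 0
    for c in clauses:
        if all(abs(lit) not in used_vars for lit in c):
            num_disjoint += 1
            used_vars.update(abs(lit) for lit in c)

    # Three variable-disjoint (non-tautological) clauses of width <= 2 are
    # simultaneously satisfied with probability at most (3/4)^3 = 27/64 < 1/2.
    if num_disjoint >= 3:
        return False

    # Otherwise the variables collected above (at most 4) hit every clause.
    # Enumerate all assignments to this hitting set; for each, the residual
    # formula is a 1-CNF whose satisfaction probability is computed exactly.
    hitting = sorted(used_vars)
    prob = Fraction(0)
    for bits in product([False, True], repeat=len(hitting)):
        alpha = dict(zip(hitting, bits))
        forced = {}      # free variable -> set of polarities forced on it
        feasible = True
        for c in clauses:
            fixed = [alpha[abs(lit)] == (lit > 0) for lit in c if abs(lit) in alpha]
            free = [lit for lit in c if abs(lit) not in alpha]
            if any(fixed):
                continue             # clause already satisfied on this branch
            if not free:
                feasible = False     # clause falsified on this branch
                break
            lit = free[0]            # hitting set => at most one free literal
            forced.setdefault(abs(lit), set()).add(lit > 0)
        if not feasible or any(len(p) == 2 for p in forced.values()):
            continue                 # this branch contributes probability 0
        # Each free variable forced to a single value contributes a factor 1/2.
        prob += Fraction(1, 2) ** (len(hitting) + len(forced))
    return prob >= Fraction(1, 2)
```

The point of the sketch is that the brute-force enumeration ranges over at most four hitting-set variables (16 branches), with a single pass over the clauses per branch, so the whole computation runs in time polynomial (indeed, nearly linear) in the size of the formula.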
Another talk from a parallel workshop in the same week (I said there was a lot going on!) focused on a well-studied problem of central importance in geometry, complexity, cryptography, and quantum computing: the shortest vector problem (SVP) in lattices. Given an integer lattice, the goal in SVP is to find a shortest nonzero lattice vector in Euclidean norm. Starting with the pioneering work of Ajtai, there are by now a few different proofs showing the hardness of (even approximately solving) this problem. These are based on different constructions of the main gadget — a lattice with a “bad list decoding configuration” — that drives the reduction. This work by Bennett and Peikert gives perhaps the simplest construction of this gadget yet, pleasingly relying on the Lee distance of Reed-Solomon codes. The construction is still randomized, as are all previous constructions, and a deterministic reduction showing NP-hardness of SVP remains elusive. While the distinction between randomized and deterministic reductions might not be relevant to our perception of the difficulty of a problem, for a problem as basic as SVP it would be nice to establish good old NP-hardness. And if established via the current approaches, the proof would likely shed new light on the list decodability of well-studied codes and lattices.
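For concreteness, here is the standard formulation of the problem discussed above (restricting, for simplicity, to a full-rank basis B with integer entries whose integer combinations generate the lattice):

\[
\lambda_1(\mathcal{L}(B)) \;=\; \min_{x \in \mathbb{Z}^n \setminus \{0\}} \lVert Bx \rVert_2, \qquad \mathcal{L}(B) \;=\; \{\, Bx : x \in \mathbb{Z}^n \,\},
\]

and the hardness results mentioned concern computing, or even approximating, this minimum (and finding a nonzero lattice vector attaining it).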
Since I mentioned Avi, Ronen, and Gil in this column already, let me mention this recent paper that obtained optimal bounds for the very suggestively named “it ain’t over till it’s over” conjecture. The setting is an arbitrary Boolean function on n-bit inputs whose variance is bounded away from 0 and in which no variable has much individual influence. “It ain’t over till it’s over” states that even if one restricts the function to a small number (a vanishing fraction p) of randomly chosen input bits, setting the remaining bits to 0 or 1 at random, the resulting function will be nonconstant (in fact have nontrivial variance) with overwhelming probability. Such a result was shown earlier by Mossel, O’Donnell, and Oleszkiewicz using the invariance principle, but that approach did not yield tight trade-offs between p and the influence. This paper uses a control-theory-inspired pathwise stochastic analysis approach and obtains the optimal trade-offs. Could this approach be a surrogate for the invariance principle in some other settings (e.g., analysis results underlying inapproximability results)? In any case, one might claim that “it ain’t over till it’s over” is over after all.
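In symbols, a rough rendering of the statement reads as follows (the notation here is mine, and the precise quantitative dependencies are as worked out in the paper): for a Boolean function f on n bits with Var(f) ≥ σ and every individual influence at most δ, let S contain each coordinate independently with probability p, and let z be a uniformly random assignment to the coordinates outside S; then

\[
\Pr_{S,\, z}\Big[\operatorname{Var}_{x_S}\big[f(x_S, z)\big] \ge \tau\Big] \;\ge\; 1 - \varepsilon,
\]

with τ and ε depending on p, σ, and δ; the optimal form of this dependence is what the new paper establishes.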