Monday, June 11th, 2018

9:15 am – 9:45 am

I will discuss the difficulties inherent in the classical simulation of quantum many-body systems and provide an overview of the state-of-the-art algorithms developed for this task, focusing on tensor network based approaches. By exploiting the boundary laws for entanglement entropy scaling found in local systems, tensor network states can provide efficient representations of ground state wavefunctions in many interesting classes of quantum systems. I will review some of the successes of the tensor network program, and outline the open challenges that remain.
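
For orientation, the boundary law referred to here is the standard statement (background, not a result of this talk) that ground states of gapped, local Hamiltonians have entanglement entropy scaling with the boundary of a region rather than its volume, S(ρ_A) = O(|∂A|) instead of the volume-law S(ρ_A) = O(|A|) typical of generic states; this scaling is proven in one dimension and conjectured more generally, and it is what allows tensor network states such as matrix product states and PEPS to represent these wavefunctions with only polynomially many parameters.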

10:00 am – 10:30 am

There has been a revolution in quantum simulation over the last few years. The purpose of this talk is to provide an overview of these recent developments. Specifically, I will review Trotter-Suzuki simulation methods and discuss recent work that provides tighter estimates for the error in such approximations. I will then discuss truncated Dyson series simulations for time-dependent Hamiltonians, followed by qubitization. I will conclude by discussing current open problems in quantum simulation and the strategies that may be needed to solve them.
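
As a reminder of the basic primitive (a textbook formula, not a result from the talk): for H = \sum_{j=1}^{L} H_j, the first-order Trotter-Suzuki approximation is e^{-iHt} ≈ (\prod_{j=1}^{L} e^{-iH_j t/r})^r, with error O(t^2/r) for fixed H (the prefactor depends on the commutators of the H_j), so r = O(t^2/ϵ) steps suffice for error ϵ; higher-order product formulas and the tighter, commutator-aware error bounds mentioned above improve on this scaling.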

12:00 pm – 12:30 pm

The efficient simulation of quantum chemistry is among the most anticipated applications of quantum computers. However, as the age of industrial quantum technology dawns, so too does the realization that even “polynomial” resource overheads are often prohibitive. There remains a large gap between the capabilities of existing hardware and the resources required to quantum compute classically intractable problems in chemistry. This talk will introduce the quantum chemistry problem and several popular approaches for its solution on a quantum computer. We will then survey recent advances in the field and discuss key challenges for implementing these quantum simulations on realistic quantum computers, both in the near term and on the first fault-tolerant devices.

3:45 pm – 4:15 pm

In this talk, I will attempt to give an overview of the state of affairs of analog quantum simulators, that is, large-scale quantum systems that allow one to probe physical quantities with high levels of control. Following a brief overview of common architectures, we will move on to discuss the questions that such analog quantum simulators can address. At the heart of the talk will be conceptual questions about the sense in which one can hope to achieve a quantum advantage with analog quantum simulators. It will end with an outlook on open questions in the field, stressing the significance of notions of verification and robustness.

4:30 pm – 5:00 pm

In my lecture, I will discuss the present status of one of the pillars of the Quantum Technologies Flagship: Quantum Simulations. I will comment on various platforms and approaches, focusing, however, on ultra-cold atoms, molecules and ions. I will present recent progress in the most challenging quantum simulations and speculate about future directions, such as the merging of quantum simulation and machine learning, or novel methods of quantum many-body verification and certification.

Tuesday, June 12th, 2018

9:15 am – 9:45 am

As microelectronics technology nears the end of exponential growth over time, known as Moore’s law, there is renewed interest in new computing paradigms such as quantum computing. A key step in the roadmap to build a scientifically or commercially useful quantum computer will be to demonstrate its exponentially growing computing power. I will explain how a 7 by 7 array of superconducting xmon qubits with nearest-neighbor coupling, and with programmable single- and two-qubit gates with errors of about 0.2%, can execute a modest-depth quantum computation that fully entangles the 49 qubits. Sampling of the resulting output can be checked against a classical simulation to demonstrate proper operation of the quantum computer and compare its system error rate with predictions. With a computation space of 2^49 ≈ 5.6 x 10^14 states, the quantum computation can only be checked using the biggest supercomputers. I will show experimental data towards this demonstration from a 9-qubit adjustable-coupler “gmon” device, which implements the basic sampling algorithm of quantum supremacy for a computational (Hilbert) space of dimension about 500. We have begun testing the quantum supremacy chip.
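
To put the scale in perspective (a back-of-envelope estimate, not a figure from the talk): storing the full state vector of 49 qubits requires 2^49 ≈ 5.6 x 10^14 complex amplitudes, or roughly 2^49 x 16 bytes ≈ 9 x 10^15 bytes (about 9 petabytes) at double precision, which is why checking the sampled output by direct simulation is only feasible on the largest supercomputers.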

10:00 am – 10:30 am

Reliable qubits are difficult to engineer. What can we do with just a few of them? Here are some ideas:

1. Dimension test. An n-qubit system should have 2^n dimensions, but systems with just polynomial(n) dimensions can look like they have n qubits. Is nature really exponential? We give a test for verifying that your system has 2^n dimensions.

2. Entanglement and nonlocality tests. A Bell-inequality violation establishes that your systems share some entanglement. We give a test to show that your systems share lots of entanglement. Additionally, we give a test to eliminate non-signaling correlations (like the Popescu-Rohrlich nonlocal box), giving a way to check whether multi-party entanglement breaks down.

3. Error test. Error correction will be needed for scalable quantum computers. But high qubit overhead makes it impractical for small devices. We show that a seven-qubit computer can fault-tolerantly correct errors on one encoded qubit, and that a 17-qubit computer can protect and compute fault-tolerantly on seven encoded qubits.

11:15 am – 11:45 am

We will discuss recent advances towards developing quantum simulators and processors based on neutral atom arrays excited into Rydberg states. Applications will be discussed, including detailed studies of quantum phase transitions, probes of non-equilibrium quantum dynamics, and quantum optimization algorithms.

1:30 pm – 2:00 pm

I'll discuss some of the complexity-theoretic foundations of quantum supremacy experiments with random quantum circuits, of the sort that Google is planning in the near future with its 72-qubit chip. (Based on my joint work with Lijie Chen.) Then, in a second part of the talk, I'll discuss how these experiments could be used to generate certified random bits.

Wednesday, June 13th, 2018

9:15 am – 9:45 am

Recent quantum algorithms for the Hamiltonian simulation problem, the problem of simulating the time dynamics of a quantum system, have introduced new algorithmic primitives with wide applicability. The first Hamiltonian simulation algorithms with exponentially improved dependence on precision introduced the Linear Combination of Unitaries method and Oblivious Amplitude Amplification. Later works introduced the techniques of Quantum Signal Processing and Qubitization.

These techniques are very general algorithmic primitives, much like phase estimation or amplitude amplification, and have found other applications in solving linear systems of equations, Gibbs state preparation, etc. In this talk I'll describe the problems these tools allow us to solve and survey some of their applications.
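
As a concrete illustration of the first of these primitives (a standard formulation, not specific to this talk): in the Linear Combination of Unitaries method one writes the target operator as V = \sum_j α_j U_j with α_j > 0, prepares an ancilla register in \sum_j \sqrt{α_j/α} |j⟩ where α = \sum_j α_j, applies the select operation \sum_j |j⟩⟨j| ⊗ U_j, and unprepares the ancilla; conditioned on the ancilla returning to all zeros this applies V/α to the input, and Oblivious Amplitude Amplification boosts the success probability without requiring knowledge of the input state.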

10:00 am – 10:30 am

We apply the framework of block-encodings, introduced by Low and Chuang (under the name standard-form), to the study of quantum machine learning algorithms using quantum accessible data structures. We develop several tools within the block-encoding framework, including quantum linear system solvers using block-encodings. Our results give new techniques for Hamiltonian simulation of non-sparse matrices, which could be relevant for certain quantum chemistry applications, and which in turn imply an exponential improvement in the dependence on precision in quantum linear systems solvers for non-sparse matrices.
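
For reference, a block-encoding (standard definition, stated here only for orientation): an (α, a, ϵ)-block-encoding of a matrix A is a unitary U acting on the system plus a ancilla qubits satisfying ||A − α(⟨0|^{⊗a} ⊗ I) U (|0⟩^{⊗a} ⊗ I)|| ≤ ϵ, i.e., A/α sits (approximately) in the top-left block of U; once A is available in this form, functions of A can be applied via qubitization and quantum signal processing, which is what underlies the linear-system and Hamiltonian simulation results mentioned above.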

11:15 am – 11:45 am

We study the problem of simulating the time evolution of a lattice Hamiltonian, where the qubits are laid out on a lattice and the Hamiltonian only includes geometrically local interactions (i.e., a qubit may only interact with qubits in its vicinity). This class of Hamiltonians is very general and encompasses all physically reasonable Hamiltonians. Our algorithm simulates the time evolution of such a Hamiltonian on n qubits for time T up to error ϵ using O(nT polylog(nT/ϵ)) gates with depth O(T polylog(nT/ϵ)). Our algorithm is the first simulation algorithm that achieves gate cost quasilinear in nT and polylogarithmic in 1/ϵ. Our algorithm also readily generalizes to time-dependent Hamiltonians and yields an algorithm with similar gate count for any piecewise slowly varying time-dependent bounded local Hamiltonian.

We also prove a matching lower bound on the gate count of such a simulation, showing that any quantum algorithm that can simulate a piecewise constant bounded local Hamiltonian in one dimension to constant error requires Ω̃(nT) gates in the worst case. The lower bound holds even if we only require the output state to be correct on local measurements. To the best of our knowledge, this is the first nontrivial lower bound on the gate complexity of the simulation problem. Our algorithm is based on a decomposition of the time-evolution unitary into a product of small unitaries using Lieb-Robinson bounds. In the appendix, we prove a Lieb-Robinson bound tailored to Hamiltonians with small commutators between local terms, giving zero Lieb-Robinson velocity in the limit of commuting Hamiltonians. This improves the performance of our algorithm when the Hamiltonian is close to commuting.
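
For context, a Lieb-Robinson bound (standard form, given only for orientation) limits how fast information propagates under a local Hamiltonian: for operators A and B supported on regions a distance d apart, ||[A(t), B]|| ≤ c ||A|| ||B|| e^{(v|t| − d)/ξ} for constants c, ξ and a velocity v, so outside the effective light cone d > v|t| the commutator is exponentially small. This is what justifies decomposing the time-evolution unitary into a product of unitaries acting on small blocks with controllable error.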

12:00 pm – 12:30 pm

Following the first paper on quantum algorithms for SDP-solving by Brandao and Svore in 2016, rapid developments have been made in quantum optimization algorithms. Recently Brandao et al. improved the quantum SDP-solver in the so-called quantum state input model, where the input matrices of the SDP are given as purified mixed states. They also gave the first non-trivial application of quantum SDP-solving by obtaining a more efficient algorithm for the problem of shadow tomography (proposed by Aaronson in 2017). In this talk we present a quantum SDP-solver that combines all the benefits of previous approaches. We then apply the results to the problem of shadow tomography to simultaneously improve the best known upper bounds on sample complexity due to Aaronson and on gate complexity due to Brandao et al., and show some other applications as well. Finally, we discuss the limitations of the presented approaches.
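
For orientation (a standard formulation, not taken from the talk): a semidefinite program in primal form asks to maximize Tr(CX) subject to Tr(A_j X) ≤ b_j for j = 1, ..., m and X ⪰ 0; the quantum input models mentioned above differ in how the matrices C and A_j are made accessible to the algorithm, e.g., through sparse oracle access or as purified mixed states.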

Thursday, June 14th, 2018

9:15 am – 9:45 am

I will discuss a quantum algorithm for exact combinatorial optimization.  This algorithm has a provable super-Grover speedup for MAX-D-LIN-2 (for D=2, this is the Ising model) under some mild assumptions on the density of states of the given instance.  The talk is based on arXiv:1802.10124 as well as some additional unpublished results generalizing the results in that paper.
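
For orientation (a standard definition, not taken from the paper): MAX-D-LIN-2 asks, given a system of linear equations over F_2 with D variables per equation, for an assignment x ∈ {0,1}^n maximizing the number of satisfied equations; for D = 2 this amounts to minimizing an Ising cost function of the form C(x) = \sum_{i<j} J_{ij} (−1)^{x_i + x_j} (up to an additive constant), and a "super-Grover" speedup means doing provably better than the generic quadratic Grover speedup over brute-force search of the 2^n assignments.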

10:00 am – 10:30 am

Quantum adiabatic optimization (QAO) was one of the first quantum algorithms proposed for combinatorial optimization, but despite its long history a quantum speedup from QAO remains elusive. This inability to find a speedup is often attributed to the use of stoquastic Hamiltonians, because the ground states of these models can in many cases be efficiently simulated classically by Markov chain Monte Carlo methods. In this talk I'll start by reviewing aspects of the general stoquastic simulation conjecture, with a particular focus on the role of the adiabatic path in these simulations. More recently, this threat of classical simulation has motivated proposals to use general (nonstoquastic) Hamiltonians in QAO. To further our understanding of these proposals I'll present a general isoperimetric inequality that constrains the spectral gap of a Hamiltonian in terms of its quantum ground state geometry, without a dependence on the details of the interaction couplings. This result can be applied to see that some explicit probability distributions, even ones that are simple to describe, will inevitably take exponential time to be reached precisely by pure adiabatic evolution with local Hamiltonians.
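
For reference (a standard definition, not specific to this talk): a Hamiltonian is stoquastic in a given basis if all of its off-diagonal matrix elements in that basis are real and non-positive, ⟨x|H|y⟩ ≤ 0 for x ≠ y; this sign structure guarantees a ground state with non-negative amplitudes and is what opens the door to the Markov chain Monte Carlo simulations mentioned above.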

11:15 am – 11:45 am

We provide new quantum algorithms for classification, one of the fundamental problems in machine learning, and analyse their accuracy and efficiency on real data.

2:00 pm – 3:00 pm
3:30 pm – 4:30 pm

No abstract available.

Friday, June 15th, 2018

9:15 am – 10:00 am

We present the first protocol allowing a classical computer to interactively verify the result of an efficient quantum computation. We achieve this by constructing a measurement protocol, which enables a classical verifier to ensure that the quantum prover holds an n-qubit quantum state and correctly reports the results of measuring it in a basis of the verifier's choice. Soundness is enforced under the assumption that the learning with errors problem is computationally intractable for efficient quantum machines.

11:30 am – 12:15 pm

In 1964, Bell showed that an entirely classical experimenter can certify that two spatially-separated parties share quantum entanglement by playing a game with them. In this talk, I will survey a recent line of research that combines Bell's work with classical techniques from the study of interactive proof systems to design games that certify not just the presence of entanglement but specific, many-qubit states and operators. I will give an outline of the main techniques used as well as applications to delegated quantum computation and to quantum complexity theory, including a quantum analog of the classical PCP theorem.
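
The canonical example (standard background, not specific to this talk) is the CHSH game: two non-communicating players receive uniformly random bits x and y and must output bits a and b with a ⊕ b = x ∧ y; classical players win with probability at most 3/4, whereas players sharing one EPR pair can win with probability cos^2(π/8) ≈ 0.85, so a win rate above 3/4 certifies that the players share entanglement.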

2:00 pm – 2:30 pm

In this talk I will explain the blueprint for constructing succinct non-interactive (computationally sound) proofs from standard assumptions. The construction explores an interesting connection between cryptography and non-signaling.

2:45 pm – 3:15 pm

Zero knowledge plays a central role in cryptography and complexity. The seminal work of Ben-Or et al. (STOC 1988) shows that zero knowledge can be achieved unconditionally for any language in NEXP, as long as one is willing to make a suitable *physical assumption*: if the provers are spatially isolated, then they can be assumed to be playing independent strategies.

Quantum mechanics, however, tells us that this assumption is unrealistic, because spatially-isolated provers could share a quantum entangled state and realize a non-local correlated strategy. The MIP^{*} model captures this setting.

In this work we study the following question: does spatial isolation still suffice to unconditionally achieve zero knowledge even in the presence of quantum entanglement?

We answer this question in the affirmative: we prove that every language in NEXP has a 2-prover *zero knowledge* interactive proof that is sound against entangled provers; that is, NEXP \subseteq ZK-MIP^{*}.

Our proof consists of constructing a zero knowledge interactive PCP with a strong algebraic structure, and then lifting it to the MIP^{*} model. This lifting relies on a new framework that builds on recent advances in low-degree testing against entangled strategies, and clearly separates classical and quantum tools.

Our main technical contribution is the development of algebraic techniques for obtaining unconditional zero knowledge; this includes a zero knowledge variant of the celebrated sumcheck protocol, a key building block in many probabilistic proof systems. A core component of our sumcheck protocol is a new algebraic commitment scheme, whose analysis relies on algebraic complexity theory.
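
For background (a standard description of the classical protocol, not of its zero knowledge variant): the sumcheck protocol lets a prover convince a verifier that \sum_{x ∈ {0,1}^m} g(x) = H for a low-degree polynomial g. In each of m rounds the prover sends a univariate polynomial obtained by summing g over all but one of the remaining variables, the verifier checks consistency with the previous claim and replies with a random field element, and after the last round the verifier checks the final claim with a single evaluation of g at the resulting random point.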

The talk is self-contained and does not assume any background in complexity theory.

Joint work with Alessandro Chiesa, Michael Forbes, and Nicholas Spooner.