# SimonsTV

Our videos can also be found on YouTube.
Monday, June 17 – Friday, June 21, 2024

Playlist: 24 videos

Playlist: 20 videos

Playlist: 16 videos

Sep. 2022

Chen-Yu Wei (University of Southern California)

https://simons.berkeley.edu/talks/tbd-482

Quantifying Uncertainty: Stochastic, Adversarial, and Beyond

To evaluate the performance of a bandit learner in a changing environment, the standard notion of regret is insufficient. Instead, "dynamic regret" is a better measure that can evaluate the learner's ability to track the changes. How to achieve the optimal dynamic regret without prior knowledge of the number of times the environment changes was a long-standing open problem, recently resolved by Auer, Gajane, and Ortner in their COLT 2019 paper. We will discuss their consecutive sampling technique, which is rare in the bandit literature, and see how their idea can be elegantly generalized to a wide range of bandit/RL problems. Finally, we will discuss important open problems that remain in the area.
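As a toy illustration of the distinction the abstract draws (not taken from the talk; arms, rewards, and the change point are all hypothetical), consider a two-armed bandit whose best arm switches halfway through. A policy that never adapts can have zero static regret yet large dynamic regret:

```python
# Toy comparison of static vs. dynamic regret (illustrative sketch).
# Two arms with deterministic expected rewards; the better arm
# switches at round T/2. The learner always pulls arm 0.

T = 100

def mean_reward(arm: int, t: int) -> float:
    # Arm 0 is best in the first half, arm 1 in the second half.
    if t < T // 2:
        return 0.9 if arm == 0 else 0.1
    return 0.1 if arm == 0 else 0.9

learner_arm = 0  # a static policy that never adapts
learner_reward = sum(mean_reward(learner_arm, t) for t in range(T))

# Static regret: compare against the single best fixed arm in hindsight.
best_fixed = max(sum(mean_reward(a, t) for t in range(T)) for a in (0, 1))
static_regret = best_fixed - learner_reward

# Dynamic regret: compare against the best arm at every round.
best_per_round = sum(max(mean_reward(a, t) for a in (0, 1)) for t in range(T))
dynamic_regret = best_per_round - learner_reward

print(static_regret, dynamic_regret)
```

Here both fixed arms earn the same total reward, so the static regret is essentially zero, while the dynamic regret is about 40: the per-round benchmark exposes the learner's failure to track the change, which is exactly what the notion is designed to measure.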

Aug. 2022

Nika Haghtalab (UC Berkeley)

https://simons.berkeley.edu/talks/tbd-460

Data-Driven Decision Processes Boot Camp

Social and real-world considerations such as robustness, fairness, social welfare, and multi-agent tradeoffs have given rise to multi-distribution learning paradigms. In recent years, these paradigms have been studied by several disconnected communities and under different names, including collaborative learning, distributional robustness, and fair federated learning. In this short tutorial, I will highlight the importance of multi-distribution learning paradigms in general, introduce technical tools for addressing them, and discuss how these problems relate to classical and modern considerations in data-driven processes.
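The common thread across the paradigms the abstract names is a min-max objective: one model should do well on the worst of several data distributions, not just on their average. A minimal sketch (the one-dimensional losses and centers below are hypothetical, and grid search stands in for a real learner):

```python
# Minimal sketch of the multi-distribution (min-max) objective:
# choose one model that minimizes the worst loss across several
# data distributions. Losses and centers here are hypothetical.

centers = [0.0, 2.0]  # each "distribution" penalizes distance to its center

def worst_case_loss(w: float) -> float:
    return max((w - c) ** 2 for c in centers)

# Crude grid search over a single parameter; a real method would
# use no-regret dynamics or gradient-based optimization instead.
grid = [i / 1000 for i in range(-1000, 3001)]
w_star = min(grid, key=worst_case_loss)

print(w_star, worst_case_loss(w_star))
```

The min-max solution lands midway between the two groups (w = 1), whereas minimizing the average loss over an unbalanced mixture would drift toward the larger group; that gap is what motivates treating the distributions separately.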

Jul. 2019

Playlist: 34 videos

Jun. 2018

Jens Eisert, Freie Universität Berlin

https://simons.berkeley.edu/talks/jens-eisert-06-11-18

Challenges in Quantum Computation

Apr. 2018

S. Murray Sherman, University of Chicago

https://simons.berkeley.edu/talks/s-murray-sherman-4-18-18

Computational Theories of the Brain

Mar. 2018

David Sussillo, Google

https://simons.berkeley.edu/talks/david-sussillo-3-22-18

Targeted Discovery in Brain Data

Much of the progress in solving discrete optimization problems, especially in terms of approximation algorithms, has come from designing novel continuous relaxations. The primary tools in this area are linear programming and semidefinite programming. Other forms of relaxations have also been developed, such as the multilinear relaxation for submodular optimization. In this workshop, we explore the state-of-the-art techniques for performing discrete optimization based on continuous relaxations of the underlying problem, as well as our current understanding of the limitations of this kind of approach. We focus on LP/SDP relaxations and techniques for rounding their solutions, as well as methods for submodular optimization, in both the offline and online settings. We also investigate the limits of such relaxations and hardness of approximation results.
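A textbook instance of the relax-and-round paradigm this paragraph describes is the LP relaxation of vertex cover with threshold rounding. The sketch below hardcodes the known LP optimum for a triangle graph (x_v = 1/2 everywhere) rather than calling an LP solver, which is the only non-self-contained step:

```python
# Sketch of LP-relaxation rounding for minimum vertex cover.
# The LP relaxation is: minimize sum_v x_v subject to
# x_u + x_v >= 1 for every edge (u, v), with 0 <= x_v <= 1.
# For a triangle, the LP optimum sets every x_v = 1/2; in
# general the fractional solution would come from an LP solver.

edges = [("a", "b"), ("b", "c"), ("a", "c")]
fractional = {"a": 0.5, "b": 0.5, "c": 0.5}  # feasible: x_u + x_v >= 1 per edge

# Threshold rounding: keep every vertex with x_v >= 1/2.
# Each edge constraint forces at least one endpoint above 1/2,
# so the rounded set is a cover; each kept vertex had x_v >= 1/2,
# so the cover's size is at most twice the LP value.
cover = {v for v, x in fractional.items() if x >= 0.5}

assert all(u in cover or v in cover for u, v in edges)
lp_value = sum(fractional.values())
assert len(cover) <= 2 * lp_value

print(sorted(cover), lp_value)
```

Since the LP value lower-bounds the integer optimum, the rounded cover is a 2-approximation; the triangle also illustrates the limits mentioned above, since its integrality gap (2 vs. 1.5) caps what this relaxation alone can certify.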

Playlist: 28 videos

May 2017

Sanjoy Dasgupta, UC San Diego

Computational Challenges in Machine Learning

https://simons.berkeley.edu/talks/tba-3

Oct. 2016

Dan Suciu, University of Washington

https://simons.berkeley.edu/talks/dan-suciu-10-05-2016

Uncertainty in Computation

Dec. 2015

Richard Karp sat down with Tim Roughgarden to discuss the Fall 2015 program on Economics and Computation.

https://simons.berkeley.edu/programs/economics2015

Nov. 2015

Tuomas Sandholm, Carnegie Mellon University

Algorithmic Game Theory and Practice

https://simons.berkeley.edu/talks/tuomas-sandholm-2015-11-18

Nov. 16 – Nov. 20, 2015

Playlist: 23 videos

Feb. 9 – Feb. 13, 2015

Playlist: 24 videos

Apr. 21 – Apr. 24, 2014

Playlist: 20 videos

Sep. 9 – Sep. 13, 2013

Playlist: 13 videos