SimonsTV
Our videos can also be found on YouTube.
The boot camp is intended to acquaint program participants with the key themes of the program. It will consist of five days of tutorial presentations from leading experts in the topics of the program.
Playlist: 19 videos
Playlist: 18 videos
Apr. 2023
Po-Shen Loh (Carnegie Mellon University)
https://simons.berkeley.edu/events/theoretically-speaking-building-human-intelligence-scale-save-next-generation-chatgpt
Theoretically Speaking
The scale of global societal problems looks daunting. One person, or even a small team, is minuscule relative to the number of people who need help. For example, since ChatGPT exploded onto the scene, our children's future employment prospects (and current educational experience, with ChatGPT-powered cheating) have been in existential danger. There is, however, an area close to mathematics that devises solutions in which problems solve themselves, even in the presence of self-serving human behavior: game theory.
Po-Shen Loh is a math professor, researcher, and educator who has turned to devising new solutions for large-scale real-world problems. He will talk about his experience moving between the ivory tower of academia and the practicality of the real world, where he ultimately developed fundamentally new approaches to pandemic control (https://novid.org) and scalable advanced live math education (https://live.poshenloh.com).
He will also discuss educational strategies that build the skills needed to survive this new era of generative AI (e.g., ChatGPT). He has been working extensively on that problem and draws on experience teaching across the entire spectrum, from underprivileged schools to the International Math Olympiad.
Po-Shen Loh is a social entrepreneur and inventor working across the spectrum of mathematics, education, and healthcare, all around the world. He is a math professor at Carnegie Mellon University and the national coach of the USA International Mathematical Olympiad team. He holds math degrees from Caltech and Cambridge, and a PhD from Princeton. As an academic, Po-Shen has earned distinctions ranging from an International Mathematical Olympiad silver medal to the United States Presidential Early Career Award for Scientists and Engineers. He was the coach of Carnegie Mellon University’s math team when it achieved its first-ever #1 rank among all North American universities, and the coach of the USA Math Olympiad team when it achieved its first-ever back-to-back #1-rank victories in 2015 and 2016, and then again in 2018 and 2019. His research and educational outreach take him to cities across the world, reaching over 10,000 people each year through public lectures and events, and he has been featured in or co-created videos totaling over 19 million YouTube views.
Feb. 2022
Eric Mazumdar (Caltech)
https://simons.berkeley.edu/talks/learning-presence-strategic-agents-dynamics-equilibria-and-convergence
Meet the Fellows Welcome Event
Jul. 2021
Theory Shorts is a documentary web series that explores topics from the Simons Institute’s research programs.
The second short film in the series, “Until the Sun Engulfs the Earth: Lower Bounds in Computational Complexity,” explores how we know that a problem is impossible to solve.
FURTHER READING
Fruit game optimal algorithm: Hossein Jowhari, Mert Saglam, Gábor Tardos. "Tight bounds for Lp samplers, finding duplicates in streams, and related problems." PODS 2011: 49-58.
Fruit game optimal lower bound: Michael Kapralov, Jelani Nelson, Jakub Pachocki, Zhengyu Wang, David P. Woodruff, Mobin Yahyazadeh. "Optimal Lower Bounds for Universal Relation, and for Samplers and Finding Duplicates in Streams." FOCS 2017: 475-486.
FEATURING
Paul Beame
Faith Ellen
Jelani Nelson
Manuel Sabin
Madhu Sudan
DIRECTORS
Anil Ananthaswamy
Kristin Kane
SCIENTIFIC ADVISOR
Shafi Goldwasser
HOST/WRITER
Anil Ananthaswamy
EDITOR/PRODUCER
Kristin Kane
GRAPHIC AND ANIMATION DESIGNER
Barry Bödeker
ANIMATORS
Caresse Haaser
Kristin Kane
VIDEOGRAPHER
Drew Mason
PRODUCTION ASSISTANTS
Kevin Hung
Bexia Shi
COPY EDITOR
Preeti Aroon
TECH SUPPORT
Adriel Olmos
SPECIAL THANKS
Ryan Adams
Wesley Adams
Marco Carmosino
Kani Ilangovan
Sampath Kannan
Richard Karp
David Kim
Bryan Nelson
Jeremy Perlman
Kat Quigley
Siobhan Roberts
Amelia Saul
Umesh Vazirani
MUSIC
Dill Pickles (Heftone Banjo Orchestra)
Flamenco Rhythm (Sunsearcher)
Place Pigalle (Uncle Skeleton)
Plastic (Purple Moons)
SOUND EFFECTS
Courtesy of byxorna, inspectorj, janbezouska, jorickhoofd, kash15, kyster, robinhood76, smotasmr, svarvarn, and vandrandepinnen via Freesound.org
OTHER MEDIA
Becoming (Jan van IJken)
A Decade of Sun (Solar Dynamics Observatory, NASA)
Move Mountain (Kirsten Lepore)
© Simons Institute for the Theory of Computing, 2021
Playlist: 52 videos
This colloquium series features talks by some of the foremost experts in quantum computation in the form of "an invitation to research in area X". With the explosion of interest in quantum computation, there is a dizzying flurry of results, as well as a diverse group of researchers who are drawn to this field. This colloquium series aims to target three audiences:
Playlist: 70 videos
Playlist: 22 videos
Apr. 2020
Theory Shorts is a documentary web series that explores topics from the Simons Institute’s research programs.
Episode 1, “Perception as Inference: The Brain and Computation,” explores the computational processes by which the brain builds visual models of the external world, based on noisy or incomplete data from patterns of light sensed on the retinae.
HOST
Bruno Olshausen
DIRECTOR
Christoph Drösser
EDITOR
Michaelle McGaraghan
PRODUCERS
Kristin Kane
Michaelle McGaraghan
SCIENTIFIC ADVISOR
Shafi Goldwasser
ANIMATORS
Caresse Haaser
Christoph Drösser
Lukas Engelhardt
GRAPHIC DESIGNER
Barry Bödeker
VIDEOGRAPHERS
Drew Mason
Omied Far
Michaelle McGaraghan
Matt Beardsley
PRODUCTION ASSISTANTS
Christine Wang
Bexia Shi
Lior Shavit
THEME MUSIC
“Plastic” by Purple Moons
Courtesy of Marmoset in Portland, Oregon
OTHER MEDIA COURTESY OF
Bruce Damonte
Arash Fazl
Anders Garm
Jean Lorenceau and Maggie Shiffrar
Beau Lotto
A. L. Yarbus
Bruno Olshausen
videocobra / Pond5
BlackBoxGuild / Pond5
nechaevkon / Pond5
DaveWeeks / Pond5
CinematicStockVideo / Pond5
BananaRepublic / Pond5
MicroStockTube / Pond5
shelllink / Pond5
AudioQuattro / Envato Market
HitsLab / Envato Market
FlossieWood / Envato Market
plaincask / Envato Market
MusicDog / Envato Market
Loopmaster / Envato Market
Ryokosan / Envato Market
Images used under license from Shutterstock.com
© Simons Institute for the Theory of Computing, 2019
Playlist: 27 videos
Nov. 2018
Rong Ge (Duke University)
https://simons.berkeley.edu/talks/can-non-convex-optimization-be-robust
Robust and High-Dimensional Statistics
Mar. 2018
Christian Machens, Champalimaud Research
https://simons.berkeley.edu/talks/christian-machens-3-21-18
Targeted Discovery in Brain Data
Iterative methods have been greatly influential in continuous optimization; in fact, almost all algorithms in that field are iterative in nature. Recently, a confluence of ideas from optimization and theoretical computer science has led to breakthroughs, both in new understanding and in improved running time bounds, for some of the classic iterative continuous optimization primitives. In this workshop we explore these advances as well as the new directions they have opened up. Some of the specific topics this workshop plans to cover are: advanced first-order methods (non-smooth optimization, regularization, and preconditioning), structured optimization, fast LP/SDP solvers, advances in interior point methods, and fast streaming/sketching techniques. One key theme that will be highlighted is how combining the continuous and discrete points of view can often allow one to achieve near-optimal running time bounds.
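To make the flavor of these iterative primitives concrete, here is a minimal sketch of the most basic first-order method, projected gradient descent. The least-squares instance, the simplex constraint, and the step size are illustrative assumptions chosen for this sketch, not drawn from any particular workshop talk.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {x >= 0, sum(x) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    ind = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * ind > css)[0][-1]
    theta = css[rho] / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def projected_gradient(grad, x0, step, iters=500):
    """Basic first-order iteration: x_{k+1} = Proj(x_k - step * grad(x_k))."""
    x = x0
    for _ in range(iters):
        x = project_simplex(x - step * grad(x))
    return x

# Hypothetical instance: minimize ||Ax - b||^2 over the probability simplex.
rng = np.random.default_rng(0)
A = rng.normal(size=(20, 5))
b = rng.normal(size=20)
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x_star = projected_gradient(grad, np.ones(5) / 5, step=1e-3)
print("solution:", np.round(x_star, 3))
```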
Playlist: 23 videos
Much of the progress in solving discrete optimization problems, especially in terms of approximation algorithms, has come from designing novel continuous relaxations. The primary tools in this area are linear programming and semidefinite programming. Other forms of relaxations have also been developed, such as multilinear relaxation for submodular optimization. In this workshop, we explore the state-of-the-art techniques for performing discrete optimization based on continuous relaxations of the underlying problem, as well as our current understanding of the limitations of this kind of approach. We focus on LP/SDP relaxations and techniques for rounding their solutions, as well as methods for submodular optimization, both in the offline and online setting. We also investigate the limits of such relaxations and hardness of approximation results.
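As a small, self-contained illustration of the relax-and-round paradigm described above (the vertex-cover instance, the SciPy LP solver, and the 1/2-threshold rounding are standard textbook choices, not taken from the workshop itself), one can solve the linear programming relaxation of minimum vertex cover and round the fractional solution:

```python
import numpy as np
from scipy.optimize import linprog

def lp_vertex_cover(n, edges):
    """LP-relax-and-round vertex cover on an n-vertex graph (classic 2-approximation)."""
    # LP relaxation: minimize sum_v x_v  s.t.  x_u + x_v >= 1 for every edge, 0 <= x_v <= 1.
    c = np.ones(n)
    A_ub = np.zeros((len(edges), n))
    for i, (u, v) in enumerate(edges):
        A_ub[i, u] = A_ub[i, v] = -1.0          # encode -(x_u + x_v) <= -1
    res = linprog(c, A_ub=A_ub, b_ub=-np.ones(len(edges)),
                  bounds=[(0, 1)] * n, method="highs")
    # Threshold rounding: every edge has an endpoint with x >= 1/2, so this set is a
    # cover, and its cost is at most twice the LP optimum (hence at most twice OPT).
    return [v for v in range(n) if res.x[v] >= 0.5]

print(lp_vertex_cover(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))
```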
Playlist: 28 videos
Aug. 2017
Steve Wright, University of Wisconsin-Madison; Aaron Sidford, Stanford University; and Aleksander Mądry, MIT
https://simons.berkeley.edu/talks/interior-point-methods-4
Bridging Continuous and Discrete Optimization Boot Camp
Mar. 2017
Nathan Srebro, TTI Chicago
Representation Learning
https://simons.berkeley.edu/talks/nathan-srebro-bartom-2017-3-27
This workshop will focus on dramatic advances in representation and learning taking place in natural language processing, speech, and vision. For instance, deep learning can be thought of as a method that combines the task of finding a classifier (which we can think of as the top layer of the deep net) with the task of learning a representation (namely, the representation computed at the last-but-one layer).
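As a toy sketch of this "learned representation plus top-layer classifier" view (the two-layer network, synthetic data, and training loop below are illustrative assumptions, not a method from the workshop), consider a network whose hidden layer plays the role of the representation phi(x) and whose final layer is a logistic classifier on top of it:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 10))                       # synthetic inputs
y = (X[:, 0] * X[:, 1] > 0).astype(float)            # a nonlinear labeling rule

W1 = rng.normal(scale=0.5, size=(10, 32))            # representation layer (last-but-one)
w2 = rng.normal(scale=0.5, size=32)                  # top layer: linear classifier on phi(x)
lr = 0.5

for _ in range(2000):
    h = np.tanh(X @ W1)                               # phi(x): the learned representation
    p = 1 / (1 + np.exp(-(h @ w2)))                   # logistic classifier on the representation
    g = (p - y) / len(y)                              # gradient of logistic loss w.r.t. logits
    grad_w2 = h.T @ g
    grad_W1 = X.T @ (np.outer(g, w2) * (1 - h ** 2))  # backpropagate through tanh
    w2 -= lr * grad_w2
    W1 -= lr * grad_W1

print("train accuracy:", ((p > 0.5) == y).mean())
```

Training the two layers jointly is exactly the coupling the description points at: the top layer is an ordinary linear classifier, but its inputs are features the lower layer learns at the same time.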
Playlist: 30 videos
Sep. 2016
Yaron Singer, Harvard University
https://simons.berkeley.edu/talks/yaron-singer-09-20-2016
Optimization and Decision-Making Under Uncertainty
Apr. 2016
Dec. 2 – Dec. 6, 2013
Playlist: 23 videos