
Theoretically Speaking — Adversarial Examples in Deep Learning

Friday, July 22nd, 2022, 10:00 am–12:00 pm


Speaker: Anil Ananthaswamy (freelance journalist, moderator)

Panelists: Sébastien Bubeck (Microsoft Research), Melanie Mitchell (Santa Fe Institute), and Laurens van der Maaten (Meta AI Research)

Location: Online (Zoom webinar)

Optical illusions fool our brains into creating false perceptions. Something similar can be done with deep learning systems. Attackers can intentionally design inputs, known as adversarial examples, that cause deep neural networks to make mistakes. The mistakes might be harmless (classifying an image of a panda as a gibbon, for example) or potentially dangerous (a neural network failing to recognize a stop sign because of strategically placed stickers). This panel will discuss adversarial examples: how they can be designed, whether and how machine learning models can guard against them, and how robustness against adversarial attacks relates to the size of deep neural networks, in both theory and practice.
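One classic recipe for constructing adversarial examples is the fast gradient sign method (FGSM) of Goodfellow et al. (2015): nudge every input pixel slightly in the direction that increases the model's loss. The sketch below is illustrative only and is not drawn from the panel itself; the classifier, image, and label it uses are hypothetical placeholders.

```python
# Minimal sketch of the fast gradient sign method (FGSM) for crafting an
# adversarial example against an image classifier. Assumes a PyTorch model
# that maps a batch of images to class logits.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.01):
    """Perturb `image` by +/- epsilon per pixel to increase the loss on `label`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction of the loss gradient's sign.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Hypothetical usage: a tiny, visually imperceptible perturbation can flip
# the predicted class (e.g., from "panda" to "gibbon").
# adv = fgsm_attack(classifier, panda_image, panda_label, epsilon=0.007)
```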

Anil Ananthaswamy (Moderator) was the Simons Institute science communicator in residence for summer and fall 2021. He is a former staff writer and deputy news editor for New Scientist magazine, and a former MIT Knight Science Journalism fellow. Ananthaswamy contributes regularly to Quanta Magazine, Scientific American, and other publications. He is the author of three popular-science books, The Edge of Physics, The Man Who Wasn’t There, and Through Two Doors at Once, and is currently writing a book on the mathematics of machine learning.

Sébastien Bubeck is a senior principal research manager for Machine Learning Foundations at Microsoft Research, Redmond. In recent years, Bubeck has worked extensively on adversarial examples in machine learning, including studies of the computational complexity of adversarial examples and the development of a “law of robustness,” a theory relating the size of machine learning models to their ability to withstand adversarial attacks.
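For readers curious about the “law of robustness,” an informal, schematic version of the Bubeck–Sellke result (omitting the theorem's precise distributional and noise assumptions) is sketched below.

```latex
% Informal statement of the law of robustness (Bubeck & Sellke, 2021):
% any model f with p parameters that interpolates n noisy, generic
% d-dimensional data points must be fairly irregular, in the sense that
% its Lipschitz constant is at least on the order of sqrt(n d / p).
% Equivalently, an O(1) Lipschitz constant (a proxy for robustness to
% small adversarial perturbations) requires roughly p >~ n d parameters.
\operatorname{Lip}(f) \;\gtrsim\; \sqrt{\frac{n\,d}{p}}
\qquad\Longleftrightarrow\qquad
p \;\gtrsim\; \frac{n\,d}{\operatorname{Lip}(f)^{2}}
```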

Melanie Mitchell is the Davis Professor at the Santa Fe Institute. Her current research focuses on conceptual abstraction, analogy-making, and visual recognition in artificial intelligence systems. Mitchell is the author or editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems. Her latest book is Artificial Intelligence: A Guide for Thinking Humans.

Laurens van der Maaten is the research director of Meta AI. His research focuses on machine learning and computer vision. Currently, van der Maaten is working on embedding models, large-scale weakly supervised learning, visual reasoning, and cost-sensitive learning. He also thinks about the practical relevance of current work on adversarial attacks and about the robustness of large vision models in practice.

Theoretically Speaking is a lecture series highlighting exciting advances in theoretical computer science for a broad general audience. Events are free and open to the public. No special background is assumed. All speakers will be presenting remotely. The lecture will be viewable via Zoom webinar. Registration is required. Please use the Zoom Q&A feature to ask questions. This lecture will be viewable thereafter on this page and on our YouTube channel.

If you require accommodation for communication, please contact our access coordinator at simonsevents [at] berkeley.edu with as much advance notice as possible.