Talks
Summer 2019

Adversarial Examples

Wednesday, May 29th, 2019, 3:40 pm – 5:00 pm


Speaker: Sébastien Bubeck (Microsoft Research)

Modern machine learning models (e.g., neural networks) are incredibly sensitive to small perturbations of their input. This creates a potentially critical security vulnerability in many deep learning applications (object detection, ranking systems, etc.). In this talk I will cover some of what we know and what we don't know about this phenomenon of "adversarial examples". I will focus on three topics: (i) generalization (do you need more data than for standard ML?), (ii) inevitability of adversarial examples (is this problem unsolvable?), and (iii) certification techniques (how do you provably, and efficiently, guarantee robustness?).
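
For readers unfamiliar with the phenomenon, the following is a minimal, illustrative Python sketch (not from the talk) of the fast gradient sign method, one standard way to construct such small perturbations; the toy model, input, and epsilon value here are placeholder assumptions.

# Illustrative sketch of the fast gradient sign method (FGSM).
# The model, input, and epsilon below are toy placeholders.
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    # Returns x plus a small sign-of-gradient perturbation that tends to
    # increase the loss, i.e. an adversarial example within an L-infinity
    # ball of radius epsilon around x.
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0, 1).detach()

# Toy usage: a tiny linear "classifier" on a random input in [0, 1].
model = nn.Linear(784, 10)
x = torch.rand(1, 784)
y = torch.tensor([3])                      # arbitrary label
x_adv = fgsm_perturb(model, x, y)
print((x_adv - x.detach()).abs().max())    # perturbation size is at most epsilon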