Tuesday, May 28th, 2019
We review tools useful for the analysis of the generalization performance of deep neural networks on classification and regression problems. We review uniform convergence properties, which show how this performance depends on notions of complexity, such as Rademacher averages, covering numbers, and combinatorial dimensions, and how these quantities can be bounded for neural networks. We also review the analysis of the performance of nonparametric estimation methods such as nearest-neighbor rules and kernel smoothing. Deep networks raise some novel challenges, since they have been observed to perform well even with a perfect fit to the training data. Finally, we review some recent efforts to understand the performance of interpolating prediction rules, and highlight the questions raised for deep learning.
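For reference, the uniform convergence bounds built from Rademacher averages typically take the following standard form (stated here for a loss bounded in [0, 1]; constants vary across texts):

```latex
% With probability at least 1 - \delta over an i.i.d. sample (X_1, Y_1), \dots, (X_n, Y_n):
\forall f \in \mathcal{F}: \quad
  \mathbb{E}\,\ell(f(X), Y)
  \;\le\; \frac{1}{n}\sum_{i=1}^{n} \ell(f(X_i), Y_i)
  \;+\; 2\,\mathfrak{R}_n(\ell \circ \mathcal{F})
  \;+\; \sqrt{\frac{\log(1/\delta)}{2n}},
% where the Rademacher average of a function class \mathcal{G} is
\mathfrak{R}_n(\mathcal{G})
  \;=\; \mathbb{E}\,\sup_{g \in \mathcal{G}} \frac{1}{n}\sum_{i=1}^{n} \varepsilon_i\, g(X_i, Y_i),
\qquad \varepsilon_i \ \text{i.i.d. uniform on } \{\pm 1\}.
```

Bounding the Rademacher average for neural networks, for instance via covering numbers or combinatorial dimensions, is what connects the complexity notions above to generalization guarantees.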
Wednesday, May 29th, 2019
We survey recent developments in the optimization and learning of deep neural networks. The three focus topics are: 1) geometric results for the optimization of neural networks, 2) overparametrized neural networks in the kernel regime (the Neural Tangent Kernel), with its implications and limitations, and 3) potential strategies for proving that SGD improves on kernel predictors.
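A minimal sketch of the kernel-regime view, assuming a one-hidden-layer ReLU network (the architecture, NTK-style scaling, and all function names below are illustrative choices, not code from the talk): the empirical Neural Tangent Kernel is the Gram matrix of parameter gradients, and training the linearized wide network reduces to kernel regression with it.

```python
# Empirical Neural Tangent Kernel (NTK) for a one-hidden-layer ReLU network
# f(x) = w2 . relu(W1 x / sqrt(d_in)) / sqrt(width).
import numpy as np

rng = np.random.default_rng(0)

def init_params(d_in, width):
    # Standard-normal weights; the 1/sqrt(fan-in) scaling is applied in the forward pass.
    return {"W1": rng.standard_normal((width, d_in)),
            "w2": rng.standard_normal(width)}

def grad_f(params, x):
    """Gradient of the scalar output f(x) with respect to all parameters."""
    W1, w2 = params["W1"], params["w2"]
    width, d_in = W1.shape
    pre = W1 @ x / np.sqrt(d_in)                 # hidden pre-activations
    act = np.maximum(pre, 0.0)                   # ReLU
    relu_grad = (pre > 0).astype(float)
    d_W1 = np.outer(w2 * relu_grad, x) / np.sqrt(d_in * width)
    d_w2 = act / np.sqrt(width)
    return np.concatenate([d_W1.ravel(), d_w2])

def empirical_ntk(params, X):
    """Gram matrix K[i, j] = <grad f(x_i), grad f(x_j)> at the current parameters."""
    G = np.stack([grad_f(params, x) for x in X])
    return G @ G.T

# Training the linearized (kernel-regime) model reduces to kernel regression
# with this Gram matrix (ignoring the network's output at initialization):
X = rng.standard_normal((20, 5))
y = np.sin(3.0 * X[:, 0])
params = init_params(d_in=5, width=4096)         # very wide: K barely moves during training
K = empirical_ntk(params, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)   # small ridge for numerical stability
```

In the infinite-width limit this kernel is fixed at initialization, which is precisely why the kernel regime is both powerful (training is a convex problem) and limited (the network cannot learn features beyond the fixed kernel).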
Modern machine learning models (i.e., neural networks) are incredibly sensitive to small perturbations of their input. This creates potentially critical security breaches in many deep learning applications (object detection, ranking systems, etc.). In this talk I will cover some of what we know and what we don't know about this phenomenon of "adversarial examples". I will focus on three topics: (i) generalization (do you need more data than for standard ML?), (ii) inevitability of adversarial examples (is this problem unsolvable?), and (iii) certification techniques (how do you provably, and efficiently, guarantee robustness?).
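This sensitivity is easy to exhibit with the fast gradient sign method (FGSM) of Goodfellow et al. (2015); the sketch below applies it to a hypothetical linear classifier with logistic loss (the weights and data are random placeholders):

```python
# Fast gradient sign method (FGSM) against a linear classifier with logistic loss.
import numpy as np

def fgsm(x, y, w, b, eps):
    """Perturb x by eps in the direction of the sign of the input-gradient of the loss."""
    margin = y * (w @ x + b)                     # loss = log(1 + exp(-margin))
    grad_x = -y * w / (1.0 + np.exp(margin))     # d loss / d x
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w, b = rng.standard_normal(100), 0.0
x, y = rng.standard_normal(100), 1.0
x_adv = fgsm(x, y, w, b, eps=0.1)
# For a linear model this per-coordinate perturbation lowers the margin by
# exactly eps * ||w||_1, which is typically enough to flip the decision:
print("margin before:", y * (w @ x + b))
print("margin after: ", y * (w @ x_adv + b))
```

The same attack applied to a deep network (replacing the closed-form gradient with backpropagation to the input) is the basic threat model that the generalization, inevitability, and certification questions above are posed against.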
Thursday, May 30th, 2019
No abstract available.
No abstract available.
Friday, May 31st, 2019
No abstract available.