Talks
Universality of Neural Networks
Tuesday, December 7th, 2021, 11:35 am–11:50 am
Speaker: Dan Mikulincer (MIT)
Location: Calvin Lab Auditorium
It is well known that, at random initialization, neural networks can be well approximated by Gaussian processes as their width approaches infinity. We quantify this phenomenon by providing non-asymptotic convergence rates in the space of continuous functions. In the process, we study the Central Limit Theorem in high and infinite dimensions, as well as anti-concentration properties of polynomials in random variables.
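A minimal numerical sketch of the phenomenon the abstract describes, in the simplest setting of a one-hidden-layer ReLU network with iid standard Gaussian weights and 1/√width output scaling (this setup and all names below are illustrative assumptions, not taken from the talk): across many random initializations, the covariance of the network's outputs at a few fixed inputs should approach the Gaussian-process kernel K(x, x′) = E[relu(⟨w, x⟩) relu(⟨w, x′⟩)] with w ~ N(0, I).

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def random_network_outputs(X, width, n_nets, rng):
    """Outputs of n_nets one-hidden-layer ReLU networks at random
    initialization, evaluated at the rows of X.

    Each network is f(x) = b . relu(W x) / sqrt(width) with all
    weights iid N(0, 1) -- the standard scaling under which the
    infinite-width limit is a Gaussian process."""
    W = rng.standard_normal((n_nets, width, X.shape[1]))
    b = rng.standard_normal((n_nets, width))
    H = relu(np.einsum("nkd,md->nkm", W, X))   # hidden activations, (n_nets, width, n_inputs)
    return np.einsum("nk,nkm->nm", b, H) / np.sqrt(width)

# A few fixed unit-norm inputs.
X = rng.standard_normal((3, 5))
X /= np.linalg.norm(X, axis=1, keepdims=True)

# Empirical covariance of network outputs across many random draws.
outs = random_network_outputs(X, width=1000, n_nets=4000, rng=rng)
print(np.round(np.cov(outs, rowvar=False), 3))

# Limiting kernel K(x, x') = E[relu(w.x) relu(w.x')], w ~ N(0, I),
# estimated by Monte Carlo; the two 3x3 matrices should nearly agree.
w = rng.standard_normal((200_000, X.shape[1]))
A = relu(w @ X.T)
print(np.round(A.T @ A / len(w), 3))
```

The talk's contribution is quantitative: it bounds how far the finite-width output distribution is from this Gaussian limit, rather than only asserting the agreement that the sketch above exhibits empirically.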