
Computational/Statistical Gaps for Learning Neural Networks

Tuesday, December 1st, 2020, 11:00 am – 12:00 pm


Speaker: Adam Klivans (University of Texas, Austin)

Location: Zoom link will be sent out to program participants.

It has been known for decades that a polynomial-size training sample suffices for learning neural networks. Most theoretical results, however, indicate that these learning tasks are computationally intractable. Where exactly is the dividing line? In this talk we consider arguably the best-studied scenario, namely when the marginal distribution on inputs is Gaussian, and show unconditionally that gradient descent cannot learn even one-layer neural networks. We then point to a potential way forward and sketch the first fixed-parameter tractable algorithm for learning deep ReLU networks. Its running time is polynomial in the ambient dimension and exponential only in the number of network parameters. This is the first nontrivial positive result for networks of depth more than two.
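To make the setting concrete, below is a minimal illustrative sketch (not the speaker's algorithm or lower-bound construction): a one-hidden-layer ReLU target network whose inputs follow a standard Gaussian marginal, with a same-architecture learner trained by plain gradient descent on the squared loss. The dimensions, sample size, learning rate, and iteration count are arbitrary assumptions chosen only for illustration.

```python
# Illustrative sketch of the learning setup from the abstract, not the
# algorithm discussed in the talk. All sizes and hyperparameters are
# arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

d, k, n = 20, 5, 10_000  # ambient dimension, hidden units, training samples

# Unknown one-hidden-layer ReLU target: f(x) = sum_j a_j * relu(w_j . x)
W_true = rng.standard_normal((k, d))
a_true = rng.standard_normal(k)

def forward(W, a, X):
    return np.maximum(X @ W.T, 0.0) @ a

# Gaussian marginal on inputs; labels come from the target network
X = rng.standard_normal((n, d))
y = forward(W_true, a_true, X)

# Gradient descent on the squared loss of a learner with the same architecture
W = rng.standard_normal((k, d)) * 0.1
a = rng.standard_normal(k) * 0.1
lr = 1e-3
for step in range(2000):
    H = np.maximum(X @ W.T, 0.0)                      # hidden activations, (n, k)
    err = H @ a - y                                   # residuals, (n,)
    grad_a = H.T @ err / n                            # d loss / d a_j
    grad_W = ((err[:, None] * a) * (H > 0)).T @ X / n  # d loss / d w_j
    a -= lr * grad_a
    W -= lr * grad_W

print("final mean squared error:", np.mean((forward(W, a, X) - y) ** 2))
```

The negative result in the talk concerns exactly this kind of learner: under Gaussian marginals, gradient descent of this form provably fails to learn such targets, which is why the positive result instead relies on a different, fixed-parameter tractable algorithm.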