Talks
Fall 2017

Natasha 2: Faster Non-convex Optimization Than SGD

Friday, October 6th, 2017, 11:30 am – 12:15 pm


We design a stochastic algorithm to train any smooth neural network to ε-approximate local minima, using O(ε^{-3.25}) backpropagations. The best previously known result was essentially O(ε^{-4}), achieved by SGD.
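
For reference, ε-approximate local minima are commonly formalized as approximate second-order stationary points; the exact parameterization below is the standard one from the literature and is an assumption here, since the abstract does not spell it out:

```latex
% Common formalization (assumed, not taken from the abstract): for a smooth
% nonconvex f whose Hessian is rho-Lipschitz, a point x is an
% eps-approximate local minimum if
\[
  \|\nabla f(x)\| \le \varepsilon
  \qquad \text{and} \qquad
  \nabla^2 f(x) \succeq -\sqrt{\rho\,\varepsilon}\, I .
\]
```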

More broadly, the algorithm finds ε-approximate local minima of any smooth nonconvex function at a rate of O(ε^{-3.25}), using only oracle access to stochastic gradients and Hessian-vector products.
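
To make the oracle model concrete, here is a minimal sketch (not the speaker's implementation) of the two oracles the result assumes, obtained via automatic differentiation in JAX; the toy objective, names, and shapes are illustrative assumptions only:

```python
import jax
import jax.numpy as jnp

def loss(w, batch):
    # Hypothetical smooth nonconvex mini-batch objective (a toy one-layer model).
    x, y = batch
    pred = jnp.tanh(x @ w)
    return jnp.mean((pred - y) ** 2)

def stochastic_gradient(w, batch):
    # Oracle 1: gradient of the loss on a random mini-batch.
    return jax.grad(loss)(w, batch)

def hessian_vector_product(w, batch, v):
    # Oracle 2: Hessian-vector product, computed as the directional derivative
    # of the gradient in direction v; no explicit Hessian is ever formed.
    return jax.jvp(lambda u: jax.grad(loss)(u, batch), (w,), (v,))[1]

# Usage on random data (shapes are illustrative).
key = jax.random.PRNGKey(0)
w = jax.random.normal(key, (5,))
batch = (jax.random.normal(key, (8, 5)), jnp.ones(8))
g = stochastic_gradient(w, batch)
hv = hessian_vector_product(w, batch, g)
```

Both oracles cost only a constant number of backpropagations per call, which is why complexity in this setting is naturally measured in backpropagations.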