Smooth Nonconvex Min-Max Optimization
Meisam Razaviyayn (University of Southern California)
Calvin Lab Auditorium
Recent applications arising in machine learning have sparked significant interest in solving min-max optimization problems. This problem has been extensively studied in the convex-concave regime, where a globally optimal solution can be computed efficiently. In the nonconvex regime, on the other hand, most problems cannot be solved to any reasonable notion of stationarity. In this talk, we present different classes of smooth nonconvex min-max problems that can be solved efficiently up to first-order stationarity of the Moreau envelope. In particular, we propose efficient algorithms for finding (first-order) stationary solutions to classes of nonconvex min-max problems in which the inner maximization problem is concave, or in which the diameter of the constraint set for the inner maximization problem is "small". Our results are the first of their kind that find stationary solutions to nonconvex-nonconcave min-max problems without assuming any restriction on the objective function (other than standard smoothness assumptions). We also discuss the validity of our assumptions in various applications and evaluate the performance of our algorithms on applications including training adversarially robust neural networks, fair machine learning, data imputation, and training generative adversarial networks.
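To make the nonconvex-concave setting concrete, the sketch below runs a multi-step gradient descent-ascent loop on a toy objective. The objective, step sizes, and iteration counts are illustrative choices, not taken from the talk; it only demonstrates the general pattern of approximately solving the concave inner maximization before each descent step.

```python
import math

# Illustrative nonconvex-concave min-max problem (an assumed example,
# not from the talk):
#   min_x max_y f(x, y) = (x^2 - 1)^2 + x*y - y^2/2
# f is nonconvex in x and strongly concave in y, so the inner
# maximizer is well defined: y*(x) = x.

def grad_x(x, y):
    return 4.0 * x * (x * x - 1.0) + y

def grad_y(x, y):
    return x - y

def multi_step_gda(x=2.0, y=0.0, eta_x=0.01, eta_y=0.1,
                   inner_steps=10, outer_steps=2000):
    """One descent step on x per outer iteration, preceded by
    several ascent steps that drive y toward the inner maximizer."""
    for _ in range(outer_steps):
        for _ in range(inner_steps):
            y += eta_y * grad_y(x, y)   # ascent: y -> y*(x) = x
        x -= eta_x * grad_x(x, y)       # descent on the approximate max
    return x, y

x, y = multi_step_gda()
# Stationary points of g(x) = max_y f(x, y) satisfy x * (4x^2 - 3) = 0;
# the minimizers of g are x = +-sqrt(3)/2, approximately +-0.866.
print(x, y)
```

With the positive starting point above, the iterates approach the stationary point x = sqrt(3)/2 with y tracking x, illustrating convergence to a first-order stationary solution of the outer minimization.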