
The Limits Of Regularized Learning In Games

Monday, March 28th, 2022, 9:35 am – 10:30 am


Speaker: Panayotis Mertikopoulos (French National Center for Scientific Research (CNRS))

Location: Calvin Lab Auditorium

This talk focuses on the following question: does learning from empirical observations in a game-theoretic setting converge to a Nash equilibrium? And, if so, at what rate? A well-known impossibility result in the field rules out a "universally positive" answer, i.e., the existence of a dynamical process which, based on player-specific information alone, converges to Nash equilibrium in all games. In view of this, we will instead examine the equilibrium convergence properties of a class of widely used no-regret learning processes that includes the exponential weights algorithm, the "follow the regularized leader" family of methods, their optimistic variants, extra-gradient schemes, and many others. In this general context, we establish a comprehensive equivalence between the stability of a Nash equilibrium and its support: a Nash equilibrium is locally stable and attracting if and only if it is strict (i.e., each player's equilibrium strategy is their unique best response). We will also discuss the robustness of this equivalence to different feedback models, from full information to bandit, payoff-based feedback, as well as the methods' rate of convergence in terms of the underlying regularizer and the type of feedback available to the players.
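
For readers unfamiliar with the dynamics in question, the following is a minimal, illustrative sketch (not taken from the talk) of the exponential weights algorithm, an instance of "follow the regularized leader" with entropic regularization, run on a small two-player game under full-information feedback. The payoff matrices, learning rate, and horizon are arbitrary placeholders chosen for the example.

```python
import numpy as np

# Minimal illustrative sketch (not the speaker's code): exponential weights,
# i.e., "follow the regularized leader" with entropic regularization, in a
# two-player finite game with full-information feedback. The payoff matrices,
# learning rate, and horizon below are arbitrary placeholders.

A = np.array([[3.0, 0.0], [5.0, 1.0]])   # row player's payoffs (example values)
B = np.array([[3.0, 5.0], [0.0, 1.0]])   # column player's payoffs (example values)

eta = 0.1             # learning rate (step size)
x = np.ones(2) / 2    # row player's mixed strategy
y = np.ones(2) / 2    # column player's mixed strategy

for t in range(5000):
    # Each player observes the payoff of each pure strategy against the
    # opponent's current mixed strategy (full-information feedback).
    u_x = A @ y
    u_y = B.T @ x
    # Exponential weights update: exponentiate the (incrementally accumulated)
    # payoffs and renormalize back onto the simplex.
    x *= np.exp(eta * u_x)
    x /= x.sum()
    y *= np.exp(eta * u_y)
    y /= y.sum()

print("row player's strategy:", x)
print("column player's strategy:", y)
```

In this particular example each player's second strategy is strictly dominant, so the game's unique Nash equilibrium is strict, and the iterates concentrate on it, consistent with the stability-support equivalence described above; at a mixed (non-strict) equilibrium, no such convergence is guaranteed.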