
No-Regret Learning in Extensive-Form Games

Thursday, May 5th, 2022, 11:30 am–12:15 pm


Speaker: Amy Greenwald (Brown University)

Location: Calvin Lab Auditorium

The convergence of \Phi-regret-minimization algorithms in self-play to \Phi-equilibria is well understood in normal-form games (NFGs), where \Phi is the set of deviation strategies. This talk investigates the analogous relationship in extensive-form games (EFGs). While the primary choices for \Phi in NFGs are internal and external regret, the space of possible deviations in EFGs is much richer. We restrict attention to a class of deviations known as behavioral deviations, inspired by von Stengel and Forges' deviation player, which they introduced when defining extensive-form correlated equilibria (EFCE). We then propose extensive-form regret minimization (EFR), a regret-minimizing learning algorithm whose complexity scales with the complexity of \Phi, and which converges in self-play to EFCE when \Phi is the set of behavioral deviations. Von Stengel and Forges, Zinkevich et al., and Celli et al. all weaken the deviation player in various ways, and then derive corresponding efficient equilibrium-finding algorithms. These weakenings (and others) can be seamlessly encoded into EFR at runtime, by simply defining an appropriate \Phi. The result is a class of efficient \Phi-equilibrium finding algorithms for EFGs.
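To make the well-understood NFG case concrete, here is a minimal, illustrative sketch (not the EFR algorithm from the talk) of external-regret matching in a two-player normal-form game played in self-play; when \Phi is the set of external deviations, the empirical distribution of joint play converges to the corresponding \Phi-equilibrium (a coarse correlated equilibrium). The function names and payoff encoding are assumptions made for this example.

```python
import numpy as np

def regret_matching_policy(cum_regret):
    """Map cumulative regrets to a mixed strategy (positive part, normalized)."""
    pos = np.maximum(cum_regret, 0.0)
    total = pos.sum()
    if total > 0:
        return pos / total
    return np.ones_like(pos) / len(pos)  # uniform when no positive regret

def self_play(payoffs, iterations=10000, seed=0):
    """Two-player NFG self-play with external-regret matching.

    payoffs: array of shape (A, B, 2); payoffs[a, b, i] is player i's payoff
    when player 0 plays a and player 1 plays b.
    Returns the empirical joint distribution of play, which approaches the
    set of coarse correlated equilibria as average regret vanishes.
    """
    rng = np.random.default_rng(seed)
    n_actions = payoffs.shape[:2]
    cum_regret = [np.zeros(n) for n in n_actions]
    empirical = np.zeros(n_actions)
    for _ in range(iterations):
        strategies = [regret_matching_policy(r) for r in cum_regret]
        actions = [rng.choice(len(s), p=s) for s in strategies]
        empirical[actions[0], actions[1]] += 1
        # External regret: compare the realized payoff to each fixed action,
        # holding the opponent's action fixed.
        u0 = payoffs[:, actions[1], 0]   # player 0's counterfactual payoffs
        u1 = payoffs[actions[0], :, 1]   # player 1's counterfactual payoffs
        cum_regret[0] += u0 - u0[actions[0]]
        cum_regret[1] += u1 - u1[actions[1]]
    return empirical / iterations

# Example: matching pennies; empirical joint play approaches uniform over the four outcomes.
mp = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        mp[a, b, 0] = 1.0 if a == b else -1.0
        mp[a, b, 1] = -mp[a, b, 0]
print(self_play(mp))
```

Richer choices of \Phi (internal or swap deviations in NFGs, behavioral deviations in EFGs) change only the regret comparison and the fixed-point step used to derive the next strategy, which is the dimension EFR generalizes.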

Attachment: Slides (PDF, 954.2 KB)