Talks
Summer 2019

On the Hardness of Reinforcement Learning with Value-Function Approximation

Tuesday, August 6th, 2019, 9:30 am – 10:15 am


Speaker: Nan Jiang (University of Illinois Urbana-Champaign)

Value-function approximation methods that operate in batch mode are of foundational importance to reinforcement learning (RL). Finite-sample guarantees for these methods, which provide the theoretical backbone for empirical ("deep") RL today, crucially rely on strong representation assumptions, e.g., that the function class is closed under the Bellman update. Given that such assumptions are much stronger and less desirable than those needed for supervised learning (e.g., realizability), it is important to confirm the hardness of learning in their absence. Such a hardness result would also be a crucial piece of a bigger picture on the tractability of various RL settings. Unfortunately, while algorithm-specific lower bounds have existed for decades, the information-theoretic hardness remains a mystery. In this talk I will introduce the mathematical setup for studying value-function approximation, present our findings from the investigation of the hardness conjecture, and discuss connections to related results and open problems, along with their implications. Part of the talk is based on joint work with my student Jinglin Chen that appeared at ICML 2019.
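For reference, the two representation conditions contrasted in the abstract are commonly written as follows; the notation here is a standard sketch and is not taken from the talk itself, with $\mathcal{F}$ denoting the value-function class and $\mathcal{T}$ the Bellman optimality operator:

$$\text{Realizability:}\qquad Q^* \in \mathcal{F}$$
$$\text{Bellman completeness (closure under the Bellman update):}\qquad \mathcal{T} f \in \mathcal{F} \ \text{ for all } f \in \mathcal{F}.$$

Realizability only asks that the optimal value function lie somewhere in the class, whereas completeness requires the class to be closed under applying the Bellman operator to every member, which is the considerably stronger assumption the talk asks whether one can do without.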