Fellows Talk - Jalaj Bhandari
Zoom link will be sent out to program participants.
Speaker: Jalaj Bhandari (Columbia University)
Title: A Finite Time Analysis of Temporal Difference Learning with Linear Function Approximation
Abstract: Temporal difference learning (TD) is a simple iterative algorithm used to estimate the value function corresponding to a given policy in a Markov decision process. Although TD is one of the most widely used algorithms in reinforcement learning, its theoretical analysis has proved challenging. We give a simple and explicit finite time analysis of TD learning with linear function approximation. Except for a few key insights, our analysis mirrors standard techniques used for analyzing stochastic gradient descent algorithms, and therefore inherits the simplicity and elegance of that literature. Our analysis seamlessly generalizes to the study of TD learning with eligibility traces, known as TD(λ), and to Q-learning applied in high-dimensional optimal stopping problems.
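For readers unfamiliar with the algorithm, below is a minimal sketch of TD(0) with linear function approximation, the method whose finite time behavior the talk analyzes. The toy random-walk chain, the random feature map phi, the step size alpha, and the discount factor gamma are illustrative assumptions, not details from the talk or the paper.

```python
import numpy as np

# Sketch of TD(0) with linear value-function approximation on a toy chain.
# All environment and hyperparameter choices here are illustrative.

rng = np.random.default_rng(0)

n_states = 20        # states of a simple random-walk Markov chain
d = 5                # feature dimension (d < n_states, so V is approximated)
gamma = 0.95         # discount factor
alpha = 0.05         # constant step size
phi = rng.normal(size=(n_states, d)) / np.sqrt(d)  # fixed random features

def step(s):
    """One transition under the fixed policy: move left or right
    uniformly at random; reward +1 upon reaching the right end."""
    s_next = (s + rng.choice([-1, 1])) % n_states
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return reward, s_next

theta = np.zeros(d)  # linear value estimate: V_theta(s) = phi[s] @ theta
s = 0
for t in range(50_000):
    r, s_next = step(s)
    # TD error: one-step bootstrapped target minus current estimate
    delta = r + gamma * phi[s_next] @ theta - phi[s] @ theta
    # Semi-gradient update; its resemblance to a stochastic gradient step
    # is the analogy the analysis described in the abstract exploits.
    theta += alpha * delta * phi[s]
    s = s_next

print("Estimated values of the first 5 states:", phi[:5] @ theta)
```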