Talks
Fall 2018

Near-Optimal Time and Sample Complexities for Reinforcement Learning with a Generative Model

Tuesday, December 11th, 2018, 4:15 pm – 4:45 pm


Speaker: Lin Yang (Princeton University)

We consider the reinforcement-learning problem of computing an eps-optimal policy of a discounted Markov Decision Process (DMDP) when its transition function can be accessed only through samples from a generative model. Given such a DMDP with state space S, action space A, discount factor gamma in (0,1), and rewards in the range [0,1], we provide an algorithm that computes an eps-optimal policy with high probability, where both the time spent and the number of samples taken are bounded by Õ(|S||A|(1-gamma)^{-3} eps^{-2}), with Õ hiding logarithmic factors. This improves upon the previous best-known bounds by a factor of (1-gamma)^{-1} and matches the information-theoretic lower bound up to logarithmic factors.
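To illustrate the generative-model access pattern assumed in the abstract, the following is a minimal Python sketch of the naive plug-in approach: draw a fixed number of samples per state-action pair, build an empirical transition model, and run value iteration on it. All names and parameters here are illustrative assumptions; this simple estimator does not attain the Õ(|S||A|(1-gamma)^{-3} eps^{-2}) bound above, which relies on the variance-reduction techniques described in the talk.

import numpy as np

def solve_dmdp_from_samples(sample, S, A, gamma, n_samples, rewards, tol=1e-6):
    """Plug-in (model-based) baseline under generative-model access.

    `sample(s, a)` is assumed to return one next state s' ~ P(. | s, a).
    `rewards` is an (S, A) array with entries in [0, 1].
    """
    # Estimate the transition kernel from n_samples calls per (s, a).
    P_hat = np.zeros((S, A, S))
    for s in range(S):
        for a in range(A):
            for _ in range(n_samples):
                P_hat[s, a, sample(s, a)] += 1.0
    P_hat /= n_samples

    # Value iteration on the empirical MDP until the update is small.
    V = np.zeros(S)
    while True:
        Q = rewards + gamma * P_hat @ V          # shape (S, A)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return Q.argmax(axis=1), V                   # greedy policy and its value estimate

# Tiny usage sketch on a random MDP (purely illustrative).
rng = np.random.default_rng(0)
S, A, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))       # true transition kernel
R = rng.uniform(0.0, 1.0, size=(S, A))           # rewards in [0, 1]
sample = lambda s, a: rng.choice(S, p=P[s, a])   # generative-model oracle
policy, V = solve_dmdp_from_samples(sample, S, A, gamma, n_samples=200, rewards=R)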