Statistical Property Testing and Estimation Beyond the i.i.d. Setting
Calvin Lab Auditorium
Over the past fifteen years there has been much work tackling various aspects of the basic question: given independent draws from some fixed distribution (or a pair of distributions), and a statistical property of interest (e.g. entropy, support size, distance between distributions, distance to a uniform distribution, etc.), how many draws does one need to accurately estimate the property value? The general punchline that has emerged from this body of work is that for most of these properties, one can accurately estimate the property value, and decide whether the distribution possesses a property of interest, using far fewer draws from the distribution than would be necessary to actually learn the distribution. In this talk, I will briefly discuss some new results on these questions in the setting where the samples are not drawn independently from a fixed distribution, but instead are drawn from a hidden Markov model.
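To make the setting concrete, the sketch below draws correlated observations from a small hidden Markov model and applies a naive plug-in entropy estimate; the model parameters, the two-state chain, and the plug-in estimator are all illustrative assumptions for this sketch, not the estimators or results discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy hidden Markov model: two hidden states, each with its own emission
# distribution over a three-symbol observation alphabet.  All parameter
# values here are arbitrary illustrative choices.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])          # hidden-state transition probabilities
E = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.3, 0.6]])     # per-state emission probabilities

def sample_hmm(n):
    """Draw n observations from the HMM; consecutive draws are correlated,
    unlike the i.i.d. setting."""
    obs = np.empty(n, dtype=int)
    state = rng.integers(T.shape[0])
    for i in range(n):
        obs[i] = rng.choice(E.shape[1], p=E[state])
        state = rng.choice(T.shape[0], p=T[state])
    return obs

def plug_in_entropy(samples, alphabet_size):
    """Naive plug-in (empirical) entropy estimate, in nats."""
    counts = np.bincount(samples, minlength=alphabet_size)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# For comparison: the entropy of the stationary observation distribution
# (the single-symbol marginal, not the entropy rate of the process).
evals, evecs = np.linalg.eig(T.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
p_obs = pi @ E
true_entropy = -np.sum(p_obs * np.log(p_obs))

samples = sample_hmm(100_000)
print("plug-in estimate from HMM draws:", plug_in_entropy(samples, E.shape[1]))
print("entropy of stationary observation distribution:", true_entropy)
```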