Talks
Fall 2022

Causal Matrix Completion: Applications to Offline Causal Reinforcement Learning

Monday, September 12th, 2022, 2:45 pm – 3:30 pm


Speaker: 
Location: Calvin Lab Auditorium

Matrix completion is the study of recovering an underlying matrix from a sparse subset of noisy observations. Traditionally, it is assumed entries of the matrix are “missing completely at random” (MCAR), i.e., each entry is revealed at random, independent of everything else, with uniform probability. This is likely unrealistic due to the presence of “latent confounders”, i.e., unobserved factors that determine both the entries of the underlying matrix and the missingness pattern in the observed matrix. For example, in the context of movie recommender systems—a canonical application for matrix completion—a user who vehemently dislikes horror films is unlikely to ever watch horror films. In general, these confounders yield “missing not at random” (MNAR) data, which can severely impact any inference procedure. We develop a formal causal model for matrix completion through the language of potential outcomes, and provide novel identification arguments for a variety of causal estimands of interest. We combine a nearest neighbors approach for matrix completion—popularly known as collaborative filtering—with the synthetic controls approach for panel data to design a simple two-step algorithm, which we call “synthetic nearest neighbors” (SNN), to estimate these causal estimands. We prove entry-wise finite-sample consistency and asymptotic normality of the SNN estimator for matrix completion with MNAR data. Lastly, we show how algorithms for reinforcement learning, where the data is collected offline and there is potentially unobserved confounding in how actions are picked, can be effectively designed through the lens of causal tensor completion, an extension of matrix completion to higher dimensions.
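To make the two-step structure concrete, here is a minimal, illustrative sketch of a synthetic-nearest-neighbors-style imputation in NumPy. It is not the speaker's exact estimator: the function name `snn_impute`, the neighbor-distance choice, and the least-squares weighting step are simplifying assumptions made for illustration. Step 1 selects donor rows most similar to the target row on their shared observed columns (the nearest-neighbors / collaborative-filtering idea); step 2 fits synthetic-control-style weights on those shared columns and combines the donors' values in the target column.

```python
import numpy as np

def snn_impute(X, mask, i, j, k=5):
    """Impute the missing entry (i, j) of X, where mask[r, c] is True
    iff entry (r, c) is observed. Illustrative sketch only."""
    # Step 1 (nearest neighbors): donor rows must observe column j;
    # rank them by mean squared distance to row i on shared observed columns.
    donors = [r for r in range(X.shape[0]) if r != i and mask[r, j]]

    def dist(r):
        shared = mask[i] & mask[r]
        shared[j] = False  # never compare on the target column
        if shared.sum() == 0:
            return np.inf
        return np.mean((X[i, shared] - X[r, shared]) ** 2)

    donors = sorted(donors, key=dist)[:k]

    # Step 2 (synthetic control): learn weights expressing row i as a
    # combination of the donors on columns everyone observes, then apply
    # those weights to the donors' values in column j.
    shared = mask[i].copy()
    shared[j] = False
    for r in donors:
        shared &= mask[r]
    A = X[donors][:, shared].T  # shared columns x donors
    b = X[i, shared]
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(X[donors, j] @ w)
```

On an exactly low-rank matrix with a single missing entry, this sketch recovers the entry exactly; with noise and MNAR missingness, the talk's analysis is what justifies when and how well such an estimator works.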