11/18/2015 - Luke Miratrix (Harvard) - Estimating and assessing treatment effect variation in large-scale randomized trials with randomization inference

Presentation Date: 

Wednesday, November 18, 2015

Authors: Peng Ding, Avi Feller, Luke Miratrix

Abstract: Recent literature has underscored the critical role of treatment effect variation in estimating and understanding causal effects. This approach, however, stands in contrast to much of the foundational research on causal inference; Neyman, for example, avoided such variation through his focus on the average treatment effect (ATE) and his definition of the confidence interval. We extend the Neymanian framework to explicitly allow both for treatment effect variation explained by covariates, known as the systematic component, and for unexplained treatment effect variation, known as the idiosyncratic component. This perspective enables estimation and testing of impact variation without imposing a model on the marginal distributions of potential outcomes, with the workhorse approach of regression with interaction terms being a special case. Our approach leads to two practical results. First, estimates of systematic impact variation give sharp bounds on overall treatment variation as well as bounds on the proportion of total impact variation explained by a given model (essentially an R^2 for treatment effect variation). Second, by using covariates to partially account for the correlation of potential outcomes, we sharpen the bounds on the variance of the unadjusted average treatment effect estimate itself. As long as the treatment effect varies across observed covariates, these bounds are sharper than the current sharp bounds in the literature. We demonstrate these ideas on the Head Start Impact Study, a large randomized evaluation in educational research, showing that these results are meaningful in practice.
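As a rough illustration of the special case named in the abstract (regression with a treatment-by-covariate interaction), the sketch below fits such a model to simulated data and reports the variance of the fitted systematic effect. This is not the authors' code; the data-generating process and all variable names are illustrative assumptions.

```python
# Minimal sketch (not the authors' implementation): estimating the
# systematic (covariate-explained) component of treatment effect
# variation via OLS with a treatment-by-covariate interaction.
# All quantities below are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)                # a single pre-treatment covariate
z = rng.integers(0, 2, size=n)        # randomized treatment indicator
tau = 1.0 + 0.5 * x                   # true systematic treatment effect
y = 2.0 + 0.8 * x + z * tau + rng.normal(size=n)  # observed outcome

# OLS of y on [1, z, x, z*x]; the z and z*x coefficients trace out
# the estimated systematic effect tau(x) = b_z + b_zx * x.
X = np.column_stack([np.ones(n), z, x, z * x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
tau_hat = beta[1] + beta[3] * x       # fitted systematic effect per unit

# The variance of the fitted systematic effect lower-bounds total
# treatment effect variation, in the spirit of the abstract's R^2 idea.
print(f"b_z = {beta[1]:.2f}, b_zx = {beta[3]:.2f}")
print(f"variance of fitted systematic effect: {tau_hat.var():.2f}")
```

With this simulation, the recovered coefficients should be close to the true values (1.0 and 0.5), and the fitted-effect variance approximates 0.5^2 * Var(x) = 0.25.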
