Thomas Leavitt presents "Model Selection for Decreasing Dependence on Counterfactual Identification Assumptions in Controlled Pre-Post Designs."

Presentation Date: Wednesday, October 12, 2022

Researchers often draw causal leverage from measures of outcomes before and after treatment in both a treated group and an untreated comparison group. Such controlled pre-post designs, e.g., Difference-in-Differences and Comparative Interrupted Time Series, differ in the predictive models, and hence the counterfactual assumptions, they use to identify the average treatment effect among the treated (ATT). This paper derives a general, one-stop shop counterfactual assumption, along with an associated sensitivity analysis, that unifies the differently named assumptions of each design. While the definition of our one-stop shop assumption is general, its validity depends on the specific predictive model. Existing best practice in light of this model dependence is to choose the model that is most plausible. However, this practice is not especially useful when reasonable people disagree about the plausibility of competing models. We instead propose a cross-validation procedure that anticipates the results of a sensitivity analysis to violations of our one-stop shop assumption and then chooses the model that yields the least sensitivity. We formally show that our procedure maximizes robustness to violations of the identification assumption among a large class of predictive models, and we then apply it to the debate about the effect of concealed-carry laws on violent crime. Our method contributes to this debate, which has been stymied by the sensitivity of researchers' findings to minor changes in model specification, by choosing not the model that makes a counterfactual assumption most plausible, but instead the model that makes our causal conclusions least dependent on that assumption.
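As a rough illustration of the flavor of this model-selection idea (not the paper's formal procedure), the sketch below compares two simple predictive models for a controlled pre-post design by holding out pre-treatment periods one at a time and scoring each model by its worst-case placebo prediction error, used here as a crude stand-in for the anticipated sensitivity analysis. The simulated data, the two toy models, the hold-out scheme, and all function names are hypothetical assumptions for illustration only.

```python
# Hypothetical sketch: pick the predictive model whose counterfactual
# predictions are least sensitive on held-out pre-treatment "placebo" periods.
# This is an illustrative stand-in, not the paper's actual procedure.
import numpy as np

rng = np.random.default_rng(0)
T = 8  # number of pre-treatment periods (simulated)
# Simulated group-mean outcomes with a common linear trend plus noise.
treated = 2.0 + 0.5 * np.arange(T) + rng.normal(0, 0.3, T)
control = 1.0 + 0.5 * np.arange(T) + rng.normal(0, 0.3, T)

def did_predict(tr, co, t):
    """Difference-in-Differences-style prediction: last observed treated
    mean shifted by the comparison group's change into period t."""
    return tr[t - 1] + (co[t] - co[t - 1])

def cits_predict(tr, co, t):
    """Crude comparative-interrupted-time-series-style prediction:
    extrapolate the treated group's linear trend to period t, adjusted by
    the comparison group's deviation from its own fitted trend."""
    periods = np.arange(t)
    tr_fit = np.polyfit(periods, tr[:t], 1)
    co_fit = np.polyfit(periods, co[:t], 1)
    return np.polyval(tr_fit, t) + (co[t] - np.polyval(co_fit, t))

models = {"DiD": did_predict, "CITS": cits_predict}

def anticipated_sensitivity(predict):
    """Worst-case placebo error over held-out pre-periods: a proxy for how
    badly the model's counterfactual assumption could be violated."""
    errors = [abs(treated[t] - predict(treated, control, t))
              for t in range(2, T)]
    return max(errors)

# Choose the model that yields the least anticipated sensitivity.
for name, f in models.items():
    print(f"{name}: anticipated sensitivity = {anticipated_sensitivity(f):.3f}")
chosen = min(models, key=lambda name: anticipated_sensitivity(models[name]))
print("selected model:", chosen)
```

The key design choice mirrored here is that the models are compared not on goodness of fit but on a sensitivity criterion, so the winner is the model whose causal conclusion is least dependent on its counterfactual assumption holding exactly.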