Title: A Machine Learning Perspective on Causal Inference
Abstract: Usually the terms "causal inference" and "machine learning" mix like oil and water. Machine learning models are often complicated black-box functions that provide predictions without causal explanations. For causal inference, this kind of model is unacceptable. Perhaps we can find ways to harness the predictive power of machine learning methods for the purpose of causal inference. I will discuss three very recent preliminary ideas, from the perspective of a machine learner:
1) Causal Falling Rule Lists (with Fulton Wang). This is a machine learning method that bridges the gap - it's nonlinear yet interpretable, and models causal effects. (More details below.)
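A falling rule list is an ordered list of if-then rules in which the estimated effects decrease down the list, so the most affected subgroups appear first. As a rough sketch of how such a model assigns a treatment-effect estimate (the rules, features, and numbers here are invented for illustration, not the fitted model from the paper):

```python
# Sketch of a (causal) falling rule list: an ordered sequence of
# if-then rules. Each unit receives the estimated treatment effect of
# the FIRST rule whose condition it satisfies. "Falling" means the
# effect estimates are non-increasing down the list, so the list reads
# from most-affected subgroup to least-affected.
# All rules and effect values below are hypothetical.

RULES = [
    (lambda x: x["age"] > 60 and x["smoker"], 0.35),  # largest effect first
    (lambda x: x["age"] > 60,                 0.20),
    (lambda x: x["smoker"],                   0.10),
]
DEFAULT_EFFECT = 0.02  # estimate for units matching no rule

def estimated_effect(unit):
    """Return the effect estimate of the first matching rule."""
    for condition, effect in RULES:
        if condition(unit):
            return effect
    return DEFAULT_EFFECT

# The "falling" constraint: effects never increase down the list.
effects = [e for _, e in RULES] + [DEFAULT_EFFECT]
assert all(a >= b for a, b in zip(effects, effects[1:]))

print(estimated_effect({"age": 70, "smoker": False}))  # 0.2
```

The ordered first-match semantics is what keeps the model interpretable despite being nonlinear: to explain any single prediction, only one rule needs to be read.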
2) The Factorized Self-Controlled Case Series Method: An Approach for Estimating the Effects of Many Drugs on Many Outcomes (with Ramin Moghaddass and David Madigan). We estimate the effects of many drugs on many outcomes simultaneously. This Bayesian hierarchical model is formulated with layers of latent factors, which substantially helps with both computation and interpretability.
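The factorization idea can be illustrated as a low-rank decomposition: rather than a separate parameter for every (drug, outcome) pair, each drug and each outcome gets a short latent-factor vector, and a pair's effect is the inner product of the two. A minimal sketch, with invented dimensions, drug and outcome names, and values (not the paper's fitted factors):

```python
# Latent-factor sketch: with D drugs, O outcomes, and K latent factors
# (K much smaller than D and O), the D-by-O effect matrix is represented
# by only D*K + O*K parameters. The effect of a drug on an outcome is
# the inner product of their factor vectors. All numbers are made up.

K = 2  # number of latent factors (hypothetical)

drug_factors = {             # one K-vector per drug
    "drug_a": [0.5, -0.1],
    "drug_b": [0.0,  0.8],
}
outcome_factors = {          # one K-vector per outcome
    "stroke":   [0.4, 0.2],
    "bleeding": [-0.3, 0.6],
}

def effect(drug, outcome):
    """Effect of `drug` on `outcome` = <drug factors, outcome factors>."""
    u, v = drug_factors[drug], outcome_factors[outcome]
    return sum(ui * vi for ui, vi in zip(u, v))

print(round(effect("drug_a", "stroke"), 3))    # 0.18
print(round(effect("drug_b", "bleeding"), 3))  # 0.48
```

Sharing the factors across all drugs and outcomes is what cuts the parameter count (helping computation) and groups drugs and outcomes by similar factor loadings (helping interpretability).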
3) Robust Testing for Causal Inference in Natural Experiments (with Md. Noor-E-Alam). We claim there is a major source of uncertainty that is ignored in matched pairs tests: how the matches were constructed by the experimenter. The result of the hypothesis test ought to hold no matter which reasonably good matching the experimenter chose. Our robust matched pairs tests use mixed-integer programming.