2015

Gary King (Harvard) - Why Propensity Scores Should Not Be Used for Matching Wednesday, September 16, 2015

Abstract: Researchers use propensity score matching (PSM) as a data preprocessing step to selectively prune units prior to applying a model to estimate a causal effect. The goal of PSM is to reduce imbalance in the chosen pre-treatment covariates between the treated and control groups, thereby reducing the degree of model dependence and potential for bias. We show here that PSM often accomplishes the opposite of what is intended -- increasing imbalance, inefficiency, model dependence, and bias. The weakness of PSM is that it attempts to approximate...
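
The pruning step described above is easy to make concrete. Below is a minimal sketch, not the analysis from the talk: it estimates propensity scores with a logistic regression, does one-to-one nearest-neighbor matching on the score, and compares covariate imbalance before and after pruning. The simulated data and all variable names are assumptions for illustration.

# Minimal propensity-score-matching sketch (illustration only, not the talk's analysis).
# The data are simulated; in practice X and treat come from the study at hand.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 3))                           # pre-treatment covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # treatment depends on X[:, 0]

# 1. Estimate propensity scores e(x) = Pr(T = 1 | X = x).
ps = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]

# 2. One-to-one nearest-neighbor matching on the propensity score, without replacement.
treated_idx = np.where(treat == 1)[0]
control_pool = list(np.where(treat == 0)[0])
matches = []
for i in treated_idx:
    if not control_pool:
        break
    j = min(control_pool, key=lambda c: abs(ps[i] - ps[c]))
    matches.append((i, j))
    control_pool.remove(j)

# 3. Compare mean covariate imbalance before and after pruning.
def imbalance(idx_t, idx_c):
    return np.abs(X[idx_t].mean(axis=0) - X[idx_c].mean(axis=0))

kept_t = [i for i, _ in matches]
kept_c = [j for _, j in matches]
print("mean covariate imbalance before matching:", imbalance(treat == 1, treat == 0))
print("mean covariate imbalance after matching: ", imbalance(kept_t, kept_c))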

Matthew Blackwell (Harvard) - Identification and Estimation of Joint Treatment Effects with Instrumental Variables Wednesday, September 9, 2015

Abstract: Over the last twenty years, a literature spanning several fields of applied statistics has analyzed how to identify and estimate the causal effects of a nonrandomized treatment when an instrumental variable (IV) is available. But researchers often have multiple treatments and want to estimate either the direct or joint effects of these treatments. This paper introduces a set of novel estimands for instrumental variables with multiple treatments and multiple...
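
The talk's estimands for multiple treatments are not reproduced here, but the single-equation IV machinery they extend can be sketched. The example below is an illustration with simulated data and assumed names: two endogenous treatments, two instruments, and a just-identified instrumental variables fit compared with OLS.

# Sketch of just-identified IV with two treatments and two instruments.
# Illustration only: simulated data, not the estimands introduced in the talk.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
u = rng.normal(size=n)                              # unobserved confounder
z1, z2 = rng.normal(size=n), rng.normal(size=n)     # instruments
d1 = 0.8 * z1 + u + rng.normal(size=n)              # treatment 1 (endogenous)
d2 = 0.8 * z2 + u + rng.normal(size=n)              # treatment 2 (endogenous)
y = 1.0 * d1 + 0.5 * d2 + u + rng.normal(size=n)    # true joint effects: 1.0 and 0.5

X = np.column_stack([np.ones(n), d1, d2])           # regressors (with intercept)
Z = np.column_stack([np.ones(n), z1, z2])           # instruments (with intercept)

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)         # just-identified IV: (Z'X)^{-1} Z'y

print("OLS (biased by u):", beta_ols[1:])
print("IV  (consistent): ", beta_iv[1:])

Because the system is just-identified, the IV estimate reduces to (Z'X)^{-1} Z'y; with more instruments than treatments, a standard two-stage least squares projection would replace the direct solve.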

Finale Doshi-Velez (Harvard) - Bayesian Or-of-And Models for Interpretable Classification Wednesday, April 29, 2015

Abstract: Interpretability is an important factor for models to be used and trusted in many applications. Disjunctive normal forms, also known as or-of-and models, are models with classification rules of the form "Predict True if (A and B) or (A and C) or D." They are an appealing form of classifier because one can easily trace how a classification decision was made, and such rules have some basis in human decision-making. In this talk, I will present a Bayesian approach to learning or-of-and models and describe an application to context-aware...
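
The Bayesian learning procedure is the subject of the talk; the form of the resulting classifier, though, can be shown in a few lines. The rules and feature names below are invented purely for illustration.

# A classifier in or-of-and (disjunctive normal form): predict True if any
# conjunction of conditions is satisfied. The rules here are made up for
# illustration; the talk concerns learning such rules with a Bayesian model.
from typing import Callable, Dict, List

Rule = List[Callable[[Dict[str, float]], bool]]   # one rule = a conjunction of conditions

def predict(rules: List[Rule], x: Dict[str, float]) -> bool:
    """Return True if any rule has all of its conditions satisfied by x."""
    return any(all(cond(x) for cond in rule) for rule in rules)

# "Predict True if (A and B) or (A and C) or D", with invented conditions:
rules = [
    [lambda x: x["age"] > 60, lambda x: x["bp"] > 140],   # A and B
    [lambda x: x["age"] > 60, lambda x: x["bmi"] > 30],   # A and C
    [lambda x: x["smoker"] == 1],                         # D
]

print(predict(rules, {"age": 65, "bp": 150, "bmi": 25, "smoker": 0}))  # True  (A and B fires)
print(predict(rules, {"age": 40, "bp": 120, "bmi": 22, "smoker": 0}))  # False

The interpretability argument is visible here: for any positive prediction one can report exactly which conjunction fired.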

Neil Shephard (Harvard) - Pricing each income contingent student loan using administrative data. Some statistical challenges Wednesday, April 22, 2015

Abstract: Income contingent student loans are used in many countries as the prime way for students to fund their tuition fees and maintenance. Repayments on the loans are a fraction of the former student’s income above some threshold. In England the fraction is 9% and the threshold is around $35,000. Interest is charged on the loans and any outstanding debt is forgiven after 30 years. Since 1998, the UK Government has offered to issue such loans to any qualified English student going to a UK university. How much are these loans worth to the...
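
The repayment rule quoted in the abstract (a 9% rate on income above the threshold, interest on the balance, forgiveness after 30 years) lends itself to a small cash-flow sketch. The income path, interest rate, and starting balance below are assumptions for illustration, not the model from the talk; discounting and aggregating such flows over borrowers is, roughly, what pricing the loan book requires.

# Cash-flow sketch of an income contingent loan: repay 9% of income above a
# threshold each year, charge interest on the balance, forgive after 30 years.
# Income path, interest rate, and starting balance are assumptions for illustration.

def loan_cash_flows(balance, incomes, rate=0.09, threshold=35_000.0,
                    interest=0.03, years=30):
    """Return the list of yearly repayments until payoff or forgiveness."""
    repayments = []
    for year in range(years):
        balance *= 1 + interest                        # interest accrues first
        due = rate * max(incomes[year] - threshold, 0.0)
        paid = min(due, balance)
        balance -= paid
        repayments.append(paid)
        if balance <= 0:
            break
    return repayments                                  # any remaining balance is forgiven

# Example: income starts at $30,000 and grows 4% a year.
incomes = [30_000 * 1.04 ** t for t in range(30)]
flows = loan_cash_flows(50_000.0, incomes)
print(f"total repaid: {sum(flows):,.0f} over {len(flows)} years")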

Tyler VanderWeele (Harvard) - A Unification of Mediation and Interaction: A 4-Way Decomposition Wednesday, April 15, 2015

Abstract:  The overall effect of an exposure on an outcome, in the presence of a mediator with which the exposure may interact, can be decomposed into 4 components: (1) the effect of the exposure in the absence of the mediator, (2) the interactive effect when the mediator is left to what it would be in the absence of exposure, (3) a mediated interaction, and (4) a pure mediated effect. These 4 components, respectively, correspond to the portion of the effect that is due to neither mediation nor interaction, to just interaction (but not mediation), to both...
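
For readers who want the algebra, one standard way to write this decomposition for a binary mediator is the identity below, with Y_{am} the potential outcome under exposure a and mediator value m, and M_a the mediator under exposure a (see the paper for the exact estimands and identification conditions):

\begin{aligned}
Y_{1M_1} - Y_{0M_0} ={}& (Y_{10} - Y_{00}) && \text{(1) neither mediation nor interaction} \\
&+ (Y_{11} - Y_{10} - Y_{01} + Y_{00})\, M_0 && \text{(2) interaction only (reference interaction)} \\
&+ (Y_{11} - Y_{10} - Y_{01} + Y_{00})\,(M_1 - M_0) && \text{(3) mediated interaction} \\
&+ (Y_{01} - Y_{00})\,(M_1 - M_0) && \text{(4) pure indirect effect (mediation only)}
\end{aligned}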

Sherri Rose (Harvard Medical School) - Rethinking Plan Payment Risk Adjustment with Machine Learning Wednesday, April 8, 2015

Abstract: Risk adjustment models for plan payment are typically estimated using classical linear regression models. These models are designed to predict plan spending, often as a function of age, gender, and diagnostic conditions. The trajectory of risk adjustment methodology in the federal government has been largely frozen since the 1970s, failing to incorporate methodological advances that could yield improved formulas. The use of novel machine learning techniques may improve estimators for risk adjustment, including reducing the ability of insurers...
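
A stripped-down version of the comparison being proposed, a classical linear formula versus a machine-learning estimator for predicting spending from age, gender, and diagnostic indicators, could look like the sketch below. The simulated data and the choice of a random forest are assumptions for illustration, not the models evaluated in the talk.

# Sketch: compare a linear risk adjustment regression with an ML estimator for
# predicting annual spending. Data are simulated; features and the random forest
# are assumptions for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)
n = 5000
age = rng.integers(18, 90, n)
female = rng.integers(0, 2, n)
diagnoses = rng.integers(0, 2, (n, 10))              # 10 diagnostic condition indicators
# Simulated spending with a nonlinear age effect and an age-by-diagnosis interaction.
spending = (200 * (age / 10) ** 2 + 3000 * diagnoses[:, 0]
            + 4000 * diagnoses[:, 1] * (age > 65) + rng.gamma(2, 1500, n))

X = np.column_stack([age, female, diagnoses])
X_tr, X_te, y_tr, y_te = train_test_split(X, spending, random_state=0)

for name, model in [("linear", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name:>13}: R^2 = {r2_score(y_te, pred):.3f}")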

Miguel Hernan (Harvard) - Comparative effectiveness of dynamic treatment strategies: The renaissance of the parametric g-formula Wednesday, April 1, 2015

Abstract: Causal questions about the comparative effectiveness and safety of health-related interventions are becoming increasingly complex. Decision makers are now often interested in the comparison of interventions that are sustained over time and that may be personalized according to the individuals’ time-evolving characteristics. These dynamic treatment strategies cannot be adequately studied by using conventional analytic methods that were designed to compare “treatment” vs. “no treatment”. The parametric g-formula was developed by Robins in 1986...
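
As background, the parametric g-formula can be sketched in a simplified two-period setting: fit parametric models for the time-varying covariate and the outcome given observed history, then simulate both forward under the dynamic strategy of interest and average the simulated outcomes. Everything below (the data, the models, and the strategy "treat when L exceeds zero") is assumed for illustration, not taken from the talk.

# Simplified two-period parametric g-formula sketch: fit models for the
# time-varying covariate L1 and the outcome Y, then Monte Carlo simulate under
# the dynamic strategy "treat at time t if L_t > 0". All quantities are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(3)
n = 20_000

# Observed (simulated) data: baseline covariate L0, treatments A0 and A1,
# time-varying covariate L1, outcome Y.
L0 = rng.normal(size=n)
A0 = rng.binomial(1, 1 / (1 + np.exp(-L0)))
L1 = 0.7 * L0 - 0.5 * A0 + rng.normal(size=n)
A1 = rng.binomial(1, 1 / (1 + np.exp(-L1)))
Y = 2.0 * L1 - 1.0 * A1 - 0.5 * A0 + rng.normal(size=n)

# Step 1: fit parametric models for L1 and Y given the observed past.
m_L1 = LinearRegression().fit(np.column_stack([L0, A0]), L1)
m_Y = LinearRegression().fit(np.column_stack([L0, A0, L1, A1]), Y)

# Step 2: Monte Carlo simulation under the dynamic strategy g: "treat if L > 0".
sd_L1 = np.std(L1 - m_L1.predict(np.column_stack([L0, A0])))
L0_sim = rng.choice(L0, size=n)                       # resample the baseline covariate
A0_sim = (L0_sim > 0).astype(int)                     # strategy at time 0
L1_sim = (m_L1.predict(np.column_stack([L0_sim, A0_sim]))
          + rng.normal(scale=sd_L1, size=n))          # simulate L1 with residual noise
A1_sim = (L1_sim > 0).astype(int)                     # strategy at time 1
Y_sim = m_Y.predict(np.column_stack([L0_sim, A0_sim, L1_sim, A1_sim]))

print("g-formula estimate of E[Y] under the dynamic strategy:", Y_sim.mean())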

Fabrizia Mealli (University of Florence/Harvard) - Evaluating the effect of university grants on student dropout: Evidence from a regression discontinuity design using Bayesian principal stratification analysis Wednesday, March 25, 2015

Abstract: Regression discontinuity (RD) designs are often interpreted as local randomized experiments: an RD design can be considered as a randomized experiment for units with a realized value of a so-called forcing variable falling around a pre-fixed threshold. Motivated by the evaluation of Italian university grants, we consider a fuzzy RD...
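
As a point of reference, the standard (non-Bayesian) fuzzy RD analysis uses a local Wald estimator: the jump in the outcome at the threshold divided by the jump in treatment take-up. The sketch below uses simulated data and an arbitrary bandwidth; the talk's Bayesian principal stratification approach is a different analysis of the same design.

# Local Wald estimator for a fuzzy RD design, for reference only; data,
# threshold, and bandwidth are simulated/assumed for illustration.
import numpy as np

rng = np.random.default_rng(4)
n = 10_000
score = rng.uniform(-1, 1, n)             # forcing variable, threshold at 0
eligible = (score >= 0).astype(int)       # above the threshold -> eligible for the grant
# Fuzzy design: eligibility shifts take-up but does not determine it.
takeup = rng.binomial(1, 0.2 + 0.6 * eligible)
dropout = rng.binomial(1, 0.4 - 0.15 * takeup)   # true effect of take-up: -0.15

h = 0.2                                   # bandwidth around the threshold
w = np.abs(score) <= h                    # local window

# Jump in the outcome divided by jump in take-up at the cutoff. In real data one
# would also fit local regressions in the forcing variable on each side.
num = dropout[w & (score >= 0)].mean() - dropout[w & (score < 0)].mean()
den = takeup[w & (score >= 0)].mean() - takeup[w & (score < 0)].mean()
print("fuzzy RD (local Wald) estimate of the grant effect on dropout:", num / den)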

James Robins (Harvard) - The Foundations of Statistics and Its Implications for Current Methods for Causal Inference from Observational and Randomized Trial Data Wednesday, March 11, 2015

Abstract:  The foundations of statistics are the fundamental conceptual principles that underlie statistical methodology and distinguish statistics from the highly related fields of probability and mathematics. Examples of foundational concepts include ancillarity, the conditionality principle, the likelihood principle, statistical decision theory, the weak and strong repeated sampling principle, coherence and even the meaning of probability itself. In the 1950s and 1960s, the study of the foundations of statistics held an important place in the field....

Maximilian Kasy (Harvard) - Why experimenters should not randomize, and what they should do instead Wednesday, March 4, 2015

Abstract: This paper discusses experimental design for the case that (i) we are given a distribution of covariates from a pre-selected random sample, and (ii) we are interested in the average treatment effect (ATE) of some binary treatment. We show that in general there is a unique optimal non-random treatment assignment if there are continuous covariates. We argue that experimenters should choose this assignment. The optimal assignment minimizes the risk (e.g., expected squared error) of treatment effects estimators. We provide explicit expressions for the...
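
The paper derives the assignment that minimizes the risk (for example, expected squared error) of the treatment effect estimator. As a rough stand-in that conveys the idea of choosing rather than randomizing the assignment, the sketch below searches candidate half/half splits of a given sample and keeps the one with the smallest covariate mean imbalance; this criterion is an assumption for illustration, not the paper's risk function.

# Rough stand-in for a deterministic, covariate-balancing assignment: among many
# candidate half/half splits, keep the one minimizing covariate mean imbalance.
# The paper derives the risk-minimizing assignment; this search only conveys the idea.
import numpy as np

rng = np.random.default_rng(5)
n, k = 40, 3
X = rng.normal(size=(n, k))                  # covariates of the pre-selected sample

def imbalance(assign):
    """Sum of squared differences in covariate means between the two arms."""
    return float(((X[assign == 1].mean(axis=0) - X[assign == 0].mean(axis=0)) ** 2).sum())

best, best_val = None, np.inf
for _ in range(10_000):                      # search over candidate balanced splits
    assign = np.zeros(n, dtype=int)
    assign[rng.choice(n, n // 2, replace=False)] = 1
    val = imbalance(assign)
    if val < best_val:
        best, best_val = assign, val

print("imbalance of a typical random split:", imbalance(assign))
print("imbalance of the selected split:    ", best_val)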

