Presentations

Sherri Rose (Harvard Medical School) - Rethinking Plan Payment Risk Adjustment with Machine Learning Wednesday, April 8, 2015

Abstract: Risk adjustment models for plan payment are typically estimated using classical linear regression models. These models are designed to predict plan spending, often as a function of age, gender, and diagnostic conditions. The trajectory of risk adjustment methodology in the federal government has been largely frozen since the 1970s, failing to incorporate methodological advances that could yield improved formulas. The use of novel machine learning techniques may improve estimators for risk adjustment, including reducing the ability of insurers...
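The classical approach the abstract describes can be sketched in a few lines. Everything below is invented for illustration (variable names, coefficients, and spending figures are not real payment parameters): an OLS regression of simulated plan spending on age, gender, and diagnostic flags.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical enrollee risk factors (all names and effects invented).
age = rng.integers(18, 90, n).astype(float)
female = rng.integers(0, 2, n).astype(float)
dx_diabetes = rng.integers(0, 2, n).astype(float)
dx_chf = rng.integers(0, 2, n).astype(float)

# Simulated annual spending; coefficients are illustrative, not real weights.
spend = (1000 + 40 * age + 300 * female + 4000 * dx_diabetes
         + 9000 * dx_chf + rng.normal(0, 2000, n))

# Classical linear-regression risk adjustment: OLS of spending on risk factors.
X = np.column_stack([np.ones(n), age, female, dx_diabetes, dx_chf])
beta, *_ = np.linalg.lstsq(X, spend, rcond=None)
predicted = X @ beta  # per-enrollee predicted spending, the basis for payment
```

The talk's point is that this linear formula is only one choice among many; machine learning estimators would replace the OLS step.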

Miguel Hernan (Harvard) - Comparative effectiveness of dynamic treatment strategies: The renaissance of the parametric g-formula Wednesday, April 1, 2015

Abstract: Causal questions about the comparative effectiveness and safety of health-related interventions are becoming increasingly complex. Decision makers are now often interested in the comparison of interventions that are sustained over time and that may be personalized according to the individuals’ time-evolving characteristics. These dynamic treatment strategies cannot be adequately studied by using conventional analytic methods that were designed to compare “treatment” vs. “no treatment”. The parametric g-formula was developed by Robins in 1986...
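For intuition, here is the point-treatment special case of the g-formula (standardization) on simulated data; the full parametric g-formula of the talk extends this to time-varying treatments and confounders via Monte Carlo simulation. All data and coefficients below are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Toy point-treatment data: L confounds both treatment A and outcome Y.
L = rng.normal(size=n)                      # baseline covariate
A = rng.binomial(1, 1 / (1 + np.exp(-L)))   # treatment depends on L
Y = 2.0 * A + 1.5 * L + rng.normal(size=n)  # true treatment effect = 2.0

# g-formula for a point treatment: (1) fit an outcome model E[Y | A, L];
# (2) predict for everyone under each static strategy ("always treat",
# "never treat"); (3) average and contrast.
X = np.column_stack([np.ones(n), A, L])
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)

X1 = np.column_stack([np.ones(n), np.ones(n), L])   # set A := 1
X0 = np.column_stack([np.ones(n), np.zeros(n), L])  # set A := 0
ate = np.mean(X1 @ beta) - np.mean(X0 @ beta)       # recovers 2.0 up to noise
```

For the dynamic strategies in the talk, the simulation step would also draw time-evolving covariates from fitted models at each time point.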

Fabrizia Mealli (University of Florence/Harvard) - Evaluating the effect of university grants on student dropout: Evidence from a regression discontinuity design using Bayesian principal stratification analysis Wednesday, March 25, 2015

Abstract: Regression discontinuity (RD) designs are often interpreted as local randomized experiments: an RD design can be considered a randomized experiment for units whose realized value of a so-called forcing variable falls around a pre-fixed threshold. Motivated by the evaluation of Italian university grants, we consider a fuzzy RD...
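As a rough illustration of the "fuzzy" part, the sketch below computes a local Wald estimator on simulated data: the jump in the outcome at the threshold divided by the jump in grant take-up. This is a textbook estimator, not the Bayesian principal stratification analysis of the talk, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20000
c, h = 0.0, 0.5   # threshold and bandwidth (illustrative)

# Invented fuzzy RD data: crossing the threshold raises the probability
# of receiving the grant, but take-up is imperfect on both sides.
z = rng.uniform(-2, 2, n)                        # forcing variable
D = rng.binomial(1, np.where(z >= c, 0.7, 0.2))  # grant actually received
Y = 1.0 * D + rng.normal(size=n)                 # true grant effect = 1.0
# (no trend in z, for simplicity; real analyses fit local regressions)

# Local Wald estimator inside the bandwidth: jump in outcome over
# jump in take-up at the threshold.
near = np.abs(z - c) <= h
above, below = near & (z >= c), near & (z < c)
late = (Y[above].mean() - Y[below].mean()) / (D[above].mean() - D[below].mean())
```

The estimand recovered this way is a local effect for units induced into treatment at the threshold, which principal stratification makes explicit.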

James Robins (Harvard) - The Foundations of Statistics and Its Implications for Current Methods for Causal Inference from Observational and Randomized Trial Data Wednesday, March 11, 2015

Abstract: The foundations of statistics are the fundamental conceptual principles that underlie statistical methodology and distinguish statistics from the highly related fields of probability and mathematics. Examples of foundational concepts include ancillarity, the conditionality principle, the likelihood principle, statistical decision theory, the weak and strong repeated sampling principle, coherence and even the meaning of probability itself. In the 1950s and 1960s, the study of the foundations of statistics held an important place in the field....

Maximilian Kasy (Harvard) - Why experimenters should not randomize, and what they should do instead Wednesday, March 4, 2015

Abstract: This paper discusses experimental design for the case that (i) we are given a distribution of covariates from a pre-selected random sample, and (ii) we are interested in the average treatment effect (ATE) of some binary treatment. We show that in general there is a unique optimal non-random treatment assignment if there are continuous covariates. We argue that experimenters should choose this assignment. The optimal assignment minimizes the risk (e.g., expected squared error) of treatment effects estimators. We provide explicit expressions for the...
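The idea of a deterministic optimal assignment can be illustrated with a toy search. The sketch below enumerates all balanced assignments of ten units and picks the one minimizing covariate mean imbalance, a crude stand-in for the paper's actual risk criterion (expected squared error of the ATE estimator):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(3)
n = 10
x = rng.normal(size=n)   # one continuous covariate per experimental unit

# Enumerate all balanced assignments (5 treated of 10) and keep the one
# with the smallest covariate imbalance between arms.
best, best_imb = None, np.inf
for treated in combinations(range(n), n // 2):
    t = np.zeros(n, dtype=bool)
    t[list(treated)] = True
    imb = abs(x[t].mean() - x[~t].mean())
    if imb < best_imb:
        best, best_imb = t, imb

# 'best' is a deterministic assignment chosen by optimization,
# not a random draw -- the paper's central recommendation.
```

With continuous covariates, ties are (almost surely) absent, which is why the optimal assignment is unique.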

Tamara Broderick (MIT) - Feature allocations, probability functions, and paintboxes Wednesday, February 25, 2015

Abstract: Clustering involves placing entities into mutually exclusive categories. We wish to relax the requirement of mutual exclusivity, allowing objects to belong simultaneously to multiple classes, a formulation that we refer to as "feature allocation." The first step is a theoretical one. In the case of clustering, the class of probability distributions over exchangeable partitions of a dataset has been characterized (via exchangeable partition probability...
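The distinction can be made concrete with binary membership matrices (a small invented example): a partition has exactly one 1 per row, while a feature allocation relaxes that constraint.

```python
import numpy as np

# A clustering of 4 objects into 3 classes: exactly one 1 per row,
# so membership is mutually exclusive.
partition = np.array([
    [1, 0, 0],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
])

# A feature allocation of the same objects: a row may have several 1s
# (object belongs to multiple classes) or none at all.
feature_alloc = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [0, 1, 1],
    [0, 0, 0],
])

assert (partition.sum(axis=1) == 1).all()          # mutually exclusive
assert not (feature_alloc.sum(axis=1) == 1).all()  # exclusivity relaxed
```

The theoretical question in the talk is how to characterize probability distributions over such matrices, in parallel with the known characterization for partitions.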

Justin Reich (Harvard) - Massive Open Online Courses and the Science of Learning Wednesday, February 18, 2015
Abstract: Large-scale open online learning environments continuously record learner activities: the 54 courses conducted by HarvardX and MITx in the 2013-2014 academic year had 1.1 million participants who recorded over half a billion actions. Increasingly, online learning platforms also support A/B testing frameworks that allow for a variety of experimental designs. This combination of data recording and experimentation opens up exciting new avenues for educational research. This talk will provide an overview of the various...
Kayhan Batmanghelich (MIT) - Joint Modeling Imaging and Genetics: a Probabilistic Approach Wednesday, February 11, 2015

Abstract: An increasing number of clinical and imaging research studies are collecting additional information, including genetic data. The goals of the emerging field of imaging genetics can be summarized in two aims: 1) using imaging biomarkers as an intermediate phenotype to uncover underlying biological mechanisms of diseases; 2) phenotype discovery.
In this talk, we will focus on the first goal, namely using imaging as an intermediate phenotype, and briefly discuss the...
Jelani Nelson (Harvard) - Dimensionality Reduction Via Sparse Matrices Wednesday, January 28, 2015

Abstract: This talk will discuss sparse Johnson-Lindenstrauss transforms, i.e. sparse linear maps into much lower dimension which preserve the Euclidean geometry of a set of vectors. Both upper and lower bounds will be presented, as well as applications to certain domains such as numerical linear algebra and compressed sensing. Based on various joint works with Jean Bourgain, Daniel M. Kane, and Huy Le Nguyen.

...
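As a minimal illustration, the sketch below builds the sparsest such map (one nonzero per column, CountSketch-style, with random signs); the talk covers general sparsity levels and their tradeoffs. Dimensions are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 10000, 400   # original and reduced dimensions (illustrative sizes)

# Sparse JL map with one nonzero per column: each input coordinate is
# hashed to a single output row with a random sign, so applying the map
# costs O(nnz(x)) rather than O(m * d) for a dense matrix.
rows = rng.integers(0, m, d)        # hash h: [d] -> [m]
signs = rng.choice([-1.0, 1.0], d)  # random signs sigma: [d] -> {-1, +1}

def sketch(x):
    y = np.zeros(m)
    np.add.at(y, rows, signs * x)   # y[h(i)] += sigma(i) * x[i]
    return y

x = rng.normal(size=d)
ratio = np.linalg.norm(sketch(x)) / np.linalg.norm(x)  # close to 1
```

The squared sketched norm equals the original squared norm in expectation; increasing m (or the per-column sparsity) tightens the concentration, which is what the upper and lower bounds in the talk quantify.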