Presentations

Kosuke Imai presents "Experimental Evaluation of Algorithm-Assisted Human Decision-Making: Application to Pretrial Public Safety Assessment", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, December 2, 2020
Despite an increasing reliance on fully automated algorithmic decision-making in our day-to-day lives, human beings still make highly consequential decisions. As frequently seen in business, healthcare, and public policy, recommendations produced by algorithms are provided to human decision-makers to guide their decisions. While there is a fast-growing literature evaluating the bias and fairness of such algorithmic recommendations, an overlooked question is whether they help humans make better decisions. We develop a statistical methodology for experimentally...
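
As a rough illustration of the experimental design the title suggests, one can randomize whether the algorithmic recommendation is shown to the decision-maker and estimate its effect on decisions by a difference in means. This is a minimal sketch with simulated data; all variable names are hypothetical and it is not the authors' methodology.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each case is randomly assigned to have the algorithmic
# recommendation shown to the human decision-maker (z = 1) or withheld (z = 0).
n = 1000
z = rng.integers(0, 2, size=n)        # randomized provision of the recommendation
y = rng.binomial(1, 0.3 + 0.05 * z)   # simulated binary decision outcome

# Difference-in-means estimate of the effect of providing the recommendation,
# with a large-sample standard error.
effect = y[z == 1].mean() - y[z == 0].mean()
se = np.sqrt(y[z == 1].var(ddof=1) / (z == 1).sum()
             + y[z == 0].var(ddof=1) / (z == 0).sum())
print(f"estimated effect: {effect:.3f} (SE {se:.3f})")
```
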
Tyler VanderWeele presents "Revisiting Psychometric Theory and Factor Analysis", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, November 18, 2020

The presentation will revisit some of the conceptual and statistical foundations of psychometric measurement theory and factor analysis, specifically addressing the questions: (i) What happens to “factors” when they causally affect one another? (ii) Is an underlying univariate latent variable a reasonable model for psycho-social constructs? (iii) What are the testable empirical implications of such a model? (iv) What alternative interpretations of analyses with constructed measures might be possible?
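
For question (ii), the model at issue is the standard single-factor measurement model; as a sketch, with observed items X_1, ..., X_k and a univariate latent variable eta:

```latex
% Single-factor measurement model: each observed item loads on one
% latent variable \eta, with mutually independent errors.
X_j = \lambda_j \eta + \varepsilon_j, \qquad j = 1, \dots, k,
\qquad \varepsilon_j \perp \eta, \qquad
\mathrm{Cov}(\varepsilon_j, \varepsilon_{j'}) = 0 \ \ (j \neq j').
```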

The presentation will be based upon the following three preprints:...

Cory McCartan presents "Sequential Monte Carlo for Sampling Balanced and Compact Redistricting Plans", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, November 11, 2020

Random sampling of graph partitions under constraints has become a popular tool for evaluating legislative redistricting plans. Analysts detect partisan gerrymandering by comparing a proposed redistricting plan with an ensemble of sampled alternative plans. For successful application, sampling methods must scale to large maps with many districts, incorporate realistic legal constraints, and accurately sample from a selected target distribution. Unfortunately, most existing methods struggle in at least one of these three areas. We present a new Sequential Monte Carlo (SMC) algorithm that...
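
For readers unfamiliar with SMC, the generic pattern is to build each sampled object up in stages, reweighting and resampling a population of partial draws at every stage. The toy Python sketch below shows that propose/weight/resample loop on a simple continuous target; it illustrates only the generic SMC mechanics, not the redistricting algorithm presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Particles are built one coordinate at a time (analogous to drawing one
# district at a time), targeting a standard normal in each coordinate
# while proposing from a wider normal.
n_particles, n_steps = 5000, 4
particles = np.zeros((n_particles, 0))

for t in range(n_steps):
    # Propose: extend every particle with a new coordinate.
    new = rng.normal(0.0, 2.0, size=n_particles)
    particles = np.column_stack([particles, new])

    # Weight: log importance ratio of target N(0, 1) to proposal N(0, 4).
    log_w = -0.5 * new**2 - (-0.5 * (new / 2.0) ** 2 - np.log(2.0))
    w = np.exp(log_w - log_w.max())
    w /= w.sum()

    # Resample: duplicate high-weight particles, drop low-weight ones.
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]

# Each coordinate should now have standard deviation close to 1.
print(particles.std(axis=0).round(2))
```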

Yiling Chen presents "Unexpected Consequences of Algorithm-in-the-Loop Decision Making", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, November 4, 2020

The rise of machine learning has fundamentally altered decision making: rather than being made solely by people, many important decisions are now made through an “algorithm-in-the-loop” process where machine learning models inform people. Yet insufficient research has considered how the interactions between people and models actually influence human decision making. In this talk, I’ll discuss results from a set of controlled experiments on algorithm-in-the-loop human decision making in two contexts (pretrial release and financial lending). For example, when presented with algorithmic...

Eric Tchetgen Tchetgen presents "An Introduction to Proximal Causal Learning", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, October 21, 2020

A standard assumption for causal inference from observational data is that one has measured a sufficiently rich set of covariates to ensure that, within covariate strata, subjects are exchangeable across observed treatment values. Skepticism about the exchangeability assumption in observational studies is often warranted because it hinges on one's ability to accurately measure covariates capturing all potential sources of confounding. Realistically, confounding mechanisms can rarely, if ever, be learned with certainty from measured covariates. One can therefore only ever hope that...
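
In potential-outcome notation, the exchangeability assumption the abstract questions is the familiar no-unmeasured-confounding condition:

```latex
% Conditional exchangeability: within strata of measured covariates X,
% the potential outcome Y(a) is independent of the observed treatment A.
Y(a) \perp\!\!\!\perp A \mid X \qquad \text{for every treatment value } a .
```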

Luke Miratrix presents "A Practitioner’s Guide to Intent-to-Treat Effects from Multisite (blocked) Individually Randomized Trials: Estimands, Estimators, and Estimates", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, October 14, 2020
There are many ways to estimate an overall average effect of a large-scale multisite individually randomized controlled trial. The researcher can target the average effect across individuals or sites. Furthermore, the researcher can target the effect for the experimental sample or a larger population. If treatment effects vary across sites, these estimands can differ. Once an estimand is selected, an estimator must be chosen. Standard estimators, such as fixed-effects regression, can be biased. We describe 15 different estimators commonly in use, consider which estimands they are...
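
Concretely, with J sites, average effect tau_j and sample size n_j at site j, the two sample estimands contrasted above can be written as:

```latex
% Individual-average vs. site-average treatment effects: these coincide
% when effects are constant across sites or all sites have equal size.
\tau_{\text{individual}} = \frac{\sum_{j=1}^{J} n_j \tau_j}{\sum_{j=1}^{J} n_j},
\qquad
\tau_{\text{site}} = \frac{1}{J} \sum_{j=1}^{J} \tau_j .
```
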
Michael Baiocchi presents "When black box algorithms are (not) appropriate: a principled prediction-problem ontology", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, September 30, 2020

In the 1980s a new, extraordinarily productive way of reasoning about algorithms emerged. Though this type of reasoning has come to dominate areas of data science, it has been under-discussed and its impact under-appreciated. For example, it is the primary way we reason about “black box” algorithms. In this talk we discuss its current use (i.e., as “the common task framework”) and its limitations; we find that a large class of prediction problems is inappropriate for this type of reasoning. Further, we find the common task framework does not provide a foundation for the deployment of an...

Reagan Mozer presents "Recent Adventures in Causal(ish) Inference with Text as Data.", at https://harvard.zoom.us/j/99424949004?pwd=aWtPNFM3ZzFYbWxIMXNoZDlyUElVZz09, Wednesday, September 23, 2020
Text data have a long history in social science and education research. However, these data are notoriously high-dimensional and characterized by many nuances of language that lack plausible statistical models. As a result, analysis of text data typically involves intensive human coding tasks in which particular constructs or features of the text are first defined, and a collection of documents is then inspected and coded for the presence or absence of these constructs. While this process may be feasible in studies with smaller sample sizes, the time and resources required to train and employ...
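
A minimal sketch of the standard supervised pipeline such hand-coding feeds into, assuming scikit-learn is available; the documents and labels here are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical hand-coded training documents, labeled 1/0 for the
# presence or absence of the construct of interest.
coded_texts = ["clear explanation of the method", "no relevant content here",
               "the method is described in detail", "off-topic remarks only"]
coded_labels = [1, 0, 1, 0]

# Fit a simple bag-of-words classifier on the human-coded documents ...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(coded_texts, coded_labels)

# ... then extend the coding to documents no human has read.
uncoded_texts = ["a detailed description of the method", "unrelated chatter"]
print(model.predict(uncoded_texts))
```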
