In the 1980s, a new and extraordinarily productive way of reasoning about algorithms emerged. Although this type of reasoning has come to dominate areas of data science, it remains under-discussed and its impact under-appreciated. For example, it is the primary way we reason about "black box" algorithms. In this talk we discuss its current use, as "the common task framework," and its limitations; we find that a large class of prediction problems is inappropriate for this type of reasoning. Further, we find that the common task framework does not provide a foundation for deploying an algorithm in a real-world setting. Building on its core features, we identify a class of problems where this new form of reasoning can be used in deployment. We deliberately develop a novel framework so that both technical and non-technical people can discuss and identify the key features of their prediction problem and determine whether it is suitable for this new kind of reasoning.
The paper is available here.