
Syllabus CS 288

“AI for Social Impact”

 

Harvard University Spring 2021

Milind Tambe (Instructor)

 

TFs: Boriana Gjura & Doria Speigel

Course Description

The key thrust behind the fast-emerging area of “AI for social impact” has been to apply AI research to address societal challenges. AI has great potential to provide tremendous societal benefits in the future. In this course, we will discuss successful deployments and the potential use of AI in topics that are essential for social good, including but not limited to health, environmental sustainability, public safety and public welfare. In AI, we have only recently begun to define this area as its own area of research, and to understand that it includes more than simply providing methodological advances in the form of newer models and algorithms. This course focuses on understanding the latest research in this area and discussing its foundations. In doing so, we will familiarize ourselves with key open questions in this emerging area of research.

Learning Objectives

(Image: Computing Community Consortium logo)

Over the past few decades, there have been major advances in Artificial Intelligence. Given the intense interest and investment in AI across industry, government and academia, now is the time to focus our energies on applying AI to solving complex social problems in health, sustainability, community violence and safety, and in assisting low-resource communities.

This area of “AI for Social Impact” or “AI for Social Good” is fast emerging as a new field of scientific and technological endeavor that uses Artificial Intelligence to solve wicked societal problems. (We choose the term “AI for Social Impact” to emphasize the need to achieve social impact.) This course is intended to provide students with an understanding of this growing area of research and to familiarize them with the latest work in it.

Here are some key questions we wish to address. These questions show that in AI for Social Impact, we are interested in topics beyond algorithmic improvements: we want to deliver actual social impact on the ground. We will often approach these learning goals via case studies or applications, but where possible we will also include papers that try to define foundations for this area of work. Key topics we will cover include the following:

Definitions

How do we define the area of “AI for social impact” or “AI for social good”, and how does this research differ from a more traditional approach to AI research? While there is some agreement on what types of applications belong to this area, the agreement is not universal. One defining characteristic that we will embrace is that, for work to count as “AI for social impact”, it must ultimately achieve social impact. It may be useful to contrast a more “traditional model” of AI research with “social impact driven” AI research, drawing inspiration from “Pasteur’s quadrant” (Stokes, Donald E. (1997). Pasteur's Quadrant – Basic Science and Technological Innovation. Brookings Institution Press. p. 196. ISBN 9780815781776).

What might be considered a more traditional model of AI research: we mostly work on research ideas that provide methodological advances in the lab, producing papers that demonstrate an idea in simulation or on robots. We may use real-world data and demonstrate advances over known benchmarks, but still in the lab. This is where we advance our basic science: models and algorithms. The idea is that such models might eventually influence products and policy, but that step is left to someone else who is influenced by these ideas.

AI for social impact research: we start with a societal challenge and attempt to address it by providing the right AI tool, which often requires some methodological advance. So on the one hand we wish to provide a methodological advance, and on the other, to show actual impact on society.

In this view, AI for social impact actually includes the entire pipeline shown below: not just the algorithmic portion, but everything from the HCI component of “immersion” all the way to field testing and deployment.

Problems, tools, approaches

What is the overall “AI for social impact” problem-solving process?

We will describe one way of characterizing this process; it is not the only way, and other, more detailed views of the process are available.

As a first step, immersion in the domain is crucial to gain a critical understanding of the problems, constraints and datasets. This step may involve discussions with various stakeholders, including the impacted community. It is also important at this stage to build interdisciplinary partnerships and understand the challenges from the perspective of domain experts. Another aspect of this step is understanding data limitations and how to address them.

Following an in-depth understanding via immersion, the next step is building a predictive model using machine learning or domain expert input; such a model may, for example, predict which cases in a population are high risk versus low risk. Next comes the prescriptive algorithm phase, which plans interventions -- we will use the example of “game theoretic reasoning”, but it could be any relevant intervention-planning approach. Work on AI for social impact is often focused on domains where access to data is difficult (e.g., low-resource communities or emerging market countries), and thus the challenge is often to plan interventions despite data that is uncertain and sparse.
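To make the predict-then-prescribe structure concrete, here is a minimal, illustrative Python sketch (not part of the course materials): a hypothetical risk model is trained on past cases, and a limited budget of interventions is then allocated to the highest-risk new cases. The data, variable names, and the simple greedy allocation rule are all assumptions for illustration; a real deployment would replace the greedy rule with, e.g., game-theoretic or optimization-based planning.

```python
# Illustrative sketch of the predict -> prescribe pipeline (hypothetical data and names).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- Predictive step: learn a risk model from (hypothetical) historical data ---
X_train = rng.normal(size=(500, 4))                                  # features for 500 past cases
y_train = (X_train[:, 0] + rng.normal(size=500) > 1.0).astype(int)   # observed outcomes
risk_model = LogisticRegression().fit(X_train, y_train)

# --- Prescriptive step: plan interventions for a new population under a budget ---
X_new = rng.normal(size=(100, 4))             # features for 100 current cases
risk = risk_model.predict_proba(X_new)[:, 1]  # predicted probability of a bad outcome
budget = 10                                   # suppose we can only intervene on 10 cases

# A simple prescriptive rule: greedily target the highest-risk cases.
chosen = np.argsort(risk)[::-1][:budget]
print("Intervene on cases:", sorted(chosen.tolist()))
```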

Finally, we are also keenly interested in the final step of field testing and deployment. This is not only because we are interested in the social impact, but because these tests are often how we can learn key limitations of our models and algorithms, often leading to fundamental new research challenges to address.  This research critically requires interdisciplinary partnerships for immersion and field testing; often we find such interdisciplinary partnerships lead to new research that is outside the scope of any one discipline.

(Figure: the AI for Social Impact pipeline described above, from immersion through predictive and prescriptive modeling to field testing and deployment. Milind Tambe, AI4SI.)

What are some types of problems solved, and what types of AI tools are available to solve them?

What types of societal challenges can be tackled or have been tackled with work in “AI for social impact”? This includes work in public health, conservation, climate change, disaster response, public safety, education, and many other topics. 

In addition to problem types, it is important to understand the types of tools used in AI for social impact, and it may be feasible to categorize the key problem types in these areas along with the AI tools used to address them. In this course, we will emphasize tools from the subarea of AI called multiagent systems, but remain open to other types of tools. Simultaneously, we wish to provide students with at least an introduction to the underlying AI tools and approaches. There is a vast array of AI tools to consider; this course will emphasize tools and techniques from agents and multiagent systems research, e.g., MDPs/POMDPs, game theory, distributed constraint optimization, social influence maximization, human behavior modeling and others. In addition, we will of course use standard techniques in reinforcement learning and deep learning, as well as recent advances in decision-focused learning and game-focused learning.
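As a small taste of one such tool, the sketch below runs value iteration on a toy two-state MDP. The transition probabilities and rewards are invented purely for illustration and are not taken from any course problem or application.

```python
# Minimal value iteration on a toy 2-state, 2-action MDP (numbers are illustrative only).
import numpy as np

n_states, n_actions, gamma = 2, 2, 0.95
# P[a][s, s'] = probability of moving from state s to s' under action a
P = np.array([[[0.9, 0.1], [0.2, 0.8]],    # action 0
              [[0.5, 0.5], [0.1, 0.9]]])   # action 1
# R[s, a] = immediate reward for taking action a in state s
R = np.array([[0.0, 1.0],
              [2.0, 0.0]])

V = np.zeros(n_states)
for _ in range(1000):
    # Q[s, a] = R[s, a] + gamma * sum_{s'} P[a][s, s'] * V[s']
    Q = R + gamma * np.stack([P[a] @ V for a in range(n_actions)], axis=1)
    V_new = Q.max(axis=1)
    delta = np.max(np.abs(V_new - V))
    V = V_new
    if delta < 1e-8:   # stop once values have converged
        break

print("Optimal values:", V, "Greedy policy:", Q.argmax(axis=1))
```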

Challenges in applying AI tools

AI tools may not be immediately applicable out of the box. For example, as discussed earlier, in “AI for social impact” we often face a lack of “big data”. It is important to embrace this challenge.

Impact

Challenges in measuring impact

As mentioned earlier, we actually wish to demonstrate social impact in the real world. However, given a researcher’s resource limits, the right level of demonstration is itself an important question: a pilot study, a long-term study, a working prototype showing long-term technological feasibility, or a fully deployed and operating system. Each of these is still a valuable contribution; they all may pioneer, or show evidence for, some actual AI-based intervention for social impact.

Ethical challenges

Ethical challenges and pitfalls: when applying interventions, it is possible that not all stakeholders benefit equally, or that the interventions cause harm. What are some steps to ensure that we think through these challenges? What downstream effects should we watch out for?

While these questions could be explored in the context of many different domains of application, in order to preserve some coherence we will stick to the topics of public health and the environment (climate change, conservation, agriculture), possibly extending to public safety. We will of course cover case studies from previous papers that inform us about challenges addressed in earlier work. Simultaneously, we will discuss these questions with key domain experts to understand what challenges they foresee for AI in their areas, potentially also offering guidance on the types of projects students may pursue for this class.

The course is intended for graduate students in Computer Science, but students from adjacent disciplines (e.g., EE) are also welcome. We may selectively invite some graduate students from HSPH, EOB or other disciplines.

Grading Breakdown

Schedule of Assignments and Exams

  • Homework 1: Select papers and presentation partner (due Feb 10): 5%
  • Paper presentation and discussion I: 15%
  • Midterm project ideas and discussion: 10%
  • Homework 2: Select papers and presentation partner: 5%
  • Paper presentation and discussion II: 15%
  • Project: 30%
  • Ethics and broader impact in project: 5%
  • Attendance and weekly assigned readings: 15%

Paper discussion and presentation

The key idea here is for students to read and present papers. While we will select papers for the initial readings, we expect students to then select papers themselves and get them approved. Students should also select the dates on which they will present these papers.

What papers to select for readings when we do not provide papers

Students need to pick a paper on AI for Social Impact from calendar years 2020/2021. The paper should have appeared at a top AI conference (AAAI, IJCAI, AAMAS, NeurIPS, ICML, UAI, etc.) and should focus on AI for Social Impact in some way.

This is just one example of a set of papers that would be of interest. We do not want to lock students into one set of papers, because students will come with many diverse backgrounds. We will also point students to other venues for AI for social good papers. These papers must have been rigorously reviewed and typically must have appeared at an AI conference.

Students should check with us once they have selected their papers. In order to preserve some coherence, we will stick to the topics of public health, environment (climate change, conservation, agriculture), community safety, and disaster response.

Guidance for reading and presentation

Paper reading: While the first week of readings will be done individually, readings for later weeks should be done in pairs. We expect everyone to submit a short summary of the papers they have read. Typically there will be one or two required papers to read. The optional readings are included purely in case someone is very interested in the topic.

Paper presentation: This should be done in pairs. Presenters will be asked to present not only a summary but also some evaluation of the paper. For the evaluation, presenters should use the “AAAI rubric” discussed below, though there may be exceptions. At the end of each paper presentation there will be a 10-minute discussion, and presenters should come prepared with at least one discussion question. We expect that others in the class will have read some of the papers being presented.

Projects

The project should relate to the use of AI and multiagent systems research for Social Good, in any of the many application areas discussed in class.

 We will provide various resources to create these projects. Ideally, the projects could lead to publications in key AI venues. 

Project grades

  • Groups of 2-3 students, with the project proposal due by mid-semester (10% of grade) and the final project presentation at the end of the semester (30% of grade). Our teaching staff will provide detailed feedback on the project proposal, which should be addressed in the final project presentation.
  • Main points of evaluation in the 30% grade will include: (i) Relevance to class topic; (ii) Illustration of use of concepts learned in class or from research papers in related areas; (iii) Survey of related research in the field to indicate awareness of this work; (iv) Novelty of ideas. There is considerable flexibility in the kind of project completed.
  • 5% of the grade will be awarded for the broader impact statement of the project. We will discuss this during the embedded ethics lecture.