Key takeaways from CS 288 “AI for social impact”

About three-quarters of the way through the semester, we asked our students what they thought were the key takeaways from this class. Here is some of what they suggested.

 

AI for social impact pipeline

The entire pipeline shown below is part of AI for social impact. It's not just the predictive model, and not just the prescriptive algorithm: everything from immersion to define the problem, all the way through deployment and feedback, is part of AI for social impact. AI conferences should take note.



 

Stakeholder engagement

  • In order to achieve true social impact, it is important for key stakeholders and domain experts to be involved in discussions with AI researchers from Day #1. We should aim to center the needs of the experts and communities whom we want to support. [Alexander]

  • From previous guest lectures, I learned that many AI4SI projects take several years or even longer to complete, conduct field studies, and realize their promised social impact. I think this is a field that requires constant engagement with the stakeholders (e.g. homeless youths, WWF rangers) and long-lasting passion. I am very grateful to the many researchers and volunteers working on these kinds of problems for the well-being of others. [Nicole]

Data in AI for Social Impact

  • Doing the paper evaluation reinforced for me that the availability of data is still one of the biggest barriers to the development of artificial intelligence algorithms. I hope to learn more about how we can innovate in dataset creation, data augmentation, and the accessibility of these datasets. [Sayak]

  • Data can be very scarce or expensive to collect in some settings where AI could be useful, while in other settings we have so much data we don't know what to do with it. It is hard but important to have a conversation about the ethics of this data, especially regarding privacy, safety, fairness, and interpretability. [Maggie]

AI Innovation

  • Especially in areas like conservation in AI4SI, better tech may not lead to more impact. The entire flow needs to be considered with key experts, stakeholders, and constituents involved—and common language should be adopted and adapted to fit the people involved. [Maggie]

  • There is still a long way to go in establishing trust and understanding between data science researchers and experts in other fields (environmental, biological, social, etc.), as well as policymakers. Our last few months in 288 have made me realize how many students in CS hold a strong belief that computer science and data modeling can help in almost any scenario, yet fail to recognize how tricky it really is to forge a bond of trust and make a tangible impact. [Eric Lin]

  • A mismatch in incentives creates a significant gap between academic research and real-world deployment. Academia rewards novelty and publication, but this means that potentially impactful solutions are often not seen through to the finish line by those with the technical expertise needed to implement them. Even if a research team creates software that can be used by an NGO or government entity, there may not be the capacity or the funding for constant maintenance, without which the project will still fail. [Sherry]

 

How to measure performance

  • The success of an AI for Social Impact project is not measured by its performance on benchmarks but by actual social impact. As intuitive as this is to me now, the first time Prof. Tambe mentioned this I thought this was a fairly radical idea. [Gokhan]

  • True impact is often difficult to measure. In many cases, when one works on research that is more preventative than reactive (e.g. security, public health policy), it is impossible to see the counterfactual (i.e. what would have happened if we had not deployed this research). [Alexander]

  • Measuring the social impact of a system is not straightforward, and the implementation pipeline needs to be carefully thought out. The entire lecture on implementation science, and the related papers, put a lot of things in perspective. [Susobhan]

  • Demonstrating impact is often challenging and requires the system to be in place for multiple years. AI4SG projects often don’t have fast feedback loops, which makes normal agile iteration infeasible. The time scale of AI4SG is very different from that of other fields of computer science, and this conflict can affect the impact of the project (e.g. systems may need support for many years). [Colby]

  • It is extremely hard to measure social impact and evaluate how effective a model or intervention is. The goal of a project may be to prevent a negative outcome, but this outcome may already be a relatively rare occurrence (low probability, high damage); it may take 10+ years to observe a significant impact, but tracking progress for this long would be extremely costly and infeasible. Some of the papers we’ve read have presented creative alternatives (e.g. comparing to other models or to human-designed allocations as baselines), but theoretical benchmarks often fail to measure true impact; there is no substitute for going into the field and engaging throughout the entire process. [Sherry]

 

Ethics

  • It might seem like ethical concerns are less of an issue for AI for social impact research, since we are technically doing good, but in many ways ethics are arguably even more important here because our research directly impacts society and may have unintended consequences that affect real people and real situations. Therefore, it is important to communicate with domain experts to understand how we can design ethical systems, and to continue to revise our systems when we discover unforeseen negative outcomes during initial deployment. [Rachel]

  • From this class, I’ve picked up a better sense of how to spot concerns about data privacy and pay attention to biases being built into a system. It’s the human at the heart of the problems we are solving that matters most. In addition, the ethical side is always crucial to consider. [Sayak]

  • No matter how good the intentions behind the AI algorithms are, we need to push ourselves to think outside the box about how different stakeholders are affected as a result of these algorithms. [Manana]

 

Risks

 
  • We learned about the importance of critically evaluating all the various risks that can come from deploying AI models in social impact contexts. Often, there are tradeoffs between models or in the choice to implement a model or not, for which there are rarely simple "right or wrong" decisions. [Noah]