Governing Artificial Intelligence: A Conversation with Rumman Chowdhury

By Kat Duffy, Rumman Chowdhury, and Kyle Fendorf

Published 31 October 2023

Artificial intelligence, and its risks and benefits, has rapidly entered the popular consciousness in the past year. Kat Duffy and Dr. Rumman Chowdhury discuss how society can mitigate problems and ensure AI is an asset.

Dr. Rumman Chowdhury has built solutions in the field of applied algorithmic ethics since 2017. She is the CEO and co-founder of Humane Intelligence, a nonprofit dedicated to algorithmic access and transparency, and was recently named one of Time Magazine’s 100 Most Influential People in AI 2023. Previously, she was the Director of the Machine Learning Ethics, Transparency, and Accountability team at Twitter. 

Artificial intelligence’s transformational possibility is currently the focus of conversations everywhere from kitchen tables to UN summits. What can be built today with AI to solve one of society’s big challenges, and how can we drive attention and investment towards it?
Investment in technological innovation needs to go hand in hand with investment in AI systems that can protect humans from the amplification of algorithmic bias. This might include new techniques for adversarial AI models that identify misinformation, toxic speech, or hateful content; it could also mean more investment in proactive methods for identifying illegal and malicious deepfakes, and more.
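As a minimal illustrative sketch of what one such detection component might look like in practice, the example below screens text with an off-the-shelf toxicity classifier. It assumes a recent version of the Hugging Face transformers library and the publicly available unitary/toxic-bert model; both the model choice and the threshold are illustrative assumptions, not a system discussed in this conversation.

```python
# Illustrative sketch only: screening text with an off-the-shelf toxicity classifier.
# Assumes a recent version of the Hugging Face `transformers` library and the public
# `unitary/toxic-bert` model; the model choice and threshold are assumptions.
from transformers import pipeline

toxicity_classifier = pipeline("text-classification", model="unitary/toxic-bert")

def flag_toxic(texts, threshold=0.8):
    """Return the subset of texts whose 'toxic' score meets the threshold."""
    all_scores = toxicity_classifier(texts, top_k=None)  # scores for every label
    flagged = []
    for text, scores in zip(texts, all_scores):
        toxic_score = next((s["score"] for s in scores if s["label"] == "toxic"), 0.0)
        if toxic_score >= threshold:
            flagged.append(text)
    return flagged

if __name__ == "__main__":
    sample = ["Thanks for sharing this report!", "You are all worthless idiots."]
    print(flag_toxic(sample))
```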

Driving investment to this is simple: for every funding ask to develop some new AI capability, there must be an equal investment in the research and development of systems to mitigate the inevitable harms that will follow.

The data underlying large language models raises fundamental questions about accuracy and bias, and whether these models should be accessible, auditable, or transparent. Is it possible to establish meaningful accountability or transparency for LLMs, and if so, what are effective means of achieving that?
Yes, but defining transparency and accountability has been the trickiest part. A new resource from Stanford’s Center for Research on Foundation Models (CRFM) illustrates how complex the question is. The Center recently published an index on the transparency of foundation models, which scores the developers of foundation models (companies such as Google, OpenAI, and Anthropic) against one hundred different indicators designed to characterize transparency. These cover everything from what went into building a model, to the model’s capabilities and risks, to how it is being used. In other words, clarifying what meaningful transparency looks like is a huge question, and one that will continue to evolve. Accountability is tricky as well: we want harms to be identified and addressed proactively, but it is hard to conceive of a method of accountability that is not reactive.
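To make the idea of indicator-based scoring concrete, here is a small, hypothetical sketch of how an index in this spirit might aggregate yes/no indicators into a per-developer transparency score. The indicator names and values are invented for illustration and do not reflect the actual CRFM index or its data.

```python
# Hypothetical sketch: aggregating yes/no transparency indicators into a score.
# The indicators and values below are invented for illustration only.
from dataclasses import dataclass

@dataclass
class DeveloperAssessment:
    developer: str
    indicators: dict[str, bool]  # e.g. "discloses training data sources" -> True/False

    def score(self) -> float:
        """Percentage of indicators satisfied."""
        if not self.indicators:
            return 0.0
        return 100.0 * sum(self.indicators.values()) / len(self.indicators)

assessments = [
    DeveloperAssessment("ExampleLab A", {
        "discloses training data sources": True,
        "documents model capabilities": True,
        "documents known risks": False,
        "reports downstream usage": False,
    }),
    DeveloperAssessment("ExampleLab B", {
        "discloses training data sources": False,
        "documents model capabilities": True,
        "documents known risks": True,
        "reports downstream usage": True,
    }),
]

for a in sorted(assessments, key=lambda x: x.score(), reverse=True):
    print(f"{a.developer}: {a.score():.0f}% of indicators satisfied")
```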