Governing Artificial Intelligence: A Conversation with Rumman Chowdhury

However, in a soon-to-be-published study that I’m conducting, I find that, broadly speaking, most model evaluators (defined very broadly) want the same things: secure access to an application programming interface, datasets they can use to test models, an idea of how the model is used and how it participates in an algorithmic system, and the ability to create their own test metrics. Interestingly, not a single interviewee asked for model data or code directly, which is concerning, as this is often a controversial touchpoint between regulators, policymakers, and companies.

Artificial intelligence is a multi-use technology, but that does not necessarily mean it should be used as a general-purpose technology. What potential uses of AI most concern you, and can those be constrained or prevented?
Any unmediated use that directly makes a decision affecting the quality of life of a human being. By “unmediated” I mean without meaningful human input and the ability to make an informed decision regarding the model. This applies to a massively broad range of uses for AI systems.

The market of powerful AI tools is growing exponentially, as is easy, public access to those tools. Although calls for AI governance are increasing, governance will struggle to keep pace with AI’s market and technological evolution. What elements of AI governance are most critical to achieve in the immediate term, and which are the most feasible to achieve in the immediate term?
I do not think we need regulation that moves at the pace of every new innovation, but rather regulatory institutions and systems that are flexible enough to accommodate the creation of new algorithmic capabilities. What we lack today in Responsible AI are legitimate, empowered institutions that have clear guidelines, remits, and subject matter expertise.
Mission-critical are: transparency and accountability (see above), clear definitions of algorithmic auditing, and legal protections for third-party assessors and ethical hackers.

Large digital platforms will be a key vector for disseminating AI-generated content. How can existing standards and norms in platform governance be leveraged to mitigate the spread of harmful AI-generated content, and how should they be expanded to address that threat?  
Generative AI will supercharge the dissemination of deepfake content for malicious use. While they are insufficient, we can learn from how platforms have used narrow AI and ML alongside human decisioning to address issues of toxicity, radicalization, online bullying, online gender-based violence, and more. These systems, policies, and approaches need significant investment and improvement.

Is there anything else you’d like to address about AI development or governance?
The missing part of the story is public feedback. Today there is a broken feedback loop between the public, government, and companies. It’s important to invest in methods of structured public feedback, everything from expert and broad-based red teaming to bias bounties and more, to identify and mitigate AI harms.

Kat Duffy is Senior Fellow for Digital and Cyberspace Policy at CFR. Rumman Chowdhury runs Parity Consulting and the Parity Responsible Innovation Fund and is a Responsible AI Fellow at the Berkman Klein Center for Internet & Society at Harvard University. Kyle Fendorf is Research Associate for Digital and Cyberspace Policy at CFR. This article is published courtesy of the Council on Foreign Relations (CFR).