Global AI Adoption Is Outpacing Risk Understanding, Warns MIT CSAIL
On average, the frameworks reviewed mentioned just 34% of the 23 identified risk subdomains (roughly eight of them), and nearly a quarter covered fewer than 20%. No single document mentioned all 23 subdomains; even the most comprehensive (Gabriel et al., 2024) covered only 70%.
The work addresses an urgent need: helping decision-makers in government, research, and industry understand, prioritize, and collectively address the risks from AI. “Many AI governance initiatives are emerging across the world focused on addressing key risks from AI,” says collaborator Risto Uuk, EU Research Lead at the Future of Life Institute. “These institutions need a more comprehensive and complete understanding of the risk landscape.”
Researchers and risk evaluation professionals are also impeded by the fragmentation of the current literature. “It is hard to find specific studies of risk in some niche domains where AI is used, such as weapons and military decision support systems,” explains Taniel Yusef, a Research Affiliate at the Centre for the Study of Existential Risk at the University of Cambridge, who was not involved in the research. “Without referring to these studies, it can be difficult to speak about technical aspects of AI risk to non-technical experts. This repository helps us do that.”
“There’s a significant need for a comprehensive database of risks from advanced AI which safety evaluators like Harmony Intelligence can use to identify and catch risks systematically,” argues collaborator Soroush Pour, CEO & Co-founder of AI safety evaluations and red teaming company Harmony Intelligence. “Otherwise, it’s unclear what risks we should be looking for, or what tests need to be done. It becomes much more likely that we miss something by simply not being aware of it.”
AI’s Risky Business
The researchers built on two existing frameworks (Yampolskiy, 2016; Weidinger et al., 2022) to categorize the risks they extracted, grouping them in two ways.
First, by causal factors:
- Entity: Human, AI, and Other;
- Intentionality: Intentional, Unintentional, and Other; and
- Timing: Pre-deployment, Post-deployment, and Other.
Second, by seven AI risk domains:
- Discrimination & toxicity,
- Privacy & security,
- Misinformation,
- Malicious actors & misuse,
- Human-computer interaction,
- Socioeconomic & environmental, and
- AI system safety, failures, & limitations.
These are further divided into 23 subdomains (full descriptions are available in the repository); a brief illustrative sketch of the taxonomy follows the list:
- 1.1. Unfair discrimination and misrepresentation
- 1.2. Exposure to toxic content
- 1.3. Unequal performance across groups
- 2.1. Compromise of privacy by leaking or correctly inferring sensitive information
- 2.2. AI system security vulnerabilities and attacks
- 3.1. False or misleading information
- 3.2. Pollution of the information ecosystem and loss of consensus reality
- 4.1. Disinformation, surveillance, and influence at scale
- 4.2. Cyberattacks, weapon development or use, and mass harm
- 4.3. Fraud, scams, and targeted manipulation
- 5.1. Overreliance and unsafe use
- 5.2. Loss of human agency and autonomy
- 6.1. Power centralization and unfair distribution of benefits
- 6.2. Increased inequality and decline in employment quality
- 6.3. Economic and cultural devaluation of human effort
- 6.4. Competitive dynamics
- 6.5. Governance failure
- 6.6. Environmental harm
- 7.1. AI pursuing its own goals in conflict with human goals or values
- 7.2. AI possessing dangerous capabilities
- 7.3. Lack of capability or robustness
- 7.4. Lack of transparency or interpretability
- 7.5. AI welfare and rights
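To make the two-way grouping concrete, here is a minimal sketch, in Python, of how a single repository entry might be tagged along both taxonomies. Every class and field name below is hypothetical, introduced only for illustration; the actual repository is distributed as a living database, not as code.

```python
# Illustrative sketch only: names are hypothetical, not the repository's schema.
from dataclasses import dataclass
from typing import Literal

# The three causal factors, each with an "Other" catch-all category.
Entity = Literal["Human", "AI", "Other"]
Intentionality = Literal["Intentional", "Unintentional", "Other"]
Timing = Literal["Pre-deployment", "Post-deployment", "Other"]

@dataclass
class Risk:
    """One extracted risk, tagged along both the causal and domain taxonomies."""
    description: str
    entity: Entity
    intentionality: Intentionality
    timing: Timing
    domain: str     # one of the seven domains
    subdomain: str  # one of the 23 subdomains

# Example: an unintentional privacy leak that emerges after deployment.
example = Risk(
    description="Model inadvertently reveals sensitive personal information",
    entity="AI",
    intentionality="Unintentional",
    timing="Post-deployment",
    domain="Privacy & security",
    subdomain="2.1. Compromise of privacy by leaking or correctly inferring sensitive information",
)
```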
“The AI Risk Repository is, to our knowledge, the first attempt to rigorously curate, analyze, and extract AI risk frameworks into a publicly accessible, comprehensive, extensible, and categorized risk database. It is part of a larger effort to understand how we are responding to AI risks and to identify if there are gaps in our current approaches,” says Dr. Neil Thompson, head of the MIT FutureTech Lab and one of the lead researchers on the project. “We are starting with a comprehensive checklist, to help us understand the breadth of potential risks. We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”
The next phase will involve experts evaluating and prioritizing the risks within the repository, then using it to analyze public documents from influential AI developers and large companies. The analysis will examine whether organizations are responding to AI risks in proportion to experts’ concerns, and will compare risk management approaches across different industries and sectors.
The repository is freely available online to download, copy, and use, and the team welcomes feedback and suggestions.
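For readers who want to explore the data programmatically, here is a hedged usage sketch: it assumes the downloaded repository has been exported to a local CSV with columns named "Domain" and "Subdomain" (hypothetical names; check the actual export's headers before running).

```python
# Hypothetical sketch: assumes a local CSV export of the repository with
# "Domain" and "Subdomain" columns; verify these against the real export.
import pandas as pd

df = pd.read_csv("ai_risk_repository.csv")  # hypothetical filename

# Count entries per domain, then drill into one domain's subdomains.
print(df["Domain"].value_counts())
privacy = df[df["Domain"].str.contains("Privacy", na=False)]
print(privacy["Subdomain"].value_counts())
```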
Peter Slattery is a Researcher at MIT FutureTech at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Rachel Gordon is the Communications and Media Relations Officer at MIT’s CSAIL. Neil Thompson is the Director of the FutureTech research project at MIT’s CSAIL. The story was originally posted to the website of MIT’s CSAIL.