Global AI Adoption Is Outpacing Risk Understanding, Warns MIT CSAIL
As organizations rush to implement artificial intelligence (AI), a new analysis of AI-related risks finds significant gaps in our understanding, highlighting an urgent need for a more comprehensive approach.
AI adoption is rising rapidly: census data show a 47% relative increase in AI usage within US industries, from 3.7% to 5.45%, between September 2023 and February 2024. However, a comprehensive review by researchers at MIT CSAIL and MIT FutureTech has uncovered critical gaps in existing AI risk frameworks. Their analysis shows that even the most thorough individual framework overlooks approximately 30% of the risks identified across all reviewed frameworks.
To help address this, the researchers collaborated with colleagues from the University of Queensland, the Future of Life Institute, KU Leuven, and Harmony Intelligence to release the first-ever AI Risk Repository: a comprehensive, accessible, living database of more than 700 risks posed by AI, which will be expanded and updated to remain current and relevant.
“Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots,” says Dr. Peter Slattery, an incoming postdoc at the MIT FutureTech Lab and current project lead.
After searching several academic databases, engaging experts, and retrieving more than 17,000 records, the researchers identified 43 existing AI risk classification frameworks. From these, they extracted more than 700 risks. They then used taxonomies adapted from two existing frameworks to categorize each risk by its cause (e.g., when or why it occurs), risk domain (e.g., “Misinformation”), and risk subdomain (e.g., “False or misleading information”).
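To make the two-part classification concrete, here is a minimal, hypothetical sketch in Python of how a single repository entry might be represented along a causal taxonomy and a domain taxonomy. The field names and example values are illustrative assumptions, not the repository's actual schema.

```python
# Hypothetical sketch of one repository entry (not the authors' actual schema).
from dataclasses import dataclass

@dataclass
class RiskEntry:
    description: str   # the risk as extracted from the source document
    source: str        # the framework or document it was extracted from
    # Causal taxonomy: when or why the risk occurs
    entity: str        # e.g., "AI" or "Human"
    timing: str        # e.g., "Pre-deployment" or "Post-deployment"
    # Domain taxonomy: thematic grouping of the risk
    domain: str        # e.g., "Misinformation"
    subdomain: str     # e.g., "False or misleading information"

# Purely illustrative example entry
entry = RiskEntry(
    description="Model generates convincing but false claims",
    source="Example framework (hypothetical)",
    entity="AI",
    timing="Post-deployment",
    domain="Misinformation",
    subdomain="False or misleading information",
)
print(entry.domain, "->", entry.subdomain)
```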
Examples of risks identified include “Unfair discrimination and misrepresentation”, “Fraud, scams, and targeted manipulation”, and “Overreliance and unsafe use.” More of the risks analyzed were attributed to AI systems (51%) than to humans (34%), and more were presented as emerging after AI was deployed (65%) rather than during its development (10%). The most frequently addressed risk domains included “AI system safety, failures, and limitations” (76% of documents); “Socioeconomic and environmental harms” (73%); “Discrimination and toxicity” (71%); “Privacy and security” (68%); and “Malicious actors and misuse” (68%). In contrast, “Human-computer interaction” (41%) and “Misinformation” (44%) received comparatively little attention.
Some risk subdomains were discussed more frequently than others. For example, “Unfair discrimination and misrepresentation” (63%), “Compromise of privacy” (61%), and “Lack of capability or robustness” (59%) were each mentioned in more than 50% of documents. Others, such as “AI welfare and rights” (2%), “Pollution of information ecosystem and loss of consensus reality” (12%), and “Competitive dynamics” (12%), were mentioned in fewer than 15% of documents.