Global AI experts warn of malicious use of AI in the coming decade
Twenty-six experts on the security implications of emerging technologies have jointly authored an important new report, sounding the alarm about the potential malicious use of artificial intelligence (AI) by rogue states, criminals, and terrorists.
Forecasting rapid growth in cyber-crime and the misuse of drones over the next decade, as well as an unprecedented rise in the use of “bots” to manipulate everything from elections to the news agenda and social media, the report calls for governments and corporations worldwide to address the clear and present danger inherent in the myriad applications of AI.
The report – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation – also recommends interventions to mitigate the threats posed by the malicious use of AI:
— Policy-makers and technical researchers need to work together now to understand and prepare for the malicious use of AI.
— AI has many positive applications, but it is a dual-use technology, and AI researchers and engineers should be mindful of and proactive about the potential for its misuse.
— Best practices can and should be learned from disciplines with a longer history of handling dual-use risks, such as computer security.
— The range of stakeholders engaging with preventing and mitigating the risks of malicious use of AI should be actively expanded.
The co-authors come from a wide range of organizations and disciplines, including Oxford University’s Future of Humanity Institute; Cambridge University’s Centre for the Study of Existential Risk; OpenAI, a leading non-profit AI research company; the Electronic Frontier Foundation, an international non-profit digital rights group; and the Center for a New American Security, a U.S.-based bipartisan national security think-tank.
The 100-page report identifies three security domains (digital, physical, and political security) as particularly relevant to the malicious use of AI. It suggests that AI will erode the traditional trade-off between the scale and efficiency of attacks, enabling attacks that are at once large-scale, finely targeted, and highly efficient.
Cambridge says that the authors expect novel cyberattacks such as automated hacking, speech synthesis used to impersonate targets, finely targeted spam emails using information scraped from social media, and attacks that exploit the vulnerabilities of AI systems themselves.