Global AI experts warn of malicious use of AI in the coming decade
The report cites examples such as the use of speech synthesis to impersonate targets, finely targeted spam emails composed from information scraped from social media, and the exploitation of vulnerabilities in AI systems themselves (for example, through adversarial examples and data poisoning).
Likewise, the proliferation of drones and cyber-physical systems will allow attackers to deploy or repurpose such systems for harmful ends, such as crashing fleets of autonomous vehicles, turning commercial drones into face-targeting missiles, or holding critical infrastructure to ransom. The rise of autonomous weapons systems on the battlefield risks the loss of meaningful human control and presents tempting targets for attack.
In the political sphere, detailed analytics, targeted propaganda, and cheap, highly believable fake videos present powerful tools for manipulating public opinion on previously unimaginable scales. The ability to aggregate, analyze, and act on citizens' information at scale using AI could enable new levels of surveillance and invasion of privacy, and threatens to radically shift the balance of power between individuals, corporations, and states.
To mitigate such risks, the authors explore several interventions: rethinking cybersecurity, exploring different models of openness in information sharing, promoting a culture of responsibility, and seeking both institutional and technological solutions that tip the balance in favor of those defending against attacks.
The report also "games" several scenarios in which AI might be maliciously used, illustrating the potential threats we may face in the coming decade.
While the design and use of dangerous AI systems by malicious actors have been highlighted in high-profile settings (for example, by the U.S. Congress and the White House), the intersection of AI and misuse writ large had not been analyzed comprehensively – until now.
Added Dr. Ó hÉigeartaigh: “Artificial intelligence is a game changer and this report has imagined what the world could look like in the next five to ten years.
“For many decades hype outstripped fact in terms of AI and machine learning. No longer. This report looks at the practices that just don’t work anymore – and suggests broad approaches that might help: for example, how to design software and hardware to make it less hackable – and what type of laws and international regulations might work in tandem with this.”
Miles Brundage, Research Fellow at Oxford University’s Future of Humanity Institute, said: “AI will alter the landscape of risk for citizens, organizations and states – whether it’s criminals training machines to hack or ‘phish’ at human levels of performance or privacy-eliminating surveillance, profiling and repression – the full range of impacts on security is vast.
“It is often the case that AI systems don’t merely reach human levels of performance but significantly surpass them. It is troubling, but necessary, to consider the implications of superhuman hacking, surveillance, persuasion, and physical target identification, as well as AI capabilities that are subhuman but nevertheless much more scalable than human labor.”
— Read more in Miles Brundage et al., The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (Future of Humanity Institute, University of Oxford, February 2018)