AI Disinformation: Lessons from the UK Election
The latter threat has been underscored by comments from Australia’s Director-General of Security, Mike Burgess, last week, when he helped announce the raising of the country’s terrorism threat level. The decision was based in part, Burgess said, on the fact that people with violent intent were ‘motivated by a diversity of grievances and personal narratives’ and were ‘interacting in ways we have not seen before’.
As a result, the risk of mis- and disinformation influencing election outcomes has become much more serious.
In the UK general election, however, generative AI turned out to play a lesser role than traditional automated threats. For instance, several investigations into election-related content on online platforms found the hallmarks of bot accounts seeking to sow division over controversial campaign issues such as immigration.
Some had possible links to Russia and pushed pro-Kremlin narratives about the war in Ukraine. While these bot activities did include a few instances of AI-generated election material being circulated, the majority relied on a well-established tactic known as ‘astroturfing’, in which large numbers of automated accounts inflate the perceived popular support for a particular policy stance or political candidate by spamming thousands of fake comments on relevant social media posts.
Alongside these bot incidents, the UK was targeted by a fake-news operation with strong connections to a Russian-affiliated disinformation network called Doppelganger. Known as ‘CopyCop’, the operation spread fictitious articles about the war in Ukraine to confuse the UK public and reduce support for military aid. As part of CopyCop, real news stories were pasted into AI chatbots and rewritten to align with the network’s strategic aims.
However, many of the resulting articles still contained the chatbot prompts, betraying obvious signs of AI editing, and therefore failed to attract much engagement. That said, some were picked up by Russian media influencers and spread across their channels to tens of thousands of users. Often, the articles’ real origins were concealed through a tactic called ‘information laundering’, in an effort to trick users into assuming the content originated from a credible news outlet.
While these disinformation activities can be connected to hostile foreign states, most viral misleading AI content in the UK election came from members of the public. It included deepfakes that depicted political candidates making controversial statements they never made. Interestingly, many of the users behind this content claimed they were acting for satirical or ‘trolling’ purposes. Others may have pushed the content to boost support for their preferred political party, or because they were disillusioned with conventional political campaigns. This range of motives highlights the new sources of risk and the expanded threat landscape that stem from such wide access to generative AI systems.
Taken together, the most prominent disinformation problems during the UK election did not arise from novel AI technology, but from longstanding issues tied to social media platforms – including the role of influencer accounts and recommender algorithms.
As we look ahead to the US election in November, it is vital that these platforms co-ordinate with other sectors and invest in measures to protect users.
Such measures include red-teaming exercises, clear labels on AI-generated political adverts, and engagement with fact-checking organizations to detect malicious content before it goes viral.
And with Australia facing its own federal election in the next nine months, continued scrutiny of the risks, the malicious perpetrators and the emerging measures to combat them is also vital to the country’s interests.
Sam Stockwell is a research associate at the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute in the UK. This article is published courtesy of the Australian Strategic Policy Institute (ASPI). It is part of a short series The Strategist is running in the lead-up to ASPI’s Sydney Dialogue on September 2 and 3. The event will cover key topics in critical, emerging and cyber technologies, including disinformation, electoral interference, artificial intelligence, hybrid warfare, clean technologies and more.