AI & Extremism
Six Pressing Questions We Must Ask About Generative AI

Published 15 May 2023

The past twenty-five years have demonstrated that waiting until after harms occur to implement internet safeguards fails to protect users. The emergence of Generative Artificial Intelligence (GAI) lends unprecedented urgency to these concerns, as this technology is outpacing the regulations we have in place to keep the internet safe.

1. How Can We Prevent GAI from Being Weaponized in Sowing Disinformation and Harassment?
The rise of GAI adds pressure to the ongoing challenge social media users face in identifying and filtering out disinformation. These tools make it easy, fast, and accessible for bad actors to produce convincing fake news, create visually compelling deepfakes, and quickly spread hate and harassment. Perpetrators can distribute harmful content in a matter of seconds.

Deepfakes and other synthetic media have been a cause for concern for many years. For example, in 2017, machine learning researchers trained a neural network on hours of Barack Obama audio and built a tool that would mimic his voice and facial movements in video. At the time, critics worried that bad faith actors could use this technology for deception, further complicating efforts to combat mis- and disinformation. 

The rapid development and widespread accessibility of GAI and synthetic media tools could have significant implications. For example, it is not difficult to envision the potential for bad actors to leverage synthetic media to disseminate various forms of election-related disinformation using antisemitic tropes. Moreover, the algorithmic amplification of inflammatory content would further obscure the truth, making it harder for users to access accurate information. 

The potential for GAI-generated content to be used for harassment and the invasion of privacy has already been demonstrated, as seen in the 2023 case involving the creation of synthetic sexual videos of women Twitch streamers without their consent. AI-generated nonconsensual pornography may cause significant harm that cannot be undone by verification and correction after the fact. ADL's ethnographic research and annual surveys demonstrate the damaging effects of online hate and harassment, including psychological and emotional distress for targets, reduced safety and security, and potential professional and economic consequences.

Given these repercussions, what product and policy-focused solutions can companies implement to mitigate potential risks associated with bad actors weaponizing generative AI? Social media platforms must find ways to create more robust content moderation systems that will withstand the potential deluge of content that perpetrators can generate using these tools.