Fighting Deepfakes: What’s Next After Legislation?

Unfortunately, many countries still lack comprehensive legislation to combat the abuse of deepfakes effectively, leaving individuals vulnerable, especially those without the resources and support to fight back. In the United States, for example, proposed laws such as the Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Bill and the Deepfake Accountability Bill remain stalled in the legislative pipeline.

Australia offers a strong example of legislative action. It faces similar challenges, with deepfake abuse targeting victims ranging from underage students to politicians and contributing to a chilling effect on women’s activity in public life. This abuse not only violates individual privacy but also deters other women from engaging in public life and pursuing leadership roles, weakening democratic representation.

In August 2024, Australia passed the Criminal Code Amendment (Deepfake Sexual Material) Act, criminalizing the non-consensual sharing of sexually explicit material, including deepfakes.

Formulating legislation is only the first step. To address this issue effectively, governments must enforce the regulations while ensuring that victims have accessible mechanisms to report abuse and seek justice. Digital literacy programs should be expanded to equip individuals with the tools to identify and report manipulated content, and schools and workplaces should incorporate online safety education to build societal resilience against deepfake threats.

Simultaneously, women’s representation in cybersecurity and technology governance must be increased. Women’s participation in shaping policies and technologies ensures that the gendered dimensions of digital abuse are adequately addressed.

Although Meta recently decided to cut back on fact-checking, social media platforms need to be held to account for hosting and amplifying harmful content. Platforms must proactively detect and remove deepfakes while maintaining transparency about their AI applications and data practices. The EU AI Act’s transparency requirements serve as a reference point for implementing similar measures globally.

Ultimately, addressing deepfake abuse is about creating a safe and inclusive online space. Because digital spaces transcend borders, the fight against deepfake abuse must be inherently global. Countries need to collaborate with international partners to establish shared enforcement mechanisms, harmonize legal frameworks and promote joint research on AI ethics and governance. Regional initiatives, such as the EU AI Act and the Association of Southeast Asian Nations’ guidelines for combatting fake news and disinformation, can serve as models for building capacity in nations lacking the expertise or resources to tackle these challenges alone.

In a world where AI is advancing rapidly, combatting deepfake abuse is more than regulating technology—it is about safeguarding human dignity, protecting democratic processes and ensuring that everyone, including women, can participate in society without fear of intimidation or harm. By working together, we can build a safer, more equitable digital environment for all.

Fitriani is a senior analyst at ASPI. This article is published courtesy of the Australian Strategic Policy Institute (ASPI).