TRUTH DECAY
Fighting Deepfakes: What's Next After Legislation?
Deepfake technology weaponizes artificial intelligence in a way that disproportionately targets women, especially those in public-facing roles, compromising their dignity, safety, and ability to participate in public life. This digital abuse demands urgent global action: it not only violates human rights but also undermines women's democratic participation.
Britain's recent decision to criminalize explicit deepfakes is a significant step forward. It follows similar legislation passed in Australia last year and aligns with the European Union's AI Act, which emphasizes accountability. However, regulation alone is not enough; effective enforcement and international collaboration are essential to combat this growing and complex threat.
Britain's move to criminalize explicit deepfakes, as part of the broader Crime and Policing Bill to be introduced in Parliament, marks a pivotal step in addressing technology-facilitated gender-based violence. It responds to a 400 percent rise in deepfake-related abuse since 2017, as reported by Britain's Revenge Porn Helpline.
Deepfakes, which fabricate hyper-realistic content, often target women and girls, objectifying them and eroding their public engagement. By criminalizing both the creation and the sharing of explicit deepfakes, Britain's law closes loopholes in earlier revenge-porn legislation. It also places stricter accountability on platforms hosting these harmful images, reinforcing the message that businesses must play a role in combating online abuse.
The EU has taken a complementary approach by introducing requirements for transparency in its recently adopted AI Act. The regulation does not ban deepfakes outright but mandates that creators disclose their artificial origins and provide details about the techniques used. This empowers consumers to better identify manipulated content. Furthermore, the EU’s 2024 directive on violence against women explicitly addresses cyberviolence, including non-consensual image-sharing, providing tools for victims to prevent the spread of harmful content.
While these measures are robust, enforcement remains a challenge: national laws are fragmented, and deepfake abuse often transcends borders. The EU is working to harmonize its digital governance and promote AI transparency standards to mitigate these challenges.
In Asia, concern over deepfake technology is growing in countries such as South Korea, Singapore, and especially Taiwan, where it not only targets individual women but is increasingly used as a tool for politically motivated disinformation. Similarly, in the United States and Pakistan, female lawmakers have been targeted with sexualized deepfakes designed to discredit and silence them. Italy's Prime Minister Giorgia Meloni faced a similar attack but successfully brought the perpetrators to court.