Australia’s Deepfake Dilemma and the Danish Solution
Australia needs to move beyond simply pleading with internet platforms for better content moderation and instead implement new legal frameworks that empower citizens directly. For a model of how to achieve this, policymakers should look to the innovative legal thinking emerging from Denmark.
Australia’s modern, multicultural society is built on high trust and social cohesion. This quiet asset now faces a profound challenge: the rise of generative AI and deepfakes.
The fundamental threat is not the technology itself, but rather its unchecked proliferation as technology platforms fail to self-regulate. After a decade of broken promises, we can’t keep waiting for the tech industry to solve problems it created. Instead, the responsibility falls to democratic governments to pioneer an effective policy solution.
The danger of deepfake technology is its capacity to dissolve the shared factual basis that helps society function. The business models of our largest platforms, optimized for engagement, have created the perfect incubator for digital pollution. Their attempts at content moderation have proven an endless task, always one step behind the next viral falsehood. This laissez-faire environment enables harassment, political disinformation and fraud on an unprecedented scale. Existing legal remedies, such as defamation law, are ill-suited to the fight.
Denmark’s proposal could offer an essential way forward. The Danes are exploring a legal framework that shifts the focus from policing fake content to empowering the authentic original, granting citizens intellectual property rights over their unique biometric identities—their faces and voices.
The genius of this approach lies in its reframing. Rather than being a wholly novel idea, it repurposes one of our most established legal tools—intellectual property—for a new and urgent purpose.
Under current copyright law, the most people can do is claim infringement over the specific images used to create a deepfake. Such claims can be difficult to prove given the opacity of generative AI systems, and they depend on the individual owning the original image, which is often not the case even when the image depicts that individual. Under the Danish model, by contrast, individuals would own the rights to their likenesses as though they were copyrighted works, giving them far greater control over their digital identities.