Democracies Must Regulate Digital Agents of Influence

We would never want governments to control the information sphere as happens under authoritarian systems. But the philosophy of relying on the free market no longer works when the market is distorted by the Chinese Communist Party, the Putin regime and others who have collapsed the public–private distinction and are interfering heavily in what we once hoped might be a free marketplace of ideas.

The combined lesson of chatbots and TikTok is that we face a future in which technology can establish a convincing sense of intimacy, comparable to a companion, friend or mentor, yet be controlled by authoritarian regimes. Although AI and social media are distinct, in both cases the content users receive would ultimately be dictated by architecture built by humans beholden to an authoritarian system.

Let’s say the chatbot spends a few weeks establishing a relationship with you. Then it casually mentions in conversation that a particular candidate in an upcoming election has a policy platform remarkably in tune with your political beliefs. That might feel creepy, wherever the chatbot comes from.

Now let’s say it was built not by an independent outfit such as OpenAI, but by a Beijing- or Moscow-backed start-up. It notices you’ve been reading about Xinjiang and helpfully volunteers that it has reviewed social media posts from Uyghur people and concluded that most of them actually feel safe and content.

Or you’ve been looking at news about Ukraine, so the chatbot lets you know that many experts reckon NATO made a mistake in expanding eastwards after 1991, and therefore you can hardly blame Moscow for feeling threatened. So really there are faults on both sides and a middle-ground peace settlement is needed.

The power of chatbots demonstrates that the debate over TikTok is the tip of the iceberg. We are going to face an accelerating proliferation of questions about how we respond to the development of AI-driven technology. The pace of change will be so disorientating for many people that governments may conclude it is too difficult to engage voters on these questions. That would be a terrible failure.

Some 1,300 experts, including Elon Musk, are sufficiently troubled that they have called for a six-month pause on developing AI systems more powerful than GPT-4. And just last week the Australian Signals Directorate recognized the need to balance the adoption of new technology with governance, issuing a set of ‘ethical principles’ to manage its use of AI.

This is a welcome step, but it leaves open the question of what we are doing to manage Beijing’s or Moscow’s use of AI.

We need an exhaustive political and public debate about how we regulate such technologies. That is why ASPI has this week hosted the Sydney Dialogue, a global forum to discuss the benefits and challenges of critical technologies.

One starting point would be to treat applications that shape public opinion in the same way as media providers, imposing restrictions on those that cannot demonstrate independence from government. That would separate the likes of the ABC and BBC, which are funded by government but editorially independent, from government-controlled technologies that could exercise malign influence. It would also keep the regulatory approach country-agnostic.

We will need to wade through many moral conundrums and find solutions we can realistically implement. Part of it will be national regulation; part will be international agreements on rules and norms. We don’t want to stifle innovation, nor can we leave our citizens to fend for themselves in what will be, at best, a chaotic and, at worst, a polluted and manipulated information environment.

Government involvement is not interference. A tech future without rules will not remain anarchic for long: it will be ruled by authoritarian regimes.

Justin Bassi is ASPI’s executive director. A version of this article was published in the Australian Financial Review. This article is published courtesy of the Australian Strategic Policy Institute (ASPI).