Artificial Intelligence Is a Totalitarian’s Dream – Here’s How to Take Power Back

Freedom Through Tyranny
The philosopher Isaiah Berlin foresaw this in 1958. He identified two types of freedom. One type, he warned, would lead to tyranny.

Negative freedom is “freedom from”. It is freedom from the interference of other people or government in your affairs. Negative freedom is no one else being able to restrain you, as long as you aren’t violating anyone else’s rights.

In contrast, positive freedom is “freedom to”. It is the freedom to be master of yourself, freedom to fulfil your true desires, freedom to live a rational life. Who wouldn’t want this?

But what if someone else insists you aren’t acting in your “true interest”, and claims to know how you could? If you won’t listen, they may force you to be free, coercing you for your “own good”. This is one of the most dangerous ideas ever conceived. It killed tens of millions of people in Stalin’s Soviet Union and Mao’s China.

The Russian Communist leader Lenin is reported to have said that the capitalists would sell him the rope he would hang them with. Peter Thiel has argued that, in AI, the capitalist tech firms of Silicon Valley have sold communism a tool that threatens to undermine democratic capitalist society. AI is Lenin’s rope.

Fighting for Ourselves
We can only prevent such a dystopia if no one is allowed to know us better than we know ourselves. We must not sentimentally assume that anyone seeking such power over us is well-intentioned. Historically, granting such power has only ever ended in calamity.

One way to prevent a self-knowledge gap is to raise our privacy shields. Thiel, who labelled AI as communistic, has argued that “crypto is libertarian”. Cryptocurrencies can be “privacy-enabling”. Privacy reduces the ability of others to know us and then use this knowledge to manipulate us for their own profit.

Yet knowing ourselves better through AI offers powerful benefits. We may be able to use it to better understand what will make us happy, healthy and wealthy. It may help guide our career choices. More generally, AI promises to create the economic growth that keeps us from being at each other’s throats.

The problem is not AI improving our self-knowledge. The problem is a power disparity in what is known about us. Knowledge about us exclusively in someone else’s hands is power over us. But knowledge about us in our own hands is power for us.

Anyone who processes our data to create knowledge about us should be legally obliged to give us back that knowledge. We need to update the idea of “nothing about us without us” for the AI age.

What AI tells us about ourselves is for us to consider using, not for others to profit from abusing. There should only ever be one hand on the tiller of our soul. And it should be ours.

Simon McCarthy-Jones is Associate Professor in Clinical Psychology and Neuropsychology, Trinity College Dublin. This article is published courtesy of The Conversation.