Large-Scale Facial Recognition Is Incompatible with a Free Society

However, facial recognition at the other end of these scales – one-to-many or all-to-all, real-time, integrated – amounts to face surveillance, which has less obvious benefits. Several police forces in the UK have trialled real-time one-to-many facial recognition to seek persons of interest, with mixed results. The benefits of integrated real-time all-to-all face surveillance in China are yet to be seen.

And while the benefits of face surveillance are dubious, it risks fundamentally changing the kind of society we live in.

Face Surveillance Often Goes Wrong, but It’s Bad Even When It Works
Most facial recognition algorithms are accurate with head-on, well-lit portraits, but underperform with “faces in the wild”. They are also worse at identifying black faces, and especially the faces of black women.

The errors tend to be false positives – making incorrect matches, rather than missing correct ones. If face surveillance were used to dole out cash prizes, this would be fine. But a match is almost always used to target interventions (such as arrests) that harm those identified.

More false positives for minority populations mean they bear the costs of face surveillance, while any benefits are likely to accrue to majority populations. Using these systems will therefore amplify the structural injustices of the societies that produce them.

Even when it works, face surveillance is still harmful. Knowing where people are and what they are doing enables you to predict and control their behavior.

You might believe the Australian government wouldn’t use this power against us, but the very fact they have it makes us less free. Freedom isn’t only about making it unlikely that others will interfere with you. It’s also about making it impossible for them to do so.

Face Surveillance Is Intrinsically Wrong
Face surveillance relies on the idea that others are entitled to extract biometric data from you without your consent when you are in public.

This is false. We have a right to control our own biometric data. This is what is called an underived right, like the right to control your own body.

Of course, rights have limits. You can lose the protection of a right – someone who robs a servo may lose their right to anonymity – or the right may be overridden, if necessary, for a good enough cause.

But the great majority of us have committed no crime that would forfeit our right to control our biometric data. And the possible benefits of using face surveillance on any particular occasion must be discounted by the probability that they actually occur. The certain violation of rights is unlikely to be overridden by merely hypothetical benefits.

Many prominent algorithms used for face surveillance were also developed in morally compromised ways. They used datasets containing images used without permission of the rightful owners, as well as harmful images and deeply objectionable labels.

Arguments for Face Surveillance Don’t Hold Up
There will of course be counterarguments, but none of them hold up.

“You’ve already given up your privacy to Apple or Google – why begrudge police the same kind of information?” Just because we have sleepwalked into a surveillance society doesn’t mean we should refuse to wake up.

“Human surveillance is more biased and error-prone than algorithmic surveillance.” Human surveillance is indeed morally problematic. Vast networks of CCTV cameras already compromise our civil liberties. Weaponizing them with software that enables people to be tracked across multiple sites only makes them worse.

“We can always keep a human in the loop.” False positive rates can be reduced by human oversight, but human oversight of automated systems is itself flawed and biased, and it doesn’t address the other objections against face surveillance.

“Technology is neither good nor bad in itself; it’s just a tool that can be used for good or bad ends.” Every tool makes some things easier and some things harder. Facial recognition makes it easier to oppress vulnerable populations and violate everyone’s basic rights.

It’s Time for a Moratorium
Face surveillance is based on morally compromised research, violates our rights, causes harm and exacerbates structural injustice, both when it works and when it fails. Its adoption harms individuals and makes our society as a whole more unjust and less free.

A moratorium on its use in Australia is the least we should demand.

Seth Lazar is Professor, Australian National University. Claire Benn is Research Fellow, Humanizing Machine Intelligence Grand Challenge, Australian National University. Mario Günther is Research Fellow, Humanizing Machine Intelligence Grand Challenge, Australian National University. This article is published courtesy of The Conversation.