Large-Scale Facial Recognition Is Incompatible with a Free Society
Face surveillance is based on morally compromised research, violates our rights, is harmful, and exacerbates structural injustice, both when it works and when it fails. Its adoption harms individuals and makes our society as a whole more unjust and less free. A moratorium on its use is the least we should demand.
In the U.S., tireless opposition to state use of facial recognition algorithms has recently won some victories.
Some progressive cities have banned some uses of the technology. Three tech companies have pulled facial recognition products from the market. Democrats have advanced a bill for a moratorium on facial recognition. The Association for Computing Machinery (ACM), a leading computer science organisation, has also come out against the technology.
Outside the U.S., however, the tide is heading in the other direction. China is deploying facial recognition on a vast scale in its social credit experiments, in policing, and in the suppression of the Uighur population. It is also exporting facial recognition technology (and norms) to partner countries in the Belt and Road Initiative. The UK High Court ruled its use by South Wales Police lawful last September (though the decision is being appealed).
Here in Australia, despite pushback from the Human Rights Commission, the trend is also towards greater use. The government proposed an ambitious plan for a national face database (including wacky trial balloons about age-verification on porn sites). Some local councils are adding facial recognition into their existing surveillance systems. Police officers have tried out the dystopian services of Clearview AI.
Should Australia be using this technology? To decide, we need to answer fundamental questions about the kind of people, and the kind of society, we want to be.
From Facial Recognition to Face Surveillance
Facial recognition has many uses.
It can verify individual identity by comparing a target image with data held on file to confirm a match – this is “one-to-one” facial recognition. It can also compare a target image with a database of subjects of interest. That’s “one-to-many”. The most ambitious form is “all-to-all” matching. This would mean matching every image to a comprehensive database of every person in a given polity.
Each approach can be carried out asynchronously (on demand, after images are captured) or in real time. And each can be applied to separate (disaggregated) data streams, or used to bring together massive surveillance datasets.
Facial recognition occurring at one end of each of these scales – one-to-one, asynchronous, disaggregated – has well-documented benefits. One-to-one real-time facial recognition can be convenient and relatively safe, like unlocking your phone, or proving your identity at an automated passport barrier. Asynchronous disaggregated one-to-many facial recognition can be useful for law enforcement – analysing CCTV footage to identify a suspect, for example, or finding victims and perpetrators in child abuse videos.
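To make the one-to-one versus one-to-many distinction concrete, here is a minimal sketch of how such matching is commonly done: faces are converted to embedding vectors, and a match is declared when similarity exceeds a threshold. The encoder, the names, the example values and the 0.6 threshold are illustrative assumptions for this essay, not any vendor's actual system.

```python
import numpy as np

# Illustrative threshold; real systems tune this to trade false matches
# against missed matches.
THRESHOLD = 0.6

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face-embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify_one_to_one(probe: np.ndarray, enrolled: np.ndarray) -> bool:
    """One-to-one: does the probe image match the single record on file?"""
    return cosine_similarity(probe, enrolled) >= THRESHOLD

def identify_one_to_many(probe: np.ndarray, watchlist: dict) -> list:
    """One-to-many: which subjects of interest, if any, does the probe resemble?"""
    return [name for name, emb in watchlist.items()
            if cosine_similarity(probe, emb) >= THRESHOLD]

# Stand-in embeddings (a real system would use a trained face encoder).
rng = np.random.default_rng(0)
probe = rng.normal(size=128)
enrolled = probe + rng.normal(scale=0.1, size=128)   # same person, slight variation
watchlist = {"subject_a": rng.normal(size=128),
             "subject_b": probe + rng.normal(scale=0.1, size=128)}

print(verify_one_to_one(probe, enrolled))        # likely True
print(identify_one_to_many(probe, watchlist))    # likely ["subject_b"]
```

On this sketch, "all-to-all" matching is simply the one-to-many function applied to every captured image against a database covering the whole population – the technical step is small, which is why the policy question looms so large.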