AI profiling: the social and moral hazards of “predictive” policing
While the use of AI predictions in policing and law enforcement is still in its early stages, it is vital to scrutinize the warning signs already emerging from its use. One standout example is a 2016 ProPublica investigation, which found that the COMPAS risk-assessment software was biased against black offenders. Society needs to maintain a critical perspective on the use of AI on moral and ethical grounds, not least because the details of the algorithms, their data sources and the assumptions underlying their calculations are often closely guarded secrets, held by the specialist IT companies that develop them and protected for commercial reasons. The social, political and criminal justice inequalities likely to arise should make us question the promise of predictive policing.
A U.K. police force using an algorithm designed to help it make custody decisions has been forced to alter the program amid concerns that it could discriminate against poor people.
Durham Constabulary has been developing an algorithm to better predict the risk posed by offenders and to ensure that only the most “suitable” are granted police bail. But the program has also highlighted potential social inequalities that can be maintained through the use of these big data strategies.
This might seem surprising, since a key selling point of such programs is their apparent neutrality: technocratic evaluations of risk presented as “value-free”, based on objective calculation and eschewing subjective bias.
In practice, the apparent neutrality of the data is questionable. It has been reported that Durham Police will no longer use postcodes as a data point in their model, since doing so arguably perpetuates stereotypes about neighborhoods that carry negative consequences for all residents, for example through higher home insurance premiums and lower house prices.
“The ratchet effect”
Even so, algorithms rely on data that reflects, and so perpetuates, inequalities in criminal justice practice. In a powerful critique of these methods, U.S. law professor Bernard Harcourt notes that they “…serve only to accentuate the ideological dimensions of the criminal law and hardens the purported race, class, and power relations between certain offences and certain groups”.
Using models of risk as a basis for police decision-making means that those already subject to police attention will be profiled ever more intensively. More data on their offending will be uncovered, the focus on them will intensify, more offending will be identified, and so the cycle continues.
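The cycle can be sketched as a toy simulation (my own illustration, not drawn from the article): two areas with identical underlying offending rates, where one starts with slightly more recorded crime, patrols follow the records, and only patrolled areas generate new records.

```python
# Toy model of the "ratchet effect" (illustrative assumptions only):
# areas A and B have the SAME true offending rate, but A starts with
# slightly more recorded crime.
true_rate = 0.1                 # identical true offending everywhere
patrols_per_round = 10
recorded = {"A": 11, "B": 10}   # tiny initial disparity in the data

for _ in range(20):
    # All patrols go to the area the records say is worse.
    target = max(recorded, key=recorded.get)
    # Expected new detections: patrols * true rate (kept deterministic
    # for clarity); only the patrolled area adds to its record.
    recorded[target] += patrols_per_round * true_rate

print(recorded)  # area A's recorded crime keeps growing; B's never does
```

Despite equal true offending, area A accumulates ever more recorded crime while area B, never patrolled, never appears in the data at all: the initial disparity is not just preserved but hardened.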