AI profiling: the social and moral hazards of “predictive” policing
A U.K. police force has been forced to alter an algorithm it had been using to help make custody decisions, amid concerns that it could discriminate against poor people.
Durham Constabulary has been developing an algorithm to better predict the risk posed by offenders and to ensure that only the most “suitable” are granted police bail. But the program has also highlighted how social inequalities can be maintained through the use of these big data strategies.
This might seem surprising, since a supposed feature of such programs is their neutrality: technocratic evaluations of risk based on information that is “value-free”, grounded in objective calculation and eschewing subjective bias.
In practice, the apparent neutrality of the data is questionable. It has been reported that Durham Police will no longer use postcodes as one of the data points in their model, since it has been argued that doing so perpetuates stereotypes about neighborhoods, with negative consequences for all residents, such as higher home insurance premiums and lower house prices.
“The ratchet effect”
Even so, algorithms rely on data that reflects, and so perpetuates, inequalities in criminal justice practice. A powerful critique of these methods by U.S. law professor Bernard Harcourt notes that they “…serve only to accentuate the ideological dimensions of the criminal law and hardens the purported race, class, and power relations between certain offences and certain groups”.
Using risk models as a basis for police decision-making means that those already subject to police attention will become increasingly profiled: more data on their offending will be uncovered, the focus on them will intensify, yet more offending will be identified, and so the cycle continues.
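To make that loop concrete, the sketch below is a minimal toy simulation in Python. It is not Durham Constabulary's model: the area names, offending rate, patrol numbers and the allocation rule (directing patrols to whichever area the records rank as riskier) are all assumptions made purely for illustration. Both areas offend at the same underlying rate, but one starts with slightly more recorded offences, and because offences are only recorded where attention is directed, the gap in the data ratchets upwards each round.

```python
# Toy sketch of the "ratchet" feedback loop. NOT Durham Constabulary's model:
# area names, rates and the allocation rule are illustrative assumptions only.
import random

TRUE_RATE = 0.1          # identical true offending rate in both areas
PATROLS_PER_ROUND = 200  # patrols available each round (assumed figure)

# Recorded offences to date: Area A begins with a marginally larger history.
recorded = {"Area A": 55, "Area B": 45}

random.seed(0)

for round_no in range(1, 11):
    # Attention follows the data: patrol the area with more recorded offences.
    target = max(recorded, key=recorded.get)
    # Offences are only detected where officers are looking.
    detected = sum(random.random() < TRUE_RATE for _ in range(PATROLS_PER_ROUND))
    recorded[target] += detected
    print(f"Round {round_no:2d}: records = {recorded}")

# Area A ends up dominating the recorded data even though both areas
# offend at the same underlying rate.
```

Swapping the winner-takes-all allocation for one proportional to past records softens the lock-in, but in this toy setup the recorded split still never corrects back toward the areas' identical true rates: the data keeps reflecting where police looked, not where offending happened.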