Artificial Intelligence and Policing: It's a Matter of Trust

By Nick Evans

Published 30 August 2022

From RoboCop to Minority Report, the intersection between policing and artificial intelligence has long captured attention in the realm of high-concept science fiction. However, only over the past decade or so have academic research and government policy begun to focus on it.

Teagan Westendorf’s ASPI report, Artificial intelligence and policing in Australia, is one recent example. Westendorf argues that Australian government policy and regulatory frameworks don’t sufficiently capture the current limitations of AI technology, and that these limitations may ‘compromise [the] principles of ethical, safe and explainable AI’ in the context of policing.

My aim in this article is to expand on Westendorf’s analysis of the potential challenges in policing’s use of AI and offer some solutions.

Westendorf focuses primarily on one particular use of AI in policing: statistical inference used to make (or inform) decisions, or in other words, technology that falls broadly into the category of 'predictive policing'.

While predictive policing applications pose the thorniest ethical and legal questions and therefore warrant serious consideration, it's also important to highlight other applications of AI in policing. For example, AI can assist investigations by expediting the transcription of interviews and the analysis of CCTV footage. Image-recognition algorithms can also help detect and process child-exploitation material, limiting human exposure to it. Drawing attention to these applications can help prevent the conversation from becoming too focused on a small but controversial set of uses. Such a focus could risk poisoning the well for the application of AI technology to the sometimes dull and difficult (but equally important) areas of day-to-day police work.
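To illustrate how routine such investigative uses can be, here is a minimal, hypothetical sketch of interview transcription using the open-source Whisper speech-to-text model. Whisper is just one of several models a police force might use, the file name is invented, and nothing here is drawn from Westendorf's report.

```python
# A minimal sketch of AI-assisted interview transcription, assuming the
# open-source 'openai-whisper' package (pip install openai-whisper).
# The audio file name below is hypothetical.
import whisper

# Load a small pretrained speech-to-text model.
model = whisper.load_model("base")

# Transcribe a recorded interview; the result includes the full text
# plus time-stamped segments, useful for citing specific moments.
result = model.transcribe("interview_recording.wav")

print(result["text"])
for segment in result["segments"]:
    print(f"[{segment['start']:7.1f}s - {segment['end']:7.1f}s] {segment['text']}")
```

A task that once took an officer hours of manual typing becomes a review-and-correct exercise, which is precisely the kind of unglamorous efficiency gain at stake.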

That said, Westendorf’s main concerns are well reasoned and worth discussing. They can be summarised as the problem of bias and the problem of transparency (and its corollary, explainability).

Like all humans, police officers can have both conscious and unconscious biases that may influence decision-making and policing outcomes. Predictive policing algorithms often need to be trained on datasets capturing those outcomes. Yet if algorithms are trained on historical datasets that include the results of biased decision-making, they can unintentionally replicate (and in some cases amplify) the original biases. Efforts to ensure systems are free of bias can also be hampered by ‘tech-washing’, where AI outputs are portrayed (and perceived) as based solely on science and mathematics and therefore inherently free of bias.
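To make the replication mechanism concrete, here is a toy sketch using synthetic data (all names, rates and numbers are invented; this is not from Westendorf's report). Two neighbourhoods have the same underlying offence rate, but one was historically patrolled more heavily, so more of its offences were recorded. A model trained on the recorded outcomes scores that neighbourhood as riskier, having learned the patrol pattern rather than any real difference.

```python
# A toy illustration of bias replication in 'predictive policing'.
# All data are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(seed=0)
n = 10_000

# Two neighbourhoods with IDENTICAL underlying offence rates (5%).
neighbourhood = rng.integers(0, 2, size=n)      # 0 = A, 1 = B
offence = rng.random(n) < 0.05

# Historical bias: B was patrolled twice as heavily, so its offences
# were twice as likely to end up in the recorded data.
detection_rate = np.where(neighbourhood == 1, 0.8, 0.4)
recorded = offence & (rng.random(n) < detection_rate)

# Train on the biased records, with neighbourhood as the sole feature
# (a stand-in for the many location-correlated features real systems ingest).
model = LogisticRegression().fit(neighbourhood.reshape(-1, 1), recorded)

risk = model.predict_proba(np.array([[0], [1]]))[:, 1]
print(f"Predicted risk, neighbourhood A: {risk[0]:.3f}")
print(f"Predicted risk, neighbourhood B: {risk[1]:.3f}")
# B scores roughly twice as 'risky' despite identical true offence
# rates: the model has learned the historical patrol pattern.
```

If such scores then direct even more patrols to neighbourhood B, the next round of training data becomes more skewed still. That feedback loop is the amplification effect noted above, and a reason claims that the outputs are 'just mathematics' deserve scrutiny.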