Artificial Intelligence at War

By Peter Layton

Published 22 August 2024

There’s a global arms race under way to work out how best to use artificial intelligence for military purposes. The Gaza and Ukraine wars are now accelerating this. These conflicts might inform Australia and others in the region as they prepare for a possible AI-fueled ‘hyperwar’ closer to home, given that China envisages fighting wars using automated decision-making under the rubric of what it calls ‘intelligentization’.

The Gaza war has shown that the use of AI in tactical targeting can drive military strategy by encouraging decision-making bias. At the start of the conflict, an Israel Defense Forces AI system called Lavender apparently identified 37,000 people linked to Hamas. Its function quickly shifted from gathering long-term intelligence to rapidly identifying individual operatives to target. Foot soldiers were easier to locate and attack quickly than senior commanders, so they dominated the attack schedule.

Lavender created a simplified digital model of the battlefield, allowing dramatically faster targeting and much higher rates of attack than in earlier conflicts. Human analysts did review Lavender’s recommendations before authorizing attacks, but they quickly came to trust the system and treat its recommendations as reliable, often spending only about 20 seconds on each one before approving it.

These human analysts displayed automation bias, the tendency to defer to a machine’s judgment, and action bias, the preference for acting over waiting. Indeed, it could be said that Lavender encouraged and amplified both biases. In effect, the humans offloaded their thinking to the machine.
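
The dynamic can be sketched in a few lines of code. The sketch below is purely illustrative: every number in it is an assumption chosen for the example, apart from the roughly 20-second review time mentioned above, and it models nothing about Lavender itself. It simply shows how a small per-item review budget applied to a large volume of machine recommendations lets most flawed recommendations pass unchallenged.

```python
import random

random.seed(0)

RECOMMENDATIONS = 1_000        # assumed daily volume of machine recommendations
MACHINE_ERROR_RATE = 0.10      # assumed share of flawed recommendations
REVIEW_SECONDS = 20            # per-item review time, as reported above
CATCH_RATE_PER_MINUTE = 0.9    # assumed chance a full minute of scrutiny catches a flaw

# Crude stand-in for automation bias under time pressure: less time spent
# reviewing means a proportionally smaller chance of catching each flaw.
catch_rate = CATCH_RATE_PER_MINUTE * (REVIEW_SECONDS / 60)

approved_errors = 0
for _ in range(RECOMMENDATIONS):
    flawed = random.random() < MACHINE_ERROR_RATE
    caught = flawed and random.random() < catch_rate
    if flawed and not caught:
        approved_errors += 1

print(f"Flawed recommendations approved: {approved_errors} of {RECOMMENDATIONS}")
```

With these illustrative figures the reviewer catches only about 30 percent of flaws, so roughly 70 percent of flawed recommendations are approved; the point is the shape of the dynamic, not the particular numbers.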

Human-machine teams are considered by many, including the Australian Defence Force, to be central to future warfighting. The way Lavender’s tactical targeting drove military strategy suggests that the AI part of the team should be designed to work with humans on the specific task they are undertaking, not treated as a component that can be quickly switched between functions. Otherwise, humans may lose sight of the strategic or operational context and focus instead on the machine-generated answers.

For example, the purpose-designed Ukrainian GIS Arta system, described as ‘Uber for artillery’, takes a bottom-up approach to target selection: it gives people a well-fused picture of the battlespace rather than an opaquely derived recommendation of what to attack. Human users apply the context as they understand it to decide what is to be targeted.
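
The contrast between the two design philosophies can be made concrete with a short sketch. The class and field names below are invented for illustration and describe neither Lavender’s nor GIS Arta’s actual software; the only point is where the decision sits.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SensorReport:
    source: str                    # e.g. a drone feed, radar track or observer report
    location: Tuple[float, float]  # (latitude, longitude)
    description: str

# Top-down style: the machine returns a ranked answer and the human mostly approves.
class RecommendationEngine:
    def recommend_targets(self, reports: List[SensorReport]) -> List[Tuple[float, float]]:
        # Opaque scoring would happen here; the operator sees only the output list.
        return [r.location for r in reports]

# Bottom-up style: the machine only fuses reports into a shared picture;
# the human applies operational context and chooses what, if anything, to engage.
class FusedPicture:
    def __init__(self, reports: List[SensorReport]):
        self.reports = list(reports)

    def view(self) -> List[SensorReport]:
        return self.reports

reports = [SensorReport("observer", (47.10, 37.55), "vehicles near bridge")]
for report in FusedPicture(reports).view():
    print(report.source, report.location, report.description)
# Target selection stays with the human operator, not with the software.
```

In the first pattern the software’s output is the decision in all but name; in the second the software’s output is merely an input to a human decision.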