Gaza War: Israel Using AI to Identify Human Targets Raising Fears That Innocents Are Being Caught in the Net

The Israel Defense Forces (IDF) were swift to deny the use of AI targeting systems of this kind. It is difficult to verify independently whether, and if so to what extent, such systems have been used, and how exactly they function. But the functionalities described in the report are entirely plausible, especially given the IDF’s own boasts of being “one of the most technological organizations” and an early adopter of AI.

With military AI programs around the world striving to shorten what the US military calls the “sensor-to-shooter timeline” and “increase lethality” in their operations, why would an organization such as the IDF not avail itself of the latest technologies?

The fact is, systems such as Lavender and Where’s Daddy? are the manifestation of a broader trend that has been underway for a good decade, and the IDF and its elite units are far from the only ones seeking to incorporate more AI targeting systems into their processes.

When Machines Trump Humans
Earlier this year, Bloomberg reported on the latest version of Project Maven, the US Department of Defense’s AI pathfinder program, which has evolved from a sensor data analysis program in 2017 into a full-blown AI-enabled target recommendation system built for speed. As Bloomberg journalist Katrina Manson reports, an operator “can now sign off on as many as 80 targets in an hour of work, versus 30 without it”.

Manson quotes a US army officer tasked with learning the system describing the process of concurring with the algorithm’s conclusions, delivered in a rapid staccato: “Accept. Accept. Accept”. Evident here is how deeply the human operator is embedded in digital logics that are difficult to contest. This gives rise to a logic of speed and increased output that trumps all else.

The efficient production of death is also reflected in the +972 account, which indicates enormous pressure to accelerate and increase the production of targets and the killing of those targets. As one of the sources says: “We were constantly being pressured: bring us more targets. They really shouted at us. We finished [killing] our targets very quickly”.

Built-in Biases
Systems like Lavender raise many ethical questions pertaining to training data, biases, accuracy, error rates and, importantly, automation bias: the tendency of human operators to defer to machine outputs. That bias cedes all authority, including moral authority, to the dispassionate interface of statistical processing.

Speed and lethality are the watchwords for military tech. But when AI is prioritized, the scope for human agency is marginalized. The logic of the system requires this, owing to the comparatively slow cognitive systems of the human. It also removes the human sense of responsibility for computer-produced outcomes.

I’ve written elsewhere about how this complicates notions of control (at all levels) in ways that we must take into consideration. When AI, machine learning and human reasoning form a tight ecosystem, the capacity for human control is limited. Humans have a tendency to trust whatever computers say, especially when they move too fast for us to follow.

The problem of speed and acceleration also produces a general sense of urgency, which privileges action over non-action. This turns categories such as “collateral damage” or “military necessity”, which should serve as restraints on violence, into channels for producing more violence.

I am reminded of the military scholar Christopher Coker’s words: “we must choose our tools carefully, not because they are inhumane (all weapons are) but because the more we come to rely on them, the more they shape our view of the world”. It is clear that military AI shapes our view of the world. Tragically, Lavender gives us cause to realize that this view is laden with violence.

Elke Schwarz is Reader in Political Theory at Queen Mary University of London. This article is published courtesy of The Conversation.