UAV roundup: Robot wars are a reality, so we should develop rules to govern them

Published 20 August 2007

More and more, armies are handing life-and-death decisions to machines without reason or conscience; we may want to pause and reflect on this trend

We recently reported on the deployment to Iraq of the Air Force UAV squadron — the first-ever such deployment to a battlefield. Noel Sharkey, professor of artificial intelligence and robotics at the University of Sheffield, writes that the UAV deployment is but “the latest step on a dangerous path — we are sleepwalking into a brave new world where robots decide who, where and when to kill.” As we reported a month ago, South Korea and Israel are already deploying preprogrammed armed robot border guards, while China, Singapore, and the United Kingdom are making increasing use of military robots. The biggest player yet is the United States, where robots are an integral part of the country’s $230 billion future combat systems project, a massive plan to develop unmanned air, ground, and water vehicles. Indeed, Congress has set a goal of having one-third of ground combat vehicles unmanned by 2015. More than 4,000 robots are being used in Iraq at present, with hundreds more in Afghanistan — many of them armed.

Note that the changes in robot deployment are not only quantitative, but qualitative as well. In 2002, for example, a semi-autonomous MQ-1 Predator self-navigated above a car in Yemen in which several al-Qaida suspects were riding. The Predator launched Hellfire missiles and destroyed the car, but the decision to launch was made by pilots 7,000 miles away. Fully autonomous robots, programmed to make their own decisions about the use of lethal force, are soon to be introduced. The U.S. National Research Council advises “aggressively exploiting the considerable warfighting benefits offered by autonomous vehicles.”

Sharkey writes that he has worked in artificial intelligence for decades, and that he finds the idea of a robot making decisions about human termination terrifying. Policymakers are aware of the legal — and ethical — issues involved in allowing a machine to make life-and-death decisions, and it is thus not surprising that the U.S. Army is funding a project to equip robots with a conscience, giving them the ability to make ethical decisions. “But machines could not discriminate reliably between buses carrying enemy soldiers or schoolchildren, let alone be ethical,” Sharkey writes. “Human soldiers have legal protocols such as the Geneva conventions to guide them. Autonomous robots are only covered by the laws of armed conflict that deal with standard weapons. But autonomous robots are not like other weapons. We are going to give decisions on human fatality to machines that are not bright enough to be called stupid,” Sharkey warns. “It is imperative that we create international legislation and a code of ethics for autonomous robots at war before it is too late.”