Shape of things to come: Military robotics moves forward

Published 3 March 2009

The trend toward autonomous military systems is about to reach a new — and important — phase: machines that not only aim and shoot, but also decide when and at what target to shoot

The idea that robots might one day be able to tell friend from foe is deeply flawed, says roboticist Noel Sharkey of the University of Sheffield in the United Kingdom. He was commenting on a Pentagon report calling for weapon-wielding military robots to be programmed with the same ethical rules of engagement as human soldiers (for more on Sharkey’s views, see 20 August 2007 HS Daily Wire and 28 February 2008 HS Daily Wire).

The report, written for the U.S. Navy, says firms rushing to fulfill the requirement for one-third of U.S. forces to be unmanned (the term now in use is “uncrewed”) by 2015 risk leaving ethical concerns by the wayside. “Fully autonomous systems are in place right now,” warns Patrick Lin, the study’s author at California Polytechnic State University in San Luis Obispo. “The U.S. Navy’s Phalanx system, for instance, can identify, target, and shoot down missiles without human authorization.”

While Sharkey applauds the report’s broad coverage of the issue, he says it is far too optimistic: “It trots out the old notion that robots could behave more ethically than soldiers because they don’t get angry or seek revenge.” Robots, however, do not have human empathy and cannot exercise judgment, he says, and since they cannot discriminate between innocent bystanders and soldiers, they should not make judgments about lethal force.

Note that the move toward unmanned military systems has gathered so much speed that there is now a new line of business: anti-unmanned systems (see “Terminating the Terminators: Anti-robot Defense Company Launched,” 18 September 2009 HS Daily Wire).