Robotics Researchers Have a Duty to Prevent Autonomous Weapons

Easy-to-Modify Systems
When developing machines that can make their own decisions – typically called autonomous systems – the ethical questions that arise are arguably more concerning than those in object recognition. AI-enhanced autonomy is developing so rapidly that capabilities once limited to highly engineered systems are now available to anyone with a household toolbox and some computer experience.

People with no background in computer science can learn to use state-of-the-art artificial intelligence tools, and robots are more than willing to let you run your newly acquired machine learning techniques on them. Online forums are filled with people eager to help anyone learn how to do this.

With earlier tools, it was already easy enough to program a minimally modified drone to identify a red bag and follow it. More recent object detection technology unlocks the ability to track more than 9,000 different types of objects. Combined with newer, more maneuverable drones, such capabilities raise an obvious question: what's to stop someone from strapping an explosive or another weapon to a drone equipped with this technology?
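To make the point concrete, consider a minimal sketch of the older color-based tracking described above, written in Python with OpenCV. The camera index, the color thresholds and the webcam standing in for a drone's video feed are all illustrative assumptions; an actual drone would supply frames through its vendor's SDK and convert the target's offset into steering commands.

# A minimal, illustrative sketch: detect the largest red region in each
# camera frame and report its offset from the image center. Assumes
# Python with OpenCV (pip install opencv-python) and a default webcam
# standing in for a drone's video feed.
import cv2

cap = cv2.VideoCapture(0)  # camera index 0; a drone SDK would supply frames instead
for _ in range(300):       # bounded loop for the example; a drone would run continuously
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around hue 0 in HSV, so combine two threshold ranges.
    mask = cv2.inRange(hsv, (0, 120, 70), (10, 255, 255)) | \
           cv2.inRange(hsv, (170, 120, 70), (180, 255, 255))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # The largest red blob is the "red bag"; its horizontal offset from
        # center is what a follow behavior would turn into a steering command.
        x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
        offset = (x + w / 2) - frame.shape[1] / 2
        print(f"target offset from center: {offset:+.0f} px")
cap.release()

Swapping the color threshold for a pretrained detector in the vein of YOLO9000, which recognizes more than 9,000 object categories, is roughly the step that turns this toy into the broad tracking capability described above.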

Using a variety of techniques, autonomous drones are already a threat. They have been caught dropping explosives on U.S. troops, shutting down airports and being used in an assassination attempt on Venezuelan leader Nicolas Maduro. The autonomous systems that are being developed right now could make staging such attacks easier and more devastating.

Regulation or Review Boards?
About a year ago, a group of researchers in artificial intelligence and autonomous robotics put forward a pledge to refrain from developing lethal autonomous weapons, which they defined as platforms capable of “selecting and engaging targets without human intervention.” As a robotics researcher who isn’t interested in developing autonomous targeting techniques, I felt that the pledge missed the crux of the danger. It glossed over important ethical questions that need to be addressed, especially those raised by the many drone applications that could be either benign or violent.

For one, the researchers, companies and developers who write the papers and build the software and devices generally aren’t doing so to create weapons. However, they might inadvertently enable others, with minimal expertise, to create such weapons.

What can we do to address this risk?

Regulation is one option, already in use to ban aerial drones near airports or over national parks. Such rules are helpful, but they don’t prevent the creation of weaponized drones. Traditional weapons regulations are not a sufficient template, either: they generally tighten controls on the source material or the manufacturing process. That would be nearly impossible with autonomous systems, where the source materials are widely shared computer code and the manufacturing process can take place at home using off-the-shelf components.

Another option would be to follow in the footsteps of biologists. In 1975, they held a conference on the potential hazards of recombinant DNA at Asilomar in California. There, experts agreed to voluntary guidelines that would direct the course of future work. For autonomous systems, such an outcome seems unlikely at this point. Many research projects that could be used in the development of weapons also have peaceful and incredibly useful outcomes.

A third choice would be to establish self-governance bodies at the organization level, such as the institutional review boards that currently oversee studies on human subjects at companies, universities and government labs. These boards consider the benefits to the populations involved in the research and craft ways to mitigate potential harms. But they can regulate only research done within their institutions, which limits their scope.

Still, a large number of researchers would fall under these boards’ purview – within the autonomous robotics research community, nearly every presenter at technical conferences is a member of an institution. Research review boards would be a first step toward self-regulation and could flag projects with the potential to be weaponized.

Living with the Peril and Promise
Many of my colleagues and I are excited to develop the next generation of autonomous systems. I feel that the potential for good is too promising to ignore. But I am also concerned about the risks that new technologies pose, especially if they are exploited by malicious people. Yet with some careful organization and informed conversations today, I believe we can work toward achieving those benefits while limiting the potential for harm.

Christoffer Heckman is Assistant Professor of Computer Science, University of Colorado Boulder. This article is published courtesy of The Conversation.