Researchers join AI-enabled robots in “collaborative autonomy”

Published 27 February 2018

A team of firefighters clears a building in a blazing inferno, searching rooms for people trapped inside or hotspots that must be extinguished. Except this isn’t your typical crew. Most apparent is the fact that the firefighters are not all human. They are working side by side with artificially intelligent (AI) robots that are searching the most dangerous rooms and making life-or-death decisions. This scenario is potentially closer than you might think, but while AI-equipped robots might be technologically capable of rendering aid, sensing danger or providing protection for their flesh-and-blood counterparts, they can only be valuable to humans if their operators are not burdened with the task of guiding them.


A team of researchers at Lawrence Livermore National Laboratory (LLNL) is responding to that need by investing in “collaborative autonomy,” a broad term for a network of humans and autonomous machine partners that interact and share information and tasks efficiently, in a way that doesn’t distract the human operator.

“The idea with collaborative autonomy is not the human flying the drone, it’s the human in control in the sense of guiding the mission or the task,” said LLNL engineer Reg Beer, who is heading the Lab’s collaborative autonomy effort. “The goal is to employ robotic partners with the ability to direct an autonomous squad-mate and have that squad-mate go achieve something without having to be teleoperated or with intense oversight.”

Reaching that level of human-machine cooperation requires trust, Beer said: the confidence that machines will not only perform their assigned tasks without going off-script, but will also be able to report back when they are not functioning properly or when their environment has changed too much to assess reliably.