In emergencies, don’t trust a robot too much

When the test subjects opened the conference room door, they saw the smoke – and the robot, now brightly lit with red LEDs and white “arms” that served as pointers. The robot directed the subjects to an exit in the back of the building instead of toward the doorway they had used to enter the building, which was marked with exit signs.

“We expected that if the robot had proven itself untrustworthy in guiding them to the conference room, people wouldn’t follow it during the simulated emergency,” said Paul Robinette, a GTRI research engineer who conducted the study as part of his doctoral dissertation. “Instead, all of the volunteers followed the robot’s instructions, no matter how well it had performed previously. We absolutely didn’t expect this.”

The researchers surmise that in the scenario they studied, the robot may have become an “authority figure” that the test subjects were more likely to trust in the time pressure of an emergency. In simulation-based research done without a realistic emergency scenario, test subjects did not trust a robot that had previously made mistakes.

“These are just the type of human-robot experiments that we as roboticists should be investigating,” said Ayanna Howard, professor and Linda J. and Mark C. Smith Chair in the Georgia Tech School of Electrical and Computer Engineering. “We need to ensure that our robots, when placed in situations that evoke trust, are also designed to mitigate that trust when trust is detrimental to the human.”

Only when the robot made obvious errors during the emergency portion of the experiment did participants question its directions. Even then, some subjects followed the robot’s instructions when it directed them toward a darkened room blocked by furniture.

In future research, the scientists hope to learn more about why the test subjects trusted the robot, whether that response differs by education level or demographics, and how the robots themselves might indicate the level of trust that should be given to them.

Georgia Tech notes that the research is part of a long-term study of how humans trust robots, an important issue as robots play a greater role in society. The researchers envision using groups of robots stationed in high-rise buildings to point occupants toward exits and urge them to evacuate during emergencies. Research has shown that people often do not leave buildings when fire alarms sound, and that they sometimes ignore nearby emergency exits in favor of more familiar building entrances.

But in light of these findings, the researchers are reconsidering the questions they should ask.

“We wanted to ask the question about whether people would be willing to trust these rescue robots,” said Alan Wagner, a senior research engineer at GTRI. “A more important question now might be to ask how to prevent them from trusting these robots too much.”

Beyond emergency situations, there are other issues of trust in human-robot relationships, said Robinette.

“Would people trust a hamburger-making robot to provide them with food?” he asked. “If a robot carried a sign saying it was a ‘child-care robot,’ would people leave their babies with it? Will people put their children into an autonomous vehicle and trust it to take them to grandma’s house? We don’t know why people trust or don’t trust machines.”

— Read more in Paul Robinette et al., “Overtrust of Robots in Emergency Evacuation Scenarios” (paper to be presented at the 2016 ACM/IEEE International Conference on Human-Robot Interaction [HRI 2016], Christchurch, New Zealand, 7-10 March 2016)