Teams of computers and humans more effective in disaster response

Published 25 September 2015

Crisis responders need to know the extent of a natural disaster, what aid is required and where they need to go as quickly as possible — this is what’s known as “situation awareness.” With the proliferation of mass media, a lot of data is now generated from the disaster zone via photographs, tweets, news reports and the like. Together with first responder reports and satellite images of the disaster area, this makes for a vast amount of relevant unstructured data available for situation awareness. A crisis response team will be overwhelmed by this data deluge — perhaps made even worse by reports written in languages they don’t understand. But the data is also hard for computers to interpret alone, as it’s difficult to find meaningful patterns in such a large amount of unstructured data, let alone understand the complex human problems described within it. Experts say that joint human-computer teams would be the best way to deal with such voluminous, but unstructured, data.

Over the past five years, researchers from Oxford University have been working on a collaborative project called ORCHID to develop new ways for humans and computers to work together.

This week, the team from Oxford joined their academic collaborators from the University of Southampton and University of Nottingham at the Royal Academy of Engineering to showcase their work. Oxford Science blog spoke to Dr. Steven Reece, a Senior Research Fellow at the University’s Pattern Analysis and Machine Learning Research Group, to find out how the Oxford team has been using its research to help disaster response teams.

OxSciBlog: ORCHID attempts to integrate humans — and all of their foibles — with computers, so that they can work together as so-called human-agent collectives. Why is it important?

Steven Reece: Ninety percent of all recorded data that exists in the world has been generated in the past two years. This data is vast and mostly unstructured, made up of all kinds of text documents, photographs and videos. The problem is that humans and computers look at this data very differently. Humans are very good at understanding unstructured data — they can interpret the meaning of text and understand events depicted in a photograph better than any software, for example — but they can’t work through that much of it. Computers, on the other hand, are better than humans at processing and spotting patterns in vast amounts of data very quickly. Human-agent collectives (HACs) take the best of both worlds, creating flexible teams of computers and humans to interpret large, unstructured data sets.
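The division of labour Reece describes can be sketched in a few lines of code: a machine model rapidly scores every incoming report, handles the cases it is confident about, and queues the ambiguous ones for human interpretation. This is a minimal illustrative sketch, not ORCHID's actual software; the function names, threshold, and toy classifier are all assumptions for the example.

```python
# Minimal sketch of a HAC-style triage loop: the computer processes
# everything quickly, and routes only low-confidence reports to humans.

def triage(reports, classify, confidence_threshold=0.9):
    """Split reports into machine-labelled and human-review queues."""
    machine_labelled, human_queue = [], []
    for report in reports:
        label, confidence = classify(report)
        if confidence >= confidence_threshold:
            machine_labelled.append((report, label))
        else:
            human_queue.append(report)  # too ambiguous: ask a human
    return machine_labelled, human_queue

# Toy classifier (hypothetical): keyword match with a fixed confidence.
def toy_classify(text):
    if "flood" in text:
        return "flood", 0.95
    return "unknown", 0.40

reports = ["flood near the bridge", "strange smoke to the east"]
auto, ask_humans = triage(reports, toy_classify)
print(auto)        # confidently machine-labelled reports
print(ask_humans)  # reports routed to human analysts
```

The point of the sketch is the asymmetry: the machine touches every report, but humans only see the small fraction the machine cannot interpret — which is how a handful of analysts can cope with a data deluge.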

OSB: How do these HACs work?

SR: Traditionally, humans tell computers what to do; HACs turn that relationship on its head and allow computers to take control occasionally and request information from humans. Of course, humans and computers have their foibles: they can be unreliable, malicious, selfish and, in the case of humans, they can even get bored. But it was the goal of ORCHID to figure out how to mitigate these foibles: how to incentivize humans to contribute to the HAC, track performance, maintain the best teams and record the sources of information and decisions that are made.
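One of the concerns Reece lists — tracking contributor performance so that unreliable (or bored) members don't degrade the collective — can be illustrated with a toy scheme: keep a running accuracy estimate per contributor, human or software, and weight their answers accordingly. This accuracy-weighted vote is a simple stand-in for illustration only, not ORCHID's actual (more sophisticated, Bayesian) machinery; all class and contributor names are invented.

```python
from collections import defaultdict

class ReliabilityTracker:
    """Track per-contributor accuracy and combine answers by weighted vote."""

    def __init__(self):
        self.correct = defaultdict(int)
        self.total = defaultdict(int)

    def record(self, contributor, was_correct):
        """Log whether a contributor's past answer turned out correct."""
        self.total[contributor] += 1
        self.correct[contributor] += int(was_correct)

    def accuracy(self, contributor):
        # Laplace smoothing: unseen contributors start near 0.5,
        # so newcomers are neither trusted nor dismissed outright.
        return (self.correct[contributor] + 1) / (self.total[contributor] + 2)

    def weighted_vote(self, answers):
        """answers: {contributor: label} -> label with highest summed accuracy."""
        scores = defaultdict(float)
        for contributor, label in answers.items():
            scores[label] += self.accuracy(contributor)
        return max(scores, key=scores.get)

tracker = ReliabilityTracker()
for _ in range(8):
    tracker.record("analyst_a", True)   # consistently right so far
tracker.record("analyst_b", False)      # often wrong so far
tracker.record("analyst_b", False)

print(tracker.weighted_vote({"analyst_a": "bridge down",
                             "analyst_b": "bridge intact"}))
# -> "bridge down": analyst_a's tracked accuracy outweighs analyst_b's
```

A record like this also gives the provenance trail Reece mentions: each decision can be traced back to which contributors supplied which answers, and with what historical reliability.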