Autonomous Drones Could Speed Up Search and Rescue after Flash Floods, Hurricanes and Other Disasters

Confirming Objects of Interest
When rescuers search for people trapped in disaster areas, their minds imagine 3D views of how a person might appear in the scene. That lets them detect the presence of a trapped person even if they have never seen someone in that exact position before.

We apply the same strategy by computing 3D models of people and rotating the shapes in all directions, training the autonomous system to recognize what a human rescuer would. That allows the system to identify people in various positions, such as lying prone or curled in the fetal position, from different viewing angles and in varying lighting and weather conditions.
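In spirit, this is a form of synthetic data augmentation. The minimal sketch below is illustrative rather than our production code: it rotates a simple 3D point model of a body through many yaw and pitch angles and projects each orientation into a 2D view of the kind a detector could be trained on. The function names and angle ranges are assumptions made for the example.

```python
import numpy as np

def rotation_matrix(yaw: float, pitch: float) -> np.ndarray:
    """Rotation about the vertical axis (yaw) followed by a tilt (pitch)."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    yaw_m = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    pitch_m = np.array([[1.0, 0.0, 0.0], [0.0, cp, -sp], [0.0, sp, cp]])
    return yaw_m @ pitch_m

def synthesize_views(body_points: np.ndarray, n_yaw: int = 12, n_pitch: int = 5):
    """Render an (N, 3) point model of a body from many viewing angles.

    Returns a list of (N, 2) orthographic top-down projections, one per
    orientation: stand-ins for multi-angle aerial training images.
    """
    views = []
    for yaw in np.linspace(0.0, 2.0 * np.pi, n_yaw, endpoint=False):
        for pitch in np.linspace(-np.pi / 6, np.pi / 6, n_pitch):
            rotated = body_points @ rotation_matrix(yaw, pitch).T
            views.append(rotated[:, :2])  # drop height: aerial-style view
    return views
```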

The system can also be trained to detect and locate a leg sticking out from under rubble, a hand waving at a distance, or a head popping up above a pile of wooden blocks. It can tell a person or animal apart from a tree, bush or vehicle.

Putting the Pieces Together
During its initial scan of the landscape, the system mimics the approach of an airborne spotter: it surveys the ground for possible objects of interest or regions worth a closer look, then zooms in on them. For example, a pilot looking for a truck on the ground would typically pay less attention to lakes, ponds, farm fields and playgrounds, because trucks are unlikely to be in those areas. The autonomous technology employs the same strategy to narrow the search to the most significant regions in the scene.
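A minimal sketch of that coarse first pass might look like the code below, which scores fixed-size tiles of an aerial image by intensity variance, a cheap stand-in for the richer saliency cues the real system uses, and keeps only the most promising tiles. The tile size and scoring rule are assumptions for illustration.

```python
import numpy as np

def propose_regions(image: np.ndarray, tile: int = 64, keep: int = 10):
    """Coarse pass: score non-overlapping tiles, keep the top few.

    Variance is a crude proxy for structure in a tile; water, bare
    fields and pavement tend to be flat and score low, so they are
    deprioritized, much as a pilot would skip over them.
    """
    h, w = image.shape[:2]
    scored = []
    for y in range(0, h - tile + 1, tile):
        for x in range(0, w - tile + 1, tile):
            patch = image[y:y + tile, x:x + tile]
            scored.append((float(np.var(patch)), (x, y, tile, tile)))
    scored.sort(reverse=True)  # most promising tiles first
    return [box for _, box in scored[:keep]]
```

Each returned box would then be handed to the slower, more discriminating second stage described next.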

Then the system investigates each selected region to obtain information about the shape, structure and texture of the objects there. When it detects a set of features that matches a human being or part of one, it flags the location, collects GPS data and estimates how far the person is from other objects to pinpoint an exact position.
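One piece of that step, turning a detection's position in the image into map coordinates, can be sketched as follows, assuming a straight-down camera aligned north-up and a known ground sample distance (meters per pixel). The flat-earth conversion and the function name are simplifications for illustration.

```python
import math

def detection_to_gps(drone_lat: float, drone_lon: float,
                     dx_px: float, dy_px: float, gsd_m: float):
    """Approximate GPS fix for a detection offset (dx_px, dy_px) from
    the image center, with a nadir camera and gsd_m meters per pixel."""
    east_m = dx_px * gsd_m    # +x pixels map to east
    north_m = -dy_px * gsd_m  # +y pixels run down the image, i.e. south
    lat = drone_lat + north_m / 111_320.0  # ~meters per degree of latitude
    lon = drone_lon + east_m / (111_320.0 * math.cos(math.radians(drone_lat)))
    return lat, lon
```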

The entire process takes about one-fifth of a second.

This is what faster search-and-rescue operations can look like in the future. A next step will be to turn this technology into an integrated system that can be deployed for emergency response.

We previously worked with the U.S. Army Medical Research and Materiel Command on technology to find wounded individuals on a battlefield who need rescue. We have also adapted the technology to help utility companies monitor heavy equipment that could damage pipelines. These are just a few of the ways disaster responders, companies or even farmers could benefit from technology that can see as humans can see, especially in places humans can't easily reach.

Vijayan Asari is Professor of Electrical and Computer Engineering, University of Dayton. This article is published courtesy of The Conversation.