Autonomous Drones Could Speed Up Search and Rescue after Flash Floods, Hurricanes and Other Disasters

By Vijayan Asari

Published 31 August 2021

During hurricanes, flash flooding and other disasters, it can be extremely dangerous to send in first responders, even though people may badly need help.

Rescuers already use drones in some cases, but most require individual pilots who fly the unmanned aircraft by remote control. That limits how quickly rescuers can view an entire affected area, and it can delay aid from reaching victims.

Autonomous drones could cover more ground faster, especially if they could identify people in need and notify rescue teams.

My team and I at the University of Dayton Vision Lab have been designing these autonomous systems of the future to eventually help spot people who might be trapped by debris. Our multi-sensor technology mimics the behavior of human rescuers to look deeply at wide areas and quickly choose specific regions to focus on, examine more closely, and determine if anyone needs help.

The deep learning technology that we use mimics the structure and behavior of a human brain in processing the images captured by the 2D and 3D sensors embedded in the drones. It can process large amounts of data simultaneously to make decisions in real time.
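To make that idea concrete, here is a rough sketch, in PyTorch, of how a small convolutional network can accept 2D camera and 3D depth data fused into a single input. The layer sizes and the two-class output are invented for illustration; this is not our actual onboard architecture.

```python
import torch
import torch.nn as nn

class FusedSensorNet(nn.Module):
    """Toy CNN that consumes RGB (3 channels) plus an aligned depth map (1 channel)."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=3, padding=1),  # 4 channels = RGB + depth
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)  # e.g., person / no person

    def forward(self, rgb, depth):
        x = torch.cat([rgb, depth], dim=1)  # fuse the 2D and 3D data channel-wise
        x = self.features(x).flatten(1)
        return self.classifier(x)

# One 64x64 frame: a batch of 1 RGB image plus its depth map.
net = FusedSensorNet()
logits = net(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
```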

Looking for an Object in a Chaotic Scene

Disaster areas are often cluttered with downed trees, collapsed buildings, torn-up roads and other disarray that can make spotting victims in need of rescue very difficult. 3D lidar sensor technology, which uses light pulses, can detect objects hidden by overhanging trees.
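A simple way to see why lidar helps: some laser pulses pass through gaps in foliage and return from near ground level, so filtering a point cloud by height can expose what sits beneath the canopy. The NumPy sketch below uses made-up points and an assumed height cutoff; it is a generic illustration, not our team's processing pipeline.

```python
import numpy as np

# Synthetic lidar point cloud: columns are x, y, z (meters).
# Some pulses reflect off the canopy; others pass through gaps
# in the foliage and return from objects near the ground.
points = np.array([
    [2.0, 1.0, 6.5],   # canopy return
    [2.1, 1.1, 0.4],   # ground-level return beneath the same tree
    [5.0, 3.0, 7.0],   # canopy return
    [5.1, 3.2, 1.1],   # possible person or debris under the canopy
])

CANOPY_CUTOFF_M = 2.0  # assumed threshold; would be tuned per terrain

# Keep only the low returns: points that penetrated the canopy.
below_canopy = points[points[:, 2] < CANOPY_CUTOFF_M]
print(below_canopy)
```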

My research team developed an artificial neural network system that could run in a computer onboard a drone. This system emulates some of the ways human vision works. It analyzes images captured by the drone’s sensors and communicates notable findings to human supervisors.
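Conceptually, that onboard loop looks something like the sketch below. The detect_people and notify_rescue_team functions are hypothetical placeholders for the real detector and communications link, and the confidence threshold is an assumed value, not a parameter from our system.

```python
def detect_people(frame):
    """Placeholder for the onboard neural network; returns candidate detections."""
    return []  # e.g., [{"lat": ..., "lon": ..., "confidence": 0.93}]

def notify_rescue_team(detection):
    """Hypothetical uplink to human supervisors (radio, satellite, etc.)."""
    print(f"Possible person detected: {detection}")

CONFIDENCE_THRESHOLD = 0.9  # only escalate high-confidence findings

def process_stream(frames):
    # Analyze each captured frame and report notable findings.
    for frame in frames:
        for detection in detect_people(frame):
            if detection["confidence"] >= CONFIDENCE_THRESHOLD:
                notify_rescue_team(detection)
```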

First, the system processes the images to improve their clarity. Just as humans squint their eyes to adjust their focus, this technology takes detailed estimates of the darker regions in a scene and computationally lightens the images.
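One generic way to lighten dark regions is gamma correction, which boosts dim pixels far more than bright ones. The snippet below is a minimal sketch of that idea, not the lab's specific enhancement method.

```python
import numpy as np

def lighten_dark_regions(image, gamma=0.5):
    """Gamma-correct a grayscale image with values in [0, 255].

    A gamma below 1 raises dark pixels much more than bright ones,
    roughly analogous to squinting to pull detail out of shadows.
    """
    normalized = image.astype(np.float32) / 255.0
    brightened = normalized ** gamma
    return (brightened * 255).astype(np.uint8)

# A dark pixel (30) gains far more brightness than a bright one (220).
frame = np.array([[30, 220]], dtype=np.uint8)
print(lighten_dark_regions(frame))  # the dark value roughly triples
```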

In a rainy environment, human brains use a brilliant strategy to see clearly: By noticing the parts of a scene that don’t change as the raindrops fall, people can see reasonably well despite the rain. Our technology uses the same strategy, continuously investigating the contents of each location in a sequence of images to get clear information about the objects in that location.
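A standard way to exploit that kind of temporal consistency, shown here only as a generic example rather than our exact method, is a per-pixel median across consecutive frames: a raindrop occupies any given pixel for only a frame or two, so the median keeps the stable scene behind it.

```python
import numpy as np

def temporal_median(frames):
    """Per-pixel median over a stack of frames shaped (T, H, W).

    Transient occlusions such as raindrops appear in only a frame
    or two at each pixel, so the median votes them out and keeps
    the stable background.
    """
    return np.median(np.stack(frames), axis=0).astype(frames[0].dtype)

# Three frames of the same scene; a 'raindrop' flashes through one pixel.
clean = np.full((4, 4), 100, dtype=np.uint8)
rainy = clean.copy()
rainy[1, 2] = 255  # transient bright streak in one frame only
restored = temporal_median([clean, rainy, clean])
assert restored[1, 2] == 100  # the raindrop is gone
```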