Team of robots collaborates in exploration, map building

In addition to the College of Computing, researchers from the Georgia Tech Research Institute (GTRI), the School of Aerospace Engineering, and the School of Physics are involved in MAST work.

The experiment — developed by the Georgia Tech MAST processing team — combines navigation technology developed by Georgia Tech with vision-based techniques from the Jet Propulsion Laboratory (JPL) and network technology from the University of Pennsylvania.

In addition to Christensen, members of the Georgia Tech processing team involved in the demonstration include Professor Frank Dellaert of the College of Computing and graduate students Alex Cunningham, Manohar Paluri and John G. Rogers III. Regents' Professor Ronald C. Arkin of the College of Computing and Tom Collins of GTRI are also members of the team.

In the experiment, the robots perform their mapping work using two types of sensors — a video camera and a laser scanner. Supported by onboard computing capability, the camera locates doorways and windows, while the scanner measures walls. In addition, an inertial measurement unit helps stabilize the robot and provides information about its movement.
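
The article does not describe the team's software, but a rough sketch of how such sensor measurements might be organized on each robot (all names here are hypothetical) could look like this — the laser supplies geometry, the camera supplies semantic labels, and the IMU feeds the motion estimate used by the mapping back end:

```python
from dataclasses import dataclass, field

@dataclass
class WallSegment:
    """Straight wall section extracted from laser-scanner returns (meters, robot frame)."""
    start: tuple[float, float]
    end: tuple[float, float]

@dataclass
class OpeningDetection:
    """Doorway or window located by the onboard camera."""
    label: str            # "doorway" or "window"
    bearing_rad: float    # direction relative to the robot's heading
    range_m: float        # estimated distance to the opening

@dataclass
class LocalMap:
    """Per-robot local map combining laser geometry with camera semantics."""
    walls: list[WallSegment] = field(default_factory=list)
    openings: list[OpeningDetection] = field(default_factory=list)

    def update(self, scan_walls: list[WallSegment],
               detections: list[OpeningDetection]) -> None:
        # IMU-based motion estimates would be handled separately,
        # by the SLAM machinery described below.
        self.walls.extend(scan_walls)
        self.openings.extend(detections)
```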

Data from the sensors are integrated into a local area map that each robot builds using a graph-based technique called simultaneous localization and mapping (SLAM). The SLAM approach allows an autonomous vehicle to map a known or unknown environment while simultaneously keeping track of, and reporting, its own location within it.
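
One widely used way to implement graph-based SLAM is with a factor-graph library such as GTSAM, which originated in Dellaert's research group. The article does not say what software the demonstration used, but a minimal 2-D pose-graph sketch looks like this: each node is a robot pose, each edge is a relative-motion measurement, and optimization finds the trajectory most consistent with all the measurements.

```python
import math
import numpy as np
import gtsam

# Each node in the graph is a robot pose; each factor is a measurement
# constraint (a prior, or odometry between consecutive poses).
graph = gtsam.NonlinearFactorGraph()

prior_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.3, 0.3, 0.1]))
odom_noise = gtsam.noiseModel.Diagonal.Sigmas(np.array([0.2, 0.2, 0.1]))

# Anchor the first pose at the origin.
graph.add(gtsam.PriorFactorPose2(1, gtsam.Pose2(0.0, 0.0, 0.0), prior_noise))

# Odometry: drive 2 m forward, then 2 m forward while turning 90 degrees.
graph.add(gtsam.BetweenFactorPose2(1, 2, gtsam.Pose2(2.0, 0.0, 0.0), odom_noise))
graph.add(gtsam.BetweenFactorPose2(2, 3, gtsam.Pose2(2.0, 0.0, math.pi / 2), odom_noise))

# Initial guesses are deliberately off; optimization corrects them.
initial = gtsam.Values()
initial.insert(1, gtsam.Pose2(0.5, 0.0, 0.2))
initial.insert(2, gtsam.Pose2(2.3, 0.1, -0.2))
initial.insert(3, gtsam.Pose2(4.1, 0.1, math.pi / 2))

result = gtsam.LevenbergMarquardtOptimizer(graph, initial).optimize()
print(result)
```

In a full system, loop-closure factors from re-observed walls or doorways would be added the same way, pulling accumulated drift out of the estimated trajectory.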

SLAM’s flexibility is especially valuable in areas where global positioning system (GPS) service is blocked, such as inside buildings and in some combat zones, Christensen said. Where GPS is available, human handlers can use it to see where their robots are; in its absence, SLAM enables the robots to keep track of their own locations as they move.

“There is no lead robot, yet each unit is capable of recruiting other units to make sure the entire area is explored,” Christensen explained. “When the first robot comes to an intersection, it says to a second robot, ‘I’m going to go to the left if you go to the right.’”
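
A toy version of that recruitment rule (the names and structures here are invented for illustration, not taken from the MAST software) might look like the following: on reaching an intersection, a robot keeps one branch for itself and hands the others to idle teammates.

```python
from collections import deque

class Robot:
    def __init__(self, name: str):
        self.name = name

    def assign(self, branch: str) -> None:
        print(f"{self.name} explores the {branch} corridor")

def handle_intersection(robot: Robot, branches: list[str],
                        idle_robots: deque, backlog: list) -> None:
    """Keep one branch and recruit free teammates for the rest --
    'I'm going to go to the left if you go to the right.'"""
    mine, *others = branches
    for branch in others:
        if idle_robots:
            idle_robots.popleft().assign(branch)  # recruit a free teammate
        else:
            backlog.append(branch)                # no one free; revisit later
    robot.assign(mine)

idle = deque([Robot("robot-2")])
backlog: list[str] = []
handle_intersection(Robot("robot-1"), ["left", "right"], idle, backlog)
```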

Christensen expects the robots’ abilities to expand beyond mapping soon. One capability under development by a MAST team involves tiny radar units that could see through walls and detect objects — or humans — behind them.

Infrared sensors could also support the search mission by locating anything giving off heat. In addition, a MAST team is developing a highly flexible “whisker” to sense the proximity of walls, even in the dark.

The processing team is designing a more complex experiment for the coming year, in which small autonomous aerial platforms will locate a particular building, find likely entry points and then call in robotic mapping teams. Demonstrating such a capability next year would cap the progress in small-scale autonomy made during MAST’s first five years, Christensen said.

In addition to Georgia Tech, JPL and the University of Pennsylvania, other MAST team participants are North Carolina A&T State University, the University of California, Berkeley, the University of Maryland, the University of Michigan, the University of New Mexico, Harvard University, the Massachusetts Institute of Technology, and two companies: BAE Systems and Daedalus Flight Systems.