ROBOTICS

Using Novel Approach to Teach Robot to Navigate Over Obstacles

Published 19 May 2023

When it comes to robotic locomotion and navigation, most four-legged robots are trained to regain their footing if an obstacle causes them to stumble. These researchers set out to train their robot to walk over clutter instead.

Quadrupedal robots may be able to step directly over obstacles in their paths thanks to the efforts of a trio of Georgia Tech Ph.D. students.

When it comes to robotic locomotion and navigation, Naoki Yokoyama says most four-legged robots are trained to regain their footing if an obstacle causes them to stumble. As part of a larger effort to develop a housekeeping robot, Yokoyama and his collaborators — Simar Kareer and Joanne Truong — set out to train their robot to walk over clutter it might encounter in a home.

“The main motivation of the project is getting low-level control over the legs of the robot that also incorporates visual input,” said Yokoyama, a Ph.D. student within Georgia Tech’s School of Electrical and Computer Engineering. “We envisioned a controller that could be deployed in an indoor setting with a lot of clutter, such as shoes or toys on the ground of a messy home. Whereas blind locomotion controllers tend to be more reactive — if they step on something, they’ll make sure they don’t fall over — we wanted ours to use visual input to avoid stepping on the obstacle altogether.”

To achieve their goal, the researchers took a novel training approach: fusing a high-level visual navigation policy with a low-level visual locomotion policy.
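The article doesn't spell out the interface between the two policies, but the division of labor can be pictured as a simple hierarchy: the navigation policy reads egocentric vision and a goal and emits a coarse steering command, and the locomotion policy turns that command, plus its own vision and proprioception, into joint targets. Below is a minimal Python sketch of that structure; the class names, dimensions, and command format are illustrative assumptions, not the team's actual code.

    import numpy as np

    NUM_JOINTS = 12  # typical for a quadruped; an assumed value, not from the paper

    class NavigationPolicy:
        """High-level policy: egocentric depth + goal -> coarse velocity command."""
        def act(self, depth_image, goal_xy):
            # A trained network would go here; we return a fixed placeholder
            # command of (forward velocity, turn rate).
            return np.array([0.3, 0.0])

    class LocomotionPolicy:
        """Low-level policy: command + vision + proprioception -> joint targets."""
        def act(self, command, depth_image, joint_angles):
            # A trained network would track `command` while placing feet to
            # step over obstacles visible in `depth_image`.
            return np.zeros(NUM_JOINTS)

    def control_step(nav, loco, depth_image, goal_xy, joint_angles):
        # The two policies meet only at the narrow `command` interface,
        # which is what makes them separately trainable and swappable.
        command = nav.act(depth_image, goal_xy)
        return loco.act(command, depth_image, joint_angles)

    # One control tick with dummy inputs:
    targets = control_step(NavigationPolicy(), LocomotionPolicy(),
                           depth_image=np.zeros((128, 128)),
                           goal_xy=np.array([2.0, 0.0]),
                           joint_angles=np.zeros(NUM_JOINTS))

Because all of the coupling lives in that one command, improving either policy does not require touching the other, which is the property Kareer highlights below.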

In a paper advised by Interactive Computing Associate Professor Dhruv Batra and Assistant Professor Sehoon Ha, Kareer, Yokoyama, and Truong show that their two-policy approach successfully guides a robot over obstacles in simulation.

They call their approach ViNL (Visual Navigation and Locomotion), and so far it has guided robots through novel, cluttered simulated environments with a 72.6% success rate. The team will present its paper, ViNL: Visual Navigation and Locomotion Over Obstacles, at the IEEE International Conference on Robotics and Automation, which is being held May 29-June 2 in London.

Both policies are model-free — the robot learns on its own in simulation and doesn’t mimic any pre-existing behavioral patterns — and they can be combined without any additional co-training.
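In practice, "model-free" means each policy is learned by trial and error against a reward signal inside a physics simulator, with no hand-built dynamics model and no demonstrations to imitate. A generic sketch of such a loop, assuming a simulator API with reset/step methods (the `env` and `policy` objects here are placeholders, not the authors' training setup):

    def train_model_free(env, policy, episodes=1_000):
        """Learn purely from the robot's own rollouts: act, observe a reward,
        and update the policy from the collected experience."""
        for _ in range(episodes):
            obs = env.reset()
            trajectory = []
            done = False
            while not done:
                action = policy.act(obs)                 # query the current policy
                obs, reward, done = env.step(action)     # simulator advances physics
                trajectory.append((obs, action, reward))
            policy.update(trajectory)                    # e.g., a policy-gradient step

Each policy would be trained this way independently and only combined afterward, consistent with the no-co-training claim above.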

“This work uniquely combines separate locomotion and navigation policies in a zero-shot manner,” said Kareer, who, along with Truong, is a Ph.D. student within the School of Interactive Computing. “If we come up with an improved navigation policy, we can just take that, do no extra work, and deploy that to our robot. That’s a scalable approach. You can plug and play these things together with very little fine-tuning. That’s powerful.”