UC Berkeley researchers develop a robot that folds towels

Published 6 April 2010

Researchers build a robot that can reliably fold towels it has never “seen”; the work addresses a key issue in the development of robotics: many important problems in applying robotics and computer vision to real-world tasks involve deformable objects, and the challenges posed by robotic towel-folding reflect the challenges inherent in robotic perception and manipulation of such objects

A team from Berkeley’s Electrical Engineering and Computer Sciences department has figured out how to get a robot to fold previously unseen towels of different sizes. This may not sound like much, but their approach solves a key problem in robotics — how to deal with flexible, or “deformable,” objects.

A team of Berkeley researchers has, for the first time, enabled an autonomous robot to reliably fold piles of previously unseen towels. Robots that can do things like assembling cars have been around for decades. The towel-folding robot, though, is doing something very new, according to the leaders of the Berkeley team, doctoral student Jeremy Maitin-Shepard and Assistant Professor Pieter Abbeel of Berkeley’s Department of Electrical Engineering and Computer Sciences.

Robots like the car-assembly ones are designed to work in highly structured settings, which allows them to perform a wide variety of tasks with mind-boggling precision and repeatability — but only in carefully controlled environments, Maitin-Shepard and Abbeel explain. Outside of such settings, their capabilities are much more limited.

Automation of household tasks like laundry folding is compelling in itself. More significantly, according to Maitin-Shepard, the task involves something that has proved a challenge for robots: perceiving and manipulating “deformable objects” — things that are flexible rather than rigid, so their shape is not predictable. A towel is deformable; a mug or a computer is not.

A video shows a robot, built by the Menlo Park robotics company Willow Garage and running an algorithm developed by the Berkeley team, confronting a heap of towels it has never “seen” before. The towels are of different sizes, colors, and materials.

The robot picks one up and turns it slowly, first with one arm and then with the other. It uses a pair of high-resolution cameras to scan the towel to estimate its shape. Once it finds two adjacent corners, it can start folding. On a flat surface, it completes the folds — smoothing the towel after each fold, and making a neat stack.
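The routine just described can be viewed as a simple loop of states. The sketch below is purely illustrative — the names and structure are assumptions for exposition, not the Berkeley team's controller, which runs on a Willow Garage PR2 under ROS.

```python
# A minimal state-machine sketch of the routine described above.
# All names are illustrative, not the actual PR2/ROS controller.
from enum import Enum, auto

class State(Enum):
    GRASP_FROM_PILE = auto()
    ROTATE_AND_SCAN = auto()   # stereo cameras estimate the towel's shape
    LOCATE_CORNERS = auto()    # look for two adjacent corners
    FOLD_AND_SMOOTH = auto()   # fold flat, smoothing after each fold
    STACK = auto()

def next_state(state, corners_found=True):
    """Advance the folding routine; re-scan if no corner pair was found."""
    if state is State.GRASP_FROM_PILE:
        return State.ROTATE_AND_SCAN
    if state is State.ROTATE_AND_SCAN:
        return State.LOCATE_CORNERS
    if state is State.LOCATE_CORNERS:
        return State.FOLD_AND_SMOOTH if corners_found else State.ROTATE_AND_SCAN
    if state is State.FOLD_AND_SMOOTH:
        return State.STACK
    return State.GRASP_FROM_PILE  # start on the next towel

# Walk one towel through the happy path.
s = State.GRASP_FROM_PILE
trace = [s]
while s is not State.STACK:
    s = next_state(s)
    trace.append(s)
print(" -> ".join(t.name for t in trace))
# GRASP_FROM_PILE -> ROTATE_AND_SCAN -> LOCATE_CORNERS -> FOLD_AND_SMOOTH -> STACK
```

Note the one loop back: if no corner pair is found, the sketch returns to scanning — mirroring how the robot keeps rotating the towel until it can identify two adjacent corners.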

“Existing work on robotic laundry and towel folding has shown that, starting from a known configuration, the actual folding can be performed using standard techniques in robotic manufacturing,” says Maitin-Shepard.

The bottleneck has been picking a towel up from a pile, where its configuration is unknown and arbitrary, and turning it into a known, predictable shape. Existing computer-vision techniques, developed primarily for rigid objects, are not robust enough to handle the variations in three-dimensional shape, appearance, and texture that deformable objects can exhibit, the researchers say.

Solving that problem helps a robot fold towels. More significantly, it addresses a key issue in the development of robotics. “Many important problems in robotics and computer vision involve deformable objects,” says Abbeel, “and the challenges posed by robotic towel-folding reflect important challenges inherent in robotic perception and manipulation for deformable objects.”

The team’s technical innovation is a new computer-vision approach for detecting the key points on the cloth for the robot to grasp. The approach is highly effective because it depends only on geometric cues, which can be identified reliably even as appearance and texture vary.
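To make the idea of geometric cues concrete, here is a toy sketch — not the team's published algorithm — of finding corner candidates from a cloth outline purely by shape: points where the outline turns sharply qualify, regardless of the towel's color or texture.

```python
# Illustrative sketch only: corner candidates from geometric cues alone.
import math

def turning_angle(prev, cur, nxt):
    """Angle (radians) between incoming and outgoing contour segments at cur."""
    ax, ay = cur[0] - prev[0], cur[1] - prev[1]
    bx, by = nxt[0] - cur[0], nxt[1] - cur[1]
    cos_a = (ax * bx + ay * by) / (math.hypot(ax, ay) * math.hypot(bx, by))
    return math.acos(max(-1.0, min(1.0, cos_a)))

def corner_candidates(contour, angle_thresh=math.radians(60)):
    """Return contour points whose turning angle exceeds the threshold."""
    n = len(contour)
    return [contour[i] for i in range(n)
            if turning_angle(contour[i - 1], contour[i],
                             contour[(i + 1) % n]) > angle_thresh]

# A coarse rectangular "towel" outline: corners joined by edge midpoints.
outline = [(0, 0), (2, 0), (4, 0), (4, 1.5), (4, 3), (2, 3), (0, 3), (0, 1.5)]
print(corner_candidates(outline))
# [(0, 0), (4, 0), (4, 3), (0, 3)] -- the four sharp corners
```

Because the test uses only angles between contour segments, changing the towel's appearance or texture would not affect the result — the robustness property the researchers attribute to geometric cues.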

The approach has proven highly reliable. The robot succeeded in all fifty trials that were attempted on previously unseen towels with wide variations in appearance, material, and size, according to the team’s report on its research, which is being presented in May at the International Conference on Robotics and Automation 2010 in Anchorage, Alaska. Their paper is posted online (.pdf).

The system was implemented on a prototype version of the PR2, a mobile robotic platform that was developed by Willow Garage, using the open-source Robot Operating System (ROS) software framework. Two undergraduates, Marco Cusumano-Towner, a junior in EECS, and Jinna Lei, a senior math major, assisted on the project.