Robot special: Get a grip
- 04 February 2006
- From New Scientist Print Edition
- Gregory T. Huang
CLAD smartly in a white flight suit, the astronaut is a picture of concentration - carefully grasping a rod with a gloved hand and fastening it to a large aluminium frame by gently twisting the rod until it locks in place. That frame might some day support solar panels to power a space station or a moon colony. This vital job requires precise, deliberate moves and a good deal of strength, but this astronaut is up to the task.
Here at NASA's Johnson Space Center in Houston, Texas, specialists train for all kinds of important missions. But the astronaut in the helmet is the cream of the crop, and the centre's teachers proudly track every move of the exercise. "That looks like the way I would do it," says Ron Diftler. "It's kind of eerie. Sometimes you could swear it's a human doing the task."
It's not a human, of course. Robonaut is the most dexterous robot on Earth, and Diftler is its supervisor and the project's manager. Robonaut's upper body looks human, with a head, torso, two multi-jointed arms, and two precisely controlled five-fingered hands. It can mimic the dexterity of an astronaut wearing pressurised gloves, and it might one day assist on space-walking missions or even operate in space on its own. The robot could be sent into orbit to spruce up the International Space Station, for instance, or help construct the first human habitat on Mars.
Now an initiative launched last spring by NASA has given Robonaut and its kin a vital boost. The Human-Robot Technology programme aims to develop intelligent machines that can do useful work with their hands. It's all part of what robotics experts call "autonomous mobile manipulation" - and it's one of the hottest fields around. The overarching goal is to build, within two decades, a robot that has the manual dexterity of a 6-year-old child.
What's all the fuss about then? "Autonomous mobile manipulation is important if you want to go fix the Hubble telescope or work as a general-purpose humanoid robot," says Russ Tedrake from the computer science and artificial intelligence laboratory at the Massachusetts Institute of Technology. That means going where humans would rather not, because of safety concerns, cost or just plain laziness. And although today's robots are likelier to be picking up litter on Earth than making repairs in space, even these could help spawn important advances in prosthetics, surgical tools and automated care for the elderly.
Although researchers have worked on robotic dexterity for decades - notably in the US, Germany and Japan - they have tended to focus more resources on tasks like navigation and walking. That has led to machines that can get around impressively well but can't do much else. "After 25 years, we have got the robot to go down the hall without bumping into the walls," says Robert Ambrose, chief of robotics systems technology at Johnson Space Center, "but we forgot why we went."
Now all that is changing, thanks to remarkable advances in the sensors, actuators and computing abilities that robots need for dexterity in the real world. The latest control systems enable robots to sense their environment more accurately, sharpen their fine motor skills and interact more naturally with objects around them.
So why has dexterity been so hard to get to grips with? For one, it requires fast movements and accurate feedback - so that the robot's "brain" can control exactly where its fingers are and how hard they grip something. This is tough. Traditional bots move stiffly, and each joint's position is precisely controlled at all times. That's useful for the fixed repertoire of assembly-line work but not so good in the real world, where things are unpredictable.
Another problem is that different objects require different grips. When you pick up a coffee cup, start your car, or turn the pages of this magazine, you move your fingers very differently. That is hard for a robot to deal with, because it either needs to be programmed to deal with every object it might meet, or else it must learn to adjust its grip depending on what it sees and feels.
The new breed of bots are much more sensitive to the world around them. Their movements are controlled on the basis of the forces they exert, rather than the absolute position of each finger or limb. Each of Robonaut's arms is packed with 150 sensors that detect not only joint positions but contact forces, stresses and strains on the limb, heat flow, and other variables. An on-board computer analyses signals from the sensors and sends commands to the electric motors in the arm. For example, when the robot's hand touches an object, it senses contact and tries to adjust its fingers to fit the shape of the thing, the way your fingers naturally curl around a cup, whatever its size or shape.
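To make the force-based approach concrete, here is a minimal Python sketch of the idea described above: each finger keeps closing until its measured contact force reaches a target, instead of being driven to a preset position. The Finger class, its sensor and motor interfaces, and the numerical values are hypothetical stand-ins, not Robonaut's actual software.

```python
# Minimal sketch of force-based (compliant) grasp control.
# All names and numbers here are illustrative assumptions.

TARGET_FORCE = 2.0   # desired fingertip contact force in newtons (assumed)
GAIN = 0.02          # proportional gain: force error -> closing velocity
CLOSE_SPEED = 0.05   # closing speed before contact is made (assumed)

class Finger:
    """Stand-in for one sensed, motorised finger."""
    def __init__(self):
        self.contact_force = 0.0          # would come from a tactile/strain sensor
    def command_velocity(self, v):
        print(f"finger velocity: {v:+.3f}")  # would drive the finger's electric motor

def control_step(finger):
    # Before contact, close at a fixed speed; after contact, regulate force
    # rather than position, so the finger conforms to the object's shape.
    if finger.contact_force < 0.1:
        finger.command_velocity(CLOSE_SPEED)
    else:
        error = TARGET_FORCE - finger.contact_force
        finger.command_velocity(GAIN * error)   # push harder or ease off

# One control cycle for each of five fingers wrapping around an object:
for f in [Finger() for _ in range(5)]:
    control_step(f)
```

The essential difference from a traditional position-controlled arm is in the second branch: the command depends on how hard the fingertip is pressing, not on where the joint is supposed to be.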
Using this approach, Robonaut has already performed some impressive feats. It can use a pair of fine tweezers to pick up a tiny bolt. It can grasp a handrail and attach a hook and tether the way an astronaut would secure a safety line for a space walk. It has used a hand tool to open and close a replica of a port on the Hubble telescope. In terms of manual dexterity, says Ambrose, "that's probably the hardest thing astronauts have done in space".
So the problem is solved? Not quite. There's a bit of what might be called cheating going on here. For now, Robonaut is controlled in part by a "teleoperator" who acts out what he or she wants the robot to do and gets visual and tactile feedback through a virtual-reality headset and gloves. In many cases, the teleoperator is the eyes and brain of the bot; in others, the human controls only one aspect of the robot hand's movement, twisting the wrist, for example, while the robot does the rest.
That's fine for some simple or repetitive tasks. But considering that radio signals to and from the moon are delayed by seconds, rising to a maximum of 42 minutes or so for Mars, remote control will not always be practical in space. "We want a lot of automation, not constant human intervention," says Diftler.
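For a rough sense of those delays, a few lines of Python estimate round-trip signal times from the distances involved. The distances are approximate, and the Mars figure is taken near its largest value; the actual delay varies greatly with orbital geometry.

```python
# Back-of-envelope check on the communication delays mentioned above.

C_KM_PER_S = 299_792           # speed of light
MOON_KM = 384_400              # average Earth-Moon distance
MARS_MAX_KM = 378_000_000      # near a maximum Earth-Mars separation (approximate)

for name, d in [("Moon", MOON_KM), ("Mars (near max)", MARS_MAX_KM)]:
    round_trip_s = 2 * d / C_KM_PER_S
    print(f"{name}: {round_trip_s:.1f} s round trip ({round_trip_s / 60:.1f} min)")
# Moon: about 2.6 s; Mars at its farthest: roughly 42 minutes round trip.
```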
Of course, a truly autonomous robot - one that performs tasks completely on its own - is what everyone is striving for. The key will be upgrading its brain. To get there, the NASA team has collaborated with researchers from MIT, Vanderbilt University in Nashville, Tennessee, the University of Southern California in Los Angeles and the University of Massachusetts at Amherst. Each group has tested its own software to control different aspects of Robonaut. The idea is to teach it to use tools, keep track of objects in its workspace, and even recognise speech and gestures so it can work with people in real environments, all without the need for a teleoperator.
Not to be outdone, each group also has its own robots. At the University of Massachusetts, "Dexter" learns to manipulate objects by playing around with them. The robot watches as lab director Roderic Grupen places a rectangular block and a cylindrical can on the table in front of it, one by one. Dexter reaches out with a large three-fingered hand, picks up the block, and sets it back down. Then it locates the can, reaches out to it, and finds the right grip for that as well.
Unlike Robonaut, Dexter's bulky frame won't be mistaken for a human any time soon. Despite its name, the robot is also less dexterous than the space robot. Dexter consists of a "head" with stereo cameras, two thick arms nearly a metre long, and two hands. Most of these parts were bought off the shelf: industrial components with conventional motors and commercial force and position sensors. But the hardware is not the point here. What matters is the robot's learning "infant brain", says Grupen.
Dexter is designed to learn by accumulating real-world experience. During each manoeuvre, the robot keeps track of how its hand moves as it approaches an object, say, and whether its grip is strong enough. Mathematically, this is a tricky feat. "It is not just 'Can I pick up that object?' but also 'How do I store the knowledge I've acquired from the environment and then use it?'," says Grupen.
To do this, Dexter looks at a new object to get a sense of its size and shape. Then it compares what it sees to what it remembers about other objects it has handled, and uses statistical inferences to make new reaching and grasping decisions. In one experiment, Dexter places different types of plastic bottles into a paper grocery bag, like a packer at a supermarket checkout, and learns which grip works with which kind of object.
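One way to picture that kind of statistical bookkeeping is the toy Python sketch below, which tallies which grips have succeeded on which shapes and picks the grip with the best estimated success rate for a new object. It is a deliberately simplified illustration of learning from grasping experience, not the UMass group's actual algorithm.

```python
# Toy grasp memory: remember which grips worked on which kinds of object,
# then choose the most promising grip for the next object of that kind.

from collections import defaultdict

class GraspMemory:
    def __init__(self):
        # (object_shape, grip) -> [successes, attempts]
        self.stats = defaultdict(lambda: [0, 0])

    def record(self, shape, grip, success):
        s = self.stats[(shape, grip)]
        s[0] += int(success)
        s[1] += 1

    def best_grip(self, shape, grips):
        # Laplace-smoothed success estimate, so untried grips still get a chance
        def estimate(grip):
            succ, tries = self.stats[(shape, grip)]
            return (succ + 1) / (tries + 2)
        return max(grips, key=estimate)

memory = GraspMemory()
memory.record("cylinder", "wrap", True)    # a bottle, say
memory.record("cylinder", "pinch", False)
memory.record("box", "pinch", True)

print(memory.best_grip("cylinder", ["wrap", "pinch"]))  # -> "wrap"
```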
What's more, Dexter employs what Grupen calls "domain generalisation" to tackle new situations. Having learned to use two fingers to grasp an apple, for instance, Dexter tries to grasp a large beach ball with two arms in roughly the same way as it used the two fingers.
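The beach-ball example amounts to reusing an abstract grasp plan, squeezing an object between two opposing contacts, while swapping in effectors that match the object's size. The sketch below illustrates that idea with an assumed size threshold; it is illustrative only, not Grupen's published method.

```python
# Reuse one abstract strategy (two opposing contacts squeezing inward),
# choosing bigger "fingers" for bigger objects. The 10 cm cutoff is assumed.

def opposition_grasp(object_width_cm):
    """Pick effectors for a squeeze grasp based only on object size."""
    if object_width_cm <= 10:          # apple-sized: two fingertips suffice
        effectors = ("thumb", "index finger")
    else:                              # beach-ball-sized: same plan, whole arms
        effectors = ("left arm", "right arm")
    # Contacts sit on opposite sides of the object, squeezing toward its centre
    return {"contacts": effectors, "separation_cm": object_width_cm}

print(opposition_grasp(8))    # apple      -> fingertip pinch
print(opposition_grasp(50))   # beach ball -> two-arm squeeze
```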
The result is that anything a 6-year-old can do, Dexter could eventually do better. Or at least that's the goal - so far nobody, least of all roboticists, seems to understand how children acquire fine motor skills so effortlessly, while it remains difficult to teach a machine to do anything. "We all think we'll get our robot to do things before our kids, but we always lose," says Grupen.
With robots like Dexter and Robonaut, researchers have shown that the hardware is up to the job. What remains is working out how to control these machines better. In the meantime, researchers plan to combine the new-found dexterity with mobility and navigation. Grupen's colleagues are mounting an arm and hand like Dexter's onto a robotic platform with wheels that will move around, open doors and fetch objects. And at NASA's Ames Research Center at Moffett Field, California, Robonaut recently did its thing while riding around on a specially modified Segway transporter. It even practised some welding as part of a simulated human-habitat construction project.
The NASA researchers are also testing out new software that coordinates the actions of the robotic astronaut with those of another mobile robot, and which also enables humans working nearby to give the robots instructions. If all goes well, such a system could be ready for space - and other applications closer to home - within a few years.
"There is so much potential to help people in different kinds of environments," says Diftler. "We are entering a new era." One in which, out of the corner of your eye at least, some robots can already pass for human. For now, that's the ultimate compliment.