Creepy robots learn new skills the way human babies do, by watching and imitating
To teach robots new things, roboticists have traditionally either written code for specific tasks or physically moved robots to show them how to perform a certain action. Now, University of Washington researchers have developed a new way to teach robots: simply let them learn from their environments the way human infants do.
A collaboration between developmental psychologists and computer scientists, the project shows that robots can learn the way babies do: by watching adults, trying to imitate them, and amassing data through exploration.

“If you want people who don’t know anything about computer programming to be able to teach a robot, the way to do it is through demonstration — showing the robot how to clean your dishes, fold your clothes, or do household chores,” says Rajesh Rao, the study’s senior author and a professor of computer science and engineering at the University of Washington. “But to achieve that goal, you need the robot to be able to understand those actions and perform them on its own.”
An imitation game
It turns out that much of the play infants engage in isn’t just flailing about and making messes; it’s a learning process built on imitation. According to the research paper, published in PLOS ONE, children as young as eighteen months can observe an adult’s actions, infer the goal of those actions, and try to reach that goal themselves.
“Babies engage in what looks like mindless play, but this enables future learning. It’s a baby’s secret sauce for innovation,” UW psychology professor Andrew Meltzoff said. “If they’re trying to figure out how to work a new toy, they’re actually using knowledge they gained by playing with other toys. During play they’re learning a mental model of how their actions cause changes in the world. And once you have that model you can begin to solve novel problems and start to predict someone else’s intentions.”
The researchers created algorithms that allow robots to engage in this same kind of learning. In one experiment, they had a robot observe a human moving objects around on a table; the robot then imitated those actions. More work remains, however, before robots can truly learn like infants: they must also be able to infer the goal behind an action rather than simply copy its movements.
“If the human pushes an object to a new location, it may be easier and more reliable for a robot with a gripper to pick it up to move it there rather than push it,” said lead author Michael Jae-Yoon Chung, a UW doctoral student. “But that requires knowing what the goal is, which is a hard problem in robotics and which our paper tries to address.”
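The distinction Chung describes, copying the means versus inferring the end, can be made concrete with a short toy sketch. The Python snippet below is purely illustrative and is not the algorithm from the paper; all names, thresholds, and the push-versus-grasp heuristic are hypothetical.

```python
# Purely illustrative: a toy version of the goal-inference problem described
# above, NOT the algorithm from the PLOS ONE paper. All names, thresholds,
# and the push-versus-grasp heuristic are hypothetical.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Observation:
    """Where an object sat before and after the human's demonstration."""
    start: tuple[float, float]
    end: tuple[float, float]


def infer_goal(obs: Observation, tol: float = 0.01) -> tuple[float, float] | None:
    """Infer the demonstrated goal: if the object moved, take its final
    location as the goal; otherwise there is no displacement to imitate."""
    dx, dy = obs.end[0] - obs.start[0], obs.end[1] - obs.start[1]
    if (dx * dx + dy * dy) ** 0.5 < tol:
        return None
    return obs.end


def choose_action(goal: tuple[float, float], start: tuple[float, float],
                  reach: float = 0.3) -> str:
    """Pick the robot's own means to the inferred end. For long moves, a
    gripper robot may pick the object up and place it down, which can be
    more reliable than pushing, even though the human pushed."""
    dist = ((goal[0] - start[0]) ** 2 + (goal[1] - start[1]) ** 2) ** 0.5
    return "push" if dist < reach else "pick_and_place"


obs = Observation(start=(0.0, 0.0), end=(0.5, 0.2))
goal = infer_goal(obs)
if goal is not None:
    print(choose_action(goal, obs.start))  # prints "pick_and_place"
```

Even in this toy form, the key idea survives: once the goal is represented separately from the demonstration, the robot is free to reach it with whatever action its own body performs most reliably.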