Teaching Robots New Tricks

https://youtu.be/OOy5ppz2P3s

If you were to give a computer a special treat for doing well, as you might for your dog, what would it be? “A plus one.”

So says Matt Taylor, Allred Distinguished Professor in Artificial Intelligence in the School of Electrical Engineering and Computer Science. Along with researchers from Brown University and North Carolina State University, Taylor recently received a National Science Foundation grant to use ideas from dog training to train robotic agents.

Taylor is a renowned expert on robots, or agents as they’re more generically known, and helping them to learn. His lab features rolling, crawling turtle bots and flying quadcopters. In addition to conducting research, he advises WSU’s robotics club and teaches robotics courses.

If you’re worried about robots taking over the world someday, though, don’t be.

“They’re so dumb,” he says.

Even the most advanced robots get easily confused. And, when they get confused, they stop working. As a researcher, Taylor says it often takes two or three times longer than he thinks it will to get an agent to work at all.

A popular way of teaching robots has been having them learn from demonstration. That is, the robot tries to mimic what a human is doing.

The new grant aims to use simple dog-training techniques to help computers learn. As in dog training, the interactions will be very limited: the robot is told only whether it is a ‘good’ or ‘bad’ agent. A series of tasks starts with the simplest and becomes progressively harder.

“Human dog trainers are really good at this,” says Taylor. They might reward a dog when it starts to sit, he says, and then continue rewarding increasingly complex behaviors.
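The idea of training with nothing but “good” or “bad” signals can be made concrete. Here is a minimal sketch, assuming a hypothetical single-state task and made-up action names; the only channel from trainer to agent is a +1 or −1 after each attempt:

```python
import random

# Hypothetical trick vocabulary; the trainer wants exactly one of these.
ACTIONS = ["sit", "stay", "roll_over"]

def train(trainer_feedback, episodes=200, seed=0):
    """Learn from binary feedback: the trainer returns only +1 or -1."""
    rng = random.Random(seed)
    preference = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # Explore occasionally; otherwise try the currently preferred action.
        if rng.random() < 0.1:
            action = rng.choice(ACTIONS)
        else:
            action = max(preference, key=preference.get)
        # "Good dog" (+1) or "bad dog" (-1) is all the agent ever sees.
        preference[action] += trainer_feedback(action)
    return max(preference, key=preference.get)

# A trainer who rewards "sit" and punishes everything else.
learned = train(lambda a: 1 if a == "sit" else -1)
```

After a couple hundred episodes of nothing but plus-ones and minus-ones, the agent settles on the action the trainer keeps rewarding.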

The researchers have recruited dog trainers for online studies, having them train a virtual dog and watching how they use positive and negative feedback in their training. They are also looking at how trainers are affected by what they’re training. Do they use more friendly and positive techniques if they are training a virtual dog, a robot that looks like a dog, or a robot that looks like a machine?

While other researchers have looked at using negative and positive commands to teach computer agents, Taylor’s project is the first that makes a connection to the way we train dogs.

In a separate project, Taylor is working to have robotic agents teach skills to each other. He has recently published a paper in which one robot taught another robot how to play video games, specifically Pac-Man and a version of StarCraft.

People would like robots to be able to teach each other tasks, so we don’t have to. For example, if we have a robot that cleans our house and we get a new one, it would be ideal for the old robot to train its replacement.

The easiest way to transfer a robot’s skills is to remove the ‘brains’ of the old one and put them in the new robot. Problems occur, though, when the hardware and software are incompatible with the new model. Furthermore, one long-term goal in robotics is for robots to be able to teach skills to humans, and we can’t simply insert a hard drive into people.

In his paper, published in the journal Connection Science, Taylor had the agents act as a true student and teacher. That is, the teaching agent focused on action advice, or telling the student when to act. As anyone with teenagers knows, the trick is knowing when to give advice. If the teacher gives no advice, it isn’t teaching. But if it always gives advice, the student gets annoyed and never comes to outperform the teacher.

“We have designed algorithms for advice giving, and are trying to figure out when our advice makes the biggest difference,” he says.
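One way to decide when advice makes the biggest difference is to advise only in states where the teacher thinks the choice of action really matters, and to stop once a limited advice budget runs out. This is a toy sketch of that idea, with hypothetical state names and Q-values; it is not Taylor’s published algorithm:

```python
def maybe_advise(teacher_q, state, budget, threshold=1.0):
    """Return (advised_action_or_None, remaining_budget).

    The teacher stays silent unless the gap between its best and worst
    action in this state exceeds `threshold` -- i.e., advice matters here.
    """
    if budget <= 0:
        return None, budget
    q = teacher_q[state]                        # action -> value estimate
    importance = max(q.values()) - min(q.values())
    if importance >= threshold:
        best = max(q, key=q.get)
        return best, budget - 1                 # spend one unit of advice
    return None, budget                         # save the budget

# Hypothetical teacher value estimates for two states.
teacher_q = {
    "junction": {"left": 5.0, "right": -3.0},   # choice matters: advise
    "corridor": {"left": 1.0, "right": 0.9},    # barely matters: stay quiet
}
advice, budget = maybe_advise(teacher_q, "junction", budget=10)
silent, budget = maybe_advise(teacher_q, "corridor", budget)
```

At the junction the teacher speaks up (“left”) and spends one unit of budget; in the corridor, where either action scores about the same, it stays silent and saves its remaining advice for states where it counts.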

The researchers were able to show that their student agent learned the games and, in fact, exceeded the teacher.

Taylor aims to develop a curriculum for the agents that starts with simple tasks and builds to more complex ones. Eventually, he hopes such a curriculum will give people a better way to teach their robotic agents.

“The ultimate goal of this work is to provide a more natural paradigm for humans to tell computers what they would like for them to do,” he says.