Humans take learning for granted. It’s remarkable how quickly we can pick up a new task just by watching someone else do it. Robots meanwhile don’t have it so easy, but researchers from the Massachusetts Institute of Technology (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) are here to help. They’re teaching robots to teach each other.
The new system, C-LEARN, combines two traditional elements of robotic learning — learning from demonstration and motion planning, the latter of which typically has to be hard-coded by developers. The researchers say this new technique should make it easier for robots to perform a wide range of tasks with less programming.
“Robots could be of so much help if only more people could use them,” Claudia Pérez-D’Arpino, a PhD candidate who worked on the project, told Digital Trends. She explained that the team’s goal was to preserve some of the high-level skills enabled by state-of-the-art programming while allowing the system to learn through demonstration.
Programming robots to perform even a single task can be complicated, involving precise instructions that take time to code. Instead, Perez-D’Arpino and her team developed C-LEARN to let experts focus on the tasks most relevant to their respective fields. With this system, non-coders can give robots bits of data about an action and then fill in the gaps by showing the robot a demonstration of the task at hand.
“We wanted to … empower [experts] to teach robots how to plan for tasks that are critical in their field of application,” Pérez-D’Arpino said. “Progress in recent years in learning from demonstrations is moving in this direction.”
C-LEARN works by accumulating a body of experience, which the researchers call a knowledge base. This base contains geometric information about reaching and grasping objects. Next, the human operator shows the robot a 3D demonstration of the task at hand. By relating its knowledge base to the action it observed, the robot can make suggestions for how best to perform the task, and the operator can approve or edit those suggestions as needed.
“This knowledge base can be transferred from one robot to another,” Pérez-D’Arpino said. “Imagine your robot is downloading an ‘app’ for manipulation skills. The ‘app’ can adapt to the new robot with a different body thanks to the flexibility of having learned constraints, which are a mathematical representation of the underlying geometrical requirement of the task, which is different from learning a specific path that might not be feasible in the new robot body.”
In other words, C-LEARN allows that knowledge to transfer and adapt to its context — kind of like how an athlete can learn a skill in one sport and alter it slightly to perform better in a different sport, without having to completely relearn the action.
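The key idea — storing a task as geometric constraints relative to objects rather than as a fixed motion path — can be sketched in a few lines of Python. This is a toy illustration of the concept, not the team’s actual system; every name and number here is invented for the example:

```python
# Toy sketch of constraint-based skill transfer (invented for illustration;
# not MIT's C-LEARN implementation). A skill is stored as geometric
# requirements relative to an object, not as a fixed joint-space path,
# so a robot with a different body can re-solve the motion for itself.

from dataclasses import dataclass

@dataclass
class Constraint:
    """A geometric requirement, e.g. 'gripper 10 cm above the object'."""
    object_name: str
    offset: tuple  # desired end-effector position relative to the object

# Hypothetical knowledge base: skill name -> ordered list of constraints.
knowledge_base = {
    "pick_up": [
        Constraint("target", (0.0, 0.0, 0.10)),  # approach from above
        Constraint("target", (0.0, 0.0, 0.00)),  # move in to grasp
    ],
}

def plan(skill, object_pos, reach):
    """Resolve a skill's constraints into waypoints for a specific robot.

    `reach` is a stand-in for the robot's body: waypoints farther from
    the base than its reach are rejected, so the same stored skill
    adapts (or fails cleanly) on each robot rather than replaying a
    path recorded on a different body.
    """
    waypoints = []
    for c in knowledge_base[skill]:
        wp = tuple(p + o for p, o in zip(object_pos, c.offset))
        if sum(x * x for x in wp) ** 0.5 > reach:
            raise ValueError("waypoint outside this robot's reach")
        waypoints.append(wp)
    return waypoints

# The same "app" runs on two different bodies with different reach:
small_robot = plan("pick_up", (0.3, 0.2, 0.1), reach=1.0)
tall_robot = plan("pick_up", (0.3, 0.2, 0.1), reach=2.0)
```

Because the constraints, not the trajectory, are what is stored, both robots arrive at the same waypoints here; a real system would additionally run each robot’s own motion planner to find a feasible path between them.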
The researchers tested C-LEARN on Optimus, a small two-armed robot designed for bomb disposal, before successfully transferring the skills to Atlas, a six-foot-tall humanoid. They think the system could help improve the performance of robots in manufacturing and disaster relief, allowing for quicker responses in time-sensitive situations.