
Robots can learn faster by crowdsourcing information from the Internet

Image: University of Washington

In order for robots to learn new skills faster, all they need is a little help from their Internet friends.

At the 2014 Institute of Electrical and Electronics Engineers International Conference on Robotics and Automation in Hong Kong, computer scientists from the University of Washington showed that crowdsourcing information from the online community may be a quick and effective way of teaching robots how to complete tasks, like setting a table or tending a garden.

Yes, let’s use the web to hasten their journey to self awareness.

According to the scientists, robots can learn how to perform tasks by imitating humans, but such an approach can take a lot of time. For example, teaching a robot to load a dishwasher may require many repeated demonstrations of how to hold different plates or place items correctly. With this new technique, the robot can turn to the web for additional input on how to complete the task.

“We’re trying to create a method for a robot to seek help from the whole world when it’s puzzled by something,” said Rajesh Rao, an associate professor of computer science and engineering at the UW. “This is a way to go beyond just one-on-one interaction between a human and a robot by also learning from other humans around the world.”

To demonstrate this approach, the researchers had study participants build models — such as cars, trees, turtles, and snakes — out of colored Lego blocks, and then asked robots to build the same objects. But since the robots had only witnessed a few examples, they couldn’t fully complete the tasks.

So to finish the projects, the researchers turned to the crowd, hiring workers from Amazon Mechanical Turk, a crowdsourcing Internet marketplace, to generate more solutions for building the models. From more than 100 crowd-generated models, the robots picked the best ones to build based on difficulty and similarity to the original objects.

The robots then built the best model of each participant’s shape. This learning technique, known as “goal-based imitation,” harnesses the robot’s ability to infer what its human operator wants and then come up with the best possible way to achieve that goal.

“The end result is still a turtle, but it’s something that is manageable for the robot and similar enough to the original model, so it achieves the same goal,” said Maya Cakmak, a UW assistant professor of computer science and engineering.
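The selection step described above — trading off how closely a crowd-generated model resembles the original against how hard it is for the robot to build — can be sketched as a simple scoring function. This is an illustrative toy, not the UW team’s actual algorithm; the candidate names, scores, and weights are all made up for the example:

```python
from dataclasses import dataclass

@dataclass
class CandidateModel:
    name: str
    similarity: float  # 0..1 — how closely it resembles the participant's original object
    difficulty: float  # 0..1 — estimated build difficulty for the robot

def pick_best(candidates, w_sim=0.7, w_diff=0.3):
    """Rank crowd-generated candidates: reward similarity, penalize difficulty."""
    return max(candidates, key=lambda c: w_sim * c.similarity - w_diff * c.difficulty)

# Hypothetical crowd-generated turtle models
candidates = [
    CandidateModel("turtle_a", similarity=0.9, difficulty=0.8),  # faithful but hard to build
    CandidateModel("turtle_b", similarity=0.8, difficulty=0.3),  # close enough, much easier
    CandidateModel("turtle_c", similarity=0.4, difficulty=0.1),  # trivial but barely a turtle
]
print(pick_best(candidates).name)  # → turtle_b
```

With these weights the robot settles on the middle option: still recognizably a turtle, but simple enough to actually assemble — the same trade-off Cakmak describes.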

So sure, the online community may be helpful for these robots, just as long as they stay away from all of the comments sections on YouTube.

Loren Grush
Former Digital Trends Contributor
Loren Grush is a science and health writer living in New York City, having written for Fox News Health, Fox News SciTech and…