A team at Carnegie Mellon, however, is trying to fix that. The team, led by assistant professor Abhinav Gupta, is taking a new approach — allowing robots to play with everyday physical objects and explore the world to help them learn — exactly as a human baby would.
“Psychological studies have shown that if people can’t affect what they see, their visual understanding of that scene is limited,” said Lerrel Pinto, a PhD student in the group, in a report by The Verge. “Interaction with the real world exposes a lot of visual dynamics.”
The group first showed off its tech last year, and the demo helped it land a three-year, $1.5 million award from Google, which will be used to increase the number of robots in the study. More robots let the researchers gather data faster, which in turn helps the group build increasingly capable robots.
But the team isn’t relying on extra robots alone to speed up data gathering. It is also trying to teach the robots skills that will, in turn, help them learn other skills. The team also uses adversarial learning, which, according to the Verge report, works much like a parent teaching a child to catch a ball by pitching increasingly difficult throws. The researchers say this approach results in significantly faster learning than alternative methods.
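To make the idea concrete, here is a minimal toy sketch in Python of that adversarial setup: one learner tries to hold on to an object while a second learner tries to knock it loose, and each escalates in response to the other. Every function, parameter, and number below is a hypothetical stand-in for illustration, not the CMU team's actual code or training procedure.

```python
# Toy sketch of adversarial learning for grasping (illustration only).
# The "environment", reward rule, and update steps are hypothetical.
import random

def grasp_succeeds(grip_strength: float, disturbance: float) -> bool:
    """A grasp holds if grip strength beats the adversary's disturbance, plus noise."""
    return grip_strength + random.gauss(0, 0.05) > disturbance

def train(episodes: int = 1000) -> None:
    grip_strength = 0.1   # protagonist's learned parameter
    disturbance = 0.1     # adversary's learned parameter
    step = 0.01           # update size for both learners

    for _ in range(episodes):
        if grasp_succeeds(grip_strength, disturbance):
            # Protagonist won this round: the adversary makes the task harder.
            disturbance += step
        else:
            # Adversary won: the protagonist adapts to the harder setting.
            grip_strength += step

    print(f"final grip strength: {grip_strength:.2f}, "
          f"final disturbance: {disturbance:.2f}")

if __name__ == "__main__":
    train()
```

The point of the sketch is only the structure: because the adversary keeps raising the difficulty just past what the grasping policy can handle, the protagonist is always training on challenging examples rather than easy ones, which is the intuition behind the faster learning the researchers describe.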
It will certainly be interesting to see what comes of the project, and we’ll likely hear more about it as time goes on. Check out the video below to see the robots in action.