A team at Carnegie Mellon, however, is trying to fix that. The team, led by assistant professor Abhinav Gupta, is taking a new approach — allowing robots to play with everyday physical objects and explore the world to help them learn — exactly as a human baby would.
“Psychological studies have shown that if people can’t affect what they see, their visual understanding of that scene is limited,” said Lerrel Pinto, a PhD student in the group, in a report by The Verge. “Interaction with the real world exposes a lot of visual dynamics.”
The group first showed off its tech last year, and the demo helped it land a three-year, $1.5 million award from Google, which will be used to expand the number of robots used in the study. More robots allow the researchers to gather data more quickly, which in turn helps the group build increasingly capable machines.
But the team isn’t relying on more robots alone to speed up data gathering. It’s also trying to teach the robots skills that will, in turn, help them learn other skills. The team also uses adversarial learning, which, according to the Verge report, is akin to a parent teaching a child how to catch a ball by making the throws progressively harder. This approach reportedly results in significantly faster learning than alternative methods.
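To give a rough sense of what that adversarial dynamic looks like, here is a minimal toy sketch in Python. It is purely illustrative: the simulated grasping setup, the single-parameter "grasper" and "adversary," and the update rule are all assumptions made for this example, not the CMU team's actual method or code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy adversarial-learning sketch (hypothetical, not the CMU code):
# a "grasper" learns how firmly to grip objects of unknown weight,
# while an "adversary" learns how hard to tug on the object so the
# grasp fails. Each side nudges its single parameter toward whatever
# worked for it, so the adversary keeps supplying harder tests.

grip_strength = 1.0      # grasper's learned parameter
tug_strength = 0.5       # adversary's learned parameter
lr = 0.05                # step size for both agents

for step in range(2000):
    object_weight = rng.uniform(0.5, 1.5)

    # The grasp holds only if the grip beats the object's weight plus the tug.
    grasp_succeeded = grip_strength > object_weight + tug_strength

    if grasp_succeeded:
        # Adversary failed: it pulls harder next time to pose a tougher test.
        tug_strength += lr
    else:
        # Grasper failed: it grips harder; the adversary eases off slightly
        # so the tests stay near the edge of the grasper's current ability.
        grip_strength += lr
        tug_strength = max(0.0, tug_strength - lr * 0.5)

print(f"final grip strength: {grip_strength:.2f}")
print(f"final adversary tug:  {tug_strength:.2f}")
```

The point of the sketch is the feedback loop: every success by the learner makes its opponent slightly stronger, so the training examples get harder at roughly the rate the learner can handle, much like the parent pitching tougher throws.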
It will certainly be interesting to see what comes of the project, and we’ll likely hear more about it as time goes on. Check out the video below to see the robots in action.