
Kid-mounted cameras help A.I. learn to view the world through eyes of a child

 

Talk to any artificial intelligence researcher and they’ll tell you that, while A.I. may be capable of complex feats like driving cars and spotting tiny details on X-ray scans, it still lags far behind the generalized abilities of even a 3-year-old kid. This is sometimes called Moravec’s paradox: the seemingly hard stuff is easy for an A.I., while the seemingly easy stuff is hard.


But what if you could teach an A.I. to learn like a kid? And what kind of training data would you need to feed a neural network to carry out the experiment? Researchers from New York University recently set out to explore these questions using a dataset of video footage captured by head-mounted cameras worn regularly by kids during their first three years of life.


The SAYCam data was collected by psychologist Jess Sullivan and colleagues, who described it in a paper published earlier this year. The kids recorded their GoPro-style experiences for one to two hours per week as they went about their daily lives, producing a “large, naturalistic, longitudinal dataset of infant and child-perspective videos” for use by psychologists, linguists, and computer scientists.

Training an A.I. to view the world like a kid

The New York University researchers then took this video data and used it to train a neural network.

“The goal was to address a nature vs. nurture-type question,” Emin Orhan, lead researcher on the project, told Digital Trends in an email. “Given this visual experience that children receive in their early development, can we learn high-level visual categories — such as table, chair, cat, car, etc. — using generic learning algorithms, or does this ability require some kind of innate knowledge in children that cannot be learned by applying generic learning methods to the early visual experience that children receive?”
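The article doesn’t spell out the team’s exact training pipeline, but the idea of applying a “generic learning algorithm” to raw headcam footage can be sketched in a few lines of code. Below is a minimal, hypothetical PyTorch example of self-supervised learning on video frames: the network is trained to predict which temporal segment of the recording a frame came from, so it never sees human-provided category labels. The FrameDataset, file paths, and NUM_SEGMENTS here are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch: self-supervised training on headcam video frames.
# The only "labels" are temporal segment IDs derived from the video itself,
# so no human annotation is involved.
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from torchvision import models, transforms
from PIL import Image

NUM_SEGMENTS = 1000  # assumption: number of temporal chunks the video is split into

class FrameDataset(Dataset):
    """Yields (frame, segment_id) pairs extracted from the headcam recordings."""
    def __init__(self, frame_paths, segment_ids):
        self.frame_paths = frame_paths    # e.g. ["frames/000001.jpg", ...] (illustrative)
        self.segment_ids = segment_ids    # which chunk of video each frame came from
        self.tf = transforms.Compose([
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.frame_paths)

    def __getitem__(self, i):
        img = self.tf(Image.open(self.frame_paths[i]).convert("RGB"))
        return img, self.segment_ids[i]

def train(frame_paths, segment_ids, epochs=1, device="cpu"):
    model = models.resnet18(weights=None)                  # a generic, untrained backbone
    model.fc = nn.Linear(model.fc.in_features, NUM_SEGMENTS)
    model.to(device)
    loader = DataLoader(FrameDataset(frame_paths, segment_ids),
                        batch_size=64, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for frames, targets in loader:
            frames, targets = frames.to(device), targets.to(device)
            opt.zero_grad()
            loss = loss_fn(model(frames), targets)
            loss.backward()
            opt.step()
    return model  # the backbone's learned features are what gets tested afterward
```

The appeal of a self-supervised objective like this is that all the supervision comes from the structure of the video itself, which is about as close as a generic algorithm can get to simply “watching” the world the way a child does.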

The A.I. did show some learning by, for example, recognizing a cat that was frequently featured in the video. While the researchers didn’t create anything close to a kid version of Artificial General Intelligence, the research nonetheless highlights how certain visual features can be learned simply by watching naturalistic data. There’s still more work to be done, though.
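One common way to check what a model trained this way has actually picked up (again, an illustrative sketch rather than the authors’ exact evaluation) is a linear probe: freeze the trained backbone and fit only a linear classifier on a small labeled set of categories such as table, chair, cat, and car. If the probe scores well, those concepts are decodable from features learned purely by watching the footage. This sketch assumes the backbone is the torchvision ResNet from the previous example and that a hypothetical labeled_loader yields (image, label) batches.

```python
# Hypothetical linear-probe evaluation: freeze the self-supervised backbone
# and train only a linear layer on a few labeled categories.
import torch
import torch.nn as nn

CATEGORIES = ["table", "chair", "cat", "car"]  # examples quoted by Orhan

def linear_probe(backbone, labeled_loader, epochs=5, device="cpu"):
    backbone.fc = nn.Identity()                # expose the raw features
    backbone.to(device).eval()
    for p in backbone.parameters():
        p.requires_grad = False                # keep the learned representation frozen

    # infer the feature dimension from one batch
    images, _ = next(iter(labeled_loader))
    feat_dim = backbone(images.to(device)).shape[1]
    probe = nn.Linear(feat_dim, len(CATEGORIES)).to(device)

    opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in labeled_loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():
                feats = backbone(images)       # features come from "just watching"
            opt.zero_grad()
            loss = loss_fn(probe(feats), labels)
            loss.backward()
            opt.step()
    return probe  # high probe accuracy suggests the concept was learnable from the video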

“We found that, by and large, it is possible to learn pretty sophisticated high-level visual concepts in this way without assuming any innate knowledge,” Orhan explained. “But understanding precisely what these machine learning models trained with the headcam data are capable of doing, and what exactly is still missing in these models compared to the visual abilities of children, is still [a] work in progress.”

A paper describing the research is available to read online.

Luke Dormehl
Former Digital Trends Contributor