Toddler robots help reveal how human kids learn about their world

There’s a lot of focus on looking at the means by which humans learn and using these insights to make machines smarter. This is the entire basis for artificial neural networks, which try to replicate a simple model of the human brain inside a machine.

However, the opposite can be true as well: Examining robots can help reveal how we as humans absorb and make sense of new information.

That’s the basis for a new research project carried out by researchers in the United Kingdom. Looking to understand more about how young kids learn new words, they programmed a humanoid robot called iCub — equipped with a microphone and camera — to learn words in the same way.

Their conclusion? That children may well learn new words in much the same way robots do: based less on conscious thought than on an automatic ability to associate words with objects.

“We were interested in finding out whether it’s possible to learn words without a complex reasoning ability,” Katie Twomey, a psychology department researcher from the U.K.’s Lancaster University, told Digital Trends.

“To explore this we used the iCub humanoid robot, which learns by making simple links between what it sees and what it hears. Importantly, iCub can’t think explicitly about what it knows. We reasoned that if iCub can learn object names like toddlers do, it’s possible that children’s early learning is also driven by a simple but powerful association-making mechanism.”

In the study, a group of children aged 2 1/2 were tasked with selecting a particular toy out of a lineup consisting of, alternately, three, four, or five different objects. In each case, one of the objects was unfamiliar to them. The aim was to get the kids to learn the name of the unknown object through a process of elimination, based on information they already knew.


“We know that toddlers can work out what a new word means, based on the words they already know,” Twomey continued. “For example, imagine a 2-year-old sees two toys: their favorite toy car, and a brown, furry toy animal that they’ve never seen before. If the toddler hears a new word ‘bear,’ they will assume that it refers to the new toy, because they already know that their toy is called ‘car’.”

In this case, it is possible that kids are able to think in detail about what they already know, and use reasoning to figure out that their favorite is called a “car,” so the new toy must be a “bear.” However, it’s also possible that children solve this puzzle automatically by simply associating new words with new objects.

The researchers then asked the iCub to carry out the same task. It was trained to recognize 12 items but, like the kids, was then shown a combination of objects it recognized and ones it did not. Intriguingly, it performed exactly the same as the kids when it came to learning new words.

“Critically, iCub learned words by making simple associations between words and objects, rather than using complex reasoning,” Twomey said. “This suggests that we don’t need to assume children reflect in detail about what they know and what words refer to. Instead, early word learning could depend on making in-the-moment links between words and objects.”
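The mechanism Twomey describes can be sketched in a few lines of code. The model below is purely illustrative — a hypothetical associative learner, not the iCub's actual software — but it shows how a bare word-object association table, plus a simple novelty fallback, reproduces the "bear" behavior without any explicit reasoning:

```python
from collections import defaultdict

class AssociativeLearner:
    """A minimal associative word learner: it keeps only a table of
    word-object co-occurrence counts, with no explicit reasoning."""

    def __init__(self):
        # assoc[word][obj] = accumulated co-occurrence strength
        self.assoc = defaultdict(lambda: defaultdict(float))

    def observe(self, word, objects):
        """Hearing `word` while seeing `objects` strengthens every pairing."""
        for obj in objects:
            self.assoc[word][obj] += 1.0

    def familiarity(self, obj):
        """Total association mass an object carries across all known words."""
        return sum(self.assoc[w][obj] for w in list(self.assoc))

    def choose(self, word, objects):
        """Pick a referent for `word`: prefer the strongest direct link;
        if the word is entirely new (all links zero), fall back on the
        object least associated with words we already know."""
        direct = {obj: self.assoc[word][obj] for obj in objects}
        if max(direct.values()) > 0:
            return max(direct, key=direct.get)
        return min(objects, key=self.familiarity)


learner = AssociativeLearner()
# Training: the toddler repeatedly hears known names with those toys in view.
for _ in range(5):
    learner.observe("car", ["car"])
    learner.observe("ball", ["ball"])

# Test: a new word, "bear", with a familiar and an unfamiliar toy present.
print(learner.choose("bear", ["car", "bear_toy"]))  # picks the novel toy
```

The learner never "reasons" that the new word must name the new object; the unfamiliar toy simply wins because it has accumulated no competing associations, which is the point of the study's associative account.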

It’s an interesting use of robotics to uncover insights about developmental psychology. It can also reveal previously unconsidered details that tell us something about how humans learn.

“In our study, the amount of time it took for the robot to move its head to look at objects affected how easily it learned words,” Twomey concluded. “This suggests that the way objects are set out in children’s visual scene could also affect their early word learning: a prediction we are planning to test in new work with toddlers.”

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…