Robots can peer pressure kids, but don’t think for a second that we’re immune

Image credit: University of Plymouth

To slightly modify the title of a well-known TV show: kids do the darndest things. Recently, researchers from Germany and the U.K. carried out a study, published in the journal Science Robotics, demonstrating the extent to which kids are susceptible to robot peer pressure. TL;DR: the answer to the old parental question, “If all your friends told you to jump off a cliff, would you?” may well be “Sure, if all my friends were robots.”

The test reenacted a famous 1951 experiment pioneered by the Polish-American psychologist Solomon Asch. The experiment demonstrated how people can be influenced by the pressures of groupthink, even when this flies in the face of information they know to be correct. In Asch’s experiments, a group of college students was gathered together and shown two cards. The card on the left displayed an image of a single vertical line. The card on the right displayed three lines of varying lengths. The experimenter then asked the participants which line on the right card matched the length of the line shown on the left card.

So far, so straightforward. Where things got more devious, however, was in the makeup of the group. Only one person in the group was a genuine participant; the others were all actors who had been told what to say ahead of time. The experiment tested whether the real participant would go along with the rest of the group when it unanimously gave the wrong answer. As it turned out, most would: under peer pressure, the majority of people will deny information that is clearly correct rather than contradict the group.

In the 2018 remix of the experiment, the same principle was used, only instead of a group of college-age peers, the “real participant” was a child aged seven to nine. The “actors” were played by three robots programmed to give the wrong answer. In a sample of 43 volunteers, 74 percent of the kids gave the same incorrect answer as the robots. The results suggest that most kids in this age range will treat pressure from robots much as they would peer pressure from flesh-and-blood peers.

In the experiment, participants were presented with a group of lines and asked to pick the one with the greatest length. The robotic participants would then unanimously give an incorrect answer in an attempt to influence the answer of the human participant. Image: Anna-Lisa Vollmer, Robin Read, Dries Trippas, and Tony Belpaeme

“The special thing about that age range of kids is that they’re still at an age where they’ll suspend disbelief,” Tony Belpaeme, Professor in Intelligent and Autonomous Control Systems, who helped carry out the study, told Digital Trends. “They will play with toys and still believe that their action figures or dolls are real; they’ll still look at a puppet show and really believe what’s happening; they may still believe in [Santa Claus]. It’s the same thing when they look at a robot: they don’t see electronics and plastic, but rather a social character.”

Interestingly, the experiment contrasted this with the response from adults. Unlike the kids, adults weren’t swayed by the robots’ errors. “When an adult saw the robot giving the wrong answer, they gave it a puzzled look and then gave the correct answer,” Belpaeme continued.

So nothing to worry about, then? As long as we stop children from getting their hands on robots programmed to give bad responses, everything should be fine, right? Not so fast.

Are adults really so much smarter?

As Belpaeme acknowledged, this task was designed to be so simple that there was no uncertainty as to what the answer might be. The real world is different. When we think about the kinds of jobs readily handed over to machines, these are frequently tasks that we are not, as humans, always able to perform perfectly.

It could be that the task is incredibly simple, but that the machine can perform it significantly faster than we can. Or it could be a more complex task, in which the computer has access to far greater amounts of data than we do. Given the potential impact of the job at hand, it is no surprise that many of us would be reluctant to correct a machine.

Would a nurse in a hospital be happy to overrule an FDA-approved algorithm that helps prioritize patients by monitoring vital signs and sending alerts to medical staff? Would a driver be comfortable taking the wheel from a driverless car in a particularly complex road scenario? Would a pilot be willing to override the autopilot because they think it is making the wrong decision? In all of these cases, we would like to think the answer is “yes.” For all sorts of reasons, though, that may not be the reality.

Nicholas Carr writes about this in his 2014 book The Glass Cage: Where Automation Is Taking Us. The way he describes it underlines the kind of ambiguity that real-life cases of automation involve, where the problems are far more complex than the length of a line on a card, the machines are smarter, and the outcomes are potentially more crucial.

Nicholas Carr is a Pulitzer Prize finalist, best known for his books “The Shallows: What the Internet Is Doing to Our Brains” and “The Glass Cage.”

“How do you measure the expense of an erosion of effort and engagement, or a waning of agency and autonomy, or a subtle deterioration of skill? You can’t,” he writes. “These are the kinds of shadowy, intangible things that we rarely appreciate until after they’re gone, and even then we may have trouble expressing the losses in concrete terms.”

Social robots of the sort that Belpaeme theorizes about in the research paper are not yet mainstream, but there are already illustrations of some of these conundrums in action. For example, Carr opens his book with a Federal Aviation Administration memo noting that pilots should spend less time flying on autopilot because of the risks over-reliance posed. This was based on analysis of crash data showing that pilots frequently rely too heavily on computerized systems.

A similar case involved a lawsuit in which a woman named Lauren Rosenberg sued Google after its walking directions sent her along a route that headed into dangerous traffic. Although the case was thrown out of court, it suggests that people will override their own common sense in the belief that machines are more intelligent than we are.

For every ship there’s a shipwreck

Ultimately, as Belpaeme acknowledges, the issue is that sometimes we want to hand over decision making to machines. Robots promise to do the jobs that are dull, dirty, and dangerous — and if we have to second-guess every decision, they’re not really the labor-saving devices that have been promised. If we’re going to eventually invite robots into our home, we will want them to be able to act autonomously, and that’s going to involve a certain level of trust.

“Robots exerting social pressure on you can be a good thing; it doesn’t have to be sinister,” Belpaeme continued. “If you have robots used in healthcare or education, you want them to be able to influence you. For example, if you want to lose weight you could be given a weight loss robot for two months which monitors your calorie intake and encourages you to take more exercise. You want a robot like that to be persuasive and influence you. But any technology which can be used for good can also be used for evil.”

What’s the answer to this? Questions like this will have to be debated on a case-by-case basis. If the bad ultimately outweighs the good, technology like social robots will never take off. But it’s important that we take the right lessons from studies like this one on robot-induced peer pressure. And the lesson is not that we’re so much smarter than kids.

Luke Dormehl