Robots can peer pressure kids, but don’t think for a second that we’re immune

Robot peer pressure study. University of Plymouth

To slightly modify the title of a well-known TV show: Kids do the darndest things. Recently, researchers from Germany and the U.K. carried out a study, published in the journal Science Robotics, that demonstrated the extent to which kids are susceptible to robot peer pressure. TL;DR version: the answer to the old parental question, “If all your friends told you to jump off a cliff, would you?” may well be “Sure. If all my friends were robots.”

The test reenacted a famous 1951 experiment pioneered by the Polish-American psychologist Solomon Asch. The experiment demonstrated how people can be influenced by the pressures of groupthink, even when this flies in the face of information they know to be correct. In Asch’s experiments, a group of college students was gathered together and shown two cards. The card on the left displayed an image of a single vertical line. The card on the right displayed three lines of varying lengths. The experimenter then asked the participants which line on the right card matched the length of the line shown on the left card.

“The special thing about that age range of kids is that they’re still at an age where they’ll suspend disbelief.”

So far, so straightforward. Where things got more devious, however, was in the makeup of the group. Only one person in the group was a genuine participant; the others were all actors who had been told what to say ahead of time. The experiment tested whether the real participant would go along with the rest of the group when it unanimously gave the wrong answer. As it turned out, most would: under peer pressure, the majority of people will deny information that is clearly correct in order to conform to the group.

The 2018 remix of the experiment used the same principle, only instead of a group of college-age peers, the “real participant” was a child aged seven to nine, and the “actors” were played by three robots programmed to give the wrong answer. In a sample of 43 volunteers, 74 percent of kids gave the same incorrect answer as the robots. The results suggest that most kids of this age will treat pressure from robots much as they treat peer pressure from their flesh-and-blood peers.

In the experiment, participants were presented with a group of lines and asked to pick the one with the greatest length. The robotic participants would then unanimously give an incorrect answer in an attempt to influence the answer of the human participant. Anna-Lisa Vollmer, Robin Read, Dries Trippas, and Tony Belpaeme
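To make the trial mechanics concrete, here is a minimal, purely illustrative Python sketch; it is not code from the study. It simulates Asch-style trials in which three scripted robots unanimously give the wrong answer and a simulated child copies them with a probability loosely based on the 74 percent figure above. Every other name and number in it is an assumption made for illustration.

import random

# Illustrative only: NOT code from the study. Three scripted "robot"
# confederates all give the same wrong answer, and a simulated child copies
# them with a fixed probability loosely based on the 74 percent conformity
# figure reported above. All other details are invented.

CONFORMITY_RATE = 0.74  # assumed probability the child copies the robots
NUM_TRIALS = 1000

def run_trial(correct_answer: str, wrong_answer: str) -> str:
    """Simulate one trial with three unanimous, scripted robot answers."""
    robot_answers = [wrong_answer] * 3  # all three robots pick the wrong line
    if random.random() < CONFORMITY_RATE:
        return robot_answers[0]  # child conforms to the group
    return correct_answer  # child trusts their own eyes

conformed = sum(
    run_trial(correct_answer="line B", wrong_answer="line A") == "line A"
    for _ in range(NUM_TRIALS)
)
print(f"Copied the robots' wrong answer in {conformed / NUM_TRIALS:.0%} of trials")

Run enough trials and the simulated rate simply converges on whatever probability you assume, which is exactly the point: the interesting number is the one the real children produced.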

“The special thing about that age range of kids is that they’re still at an age where they’ll suspend disbelief,” Tony Belpaeme, Professor in Intelligent and Autonomous Control Systems, who helped carry out the study, told Digital Trends. “They will play with toys and still believe that their action figures or dolls are real; they’ll still look at a puppet show and really believe what’s happening; they may still believe in [Santa Claus]. It’s the same thing when they look at a robot: they don’t see electronics and plastic, but rather a social character.”

Interestingly, the experiment contrasted this with the response from adults. Unlike the kids, adults weren’t swayed by the robots’ errors. “When an adult saw the robot giving the wrong answer, they gave it a puzzled look and then gave the correct answer,” Belpaeme continued.

So, nothing to worry about then? As long as we stop children from getting their hands on robots programmed to give bad responses, everything should be fine, right? Not so fast.

Are adults really so much smarter?

As Belpaeme acknowledged, this task was designed to be so simple that there was no uncertainty as to what the answer might be. The real world is different. When we think about the kinds of jobs readily handed over to machines, these are frequently tasks that we are not, as humans, always able to perform perfectly.

This task was designed to be so simple that there was no uncertainty as to what the answer might be.

It could be that the task is incredibly simple, but that the machine can perform it significantly faster than we can. Or it could be a more complex task, in which the computer has access to far greater amounts of data than we do. Depending on the potential impact of the job at hand, it is no surprise that many of us would be unhappy about correcting a machine.

Would a nurse in a hospital be happy to overrule an FDA-approved algorithm that helps prioritize patient care by monitoring vital signs and sending alerts to medical staff? Would a driver be comfortable taking the wheel from a driverless car to deal with a particularly complex road scenario? Would a pilot be willing to override the autopilot because they think the wrong decision is being made? In all of these cases, we would like to think the answer is “yes.” For all sorts of reasons, though, that may not be reality.

Nicholas Carr writes about this in his 2014 book The Glass Cage: Where Automation is Taking Us. The way he describes it underlines the kind of ambiguity that real-life cases of automation involve, where the problems are far more complex than the length of a line on a card, the machines are much smarter, and the stakes are potentially much higher.

Nicholas Carr is a Pulitzer Prize finalist best known for his books “The Shallows: What the Internet is Doing to Our Brains” and “The Glass Cage: How Our Computers are Changing Us.”

“How do you measure the expense of an erosion of effort and engagement, or a waning of agency and autonomy, or a subtle deterioration of skill? You can’t,” he writes. “These are the kinds of shadowy, intangible things that we rarely appreciate until after they’re gone, and even then we may have trouble expressing the losses in concrete terms.”

“These are the kinds of shadowy, intangible things that we rarely appreciate until after they’re gone.”

Social robots of the sort that Belpaeme theorizes about in the research paper are not yet mainstream, but there are already illustrations of some of these conundrums in action. For example, Carr opens his book by mentioning a Federal Aviation Administration memo advising that pilots spend less time flying on autopilot because of the risks this posed. It was based on analysis of crash data showing that pilots frequently rely too heavily on computerized systems.

A similar case involved Lauren Rosenberg, who sued Google after its walking directions advised her, in 2009, to walk along a route that headed into dangerous traffic. Although the case was thrown out of court, it shows that people will override their own common sense in the belief that machines are smarter than we are.

For every ship there’s a shipwreck

Ultimately, as Belpaeme acknowledges, the issue is that sometimes we want to hand over decision-making to machines. Robots promise to do the jobs that are dull, dirty, and dangerous, and if we have to second-guess every decision they make, they’re not really the labor-saving devices we were promised. If we’re eventually going to invite robots into our homes, we will want them to be able to act autonomously, and that’s going to involve a certain level of trust.

“Robots exerting social pressure on you can be a good thing; it doesn’t have to be sinister,” Belpaeme continued. “If you have robots used in healthcare or education, you want them to be able to influence you. For example, if you want to lose weight you could be given a weight loss robot for two months which monitors your calorie intake and encourages you to take more exercise. You want a robot like that to be persuasive and influence you. But any technology which can be used for good can also be used for evil.”

What’s the answer? Questions like this will have to be debated on a case-by-case basis. If the bad ultimately outweighs the good, technology like social robots will never take off. But it’s important that we take the right lessons from studies like this one on robot-induced peer pressure. And the lesson is not that we’re so much smarter than kids.
