
MIT is teaching self-driving cars how to psychoanalyze humans on the road

In March 2004, the U.S. Defense Advanced Research Projects Agency (DARPA) organized a special Grand Challenge event to test out the promise — or lack thereof — of current-generation self-driving cars. Entrants from the world’s top A.I. labs competed for a $1 million prize, their custom-built vehicles trying their best to autonomously navigate a 142-mile route through California’s Mojave Desert. It didn’t go well. The “winning” team managed to travel just 7.4 miles in several hours before shuddering to a halt. And catching fire.

A decade-and-a-half later, a whole lot has changed. Self-driving cars have successfully driven hundreds of thousands of miles on actual roads. It’s non-controversial to say that humans will almost certainly be safer in a car driven by a robot than they are in one driven by a human. However, while there will eventually be a tipping point when every car on the road is autonomous, there’s also going to be a messy intermediate phase when self-driving cars will have to share the road with human-driven cars. You know who the problem parties are likely to be in this scenario? That’s right: the fleshy, unpredictable, sometimes-cautious, sometimes-prone-to-road-rage humans.


To try to solve this problem, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new algorithm intended to allow self-driving cars to classify the “social personalities” of other drivers on the road. In much the same way that humans (often unscientifically) try to anticipate how other drivers will respond when we’re, say, negotiating an intersection, the autonomous vehicles will attempt to figure out who they’re dealing with in order to avoid accidents on the road.

“We’ve developed a system that integrates tools from social psychology into the decision-making and control of autonomous vehicles,” Wilko Schwarting, a research assistant at MIT CSAIL, told Digital Trends. “It is able to estimate the behavior of drivers with respect to how selfish or selfless a particular driver appears to be. The system’s ability to estimate drivers’ so-called ‘Social Value Orientation’ allows it to better predict what human drivers will do and is therefore able to drive more safely.”

Social Value Orientation

On the whole, our driving frameworks function fairly well: giving priority to one driver over another, dividing us into directional lanes, and so on. But there are still plenty of more subjective moments when multiple parties have to figure out how to coordinate their efforts to complete a maneuver, sometimes at high speeds. Knowing whether you’re dealing with an impatient driver who’s going to cut you up or a patient one who’s going to wait or make way can mean the difference between a successful journey and a fraught fender bender. The fact that there are hundreds of thousands of lane-changing, merging, and right- or left-turn accidents each year in the United States alone shows that humans haven’t quite mastered this subtle art.

Social Value Orientation is part of the field of interdependent decision making, which looks at the strategic interactions between two or more people. It is rooted in game theory, whose concepts were first outlined in the 1944 book Theory of Games and Economic Behavior by Oskar Morgenstern and John von Neumann.

The broad idea is essentially this: Agents have their own preferences, which can be ordered in terms of their utility (level of satisfaction). Within these parameters, they will act logically, according to those preferences. Translated into driving behavior, no matter how unpredictable the road might seem at rush hour, by knowing how altruistic, prosocial, egoistic, or competitive the drivers around you might be, you can predict behavior well enough to complete your journey without incident.
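The game-theoretic idea can be boiled down to a few lines. In this hypothetical sketch (the action names and payoff numbers are invented for illustration, not taken from the MIT system), a rational agent simply picks the available action with the highest utility:

```python
# Illustrative sketch of rational choice: an agent with a utility score
# for each outcome acts "logically" by choosing the action that
# maximizes that utility. Actions and payoffs here are invented.
def best_response(actions: dict) -> str:
    """Return the action with the highest utility (level of satisfaction)."""
    return max(actions, key=actions.get)

# An egoistic merger values cutting in; a prosocial one values yielding.
egoistic_payoffs = {"cut_in": 3.0, "yield": 1.0}
prosocial_payoffs = {"cut_in": 0.5, "yield": 2.0}
```

Predicting a driver then amounts to estimating which payoff table they are playing by — which is exactly where SVO comes in.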

Social Behavior for Autonomous Vehicles

By observing the way that other cars drive, the MIT algorithm assesses other drivers on a “reward to others” vs. “reward to self” scale. That means sorting fellow road-dwellers into “altruistic,” “prosocial,” “egoistic,” “competitive,” “sadistic,” “sadomasochistic,” “masochistic,” and “martyr” categories. By learning that not all other cars behave in the same way, the team believe their model could prove a welcome addition to self-driving car systems.
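In the social-psychology literature, this reward-to-others vs. reward-to-self trade-off is commonly drawn as a ring, with each of the eight categories occupying a 45-degree slice. The sketch below follows that convention; the sector boundaries are the textbook ones, not necessarily those used in MIT's implementation:

```python
import math

# The eight SVO categories as 45-degree sectors of the "SVO ring",
# where the x-axis is reward to self and the y-axis is reward to others.
CATEGORIES = [
    "altruistic",       # centered on  90 deg: maximize others' reward
    "prosocial",        # centered on  45 deg: maximize joint reward
    "egoistic",         # centered on   0 deg: maximize own reward
    "competitive",      # centered on -45 deg: maximize relative advantage
    "sadistic",         # centered on -90 deg: minimize others' reward
    "sadomasochistic",  # centered on -135 deg: minimize joint reward
    "masochistic",      # centered on 180 deg: minimize own reward
    "martyr",           # centered on 135 deg: sacrifice self for others
]

def classify_svo(reward_self: float, reward_other: float) -> str:
    """Map observed (reward to self, reward to others) weights to a category."""
    angle = math.degrees(math.atan2(reward_other, reward_self))
    # Sectors are 45 deg wide, starting from "altruistic" at 90 deg.
    sector = round((90 - angle) / 45) % 8
    return CATEGORIES[sector]
```

So a driver who appears to weight only their own reward lands in the “egoistic” sector, while one who weights both equally is “prosocial.”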

“We trained the system first by modeling road scenarios where each driver tried to maximize their own utility and analyzing their most effective responses in light of the decisions of all other agents,” Schwarting said. “The utility incorporates how much a driver weights their own benefit against the benefit of another driver, weighted by the SVO. Based on that tiny snippet of motion from other cars, our algorithm could then predict the surrounding cars’ behavior as cooperative, altruistic, or egoistic during interactions. We calibrated the rewards based on real driving data with machine learning, essentially encoding how much human drivers value comfort, safety, or getting to their goal quickly.”
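One common way to express the weighting Schwarting describes, found in the SVO literature, treats the orientation as an angle that blends a driver's own reward with another agent's reward. This is a minimal sketch of that formulation; the actual reward terms in the CSAIL system are learned from real driving data:

```python
import math

def svo_utility(reward_self: float, reward_other: float, svo_deg: float) -> float:
    """Blend a driver's own reward with another agent's reward by SVO angle.

    An SVO of 0 degrees is purely egoistic, 45 degrees weighs both
    rewards equally (prosocial), and 90 degrees is purely altruistic.
    """
    phi = math.radians(svo_deg)
    return math.cos(phi) * reward_self + math.sin(phi) * reward_other
```

Estimating the angle that best explains a driver's observed motion is what lets the algorithm label that driver as cooperative, altruistic, or egoistic.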

Predicting the behavior of drivers

In tests, the team showed that their algorithm could predict the behavior of other cars 25% more accurately. This helped the vehicle know when it should wait at a left turn versus turning in front of an oncoming driver.

“It also allows us to decide how cooperative or egoistic an autonomous vehicle should be depending on the scenario,” Schwarting continued. “Acting overly conservative is not always the safest option because it can cause misunderstandings and confusion among human drivers.”


The team say that the algorithm is not yet ready for prime time in terms of real-world road testing. But they are continuing to develop it, and think that its applications could extend even further beyond the one described here. For one thing, observing other cars could help future self-driving vehicles learn to exhibit more human-like traits that will be easier for human drivers to understand.

“[In addition], this could be useful not just for fully self-driving cars, but for existing cars that we use,” Schwarting said. “For example, imagine that a car suddenly enters your blind spot. With the system [we have developed], you might get a warning in the rearview mirror that the car in your blind spot has an aggressive driver, which could be particularly valuable information.”

Next, the researchers hope to apply the model to pedestrians, bicycles and other agents who may appear in driving environments. “We’d also like to look at other robotic systems that need to interact with us, such as household robots,” Schwarting noted.

Luke Dormehl
I'm a UK-based tech writer covering Cool Tech at Digital Trends. I've also written for Fast Company, Wired, the Guardian…