
MIT is teaching self-driving cars how to psychoanalyze humans on the road

In March 2004, the U.S. Defense Advanced Research Projects Agency (DARPA) organized a special Grand Challenge event to test the promise — or lack thereof — of current-generation self-driving cars. Entrants from the world’s top A.I. labs competed for a $1 million prize, their custom-built vehicles trying their best to autonomously navigate a 142-mile route through California’s Mojave Desert. It didn’t go well. The “winning” team managed to travel just 7.4 miles in several hours before shuddering to a halt. And catching fire.

A decade and a half later, a whole lot has changed. Self-driving cars have successfully driven hundreds of thousands of miles on actual roads. It’s non-controversial to say that humans will almost certainly be safer in a car driven by a robot than in one driven by a fellow human. However, while there will eventually be a tipping point when every car on the road is autonomous, there’s also going to be a messy intermediary phase when self-driving cars will have to share the road with human-driven cars. You know who the problem parties are likely to be in this scenario? That’s right: the fleshy, unpredictable, sometimes-cautious, sometimes-prone-to-road-rage humans.


To try to solve this problem, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new algorithm intended to allow self-driving cars to classify the “social personalities” of other drivers on the road. In the same way that humans (often unscientifically) try to anticipate the responses of other drivers when we’re, say, negotiating an intersection, the autonomous vehicles will attempt to figure out who they’re dealing with in order to avoid accidents on the road.

“We’ve developed a system that integrates tools from social psychology into the decision-making and control of autonomous vehicles,” Wilko Schwarting, a research assistant at MIT CSAIL, told Digital Trends. “It is able to estimate the behavior of drivers with respect to how selfish or selfless a particular driver appears to be. The system’s ability to estimate drivers’ so-called ‘Social Value Orientation’ allows it to better predict what human drivers will do, and it is therefore able to drive more safely.”

Social Value Orientation

On the whole, our driving frameworks function fairly well: giving priority to one driver over another, dividing us into directional lanes, and so on. But there are still plenty of more subjective moments when multiple parties have to figure out how to coordinate their efforts to complete a maneuver, sometimes at high speed. Knowing whether you’re dealing with an impatient driver who’s going to cut you off or a patient one who’s going to wait or make way can mean the difference between a successful journey and a fraught fender bender. The fact that there are hundreds of thousands of lane-changing, merging, and right- or left-turn accidents each year in the United States alone shows that humans haven’t quite mastered this subtle art.

Social Value Orientation is part of the field of interdependent decision-making, which looks at the strategic interactions between two or more people. It is rooted in game theory, whose concepts were first outlined in a 1944 book by Oskar Morgenstern and John von Neumann titled Theory of Games and Economic Behavior.

The broad idea is essentially this: Agents have their own preferences, which can be ordered in terms of their utility (level of satisfaction). Within these parameters, they will act logically, according to those preferences. Translated into driving behavior, this means that no matter how unpredictable the road might seem at rush hour, knowing how altruistic, prosocial, egoistic, or competitive the drivers around you are lets you predict their behavior and complete your journey without incident.
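In the social-psychology literature, SVO is often formalized as an angle in the plane of "reward to self" versus "reward to others," with a driver's utility blending the two rewards according to that angle. The short Python sketch below illustrates that weighting; it is an illustrative rendering of the general idea, not MIT's implementation:

```python
import math

def weighted_utility(reward_self, reward_other, svo_angle_deg):
    """Combine a driver's own reward with another agent's reward,
    weighted by the driver's Social Value Orientation angle.

    svo_angle_deg = 0   -> purely egoistic (only own reward counts)
    svo_angle_deg = 45  -> prosocial (both rewards weighted equally)
    svo_angle_deg = 90  -> purely altruistic (only the other's reward counts)
    """
    phi = math.radians(svo_angle_deg)
    return math.cos(phi) * reward_self + math.sin(phi) * reward_other

# An egoistic driver ignores the cost it imposes on others:
print(weighted_utility(1.0, -1.0, 0))   # 1.0
# For a prosocial driver, its own gain is canceled by the other's loss:
print(weighted_utility(1.0, -1.0, 45))  # close to 0
```

Under this formulation, estimating another driver's SVO reduces to estimating a single angle from their observed behavior, which is what makes the idea tractable for a planner.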

Social Behavior for Autonomous Vehicles

By observing the way that other cars drive, the MIT algorithm assesses other drivers on a “reward to others” versus “reward to self” scale. That means sorting fellow road users into “altruistic,” “prosocial,” “egoistic,” “competitive,” “sadistic,” “sadomasochistic,” “masochistic,” and “martyr” categories. By learning that not all other cars behave in the same way, the team believe their model could prove a welcome addition to self-driving car systems.
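Those eight labels come from the classic SVO "ring," which carves the reward-to-self versus reward-to-others plane into 45-degree sectors. Here is a minimal sketch of that mapping; the sector layout follows the conventional ring from the social-psychology literature, not necessarily the exact boundaries MIT's algorithm uses:

```python
def classify_svo(angle_deg):
    """Map an SVO angle (degrees) to its sector on the SVO ring.

    The ring spans the reward-to-self (x-axis) vs. reward-to-others
    (y-axis) plane, split into eight 45-degree sectors centered on
    the pure orientations.
    """
    # Sector centers, walking counterclockwise from "egoistic" at 0 degrees.
    sectors = [
        (0, "egoistic"),           # maximize own reward only
        (45, "prosocial"),         # maximize joint reward
        (90, "altruistic"),        # maximize the other's reward only
        (135, "martyr"),           # help the other at a cost to oneself
        (180, "masochistic"),      # minimize own reward
        (225, "sadomasochistic"),  # minimize both rewards
        (270, "sadistic"),         # minimize the other's reward
        (315, "competitive"),      # maximize own advantage over the other
    ]
    angle = angle_deg % 360

    def distance(center):
        # Angular distance, wrapping around at 360 degrees.
        d = abs(angle - center) % 360
        return min(d, 360 - d)

    return min(sectors, key=lambda s: distance(s[0]))[1]

print(classify_svo(10))   # egoistic
print(classify_svo(50))   # prosocial
print(classify_svo(-40))  # competitive (-40 wraps around to 320)
```

In practice, most human drivers land in the egoistic-to-prosocial quadrant, which is why those are the categories Schwarting mentions below.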

“We trained the system first by modeling road scenarios where each driver tried to maximize their own utility and analyzing their most effective responses in light of the decisions of all other agents,” Schwarting said. “The utility incorporates how much a driver weights their own benefit against the benefit of another driver, weighted by the SVO. Based on that tiny snippet of motion from other cars, our algorithm could then predict the surrounding cars’ behavior as cooperative, altruistic, or egoistic during interactions. We calibrated the rewards based on real driving data with machine learning, essentially encoding how much human drivers value comfort, safety, or getting to their goal quickly.”
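To illustrate the kind of best-response reasoning Schwarting describes, here is a toy two-car conflict. The payoff numbers and the yield/go action set are invented for illustration, but the SVO-weighted utility follows the same general idea:

```python
import math

def weighted_utility(r_self, r_other, svo_deg):
    """SVO-weighted utility: 0 degrees cares only about one's own reward,
    45 weighs both equally, -45 treats the other's loss as a gain."""
    phi = math.radians(svo_deg)
    return math.cos(phi) * r_self + math.sin(phi) * r_other

# Hypothetical payoffs (reward_to_me, reward_to_other) for a merge conflict:
# both going risks a crash, both yielding wastes everyone's time.
PAYOFFS = {
    ("go", "go"): (-10.0, -10.0),
    ("go", "yield"): (2.0, -1.0),
    ("yield", "go"): (-1.0, 2.0),
    ("yield", "yield"): (-2.0, -2.0),
}

def best_response(other_action, svo_deg):
    """The action maximizing this driver's SVO-weighted utility,
    holding the other driver's action fixed."""
    return max(
        ("go", "yield"),
        key=lambda a: weighted_utility(*PAYOFFS[(a, other_action)], svo_deg),
    )

# An egoistic driver (0 deg) yields to an oncoming car to avoid the crash
# cost; a competitive driver (-45 deg) forces the conflict anyway, because
# the other driver's loss partly offsets its own.
print(best_response("go", 0))    # yield
print(best_response("go", -45))  # go
```

An autonomous car that has estimated the other driver's SVO angle from a snippet of motion can run exactly this kind of calculation in reverse: given the payoffs, which angle best explains the action the human actually took?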

Predicting the behavior of drivers

In tests, the team showed that their algorithm could predict the behavior of other cars 25% more accurately. This helped the vehicle know, for example, when it should wait to make a left turn and when it could turn in front of an oncoming driver.

“It also allows us to decide how cooperative or egoistic an autonomous vehicle should be depending on the scenario,” Schwarting continued. “Acting overly conservative is not always the safest option because it can cause misunderstandings and confusion among human drivers.”


The team say that the algorithm is not yet ready for real-world road testing. But they are continuing to develop it, and think that its applications could extend even further beyond the one described here. For one thing, observing other cars could help future self-driving vehicles learn to exhibit more human-like traits that will be easier for human drivers to understand.

“[In addition], this could be useful not just for fully self-driving cars, but for existing cars that we use,” Schwarting said. “For example, imagine that a car suddenly enters your blind spot. With the system [we have developed], you might get a warning in the rearview mirror that the car in your blind spot has an aggressive driver, which could be particularly valuable information.”

Next, the researchers hope to apply the model to pedestrians, bicycles and other agents who may appear in driving environments. “We’d also like to look at other robotic systems that need to interact with us, such as household robots,” Schwarting noted.

Luke Dormehl
Former Digital Trends Contributor