In March 2004, the U.S. Defense Advanced Research Projects Agency (DARPA) organized a special Grand Challenge event to test out the promise — or lack thereof — of current-generation self-driving cars. Entrants from the world’s top A.I. labs competed for a $1 million prize, their custom-built vehicles trying their best to autonomously navigate a 142-mile route through California’s Mojave Desert. It didn’t go well. The “winning” team managed to travel just 7.4 miles in several hours before shuddering to a halt. And catching fire.
A decade and a half later, a whole lot has changed. Self-driving cars have successfully driven hundreds of thousands of miles on actual roads. It’s non-controversial to say that humans will almost certainly be safer in a car driven by a robot than they are in one driven by a human. However, while there will eventually be a tipping point when every car on the road is autonomous, there’s also going to be a messy intermediary phase when self-driving cars will have to share the road with human-driven cars. You know who the problem parties are likely to be in this scenario? That’s right: the fleshy, unpredictable, sometimes-cautious, sometimes-prone-to-road-rage humans.
To try and solve this problem, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new algorithm intended to allow self-driving cars to classify the “social personalities” of other drivers on the road. In the same way that humans (often non-scientifically) try to anticipate the responses of other drivers when we’re, say, merging at an intersection, so autonomous vehicles will attempt to figure out who they’re dealing with to avoid accidents on the road.
“We’ve developed a system that integrates tools from social psychology into the decision-making and control of autonomous vehicles,” Wilko Schwarting, a research assistant at MIT CSAIL, told Digital Trends. “It is able to estimate the behavior of drivers with respect to how selfish or selfless a particular driver appears to be. The system’s ability to estimate drivers’ so-called ‘Social Value Orientation’ allows it to better predict what human drivers will do and is therefore able to drive more safely.”
Social Value Orientation
On the whole, our driving frameworks function fairly well: giving priority to one driver over another, dividing us into directional lanes, and so on. But there are still plenty of more subjective moments when multiple parties have to figure out how to coordinate their efforts to complete a maneuver, sometimes at high speeds. Knowing whether you’re dealing with an impatient driver who’s going to cut you off or a patient one who’s going to wait or make way can mean the difference between a successful journey and a fraught fender bender. The fact that there are hundreds of thousands of lane-changing, merging, and right- or left-turn accidents each year in the United States alone shows that humans haven’t quite mastered this subtle art.
Social Value Orientation is a part of the field of interdependent decision making, looking at the strategic interactions between two or more people. It is rooted in game theory, whose concepts were first outlined in a 1944 book by Oskar Morgenstern and John von Neumann titled Theory of Games and Economic Behavior.
The broad idea is essentially this: Agents have their own preferences, which can be ordered in terms of their utility (level of satisfaction). Within these parameters they will act logically, according to those preferences. Translated into driving behavior, no matter how unpredictable the road might seem at rush hour, by knowing how altruistic, prosocial, egoistic, or competitive the drivers around you might be, you can predict their behavior and complete your journey without incident.
By observing the way that other cars drive, the MIT algorithm assesses other drivers on the “reward to others” vs. “reward to self” scale. That would mean sorting fellow road-dwellers into “altruistic,” “prosocial,” “egoistic,” “competitive,” “sadistic,” “sadomasochistic,” “masochistic,” and “martyr” categories. Through learning that not all other cars behave in the same way, the team believes their model could prove a welcome addition to self-driving car systems.
“We trained the system first by modeling road scenarios where each driver tried to maximize their own utility and analyzing their most effective responses in light of the decisions of all other agents,” Schwarting said. “The utility incorporates how much a driver weights their own benefit against the benefit of another driver, weighted by the SVO. Based on that tiny snippet of motion from other cars, our algorithm could then predict the surrounding cars’ behavior as cooperative, altruistic, or egoistic during interactions. We calibrated the rewards based on real driving data with machine learning, essentially encoding how much human drivers value comfort, safety, or getting to their goal quickly.”
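The weighting Schwarting describes — how much a driver values their own benefit against another driver’s, governed by the SVO — has a standard trigonometric form in the SVO literature: treat the orientation as an angle and blend the two rewards with its cosine and sine. This is a minimal sketch consistent with that convention, not necessarily the team’s exact formulation:

```python
import math

def svo_utility(reward_self, reward_other, svo_deg):
    """Blend own reward with another agent's reward by the SVO angle.

    svo_deg = 0  -> purely egoistic: only one's own reward counts.
    svo_deg = 45 -> prosocial: both rewards weighted equally.
    svo_deg = 90 -> purely altruistic: only the other's reward counts.
    """
    phi = math.radians(svo_deg)
    return math.cos(phi) * reward_self + math.sin(phi) * reward_other
```

Under this scheme, predicting a driver’s behavior reduces to estimating their angle from a short snippet of observed motion, then assuming they act to maximize the blended utility.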
Predicting the behavior of drivers
In tests, the team showed that their algorithm could predict the behavior of other cars 25% more accurately. This helped the vehicle know when it should wait at a left turn rather than turning in front of an oncoming driver.
“It also allows us to decide how cooperative or egoistic an autonomous vehicle should be depending on the scenario,” Schwarting continued. “Acting overly conservative is not always the safest option because it can cause misunderstandings and confusion among human drivers.”
The team says that the algorithm is not yet ready for real-world road testing. But they are continuing to develop it, and think that its applications could extend even further beyond the one described here. For one thing, observing other cars could help future self-driving vehicles learn to exhibit more human-like traits that will be easier for human drivers to understand.
“[In addition], this could be useful not just for fully self-driving cars, but for existing cars that we use,” Schwarting said. “For example, imagine that a car suddenly enters your blind spot. With the system [we have developed], you might get a warning in the rearview mirror that the car in your blind spot has an aggressive driver, which could be particularly valuable information.”
Next, the researchers hope to apply the model to pedestrians, bicycles and other agents who may appear in driving environments. “We’d also like to look at other robotic systems that need to interact with us, such as household robots,” Schwarting noted.