Anyone who has watched the excellent TV comedy The Good Place will be familiar with the “trolley problem.” In short, imagine you spot a runaway trolley headed toward five incapacitated people in its direct path. You have the power to redirect the trolley onto a side track, thereby saving the lives of the five people. The only problem? There is a single person lying on this side track and, by making the decision to redirect the trolley, you’re consciously sentencing them to death.
Philosophers have been wrestling with similarly thorny ethical dilemmas for centuries, but the arrival of self-driving cars suddenly brings them into real-world focus. That’s because decisions like the trolley problem become practical realities when you’re deciding on how an autonomous vehicle should respond to, for instance, a situation in which damage is going to be done to either pedestrians or passengers depending on how a car acts.
These are unsurprisingly sprawling, complex questions — but the findings of a large-scale survey shed light on how populations at large view the moral principles that should guide machines. Drawing on the responses of 2.3 million people from around the globe, the study laid out 13 scenarios in which one person’s death was inevitable. Respondents were then asked whom they would choose to spare given a wide range of variables — such as age, wealth, and the number of people involved.
Some answers were universal constants: Humans were saved over pets, and larger groups over individuals. But there was disagreement, too — for example, respondents in certain countries were more likely than those elsewhere to choose to hit people crossing the road illegally. Religious factors also seemingly played a role in these differences, as did issues like economic inequality.
“I think the most surprising [result for me was] the degree to which respondents favored sparing characters of higher status,” Iyad Rahwan, an associate professor of Media Arts and Sciences at the Massachusetts Institute of Technology Media Lab, who worked on the project, told Digital Trends. “I find it especially concerning that this factor had a strong effect compared to other factors. It is important to be aware of such type of bias and quantify it.”
Ultimately, the researchers aren’t convinced that such findings should necessarily be used to directly crowdsource future laws, however. “We think that these findings are meant to inform experts who are working on normative guidelines,” Edmond Awad, a postdoctoral associate at MIT, told us. “We don’t suggest that experts cater to the public’s preferences, especially when they find these preferences concerning. But they need to be at least aware of such preferences and their magnitude to anticipate public reaction.”