If you ever worry that technology isn’t moving fast enough, imagine telling someone 25 years ago that a project using virtual reality to train self-driving cars to behave ethically would one day be a real thing. That is exactly what a team of German researchers has been working on in a study published in the journal Frontiers in Behavioral Neuroscience. And it is actually a whole lot more serious than you might initially think.
The idea, essentially, is to explore the kinds of challenging moral decisions that self-driving cars will at some point have to make — for example, whether it is better to risk the lives of everyone in a packed car by steering off the road at high speed, or to hit a child who has run out into the road.
“Our paper outlines a two-step process comprised of an assessment of human moral behavior and subsequent modeling of the observations made,” Leon René Sütfeld, a Ph.D. candidate in cognitive science at Osnabrück University and lead author of the study, told Digital Trends. “We developed a virtual reality environment depicting a road traffic scenario, in order to assess the moral behavior in the same context as a model of it may be applied. After running the experiment [on 105 human participants with Oculus headsets], we trained three different computer models of different complexities to see how well each of them would describe the observations. The main finding is that one-dimensional value-of-life models are able to describe or predict human behavior in these situations with good accuracy.”
The paper is interesting on its own merits for helping unpack some of the decisions we make under stress, which follow a sort of hierarchy of life value. Roughly speaking, this puts children at the top, followed by adults, followed by animals.
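To make the idea concrete, here is a minimal sketch (not the authors' code) of what a one-dimensional value-of-life model looks like: each category of potential obstacle is reduced to a single scalar value, and the model steers toward whichever option puts the lower-valued obstacle at risk. The specific numbers and category names below are illustrative assumptions, ordered to match the hierarchy the study observed.

```python
# Hypothetical value-of-life weights; the ordering (children above
# adults above animals) reflects the study's finding, but the exact
# numbers are made up for illustration.
LIFE_VALUE = {
    "child": 1.0,
    "adult": 0.8,
    "dog": 0.3,
    "inanimate": 0.0,
}

def choose_lane(left_obstacle: str, right_obstacle: str) -> str:
    """Pick the lane whose obstacle carries the lower assigned value."""
    if LIFE_VALUE[left_obstacle] < LIFE_VALUE[right_obstacle]:
        return "left"
    return "right"

# With a dog on the left and a child on the right, the model
# steers toward the dog.
print(choose_lane("dog", "child"))  # -> left
```

The appeal of such a one-dimensional model, per the paper's finding, is that despite its simplicity it describes human choices in these forced-dilemma scenarios with good accuracy.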
The study is also notable because it hints at some very real work that will be part of vehicle makers’ immediate future — if it’s not already part of what they do.
“Whether or not something like this is missing in current self-driving vehicles is a little tricky to answer,” Sütfeld said. “First off, we don’t know what systems exactly are used in those cars, and how they function in detail. Second, with the low number of self-driving vehicles today, situations like the ones outlined earlier are extremely rare. However, with increasing market saturation, these cases become more and more probable, and that’s when such ethical decision-making systems become more and more important.”
Sütfeld notes that the project is still in its early days, and really serves as a baseline for future studies — rather than in any way a definitive solution to the problem. (If such a thing can ever exist.) Still, it’s fascinating to see how central a part of AI the subject of ethics is becoming.