One of the most interesting aspects of self-driving cars, and one too often passed over by car companies extolling the virtues of the new technology, is the set of ethical dilemmas inherent in autonomous vehicles. Yes, I’m talking about the undergraduate philosophy thought experiment called the trolley dilemma. In this old chestnut, a runaway trolley is barreling toward five people, and as a bystander you have the choice to pull a lever, diverting the trolley onto a course that will kill a single person instead. What do you do?
Such choices are occasionally the stuff of roadside reality, and autonomous vehicles will have to be equipped with some method for dealing with them. Now a new study published in Frontiers in Behavioral Neuroscience offers a model of human morality, with the goal of teasing out our internal ethics engine and suggesting how it might be transposed onto cars.
One of the problems with the trolley dilemma, and one that makes it difficult to use for modeling any sort of ethical decision, is that people are notorious liars. Faced with such a hypothetical thought experiment, we routinely answer how we wish we’d act rather than how we actually would act. This probably owes in part to the fact that we are not composed of a single self, but rather a multitude of selves, a fact thoroughly documented in the pioneering research of Nobel prize-winning economist Daniel Kahneman. His extensive research on the “remembering self” versus the “experiencing self” put to bed any notion of the human as a single consistent locus of decision-making.
For this reason, such thought experiments rarely make any sense, at least so far as they pertain to self-driving cars. Which is not to say humans lack an ethical code; it’s just that discerning it requires more inventive methods than thought experiments. Now, thanks to virtual reality, researchers at the University of Osnabrück claim to have drawn a bead on how to model such ethical problems in a way that reveals our inner ethics engine and makes it applicable to self-driving cars.
With the aid of immersive virtual reality, the researchers placed study participants in unexpected unavoidable crash situations while driving a virtual car. Faced with choices between hitting people, cars, trees, and inanimate objects, the participants had to make split-second decisions revealing how they valued each item. Using a multitude of such scenarios, the authors were able to create a value-of-life table for every human, animal, and inanimate object the drivers experienced.
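In spirit, the table-building step can be sketched as a simple tally: score each entity by how often drivers chose to spare it when it was at risk. This is a rough illustration only; the entity names and decisions below are invented for the example, not drawn from the actual study data.

```python
from collections import Counter

# Hypothetical record of split-second VR decisions: each tuple is
# (option_a, option_b, entity_hit) -- the driver had to hit one of
# the two. All names and choices here are illustrative.
decisions = [
    ("adult", "dog", "dog"),
    ("adult", "trash_can", "trash_can"),
    ("dog", "trash_can", "trash_can"),
    ("child", "adult", "adult"),
    ("dog", "tree", "tree"),
    ("child", "dog", "dog"),
]

appearances = Counter()  # how often each entity was at risk
spared = Counter()       # how often the driver swerved away from it
for a, b, hit in decisions:
    appearances[a] += 1
    appearances[b] += 1
    spared[a if hit == b else b] += 1

# Value-of-life score: fraction of at-risk encounters the entity survived.
value_of_life = {e: spared[e] / appearances[e] for e in appearances}
```

With enough scenarios, the scores converge on a ranking of how participants actually valued each human, animal, and object, which is the table the paragraph above describes.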
Hypothetically, when faced with an unavoidable crash, a self-driving car could simply consult the value-of-life table and choose to hit the entity with the lowest score. That would at least transpose a fairly accurate facsimile of human values onto automobiles. All of this assumes a branch in the self-driving car’s logic that literally reads “hit the person or the bridge abutment?” when in fact millions of data points may be analyzed every second, in an order such that the decision never comes up as stated. So the table may not apply as such in most real-life situations, but in the extreme case the ethical dilemma does remain.
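That lookup, if it ever were implemented as a literal branch, would amount to a one-line minimization. The table values and entity names below are hypothetical placeholders, not anything a real vehicle uses.

```python
# Hypothetical value-of-life table; scores are illustrative only.
VALUE_OF_LIFE = {
    "child": 1.0,
    "adult": 0.9,
    "dog": 0.5,
    "parked_car": 0.1,
    "trash_can": 0.0,
}

def choose_target(options):
    """Given the entities an unavoidable crash could hit, return the
    one with the lowest value-of-life score. Unknown entities default
    to the highest known score, a conservative 'assume it's valuable'
    fallback."""
    worst_case = max(VALUE_OF_LIFE.values())
    return min(options, key=lambda e: VALUE_OF_LIFE.get(e, worst_case))
```

For example, `choose_target(["adult", "trash_can"])` picks the trash can. The sketch makes the article's point concrete: the hard part is not the lookup but where the numbers come from.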
Still, there are at least two major flaws in the value-of-life table approach. One is choosing which cross section of the population to use when creating the table, since different cultures and regional groups may produce very different tables. The second, and more pernicious, flaw is that the sum of our ethical choices may reveal us to be despicably selfish and racist, a pattern broadly known as implicit bias. Applying a value-of-life system informed by our implicit biases to autonomous vehicles would only succeed in transferring our most embarrassing ethical shortcomings onto machines.
Naturally, the temptation to correct for such biases would be overwhelming. But once we start correcting for these biases, we’re back where we started: facing the question of whether to have machines behave how we wish we would act, or as we actually would act. Going forward, this is sure to remain one of the most interesting questions in artificial intelligence and robotics, with no simple answers on the horizon.