AI Moral Machine and Driverless Cars: Revealing Global Differences

Dr. Lance B. Eliot, AI Insider

Global views on ethics of AI self-driving cars are varied and notable

We are not all the same. In Brazil, people eat winged queen ants, fried or dipped in chocolate. In rural Ghana, people eat termites, which add protein, fat, and oil to their diets. Thailand is known for munching on grasshoppers and crickets, much the way Americans might snack on nuts and potato chips.

Let’s agree then that there are international differences among peoples. There is no single food-eating code that the entire world has agreed to abide by.

You might say that we are making ethical or moral decisions about what we believe is proper to eat and what is not. One dimension of this ethical or moral judgment is your cultural norm. I bring up the ethical underpinnings of food to draw attention to something else that also involves ethical and moral elements, though at first glance it might not seem to.

Automated systems and the emergence of widespread applications of Artificial Intelligence (AI) are also laden with ethical and moral conundrums.

Most AI developers are steeped in the technology of crafting AI applications, and the ethical and moral elements are not quite so apparent to them.

Let’s combine the aspects of AI systems that have ethical or moral elements and/or consequences with the notion that there are international differences in ethics and moral choices and preferences.

If you are an AI developer in country X and you are developing an AI system, you might fall into the mental trap of crafting that AI system based on your own cultural norms from country X. This means that you might, by default, be embedding into the AI system the ethics or moral elements that are, let’s say, acceptable in country X.

At first, this might not even be noticed by you. You are doing it without any particular conscious thought or attempt to bias the AI system. It is merely a natural consequence of your ingrained cultural norms as a member of country X. I’ve written and spoken extensively about the internationalizing of AI, of which the ethics and morals dimension is often regrettably neglected by AI developers and AI firms.

Ferreting Out Deeply Embedded Ethics and Morals Elements

The tricky part is ferreting out the ethics and morals elements that are perhaps deeply embedded into the AI system.

You need to figure out what those elements are, which might never have come up previously regarding the system, so the initial hunch is that there aren’t any such embeddings. An even greater difficulty is deciding what to change those embeddings to, namely what the appropriate target set of ethics and morals embeddings should be.

Part of the reason that figuring out the desired target set of ethics and morals embeddings is so hard is that you often didn’t do so at the start anyway. In other words, you never initially had to endure the difficulty of trying to figure out what ethics and morals embeddings you were going to put into the AI system.

There is another factor that comes into play, namely whether the AI system is a real-time one and whether its actions carry serious or severe consequences. An AI system that operates in real time and faces potential life-or-death choices, while also carrying embedded ethics or morals, presents a double dose of difficulty.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. Auto makers and tech firms are faced with the dilemma of how to have the AI make life-or-death driving choices, and these choices could be construed as being based on ethics or morals elements, of which those can differ by country and culture.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too.

Returning to the topic of ethics and moral elements embedded in AI systems, let’s take a closer look at how this plays out in the case of AI self-driving cars and especially in a global context.

Those within the self-driving car industry are generally aware of something that ethicists have long been bandying about, called the Trolley problem.

Philosophers and ethicists have used the Trolley problem as a thought experiment to explore the role of ethics in our daily lives. In its simplest version, you are standing next to a trolley track and the trolley is barreling along, heading toward a juncture where it can take one of two paths. On one path, it will strike and kill five people stranded on the tracks. On the other path there is one person. You have access to a track switch that will divert the trolley away from the five people and instead steer it into the one person. Would you do so? Should you do so?

Some say that of course you should steer the train toward the one person and away from the five people.

The answer is “obvious” because you are saving four lives, which is the net difference of killing the one person and yet saving the five people. Indeed, some believe that the problem has such an apparent answer that there is nothing ethically ambiguous about it at all.
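
To make that count-based reasoning concrete, here is a minimal sketch in Python of a purely utilitarian decision rule that simply picks whichever path yields fewer expected deaths. The function name, scenario framing, and tie-breaking behavior are my own illustrative assumptions, not a statement of how any real system decides.

```python
# A minimal sketch of a purely count-based ("utilitarian") choice between two paths.
# The function name and tie-breaking behavior are illustrative assumptions only.

def choose_path(deaths_if_straight: int, deaths_if_diverted: int) -> str:
    """Pick the path with fewer expected deaths; stay the course on a tie."""
    if deaths_if_diverted < deaths_if_straight:
        return "divert"
    return "stay"

# Classic Trolley setup: five people on the current track, one on the side track.
print(choose_path(deaths_if_straight=5, deaths_if_diverted=1))  # prints "divert"
```

Notice that this rule looks at nothing except the raw counts; as we’ll see, that omission is itself an ethical stance.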

Ethicists have tried numerous variations to help gauge what the range and nature of our ethical decision-making is. For example, suppose I told you that the one person was Einstein and the five people were all evil serial killers. Would it still be the case that the saving of the five and the killing of the one is so easily ascertained by the sheer number of lives involved?

We are on the verge of asking the same ethical questions of AI self-driving cars. I say on the verge, but the reality is that we are already immersed in this ethical milieu and just don’t realize that we are. What actions do we as a society believe that a self-driving car should take to avoid crashes or other such driving calamities? Does the Artificial Intelligence that is driving the self-driving car have any responsibility for its actions?

One might argue that the AI is no different than what we expect of a human driver. The AI needs to be able to make ethical decisions, whether explicitly or not, and ultimately have some if not all responsibility for the driving of the car.

Abstracting Vs. Naming Individuals in an Ethical Dilemma

One of the most significant factors that seems to alter a person’s answer is whether you depict the problem in an abstract way, without offering any names per se, versus telling the person that they or someone they know is involved in the scenario.

In the case of the problem being abstract, the person seems likely to answer in a manner that offers the least number of deaths that might arise. If you tell the person that they are, let’s say, inside the self-driving car, they tend to shift their answer to aim at having the car occupants survive. If you tell the person they are outside the self-driving car, standing on the street, and will be run over, they tend to express that the AI self-driving car should swerve, even if it means the likely death of some or all of the self-driving car occupants.

I mention this important point because a lot of these kinds of polls and surveys seem to be arising lately, partially because AI self-driving cars continue to garner greater societal attention, and the manner in which the question is asked can dramatically alter the poll or survey results. This also explains why one poll or survey at times appears to have quite different results than another.

Another facet involves whether or not the people responding to the questions take the poll or survey seriously. If someone perceives the questions to be silly or inconsequential, they might answer off-the-cuff or maybe even answer in a manner intended to purposely shock or distort the results. You have to consider the motivation and sincerity of those responding.

In the case of AI self-driving cars, there has been an ongoing large-scale effort to try and get a handle on the ethics and moral aspects of making choices when driving a car, via an online experiment referred to as the Moral Machine experiment.

A recent recap of the results accumulated by the online experiment was described in the journal Nature and indicated that around 2.3 million people had taken the survey. The survey presented various scenarios akin to the Trolley problem and asked the survey respondent what action they would take. These two million or so respondents rendered over 40 million “decisions” in undertaking the survey. Plus, it was undertaken by respondents from 233 countries and territories.

Before I go over the results, I’d like to remind you of the various limitations and concerns about any such survey. Those who went to the trouble of doing the online survey were a self-selected segment of society. They had to have online access, which not everyone in the world yet has. They had to be aware that the online survey existed, which not many people online would have known. They had to be willing to take the time needed to complete the survey. Etc.

Similar to the Trolley problem, the respondents were confronted with an unavoidable car accident that was going to occur. They were to indicate how an autonomous AI self-driving car should react. I point out this facet since many studies have tended to focus on what the person would do, or what the person thinks other people ought to do, and not per se on what the AI should do.

A fundamental question to be pondered is whether people want the AI to do something other than what they would want people to do.

Oftentimes, these studies assume that if you say the AI should swerve or not swerve, you are presumably also implying that if a person were driving the car in lieu of the AI, the person should take that same action. But perhaps people perceive that the AI should do something that they don’t believe people would do, or maybe even could do.

If there is one human passenger in the self-driving car and a child in the path of the car, the AI will need to decide whether to spare the life of the passenger or the life of the child. Is your answer different if that passenger is the child’s parent? I suppose you could say that the AI-driven case with the parent as the passenger involves just one human life inside the car, while the non-AI instance of the parent driving the car with a passenger involves two human lives inside the car.

For the large-scale online experiment, here are the kinds of scenario dimensions it used:

  • Sparing humans versus sparing animals that are presumed to be pets
  • Staying on course straight ahead versus swerving away
  • Sparing passengers inside the car versus pedestrians on the roadway
  • Sparing more human lives versus fewer human lives
  • Sparing males versus females
  • Sparing young people versus more elderly people
  • Sparing legally crossing pedestrians versus illegally jaywalking pedestrians
  • Sparing those that appear to be physically fit versus those appearing to be less fit
  • Sparing those with seemingly higher social status versus those with seemingly lower status

They also added aspects, such as in some cases labeling the pretend people depicted in the scenarios as medical doctors or perhaps wanted criminals, or stating that a woman was pregnant, and so on.

These factors were combined in a manner that presented 13 pending-accident scenarios to each respondent.
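
As a rough illustration of how such scenario dimensions might be represented in a survey or a simulation, here is a hypothetical Python sketch; the field names and groupings are my own and are not taken from the Moral Machine’s actual implementation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Character:
    """One pretend person (or pet) in a scenario; attribute names are illustrative only."""
    species: str            # "human" or "pet"
    age_group: str          # e.g., "child", "adult", "elderly"
    gender: str             # e.g., "female", "male"
    crossing_legally: bool  # pedestrians only: legal crossing vs. jaywalking
    fitness: str            # e.g., "fit", "average", "less fit"
    social_status: str      # e.g., "higher", "lower"

@dataclass
class DilemmaScenario:
    """An unavoidable crash: stay on course (hitting one group) or swerve (hitting the other)."""
    occupants: List[Character] = field(default_factory=list)
    pedestrians: List[Character] = field(default_factory=list)
```

Each respondent was shown a set of such dilemmas and asked which outcome the AI should choose.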

There was also an attempt to collect demographic data directly from the respondents, such as their gender, their age, income, education level, religious affiliation, political preference, and so on.

What makes this study rather special, besides its large-scale nature, is that the online survey was accessible globally.

This potentially provides a glimpse into the international differences that might come into play in the Trolley problem answers. To date, most studies have tended to be done within a particular country. As such, it has been harder to compare across countries, which means it has been difficult to compare across cultures, which means it has likewise tended to be difficult to compare across ethics and moral norms.

As an aside, I am not saying that a country is always and only one set of ethics and moral norms. Obviously, a country can contain a diversity of ethics and moral norms.

Well, you might wonder, what did the results seem to show?

Humans Over Pets, for the Most Part

Respondents tended to spare humans over pets.

I know you might think it should be 100% of humans over pets, but that’s not the case. This could be interpreted to suggest that the life of an animal is considered by some cultures and ethics/morals to be equal to that of a human. Or it could be that some weren’t paying attention to the scenario. Or it could be that the respondent was fooling around. There are a multitude of interpretations.

That’s just one aspect. The auto makers and tech firms would likely say that if they waited to try and produce AI self-driving cars until the world caught up with figuring out these ethics/morals rules, we probably wouldn’t have AI self-driving cars until a hundred years from now, if ever, since you would have a devil of a time with getting people to come together and reach agreement on these rather thorny matters.

Indeed, it is believed that ultimately we might see AI self-driving cars being marketed based on the kinds of ethics/morals rules that a particular brand or model encompasses. If you want the version that considers animals to be equal to humans, you can get auto maker Y’s brand or model; otherwise, you’d get auto maker Z’s brand or model.

I hope I’ve made the case that we are heading towards a showdown about the ethics/morals embedded rules in AI self-driving cars. It isn’t happening now because we don’t have true Level 5 AI self-driving cars. Once we do have them, it will be a while before they become prevalent. My guess is that no one is going to be willing to put up much effort and energy to consider these matters until it becomes day-to-day reality and people realize what is occurring on their streets, under their noses, and within their eyesight.

Let’s take a look at some more results of the Moral Machine online experiment.

Respondents tended to spare more human lives rather than fewer.

Another factor can be the age of the people in the pretend scenarios. Generally, the respondents tended to spare a baby, or a little girl, or a little boy, more so than adults.

Considering Age as a Factor in AI Action Planning Determinations

Suppose you are an AI developer and your AI system for your brand of self-driving cars merely counts people as people. There is no distinction about age. Guess what, you have a rule! You have left out age as a factor. Thus, you have a rule that people are counted only as people and that age is not considered.

You might complain that you never even contemplated using age.

Furthermore, imagine that some country decides it will allow only AI self-driving cars that do take into account the age of a person when making these horrific kinds of untoward decisions. I know some will say they could adjust their AI code with one line and it would then encompass the age factor. I doubt this.
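
To see how “leaving age out is still a rule” plays out in code, consider this hypothetical sketch contrasting a count-only harm score with an age-weighted one. The weights are invented purely for illustration and are not a recommendation.

```python
# Hypothetical sketch: two ways a planner might score an unavoidable-crash outcome.
# The age weights below are invented for illustration; they are not a recommendation.

def harm_score_count_only(people):
    """Counts every person equally; by omission, this IS a rule: age does not matter."""
    return len(people)

AGE_WEIGHTS = {"child": 1.5, "adult": 1.0, "elderly": 0.5}  # invented values

def harm_score_age_weighted(people):
    """Weights harm by age group, which is a different, explicit ethical stance."""
    return sum(AGE_WEIGHTS.get(p["age_group"], 1.0) for p in people)

group = [{"age_group": "child"}, {"age_group": "adult"}, {"age_group": "elderly"}]
print(harm_score_count_only(group))    # 3
print(harm_score_age_weighted(group))  # 3.0
```

Swapping the first scoring function for the second is not a one-line change in any realistic system: the perception pipeline has to reliably estimate age, the planner has to be re-validated, and the chosen weights have to be justified to regulators and the public.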

There are a number of other results of the online experiment that are indicative of the difficult AI ethics/morals discussions we have yet to confront.

For example, there was a preference for sparing those that appeared more physically fit over those that appeared less fit.

In terms of countries, the researchers opted to undertake a cluster analysis that incorporated Ward’s minimum variance method and used Euclidean distance calculations on the AMCEs (average marginal component effects) of each country, doing so to see if there were any significant differences in the country-based results.

They came up with three major clusters, which they named Western, Eastern, and Southern. The Western cluster mainly encompassed countries that have Protestant, Catholic, and Orthodox underpinnings, such as the United States and much of Europe. The Eastern cluster consisted of Islamic- and Confucian-oriented cultures, including Japan, Taiwan, Saudi Arabia, and others. The Southern cluster showed a stronger preference for sparing females in comparison to the Western and Eastern clusters, and encompassed South America, Central America, and others.

For AI self-driving cars, the researchers suggest that this kind of clustering might mean that the AI will need to be adjusted according to the dominant ethics/morals in each respective cluster.
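
For readers curious about the mechanics, here is a minimal sketch of that kind of analysis using SciPy’s hierarchical clustering with Ward linkage (which operates on Euclidean distances). The country list is arbitrary and the vectors are random stand-ins, not the study’s actual AMCE values.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in data: one row per country, one column per preference dimension
# (e.g., sparing humans over pets, young over elderly, more lives over fewer).
# These numbers are random placeholders, NOT the study's actual AMCE values.
rng = np.random.default_rng(0)
country_names = ["US", "Japan", "Brazil", "France", "Saudi Arabia", "Colombia"]
amce_matrix = rng.normal(size=(len(country_names), 9))

# Ward's minimum-variance linkage, which works on Euclidean distances.
Z = linkage(amce_matrix, method="ward")

# Cut the tree into three clusters, analogous to Western / Eastern / Southern.
labels = fcluster(Z, t=3, criterion="maxclust")
for name, label in zip(country_names, labels):
    print(f"{name}: cluster {label}")
```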

When you take a moment to consider your daily driving, you are likely to realize that you are frequently making life-or-death decisions and that those decisions encompass a kind of moral compass. That moral compass is based on your own personal ethics/morals, along with whatever the stated or implied ethics/morals are of the place where you are driving, and this all gets baked together in your mind as you drive.

Conclusion

The advent of AI self-driving cars raises substantive questions about how the AI will make split-second, real-time decisions involving multi-ton cars, decisions that can have life-or-death consequences for humans inside the self-driving car and for other humans nearby, whether in other cars, on foot as pedestrians, or in other states of movement such as on bicycles, scooters, motorcycles, etc.

We humans make these kinds of judgements while we are driving a car. Society has gotten used to this stream of judgements that we all make. The expectation is that the human driver will use their judgement as shaped around the culture of the place they are driving and as based on the prevalent ethics/morals therein. When someone gets into a car incident and makes such choices, we are often sympathetic to their plight since the person typically had only a split-second to decide what to do.

We aren’t likely to accept an excuse that the AI’s decision was time-boxed into a split second. In other words, the AI ought to have been established beforehand with some set of ethics/morals rules that guide the overarching decision making, and then, when a situation arises, we would expect the AI to apply those rules.

You can bet that when an AI self-driving car gets into an untoward situation and makes a choice, or by default takes an action that we would consider a form of choice, it is going to be second-guessed by others. Lawyers will line up to go after the auto makers and tech firms and get them to explain how and why the AI did whatever it opted to do.

The auto makers and tech firms would be wise to systematically pursue the embodiment of ethics/morals rules into their AI systems rather than letting it happen by chance alone. The head-in-the-sand defense is likely to lose support from the courts and the public. From a business and cost perspective, it will be a pay-me-now-or-pay-me-later kind of proposition for the auto makers: either invest now to get this done properly, or pay a likely much higher price later for not doing it right at the start.

Another way to consider this matter is to take into account the global market for AI self-driving cars. If you are developing your AI self-driving car just for the U.S. market right now, you’ll later kick yourself that you didn’t put in place some core aspects that would have made going global a lot easier, less costly, and more expedient. In that sense, the embodiment of the ethics/morals rules needs to be formulated in a manner that accommodates different countries and different cultural norms.
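
One plausible way to prepare for that, sketched hypothetically below, is to keep the ethics/morals parameters out of the core driving code and in per-region configuration that can be reviewed, audited, and swapped. The keys and values here are illustrative only, not a proposed standard.

```python
# Hypothetical sketch: region-specific ethics/morals parameters kept as reviewable
# configuration rather than hard-coded logic. Keys and values are illustrative only.
ETHICS_PROFILES = {
    "default":  {"consider_age": False, "pets_equal_humans": False},
    "region_A": {"consider_age": True,  "pets_equal_humans": False},
    "region_B": {"consider_age": False, "pets_equal_humans": True},
}

def load_ethics_profile(region: str) -> dict:
    """Fall back to the default profile when a region has no specific ruleset."""
    return ETHICS_PROFILES.get(region, ETHICS_PROFILES["default"])

print(load_ethics_profile("region_A"))
```

The point of such a design is not the particular flags but that the locale-specific choices are explicit and inspectable rather than silently baked into the code.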

The Moral Machine online experiment needs to be taken with a grain of salt. As mentioned, as an experiment it suffers from the usual kinds of maladies that any survey or poll might encounter. Nonetheless, I applaud the effort as a wake-up call bringing attention to a matter that otherwise is going to be sadly untouched until it becomes an utter morass and catastrophe for the emergence of AI self-driving cars. AI self-driving cars are going to be a kind of “moral machine” whether you want to admit it or not. Let’s work on the morality of the moral machine sooner rather than later.


Copyright 2019 Dr. Lance Eliot
