Rough Choice: Lost Lives Versus Saved Lives to Achieve AI Self-Driving Cars

Dr. Lance B. Eliot, AI Insider

[Figure: Linear No-Threshold (LNT) graph options]

The controversial Linear No-Threshold (LNT) graph has been in the news recently. LNT is a type of statistical model that has been used primarily in health-related areas such as dealing with exposures to radiation and other human-endangering substances such as toxic chemicals.

Essentially, the standard version of an LNT graph posits that any exposure at all is too much and therefore you should seek to not have any exposure, avoiding even the tiniest bit of exposure. You might say it is a zero-tolerance condition (using modern day phrasing). Strictly speaking, if you believe the standardized version of an LNT graph, it means there isn’t any level of exposure that is safe.

In the classic LNT graph, the line starts at the origin point of the graph and the moment that the line starts to rise it is indicating that immediately you are being endangered since any exposure is considered bad. That’s the “no-threshold” part of the LNT. There isn’t any kind of initial gap or buffer portion that is considered safe. Any exposure is considered unsafe and ill-advised to encounter.
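To make the distinction concrete, here's a minimal numeric sketch (my own illustration, using arbitrary slope and threshold values rather than figures from any regulation or study) contrasting the classic no-threshold response with a threshold-based variant:

```python
# Illustrative sketch only: the slope and threshold values are arbitrary
# placeholders, chosen just to show the shape of each model.

def risk_lnt(dose, slope=0.01):
    """Classic LNT: any dose above zero carries proportional risk."""
    return slope * dose

def risk_threshold(dose, slope=0.01, threshold=5.0):
    """Threshold variant: doses at or below the threshold are treated as safe."""
    return slope * max(0.0, dose - threshold)

for d in [0, 1, 5, 10, 50]:
    print(f"dose={d:>3}  LNT={risk_lnt(d):.2f}  threshold={risk_threshold(d):.2f}")
```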

Nobel Prize winner Hermann Muller, the discoverer of radiation’s ability to cause genetic mutation, put it succinctly in the 1940s when he emphatically stated that radiation is a “no threshold dose” kind of contaminant.

The standard LNT has been a cornerstone of the EPA (Environmental Protection Agency), and the latest twist is that the classic linear no-threshold approach might be replaced by a threshold-based variant. That’s kicking up a lot of angst and controversy. By-and-large, the EPA has typically taken the position that exposure to pollutants such as carcinogens is a no-threshold danger, meaning that if a substance is dangerous at some level, it is considered dangerous at any level.

Regulations are usually built on the basis of the no-threshold principle.

You might at first glance think that this LNT makes a lot of sense. Sure, any exposure to something deadly would seem risky and unwise.

Not so fast, some say.

There is an argument to be made that sometimes a minor amount of exposure to something is not necessarily that bad, and indeed in some instances it might be considered good.

What might that be, you wonder?

Some would cite drinking and alcohol as an example.

For a long time, health concerns have been raised that drinking alcohol is bad for you, including that it can ruin your liver, harm your brain cells, become addictive, make you fat, lead to diabetes, increase your chances of getting cancer, cause you to black out, and so on. The list is rather lengthy. Seems like something that should be avoided entirely.

Meanwhile, you’ve likely heard of or seen the studies that now say that alcohol can possibly increase your life expectancy, help you overcome undue shyness and be bolder and more dynamic, and reduce your risk of getting heart disease. There are numerous bona fide medical studies indicating that drinking red wine, for example, might be able to prevent coronary artery diseases and therefore lessen your chances of a heart attack. In essence, there are presumably health-positive benefits to drinking.

One concern you might have about touching on the benefits of drinking is that it might be used by some to justify over-drinking, such as those wild college drinking binges that seem to occur (as a former professor, I had many occasions of students who showed up to class having obviously opted to indulge the night before, sitting there like zombies).

By allowing any kind of signaling that drinking is okay, you might be opening up Pandora’s box. Perhaps it might be better to just state that no amount of drinking is safe, thereby closing off any chance of others trying to wiggle their way into becoming alcoholics by claiming you led them down that primrose path.

You might be tempted therefore to make your graph show a no-threshold indication. That’s pretty much the logic used by the EPA. Historically, the EPA has tended to side with the no-threshold perspective out of concern that allowing any amount of threshold, even a small one, could open the floodgates.

The counter-argument is that this is like the proverbial tossing out the baby with the bath water. You are apparently willing to get rid of the potential “good” for the sake of the potential “bad,” and therefore presumably won’t have any chance at even experiencing the good.

There’s a word commonly used to refer to this phenomenon of having a (relatively) small initial threshold: it is called hormesis.

Linear No-Threshold (LNT) Graph and Hormesis Process

We could take a traditional Linear No-Threshold (LNT) graph and place onto it an indication of a hormesis process, meaning something that allows for a neutral or possibly positive reaction at small levels of exposure. The first part of the hormesis line or curve would showcase that at low doses the result is neutral or possibly positive. The area of the line or curve that contains this neutral or positive result is considered the hormetic zone.
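As a rough numeric sketch of what that looks like (the shape is the point; the parameter values below are arbitrary ones I picked to produce the characteristic dip), a hormetic response can be modeled as a harm term minus a low-dose benefit term:

```python
import math

# Illustrative sketch (assumptions mine): the response dips below zero
# (a net benefit, i.e., the hormetic zone) at low doses, then turns harmful.
# Parameter values are arbitrary, chosen only to yield a J-shaped curve.

def hormetic_response(dose, slope=0.01, benefit=0.5, decay=2.0):
    """Negative = net benefit (the hormetic zone); positive = net harm."""
    return slope * dose - benefit * dose * math.exp(-dose / decay)

for d in [0, 1, 2, 5, 10, 50]:
    print(f"dose={d:>3}  response={hormetic_response(d):+.3f}")
```

In this toy model, the doses that print a negative response fall inside the hormetic zone; at larger doses the harm term dominates.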

There is an entire body of research devoted to hormesis and it is a popular word among those that study these kinds of matters.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. I am a frequent speaker at industry conferences and one of the most popular questions that I get has to do with the societal and economic rationale for pushing ahead on AI self-driving cars. The crux of the matter involves lives saved versus lives lost. As you’ll see in a moment, this is quite related to the Linear No-Threshold (LNT) that I’ve introduced you to.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too.

Returning to the topic of the Linear No-Threshold (LNT) model, let’s consider how the LNT might apply to the matter of AI self-driving cars.

One of the most noted reasons to pursue AI self-driving cars involves the existing dismal statistic that approximately 37,000 deaths occur in conventional car accidents each year in the United States alone, and it is hoped or assumed that the advent of AI self-driving cars will reduce or perhaps completely do away with those annual deaths.

There are of course other reasons to seek the adoption of AI self-driving cars. One often cited reason involves the mobility that could be presumably attained by society as a result of readily available AI self-driving cars.

Let’s though focus on the notion of AI self-driving cars being a life saver by seemingly ensuring that we will no longer have any deaths due to car accidents.

You might ponder for a moment what it is about AI self-driving cars that will apparently avoid deaths via car accidents. The usual answer is that there won’t be any more drunk drivers on the roads, since the AI will be doing the driving, and therefore we can eliminate any car accidents resulting from humans that drink and drive.

Likewise, we can seemingly eliminate car accidents due to human error, such as failing to hit the brakes in time to avoid crashing into another car or perhaps into a pedestrian. For the moment, I’ll hesitantly say that we can agree that those kinds of deaths due to car accidents can be eliminated by the use of AI self-driving cars, though I make this concession with reservations.

My reservations are multi-fold.

For example, as mentioned earlier, we are going to have a mixture of human driven cars and AI self-driving cars for quite a long time to come, and thus it will not be as though there are only AI self-driving cars on the public roadways.

Even if we somehow remove all human driving and human drivers from the equation, this does not mean that we would necessarily end up at zero fatalities with AI self-driving cars. As I’ve repeatedly emphasized in my writings and presentations, goals of having zero fatalities sound good, but the reality is that there is zero chance of achieving them. Suppose an AI self-driving car is going down a street at 45 miles per hour, completely legally, and a pedestrian steps suddenly and unexpectedly into the street with only a split second before impact. The physics belie any action that the AI self-driving car can take to avoid hitting and likely killing that pedestrian.

In any case, I’d like to walk you through the type of debate that I usually encounter when discussing this aspect of car-related deaths and AI self-driving cars.

Logical Perspectives on Matters of Life and Death and AI Self-Driving Cars

Much of the time, those involved in the debate are not considering the full range of logical perspectives on the matter.

Take a look at my Figure 1 that shows the range of logical perspectives on this matter of lives and deaths related to AI self-driving cars.

[Figure 1: Range of logical perspectives on lives and deaths related to AI self-driving cars]

We’ll start this discussion by considering those that insist on absolutely no deaths to be permitted by any AI self-driving car. Ever. Under no circumstances do they see a rationalization for an AI self-driving car being involved in the death of a human.

That’s quite a harsh position to take.

You could say that it is a no-threshold position. This is comparable to suggesting that the toxicity (in a sense) of an AI self-driving car must be zero before it can be allowed on our roads. The person taking this stance is standing on the absolutely and utterly “no risks” allowed side of things. For them, a Linear No-Threshold (LNT) graph would be a fitting depiction of their viewpoint about AI self-driving cars.

I’d like to qualify that the LNT in their case is somewhat different from, say, radiation or a toxic chemical. They are willing to allow AI self-driving cars once the cars have presumably been “perfected” and are guaranteed (somehow?) to not cause or produce any car-related deaths.

This position would be that you can keep trying to perfect AI self-driving cars in other ways, just not on the public roadways.

Test those budding AI self-driving cars on special closed-tracks that are made for the purposes of advancing AI self-driving cars. Use extensive and large-scale computer-based simulations to try and iron out the kinks. Do whatever can be done, except for being on public roadways, and when that’s been done, and in-theory the AI self-driving car is finally ready for death-free driving on the public streets, it can be released into the wild.

The auto makers and tech firms claim that without using AI self-driving cars on the public roadways, there will either not be viable AI self-driving cars until a far distant future, or it might not ever come to pass at all.

For those that are in the camp of no-deaths, the reply is: go ahead and take whatever time you need. If it takes 20 years, 50 years, a thousand years, and you still aren’t ready for the public roadways, so be it. That’s the price to pay for ensuring the no-deaths perspective.

But this seems reminiscent once again of the LNT argument.

Suppose that while you wait for AI self-driving cars to be perfected, those 37,000 deaths per year from conventional cars continue unabated. If you wait say 50 years for AI self-driving cars to be perfected, you are also presumably offering that you are willing to have perhaps nearly 1.9 million people die during that period of time.

This hopefully moves the discussion into one that attempts to see both sides of the equation. There are presumably deaths to be averted and lives to be saved as a result of the adoption of AI self-driving cars, though it is conceivable that those AI self-driving cars will nonetheless still be attributed some amount of car-related deaths.

Are you willing or not to seek the “good” savings of lives (or reductions in deaths), in exchange for the lives (or deaths) that will be lost while AI self-driving cars are on our roadways and being perfected (if there is such a thing)?

If you could get to AI self-driving cars sooner, such as in 10 years, during which in-theory without any AI self-driving cars on the roadways you would have lost say 370,000 lives, would you do so? Doing so means also being willing to allow for some number of car-related deaths attributable to the still-being-perfected AI self-driving cars. That’s the rub.
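Here’s a back-of-envelope sketch of that rub. The 37,000 annual figure comes from this discussion; the fraction of those deaths the interim AI self-driving cars avert, and the deaths attributed to them, are invented placeholders rather than predictions:

```python
# Back-of-envelope sketch; placeholder values, not predictions.
BASELINE = 37_000  # annual U.S. conventional-car deaths (per this article)

def interim_net_lives(years, averted_fraction, av_deaths_per_year):
    """Net lives over the interim period: deaths averted minus deaths
    attributed to the still-being-perfected AI self-driving cars."""
    return int(BASELINE * averted_fraction * years - av_deaths_per_year * years)

# Example: a 10-year interim in which the AI cars avert 20% of baseline
# deaths while being attributed 1,000 deaths per year (both numbers invented):
print(interim_net_lives(10, 0.20, 1_000))  # 64000 net lives saved
```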

Refer again to my Figure 1.

I’m going to consider direct deaths and also indirect deaths. There are direct deaths, such as an AI self-driving car that rear-ends another car, and either a human in the rammed car dies or a human passenger in the AI self-driving car dies (or, of course, it could be multiple human deaths), and for which we could investigate the matter and perhaps agree that it was the fault of the AI self-driving car.

There are indirect deaths that can also occur. Suppose an AI self-driving car swerves into an adjacent lane on the freeway. There’s a car in that lane, and the driver gets caught off-guard and slams on their brakes to avoid hitting the lane-changing AI self-driving car. Meanwhile, the car behind the brake-slamming car is approaching at a fast rate of speed and collides with the braking car. This car, last in the sequence, rolls over and the human occupants are killed.

I refer to this as an indirect death.

Okay, let’s return to my Figure 1.

There’s the first row, the showstopper, consisting of the no-deaths perspective. This viewpoint tends to be blind to the net lives that might be saved during an interim period of AI self-driving cars being on the roadways, and won’t consider either the net-lives-saved or the net less-deaths possibilities.

Some criticize that camp and use the old proverb that perfection is the enemy of good. By not allowing AI self-driving cars to be on our public roadways until they are somehow guaranteed not to produce any deaths, indirect or direct, you are apparently seeking perfection and will meanwhile be denying a potential good along the way. Plus, maybe the good won’t ever materialize because of that same stance.

For the remainder of the chart, I provide eight variations of those that would be considered the some-threshold camp. This takes us into the hormetic zone.

There are four distinct stances or positions about indirect deaths (see the chart rows numbered as 2, 3, 4, 5), all of which involve a willingness to “accept” the possibility of incurring indirect deaths due to AI self-driving cars being on the roadways during this presumed interim period.

For the columns, there is the situation of a belief that there will be a net savings of lives (the number of lives “saved” from the predicted number of usual deaths is greater than the number of indirect deaths generated via the AI self-driving cars), or there will be a net less-deaths (the number of indirect deaths will be greater than the number of lives “saved” in comparison to the predicted number of usual deaths).

One tricky and argumentative aspect about the counting of net lives or net deaths is the time period that you would use to do so.

There are some that would say they would only tolerate this matter if the aggregate count in any given year produces the net savings. Thus, if AI self-driving cars are allowed onto our roadways, it means that in each year that this takes place, the net lives saved must bear out in that year. Every year.

This though might be problematic. If we picked a longer period of time, say some X number of years (use 5 years as a plug-in example), maybe the net savings would come out as you hoped, even though during those five years there might have been particular years in which the net savings was actually a net loss.

Would you be so restrictive that it had to be just per-year, or would you be willing to take a longer time period of some kind and be satisfied if the numbers came out over that overall time period? You decide.
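A quick sketch of how those two accounting rules can disagree, using an invented five-year run of net figures (positive means net lives saved that year):

```python
# Hypothetical yearly net figures, invented solely for illustration.
yearly_net = [-2_000, -500, 1_500, 4_000, 9_000]

per_year_rule = all(n > 0 for n in yearly_net)  # net savings required every year
window_rule = sum(yearly_net) > 0               # net savings over the whole window

print(per_year_rule)  # False: years 1 and 2 show net losses
print(window_rule)    # True: the five-year aggregate is +12,000
```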

Per my chart, we have these four positions about indirect deaths:

  • Highly Restrictive = indirect deaths with net life savings each year mandatory (savings > losses)

I realize you might be concerned and confounded about the notion of having net less deaths. Why would anyone agree to something that involves the number of losses due to AI self-driving cars being greater than the number of lives “saved” by the use of AI self-driving cars? The answer is that during this hormetic zone, we are assuming that this is something that might indeed occur, and we are presumably willing to allow it in exchange for the future lives savings that will arise once we get out of the hormetic zone.

Without seeming to be callous, take the near-term pain to achieve the longer-term gain, some might argue.

To get a “fairer” picture of the matter, you should presumably count the ongoing number of lives saved, forever after, once you get out of the hormetic zone, and plug that back into your numbers.

Let’s say it takes 10 years to get out of the hormetic zone, and then thereafter we have AI self-driving cars for the next say 100 years, during which time the number of predicted deaths from conventional cars would be entirely (or nearly so) avoided. If so, using a macroscopic view of the matter, you should take the 100 years’ worth of potential deaths that were avoided, which is 100 x 37,000, or 3,700,000 deaths avoided, and add those back into the hormetic zone years. That certainly makes the hormetic zone period more palatable. Of course, this requires a willingness to make a lot of assumptions about the future and might be difficult for most people to find credible.
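Worked numerically, using only the illustrative figures from the example above:

```python
# The macroscopic accounting from the example above, worked numerically.
# All figures are the article's own illustrative assumptions.
ANNUAL_DEATHS = 37_000
hormetic_years = 10   # assumed time to get out of the hormetic zone
later_years = 100     # assumed era of (nearly) death-free AI driving

deaths_avoided_later = ANNUAL_DEATHS * later_years
credit_per_hormetic_year = deaths_avoided_later / hormetic_years

print(deaths_avoided_later)        # 3700000 deaths avoided over the century
print(credit_per_hormetic_year)    # 370000.0 credited back per interim year
```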

Four Categories of Direct and Indirect Deaths

The remaining four positions are about direct deaths. It would seem that anyone likely willing to consider direct deaths would also be willing to consider indirect deaths, and thus it makes sense to lump together the indirect and direct deaths for these remaining categories.

Here they are:

  • Mild Restrictive = direct + indirect deaths with net life savings each year mandatory (savings > losses)

You can use this overall chart to engage someone in a hopefully intelligent debate about the advent of AI self-driving cars, doing so without the hand-waving and yelling that is ill-served, amorphous, lacking in structure, and prone to generating more heat than substance.

I usually hold back a few other notable aspects that I figure can regrettably turn the discussion nearly immediately upside down.

For example, suppose that we never reach this nirvana of perfected AI self-driving cars, in spite of perhaps having allowed them to be used on our public roadways, and in the end they are still going to be involved in car-related deaths. That’s a lot to take in.

Conclusion

Whether you know it or not, we are currently in the hormetic zone. AI self-driving cars are already on our public roadways.

So far, most of the tryouts include a human back-up driver, but as I’ve repeatedly stated, a human back-up driver does not translate into a guarantee that an AI self-driving car is not going to be involved in a car-related death. The Uber self-driving car incident in Arizona is an example of that unfortunate point.

Per my predictions about the upcoming status of AI self-driving cars, we are headed toward an inflection point. There are going to be more deaths involving AI self-driving cars, including direct and indirect deaths.

How many such deaths will be tolerated before the angst causes the public and regulators to decide to bring down the hammer on AI self-driving cars tryouts on our roadways?

If the threshold is going to be a small number such as one death or two deaths, it pretty much means that AI self-driving cars will no longer be considered viable on our public roadways. This then means that it will be up to closed-tracks and simulations to try and “perfect” AI self-driving cars. Yet, as per my earlier points, a pell-mell rush to get AI self-driving cars off the roadways could dampen the pace of advancing them, which as mentioned could imply that we’ll be incurring conventional car deaths that much longer.

Can the public and regulators view this advent of AI self-driving cars as an LNT type of problem? Is there room to shift from a no-threshold to a some-threshold stance? Can the use of hormesis approaches offer guidance toward looking at the larger picture?

As an aside, one unfortunate element of referring to LNT is that relaxing it is, in a sense, not well regarded by those dealing with truly toxic substances, who tend to make the case that no-threshold is indeed the way to go. I don’t want to overplay the LNT analogy, since I don’t want others to somehow ascribe to AI self-driving cars the notion that they are a type of radiation or carcinogen needing to be abated. Please do keep that in mind.

Can the scourge of any deaths at the “hands” of an AI self-driving car be tolerated as long as there is progress toward reducing conventional car deaths?

It’s a lot for anyone to consider. It certainly isn’t going to lend itself to a debate conducted 140 characters at a time. It’s more complex than that. Might as well start thinking about the threshold problem right now, since we’ll soon enough find ourselves completely immersed in the soup of it. Things are undoubtedly going to come to a boil.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

Copyright 2019 Dr. Lance Eliot
