Some Say That Emergency-Only AI Is Sorely Needed For Self-Driving Cars

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

I had just fallen asleep in my hotel room when the fire alarm rang out.

It was a few minutes past 2 a.m. The hotel was relatively full and had mainly been quiet at this hour, as most of the guests had retired for the evening. The sharp twang of the fire alarm pierced the hallways and walls of the hotel. I could hear the sounds of people moving around in their hotel rooms as they quickly got up to see what was going on.

Fortunately, there was no sign of a fire, no smoke, no flames, and gradually even the most skeptical of guests went back to sleep.

It was a false alarm.

No complaints on my part, since I’d rather have false alarms than no fire alarm system at all.

Emergency Systems That Save Lives

This discussion about fire alarms and fire protection illuminates some important elements about systems that are designed to help save human lives.

In particular:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it
  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system otherwise does little good to help save the human
  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm
  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action
  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort
  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans
  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved
  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system
  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

I’m invoking the use case of fire alarms as a means to convey the nature of emergency-only systems.

There are lots of emergency-only related systems that we might come in contact with in all walks of life. The fire alarm is perhaps the easiest to describe and use as an illustrative aspect to derive the underpinnings of what they do and how humans act and react to them.

Autonomous Cars And Emergency-Only AI

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One approach that some automakers and tech firms are taking toward the AI systems for self-driving cars involves designing and implementing those AI systems for emergency-only purposes.

Allow me to elaborate.

Let’s focus mainly herein on true Level 4 and Level 5 self-driving cars, but begin by studying the ramifications for Level 3.

There are various approaches that automakers and tech firms are taking toward the design and development of AI for self-driving cars and one such approach involves an emergency-only AI paradigm.

Emergency-Only AI Driving For Level 3

Let’s say I am driving in a Level 3 self-driving car. I would normally be expecting the AI to be the primary driver and I am there in case the AI needs me to take over (note that the human driver is still considered the responsible party and expected to ultimately ensure the safety of the driving act).

I’ve written and spoken many times about the dangers of this co-sharing arrangement.

As a human, I might become complacent and not be ready to take over the driving task when the moment arises for me to do so. Maybe I was playing a video game on my smartphone, maybe I was reading a book in my lap, and other kinds of distractions might occur.

Instead of having the AI do most of the driving while in a Level 3, suppose we instead said that the human is the primary driver.

The AI is relegated to being an emergency-only driver.

Here’s how that might work.

I’m driving my Level 3 car and the AI is quietly observing what is going on. The AI is using all of its sensors to continuously detect and interpret the roadway situation. The sensor fusion is occurring. The virtual world model is being updated. The AI action planning is taking place. The only thing not happening is the issuance of the car controls commands.

In a sense, the AI is for all practical purposes “driving” the car without actually taking over the driving controls. This might be likened to when I was teaching my children how to drive a car. They would sit in the driver’s seat. I had no ready means to access the driver controls. Nonetheless, in my head, I was acting as though I was driving the car. I did this to be able to comprehend what my teenage novice driver children were doing and so that I could also come to their aid when needed.
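To make that concrete, here is a minimal sketch of what such a “shadow mode” driving loop might look like (in Python, with class and method names I’ve made up purely for illustration): the full perception-to-planning pipeline runs on every cycle, but the final car-control commands are issued only if an emergency has been declared.

    # Minimal sketch of a shadow-mode driving loop (hypothetical names).
    # The AI runs its full pipeline on every cycle but only issues
    # car-control commands when an emergency has been declared.

    class EmergencyOnlyDriver:
        def __init__(self, sensors, fusion, world_model, planner, controls):
            self.sensors = sensors          # camera, radar, LIDAR wrappers
            self.fusion = fusion            # sensor-fusion module
            self.world_model = world_model  # virtual world model
            self.planner = planner          # AI action planning
            self.controls = controls        # steering/brake/throttle interface

        def cycle(self):
            readings = self.sensors.read_all()
            fused = self.fusion.combine(readings)
            self.world_model.update(fused)
            plan = self.planner.plan(self.world_model)

            if self.planner.emergency_detected(self.world_model):
                # Emergency: actually issue the commands (the "fire sprinkler").
                self.controls.execute(plan)
            # Otherwise the plan is computed but never sent to the controls,
            # i.e. the AI "drives in its head" while the human drives the car.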

Okay, so the Level 3 car is being driven by the human and all of a sudden another car veers into the lane and threatens to crash into the Level 3 car. We now have a circumstance wherein the human driver of the Level 3 car should presumably take evasive action.

Does the human notice that the other car is veering dangerously?

Will the human take quick enough action to avoid the crash?

Suppose that the AI was able to ascertain that the veering car is going to crash into the Level 3 car.

Similar to a fire protection system such as at the hotels, the AI can potentially alert the human driver to take action (akin to a fire alarm that belts out an alarm bell).

Or, the AI might take more overt action and momentarily take over the driving controls to maneuver the car away from the danger (this would be somewhat equivalent to the fire sprinklers getting invoked in a hotel).
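As a rough illustration of choosing between those two responses, the decision could hinge on how much time remains before the predicted collision. The sketch below is one way such a policy might look; the thresholds and function names are assumptions of mine, not figures from any actual system.

    # Sketch of a tiered response policy (illustrative thresholds only).
    # Plenty of time: alert the human (the "fire alarm bell").
    # Very little time: take over the controls (the "fire sprinkler").

    ALERT_THRESHOLD_S = 4.0      # assumed: enough time for a human to react
    TAKEOVER_THRESHOLD_S = 1.5   # assumed: below this, humans rarely react in time

    def choose_response(time_to_collision_s: float) -> str:
        if time_to_collision_s <= TAKEOVER_THRESHOLD_S:
            return "take_over"       # AI momentarily assumes control
        elif time_to_collision_s <= ALERT_THRESHOLD_S:
            return "alert_human"     # warn and let the human act
        else:
            return "monitor_only"    # keep watching, no action yet

    print(choose_response(3.0))      # "alert_human"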

If the AI was devised to work in an emergency-only mode, some would assert that it relieves the pressure on the AI developers to try and devise an all-encompassing AI system that can handle any and all kinds of driving situations.

Instead, the AI developers could focus on the emergency-only kinds of situations.

Defining Emergency Driving Situations

This also brings up the notion of defining the nature of an emergency driving situation.

The obvious example of an emergency is a dog darting into the street directly in front of the car, where the car’s speed, direction, and timing are such that it will mathematically intersect with the dog unless some driving action is taken immediately to avoid striking the animal. But this takes us back to the kind of simplistic automated driving-assistance systems that are not especially imbued with AI anyway.

If we’re going to consider using AI for emergency-only situations, presumably the kinds of emergency situations will range from rather obvious ones that a knee-jerk reactive driving system could handle, all the way up to much more subtle and harder-to-predict emergencies.

If the AI is going to be continuously monitoring the driving situation, we’d want it to act like a true secondary driver and be able to do more sophisticated kinds of emergency-situation detection.

You are on a mountain road that curves back-and-forth.

The slow lane has large rambling trucks in it. Your car is in the fast lane, adjacent to the slow lane. The AI has been observing the slow lane and detected a truck up ahead that periodically has swerved into the fast lane when on a curve. The path of the car is such that in about 10 seconds the car will be passing the truck while on a curve. At this moment there is no apparent danger. But it can be predicted with sufficient probability that in about 10 seconds the truck will swerve into the car’s lane as the car tries to pass the truck on the curve.

Notice that in this example there is not a simple act-react cycle involved.

Most of the automated driving-assist systems would only react once the car is actually passing the truck and the truck then veers into the path of the car. Instead, in my example, the AI has anticipated a potential future emergency and will opt to take action beforehand, either preventing the danger or at least being better prepared to cope with it when (if) it occurs.
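A hedged sketch of that anticipation step: if the AI has seen the truck swerve on, say, three of the last four curves, and the car will reach the truck on a curve in roughly 10 seconds, it can flag a predicted emergency before any act-react trigger ever fires. Every name and number below is illustrative.

    # Illustrative sketch: anticipating the truck-on-a-curve scenario.
    # All values and names are hypothetical, for explanation only.

    def predict_swerve_risk(observed_swerves: int, observed_curves: int,
                            seconds_until_pass: float,
                            pass_occurs_on_curve: bool) -> float:
        """Estimate the chance the truck swerves into our lane as we pass it."""
        if observed_curves == 0 or not pass_occurs_on_curve:
            return 0.0
        swerve_rate = observed_swerves / observed_curves   # e.g. 3 of last 4 curves
        # Discount for how far in the future the pass is (more time means
        # more chance the situation changes before we get there).
        horizon_discount = max(0.0, 1.0 - seconds_until_pass / 60.0)
        return swerve_rate * horizon_discount

    risk = predict_swerve_risk(observed_swerves=3, observed_curves=4,
                               seconds_until_pass=10.0, pass_occurs_on_curve=True)
    if risk > 0.5:
        print("Predicted emergency: delay the pass or widen the gap to the truck")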

The emergency-only AI would presumably be boosted beyond the nature of a traditional automated driving-assist system, and would likely be augmented by the use of Machine Learning (ML).

How did the AI even realize that observing the trucks in the slow lane was worthwhile to do?

An AI driving system that has learned over time would have the “realization” that trucks often tend to swerve out of their lanes while on curving roads.

This then becomes part-and-parcel of the “awareness” that the AI will have when looking for potential emergency driving situations.
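One way such a “realization” might be represented is as simple statistics accumulated across many drives, per vehicle type and road geometry, which then serve as a prior when scanning for potential emergencies. The sketch below is only a schematic of that idea (hypothetical class and field names), not a description of any production Machine Learning system.

    # Schematic sketch: learning, over many drives, how often large trucks
    # drift out of their lane on curved roads. Hypothetical structure.

    from collections import defaultdict

    class SwerveStatistics:
        def __init__(self):
            # (vehicle_class, road_type) -> [swerve_count, observation_count]
            self.counts = defaultdict(lambda: [0, 0])

        def record(self, vehicle_class: str, road_type: str, swerved: bool):
            entry = self.counts[(vehicle_class, road_type)]
            entry[0] += int(swerved)
            entry[1] += 1

        def swerve_prior(self, vehicle_class: str, road_type: str) -> float:
            swerves, total = self.counts[(vehicle_class, road_type)]
            return swerves / total if total else 0.0

    stats = SwerveStatistics()
    stats.record("truck", "curve", swerved=True)
    stats.record("truck", "curve", swerved=False)
    print(stats.swerve_prior("truck", "curve"))   # 0.5 after these two observations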

True Autonomous Cars And Emergency-Only AI

Let’s now revisit my earlier comments about the nature of emergency-only systems and my illustrative examples of the fire alarm and fire protection systems.

I present to you those earlier points and then recast them into the context of AI self-driving cars:

  • A passive system like the fire alarm pull won’t automatically go off and instead the human needs to overtly activate it

Would a driving emergency-only AI system be set up for only a passive mode, meaning that the human driver would need to invoke the AI system? We might have a button that the human could press to invoke the AI emergency capability, or the human might have a “safe word” that they utter to ask the AI to step into the picture.

Downsides with this include that the human might not realize they need, or even could use, the AI emergency option. Or the human might realize it, but invoke the AI emergency mode only after it is too late for the AI to do anything to avert the incident.

We would also need a means of letting the human know that the AI has “accepted” the request to enter the emergency mode, otherwise the human might be unsure whether the AI got the signal and whether the AI is actually stepping into the driving.

There is also the matter of returning the driving back to the human once the emergency action by the AI has been undertaken. How would the AI be able to “know” that the human is prepared to resume driving the car? Would it ask the human driver, or just assume that if the human is still at the driving controls it is okay for the AI to disengage?
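A minimal sketch of how that hand-shake might be sequenced is shown below (the states, method names, and announcements are all hypothetical, chosen just for illustration): the human invokes the AI, the AI acknowledges before acting, and the driving is handed back only after an explicit confirmation from the human.

    # Illustrative state machine for passive (human-invoked) emergency mode.
    # States, method names, and announcements are assumptions for this example.

    class PassiveEmergencyMode:
        def __init__(self):
            self.state = "human_driving"

        def human_invokes(self):           # button press or spoken safe word
            if self.state == "human_driving":
                self.announce("Emergency assist engaged")   # confirm receipt
                self.state = "ai_driving"

        def human_confirms_ready(self):    # explicit hand-back, never assumed
            if self.state == "ai_driving":
                self.announce("Returning control to you")
                self.state = "human_driving"

        def announce(self, message: str):
            print(message)                 # stand-in for an audio/visual prompt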

  • For a passive system, the human needs to be aware of where and how to activate it, else the passive system otherwise does little good to help save the human

As mentioned, a human driver might forget that the AI is standing ready to take over. Plus, when an emergency arises, the human might be so startled and mentally consumed that they lack the presence of mind to turn over the driving to the AI.

  • An active system like the smoke alarm is constantly detecting the environment and ready to go off as soon as the conditions occur that will activate the alarm

With this approach, the AI is ready to step into the driving task and will do so whenever it deems necessary. This can be handy since the human driver might not realize an emergency is arising, or might realize it but not invoke the AI to help, or might be incapacitated in some manner, wanting to invoke the AI but unable to do so.

The downside here is that the AI might shock or startle the human driver by summarily taking over the driving and catching the human driver off-guard. If so, the human driver might try to take some dramatic action that counters the actions of the AI.

We might also end up with the human driver becoming on edge that at any moment the AI is going to take over. This might cause the human driver to grow suspicious of the AI.

It could be that the AI only alerts the human driver and lets the human driver decide what the human driver wants to do. Or, it could be that the AI grabs control of the car.

  • Some system elements are intended to simply alert the human and it is then up to the human to take some form of action

In this case, if the AI is acting as an alert, the question arises as to how best to communicate the alert. If the AI rings a bell or turns on a red light, the human driver won’t especially know what the declared emergency is about. Thus, the human driver might react to the “wrong” emergency in terms of what the human perceives versus what the AI detected.

If the AI tries to explain the nature of the emergency, this can use up precious time. When an emergency is arising, the odds are that there is little available time to try and explain what to do.

I am reminded that at one point my teenage novice driver children were about to potentially hit a bicyclist and I was tongue-tied trying to explain the situation. I could just say “swerve to your right!” but this offered no explanation for why to do so. If I tried to say “there is a bicyclist to your left, watch out!” this provided some explanation and the desired action would be up to the driver. If I had said “there is a bicyclist to your left, swerve to your right!” it could be that the time taken to say the first part, depicting the situation, used up the available time to actually make the swerving action that would save the bike rider. Etc.
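One way to think about that trade-off is as a time budget: estimate how long each candidate utterance takes to say, and include the explanation only if the remaining time allows it. The sketch below uses made-up speech durations purely to illustrate the idea.

    # Sketch: choosing an alert phrase under a time budget (illustrative numbers).

    def choose_alert(time_available_s: float, time_to_act_s: float) -> str:
        full = "There is a bicyclist to your left, swerve to your right"
        terse = "Swerve right"
        speaking_time = {full: 3.0, terse: 0.8}   # assumed speech durations, seconds

        for phrase in (full, terse):
            # A phrase is usable only if saying it still leaves the driver
            # enough time to physically perform the maneuver.
            if speaking_time[phrase] + time_to_act_s <= time_available_s:
                return phrase
        return terse   # worst case: the shortest command, even if it is tight

    print(choose_alert(time_available_s=2.0, time_to_act_s=1.0))   # "Swerve right"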

  • Some system elements such as a fire sprinkler are intended to automatically engage to save human lives and the humans being saved do not need to directly activate the life-saving effort

This approach involves the AI taking over the driving control, which as mentioned has both pluses and minuses.

  • These emergency-only systems are intended to be used only when absolutely necessary and otherwise are silent, being somewhat out-of-sight and out-of-mind of most humans

Emergency-only AI driving systems are intended only for use when an emergency driving situation arises. This raises the question, though, of what is considered an emergency versus not an emergency.

Also, suppose a human believes an emergency is arising but the AI has not detected it, or maybe the AI detected it and determined that it does not believe that a genuine emergency is brewing. This brings up the usual hand-off issues that arise when doing any kind of co-sharing of the driving task.

  • Such systems are not error-free in that they can at times falsely activate even when there isn’t any pending emergency involved

Some AI developers seem to think that their AI driving system is going to work perfectly and do so all the time. This makes little sense. There is a good likelihood that the AI will have hidden bugs. There is a likelihood that the AI as devised will potentially make a wrong move. There is a chance that the AI hardware might glitch. And so on.

If an emergency-only AI system engages on a false positive, it will likely undermine the human driver’s confidence that the AI is worthy of being engaged at all. There is also the concern of a false negative: if the AI does not take action when needed, the human would assert that they relied upon the AI to deal with the emergency, and it failed in its duty to perform.
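Both failure modes can at least be tracked explicitly. A simple running tally, sketched below with hypothetical names, makes the trade-off visible when tuning how eagerly the AI should engage.

    # Sketch: tallying false activations and missed emergencies (hypothetical).

    class EngagementLog:
        def __init__(self):
            self.true_positives = 0    # engaged, and the emergency was real
            self.false_positives = 0   # engaged, but no real emergency
            self.false_negatives = 0   # did not engage, but an emergency occurred

        def record(self, engaged: bool, emergency_was_real: bool):
            if engaged and emergency_was_real:
                self.true_positives += 1
            elif engaged and not emergency_was_real:
                self.false_positives += 1
            elif not engaged and emergency_was_real:
                self.false_negatives += 1

        def false_alarm_rate(self) -> float:
            engagements = self.true_positives + self.false_positives
            return self.false_positives / engagements if engagements else 0.0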

  • Humans can undermine these emergency-only systems by not abiding by them or taking other actions that reduce the effectiveness of the system

With the co-sharing of the driving task, there is an inherent concern that you have two drivers trying to each drive the car as they see fit.

Imagine that when my children were learning to drive, I had had a second set of driving controls. The odds are that I would have kept my foot on the brake nearly all of the time and kept a steady grip on the steering wheel. This, though, would have undermined their driving effort and created confusion as to which of us was really driving the car. The same can be said of the AI emergency-only driving versus the human driving.

  • Humans will at times distrust an emergency-only system and believe that the system is falsely reporting an emergency and therefore not take prescribed action

Would we lock out the driving controls for the human whenever the AI takes over control due to a perceived emergency? This would prevent the human driver from fighting with the AI over what driving action to take. But the human driver is likely to have qualms about this. Suppose the AI has taken over when there wasn’t a genuine emergency.

We might assume or hope that the AI in the case of acting on a false alarm (false positive) would not get the car into harm’s way. This though is not necessarily the case.

Suppose the AI perceived that the car was potentially going to hit a bicyclist, and so the AI swerved the car to avoid the bike rider. Meanwhile, by swerving the car, another car in the next lane got unnerved and the driver in that car reacted by slamming on their brakes. Meanwhile, by slamming on their brakes, the car behind them slammed into the car that had hit its brakes. All of this being precipitated by the AI that opted to avoid hitting the bicyclist.

Imagine though that the bicyclist took a quick turn away from the car and thus there really wasn’t an emergency per se.
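Returning to the lockout question: if a lockout is used at all, one middle-ground design is to lock the controls only briefly and then let a sustained, deliberate input from the human win out. The sketch below is just that, a sketch, with the window lengths being assumed values rather than recommendations.

    # Sketch of a "soft lockout": the AI holds control for a short window,
    # after which a sustained, deliberate human input regains control.
    # The window lengths are assumed values, not recommendations.

    LOCKOUT_WINDOW_S = 2.0   # AI cannot be overridden during this window
    OVERRIDE_HOLD_S = 1.0    # how long the human must hold an input to override

    def control_owner(seconds_since_takeover: float,
                      human_input_held_s: float) -> str:
        if seconds_since_takeover < LOCKOUT_WINDOW_S:
            return "ai"      # hard lockout while the evasive maneuver completes
        if human_input_held_s >= OVERRIDE_HOLD_S:
            return "human"   # deliberate, sustained input wins
        return "ai"          # otherwise the AI keeps control

    print(control_owner(seconds_since_takeover=3.0, human_input_held_s=1.2))  # "human"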

Conclusion

There are going to be AI systems that are devised to work only on an emergency basis.

Astute ones will be designed to silently detect what is going on and be ready to step into a task when needed.

We’ll need, though, to make sure that humans know when and how the AI is going to take action. Those humans too will be imperfect and might forget that the AI is there, or might even end up fighting with the AI if they believe the AI is wrong to take action or otherwise have qualms about it.

We usually think of an emergency as a situation requiring urgent intervention to avoid or mitigate the chances of injury to life, health, or property. There is a lot of judgment that often comes into play when declaring that a situation is an emergency. When an automated AI system tries to help out, clarity will be needed as to what constitutes an emergency and what does not.

The medical principle of primum non nocere, often associated with the Hippocratic Oath, means first do no harm.

An emergency-only AI system for a self-driving car is going to have a duty to abide by that principle, which I assure you is going to be a high burden to bear.

The emergency-only AI approach is not as easy a path as some might at first glance assume. Indeed, some might consider it insufficient, while for others it is a step forward toward the goal of a fully autonomous AI self-driving car.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
