Here’s Why AI Self-Driving Cars Need A Special ‘Crash Mode’ Capability

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]


While I was innocently sitting at a red light, a car rammed into the rear of my car. I was not expecting it.

Things began to happen so quickly that I barely remember what actually did happen once the crash began.

Within just a few brisk seconds, my car was pushed into a car ahead of me, ripping the back and left side of my car. The gas tank ruptured and gasoline leaked onto the ground, my airbag deployed, most of the windows fractured, and bits of glass flew everywhere. Basically, all heck broke loose.

This actually happened some years ago when I was a university professor.

Fortunately, none of us were badly injured, but if you saw a picture of my car after the incident, you’d believe that no one in my car should or could have survived the crash.

My car was totaled.

I think back to that crash and can readily talk about it today, but at the time it was quite a shocker.

Speaking of shock, I am pretty sure that I must have temporarily gone into shock when the crash first started.

I say this because I really do not remember exactly how things went in those few life-threatening seconds. All I can remember is that I kind of “woke-up” in that I consciously realized the airbag had deployed, and that my windshield was busted, otherwise I was utterly confused about what was going on. It was as though a magic wand had transformed my setting into some other bizarre world.

The main point about the incident is that my mind was blurry about those key seconds between getting hit from behind and realizing that I was sitting in my driver’s seat, glass around me and my airbag in front of me.

I wondered whether there was anything I could have done once the impact began.

Suppose I had been forewarned and told or knew that a car was going to violently ram into the back of my car. Let’s further assume that I didn’t have sufficient time to get out of the way or make any kind of evasive maneuver.

It’s an interesting problem to postulate.

Acting As A Car Crash Begins To Emerge

We usually think about the ways to avoid a car accident.

What about trying to cope with an accident that is already emerging? Suppose you have a brief chance to take some form of action that might reduce the impact, even though you cannot fully avoid the incident.

In this case, if I had some premonition or clue that the accident was going to happen, maybe I could have tried to turn the wheels of the car so that it might move away from the car ahead of me once my car got rammed.

Or, maybe I might have put on the parking brake in hopes it would further keep my car from being pushed by the ramming action.

The medics at the scene told me that I was probably lucky that I did not realize the ramming was going to occur, since most people tense up, which tends to worsen injuries.

Suppose that my mind had remained clearly alert and available during those few seconds in which the accident evolved. I mentioned earlier that I have no particular recall and those moments are blurry in my mind, but let’s pretend differently.

Pretend that my mind was completely untouched and could act as though it was separate from the severe contortions happening to my physical body.

What then?

Dealing With Out-Of-Control Cars

In essence, there are actions you can take to bring a car back into control, or you can take no actions and hope for the best, or you can take misguided actions that cause the car to go further out of control.

You need to not only determine the proper course of action, but also:

  • try to prevent the situation from getting worse
  • take into account what your car can and cannot do
  • consider how any damage the car is sustaining will limit what you can do
  • weigh a slew of other factors

AI Autonomous Cars And Cars Out-Of-Control

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that few automakers and tech firms are considering at this time is the special nature of driving a car while it is in crash mode, and how the AI should be equipped to do so.

In terms of AI self-driving cars and getting involved in car crashes, most of the automakers and tech firms are focused on avoiding car crashes and not particularly considering what the AI should do once a car crash is imminent or underway.

This is troubling.

If the AI is not intentionally established to have special processes or procedures for dealing with a car accident once underway, it means that the AI self-driving car is essentially going to become out-of-control.

Part of the grave concern with this kind of thinking is that it means that when an AI self-driving car does get into a car accident, it will likely do little to try to minimize the impacts and be unable or ill-equipped to find ways to either escape or at least try to “improve” upon a bad situation.

I’ve predicted many times over that when AI self-driving cars get into car crashes, society is going to become hyper-focused on why and how it happened, and the entire future of AI self-driving cars is going to be based on these instances. It becomes the classic “bad apple” that spoils the entire barrel.

Special AI Capability For Crash Mode

There is no doubting that the “crash mode” becomes a highly complex problem.

The self-driving car is likely becoming less drivable as the car crash clock ticks, starting at time t=0. At some later time, call it t+1, perhaps the brakes are no longer functioning. At t+2, it could be that the car is now in a slide as a result of the ramming, the wheels unable to gain traction to redirect the self-driving car. And so on.
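As a rough illustration of that ticking clock, one could model the crash as a sequence of discrete ticks during which the set of usable controls shrinks. This is a hypothetical sketch, not any automaker’s actual architecture; the control names and the failure schedule are invented for illustration.

```python
# Hypothetical sketch: modeling a crash as discrete ticks (t=0, t+1, ...)
# during which driving controls progressively become unavailable.
# Control names and the degradation sequence are illustrative only.

def usable_controls(tick, failures):
    """Return the controls still believed usable at a given tick.

    failures: list of (failed_at_tick, control_name) pairs.
    """
    all_controls = {"steering", "brakes", "throttle", "parking_brake"}
    lost = {control for failed_at, control in failures if tick >= failed_at}
    return all_controls - lost

# Example schedule: brakes fail at t+1, throttle traction is lost at t+2.
failures = [(1, "brakes"), (2, "throttle")]
for t in range(3):
    print(t, sorted(usable_controls(t, failures)))
```

The point of such a model is that the AI’s choice of action at each tick must be drawn only from whatever controls remain, not from the full repertoire it had at t=0.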

The AI is going to have “driving controls” issues to cope with. Though the AI doesn’t have arms or legs, it does have electronic systems and various means to undertake the driving controls of the self-driving car.

Are those driving controls still available to the AI or might they have been damaged or cut as a result of the underway car crash as it evolves in those split seconds?

In essence, you have these aspects:

  • AI system as “mindset” for driving the car
  • Sensors of the self-driving car that the AI needs to sense what’s happening
  • Car controls for the AI to use to drive or control the self-driving car

The AI system itself might be degraded or faulty during the car crash and must have a provision to ascertain its own status and reliability. That self-assessment would then be used to decide which actions the AI ought to take, and which actions it ought to avoid.

The sensors, such as the cameras, LIDAR, radar, and ultrasonic units, can become degraded or faulty during the car crash.

The car controls might no longer be accessible by the AI, due to the car crash aspects as they unfold.

Or maybe the AI can issue car control commands, but the controls themselves are non-responsive; or the controls attempt to carry out the order, but the physics of the evolving situation preclude the car from physically carrying out the instructions.
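The distinction between a command that cannot be issued, a non-responsive actuator, and a command that the physics of the moment defeats could be captured by issuing the command and then verifying its effect through the sensors. A minimal sketch, with invented function and status names:

```python
# Hypothetical sketch: issue a control command, then classify the outcome.
# The status strings and callback signatures are invented for illustration.

def attempt_command(send, observe_effect):
    """Try a control command and classify what happened.

    send: callable returning True if the actuator accepted the command.
    observe_effect: callable returning True if sensors saw the intended effect.
    """
    acknowledged = send()
    if not acknowledged:
        return "control_unavailable"   # actuator never accepted the command
    if not observe_effect():
        return "command_ineffective"   # accepted, but physics defeated it
    return "command_effective"

# Example: the steering actuator accepts the command, but the car is
# sliding and the sensors report no change in heading.
result = attempt_command(send=lambda: True, observe_effect=lambda: False)
print(result)  # command_ineffective
```

Whatever the real interfaces look like, some closed-loop verification of this sort seems necessary: the AI cannot assume a command it issued actually changed the car’s trajectory.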

Impacts To Human Passengers Inside The Autonomous Car

One aspect that I’ve not brought up herein involves the AI having to decide what to do about any human passengers that are in the AI self-driving car.

This is quite important and must be taken into consideration.

Here’s what I mean.

For the AI to consider what action to take during the car crash, there is the matter of how the humans within the AI self-driving car are going to be impacted too.

Which is better or worse for the passengers, having the AI attempt to accelerate out of the full impact or instead maybe letting the impact happen but steer the car so that the impact happens on one side of the car versus the side that the humans are sitting in?

I know that you might be saying that the AI should seek to save all humans inside of the AI self-driving car. Sorry, that’s too easy an answer.

There is a myriad of options that the AI might be able to consider.

Each of those options will involve uncertainties.
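One way to frame a myriad of options, each with uncertainties, is as an expected-harm calculation: every candidate action has possible outcomes with estimated probabilities and severities, and the AI picks the action with the lowest expected harm. This is a deliberately simplified sketch; the option names, probabilities, and harm scores are made up for illustration, and real estimates would be far harder to obtain.

```python
# Hypothetical sketch: choosing among crash-mitigation options by
# minimizing expected harm. All numbers below are made up.

def expected_harm(outcomes):
    """outcomes: list of (probability, harm_score) pairs."""
    return sum(p * harm for p, harm in outcomes)

options = {
    "accelerate_out":    [(0.5, 2.0), (0.5, 9.0)],  # might escape, might worsen
    "steer_impact_side": [(0.8, 4.0), (0.2, 6.0)],  # take the hit on the empty side
    "do_nothing":        [(1.0, 7.0)],              # let the impact happen as-is
}

# Pick the option whose expected harm is lowest.
best = min(options, key=lambda name: expected_harm(options[name]))
print(best)  # steer_impact_side
```

Note that the “obvious” answer of saving everyone is not among the options; each choice merely trades one distribution of harm for another, which is exactly why the easy answer is too easy.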

Twists And Turns Involved

The AI has quite an arduous problem to solve during a car crash.

That’s partially why some AI developers are avoiding it: it is a really tough nut to crack.

I don’t think that this means that we should just shrug it off and wave our hands in the air.

Without any kind of crash mode capability, the AI is going to be potentially useless and merely add fuel to the fire of the self-driving car becoming a kind of unguided missile.

When I say this, there are some that try to retort that the AI might have a crash mode and try to deal with a car crash as it evolves, and yet ultimately be unable to do anything of substance anyway. It might be that the car controls are unavailable or non-functioning. It might be that the choices of what to do are so rotten that doing nothing is a better choice. Etc.

Yes, it is true that the AI might end up unable to lessen the car crash repercussions. Does this mean that the AI should not even try? Are you willing to toss away the chance that the AI might be able to assist? I don’t believe that’s prudent, nor is it what we might hope a true AI self-driving car would do.

Each situation will have its own particulars that dictate what becomes feasible during the crash.

Was there any in-advance indication of what was about to occur?

Was any preparation possible prior to the actual crash?

Once the crash began, what possibilities existed of still being able to exert control over the self-driving car?

Throughout the car crash, what could be done and what was done?

There’s also the post-crash aspects too.

Use Of Machine Learning

Here’s a bit of a twist that might catch your interest.

Some say that we should be using Machine Learning (ML) or Deep Learning (DL) to cope with and aid the crafting of the special “crash mode” of the AI for a self-driving car.

The notion is that the use of deep or large-scale artificial neural networks might allow the AI to identify patterns in what to do during car crashes. By examining perhaps hundreds or thousands of car crashes, in much the same way that ML or DL examines pictures of street signs to learn what a street sign consists of, maybe the AI could become versed in handling car crashes.

This seems sensible.

One question for you: where are we going to get all of the car crash data needed for the ML or DL training? Right now, the automakers and tech firms building AI self-driving cars are doing everything they can to avoid car crashes.

There isn’t a vast trove of car crash data available to do this kind of pattern matching and training with.

Sure, there are tons of car crashes daily that are occurring with conventional cars.

This, though, does not encompass the kind of car crash data we need to collect. For today’s car crashes, at best there is information about what happened before the crash and what the end result was.

Whatever happened in between is rarely captured as data, let alone analyzed.

Unfortunately, there is not a handy treasure trove of car crash data that includes what took place during the crash itself.

The closest that we can come would be to use simulations.
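If simulations are the closest substitute, the idea would be to run many simulated crash episodes and harvest per-tick records of state, action taken, and result, building the during-crash dataset that real-world telemetry lacks. The simulator interface and field names below are invented for illustration; a real crash simulator would be vastly more elaborate.

```python
# Hypothetical sketch: harvesting per-tick records from simulated crash
# episodes to build an ML/DL training set. Fields and dynamics are
# invented for illustration, not a real crash simulator.
import random

def run_crash_episode(seed):
    """Simulate one crash and log what happened at each tick."""
    rng = random.Random(seed)          # seeded for reproducible episodes
    records = []
    speed = rng.uniform(10.0, 30.0)    # speed in m/s at the moment of impact
    for tick in range(5):
        action = rng.choice(["brake", "steer_left", "steer_right", "none"])
        speed = max(0.0, speed - rng.uniform(0.0, 8.0))  # crude deceleration
        records.append({"tick": tick, "action": action, "speed": speed})
    return records

# Build a small dataset from many simulated episodes.
dataset = [rec for seed in range(100) for rec in run_crash_episode(seed)]
print(len(dataset))  # 500
```

The value of the simulation approach is precisely that it fills in the in-between ticks that real-world crash reports omit, though how faithfully simulated crashes match real crash physics remains an open question.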


We need to have the AI of a self-driving car be able to deal with car crashes.

This includes not just the pre-crash and post-crash aspects, which is where AI developers usually aim their attention.

There must be a “crash mode” that is able to cope with the unwinding or evolving elements that happen during a car crash.

The crash mode could be a kind of last-resort core portion that does what it can to stay aware of the moment-to-moment situation and exert any car control it can, in hopes of minimizing injury, death, or damage. In a manner of speaking, much like humans, the AI can suffer a type of “artificial shock” in which it becomes degraded at figuring out what is taking place and what can be done about the emerging situation.

The complexities during a crash are enormous.

What on the self-driving car is still working and usable?

What is the status of the humans on-board? What is the situation outside the self-driving car?

How can all of these variables be coalesced into a sensible plan of action and carried out by the AI?

The odds are that whatever plan the AI derives will need to be instantly re-planned, because the situation is rapidly changing.

The AI potentially has fast-enough processing speed to try to find ways to cope with the car crash while it is occurring and take rudimentary actions related to the self-driving car. For the sake of AI self-driving cars, and for the sake of human lives, let’s put some keen focus on having “crash mode” savvy AI.

For free podcast of this story, visit:

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see:

To follow Lance Eliot on Twitter:

For his blog, see:

For his AI Trends blog, see:

For his Medium blog, see:

For Dr. Eliot’s books, see:

Copyright © 2019 Dr. Lance B. Eliot

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
