Self-Driving Cars Need To Deal With Active Shooter Situations

Dr. Lance Eliot, AI Insider


Regrettably, the topic of active shooters has been in the news recently.

Reflecting on these horrific incidents, I remembered one from earlier this year, on March 27, 2019 in Seattle: an active shooter situation that involved a city bus, though fortunately the outcome was not as dire as the recent incidents.

I believe that once autonomous vehicles begin to appear in force on our public roadways, those driverless vehicles might find themselves immersed in an active shooter setting by happenstance, and the AI of the autonomous vehicle should be prepared to respond accordingly.

Let’s explore what happened in the Seattle incident and then see if there are lessons that can be applied to the development of AI systems for autonomous cars and other driverless vehicles.

Seattle Active Shooter Incident In March 2019

Thank goodness for a heroic bus driver in Seattle.

Things went haywire on an otherwise normal day (this is a real story that happened on March 27, 2019).

The metro bus driver managed to save a bus of unsuspecting passengers from grave danger. A crazed gunman was walking along Metro Route 75, wantonly firing his pistol at anything and anyone that happened to be nearby.

Characterized as a senseless and random shooting spree, the active shooter took shots at whatever happened to catch his attention. Unfortunately, the bus entered his field of fire. It was on its scheduled route and unluckily came upon the scene where the gunfire was erupting.

Unsure of what was going on, the bus driver at first opted to bring the bus to a halt. The shooter ran up to the bus and, shockingly, shot directly at the driver. The bullet hit him in the chest. Miraculously, he was not killed. In spite of the injury and the intense bleeding, and with incredible presence of mind and spirit, the bus driver took stock of the situation and decided that the right thing to do was to escape.

He could perhaps have tried to scramble out of the bus and run away, aiming to save himself without any thought for the passengers. Instead, he put the bus into reverse and backed up, which is not an easy task with a bus, nor when you are hemorrhaging from a bullet wound, nor when a gunman is trying to kill you.

After driving a short distance in reverse, he put the bus into forward drive and got several blocks away. His effort to get the bus out of the vicinity of the meandering shooter was smart, saving his passengers and himself from becoming clay pigeons cooped up inside that bus.

Fortunately, he lived, and we can thank him for his heroics.

What To Do In An Active Shooter Situation

Often, an active shooter goes into a building and wreaks havoc therein.

Perhaps you’ve taken a class on what to do about an active shooter. I’ve done so, which was offered for free by the building management that housed my office.

There is a mantra that they drummed into our heads, namely run-hide-fight, or some prefer the sequence of hide-run-fight.

You can use variants of the three words, such as hide-flight-fight, cover-run-fight, and others, whichever is easiest for you to recall. Some even use the three words of avoid-deny-defend.

Active Shooter Situation That Is Outdoors

What would you do when you are outside, and an active shooter gets underway?

You can still use the handy three words of hide-run-fight.

Let’s revisit the mindset of the heroic bus driver.

The bus driver wasn’t standing around. He wasn’t “outside” per se. He was inside a bus. At first, you might assume that being inside a bus is a pretty good form of protection. Not particularly, especially for the driver. The driver is sitting in the driver’s seat, buckled in. There are glass windows all around, so that the driver can see the roadway while driving. It’s kind of a sitting duck situation.

In any case, the heroic bus driver realized that he was still alive and could drive the bus. With that decision made, the next matter to ponder would have been which way to drive the bus.

If there was a nearby wall, maybe pull the bus behind that wall, attempting to hide the entire bus. It seems doubtful in this case that there was any nearby obstruction large enough to hide the bus behind.

This meant that the next choice would be to consider running away. Apparently, if he had chosen to drive forward, he would have been going toward the gunman.

We can also likely assume that trying to go left or right was not much of an option. The bus was probably on a normal street that would have sidewalks, houses or buildings on either side of the street, making it nearly impossible to simply make a radical left or right turn to escape.

Therefore, the bus driver decided to put the bus into reverse.

You might be wondering whether the third element of the three-word approach might have been used in this situation, namely, could the bus driver have chosen to fight?

I’ll dispense with the kind of fighting in which the bus driver jumps out of the bus and tries hand-to-hand combat with the shooter. The bus driver was already wounded and partially incapacitated.

Maybe he could have tried to run over the gunman, using the bus as a weapon.

That would have been a means to “fight” the shooter.

I’d like to add that the bus driver emphasized afterward that he was especially concerned about the bus passengers. Backing up would seem like a means to try to ensure greater safety for the passengers too.

Active Shooter And Reaction By AI Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving driverless autonomous cars.

One rather unusual or extraordinary edge or corner case involves what the AI should do when driving a self-driving car that has gotten itself into an active shooter setting. It’s a hard problem to consider and deal with.

I’ll readily concede that an AI self-driving car coming upon a scene that involves an active shooter is indeed an edge or corner case.

I’m not so willing to concede that there isn’t anything the AI can do about an active shooter situation.

Begin by considering what the AI might do if it was not otherwise developed to cope with the situation. Thus, this is what might happen if we don’t give due attention to this edge case and allow the “normal” AI that’s been developed for the core aspects of driving to deal with the situation at-hand.

Detection And Response Aspects

First, the question arises about detection. Would the sensors of the self-driving car detect that there was an active shooter? Probably not, though let’s clarify that aspect.

The odds are that the sensors would indeed detect a “pedestrian” that was near or on the street. The AI system would be unlikely though to ascribe a hostile intent to the pedestrian, at least not more so than any instance of a pedestrian that might be advancing toward the self-driving car. The gunman won’t necessarily be running at the self-driving car as though he is desiring to ram into it. That’s something that the AI could detect, namely a pedestrian attempting to physically attack or run into the self-driving car.

I’d guess that the gunman is more likely to let his gun do the talking, rather than necessarily charging at the self-driving car on foot.

If the gunman is standing off to the side and shooting, the normally programmed AI for a self-driving car won’t grasp the concept that the person has a gun, and that the gun is aimed at the self-driving car, and that the gunman is shooting, and that there are lethal bullets flying, and that those bullets are hitting the self-driving car.
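To make this concrete, the detection problem can be thought of as fusing several weak cues into an overall suspicion level, rather than recognizing "active shooter" as a single percept. Here is a minimal sketch of that idea; the cue names, weights, and threshold are all invented for illustration and do not correspond to any real perception stack:

```python
# Hypothetical sketch: fusing weak perception cues into a threat score.
# Every cue name and weight below is an invented placeholder, not the
# output of any actual detector.

def threat_score(cues: dict) -> float:
    """Combine cue confidences (each in [0, 1]) into a weighted score."""
    weights = {
        "object_resembles_firearm": 0.35,   # vision: gun-like object held
        "pedestrian_motion_anomalous": 0.15,  # gait/motion out of the ordinary
        "bystanders_fleeing": 0.25,         # crowd scattering away from a point
        "impulsive_loud_sounds": 0.25,      # audio: gunshot-like reports
    }
    return sum(weights[k] * cues.get(k, 0.0) for k in weights)

def classify(cues: dict, threshold: float = 0.5) -> str:
    """Flag the scene as suspicious only when enough cues agree."""
    if threat_score(cues) >= threshold:
        return "suspected_active_shooter"
    return "normal"
```

The point of the weighting is that no single cue should trigger an extreme response on its own; a person merely holding a gun-shaped object, or a lone loud bang, would stay below the threshold unless corroborated.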

That kind of logical thinking is something that AI does not yet have per se. There isn’t any kind of everyday common-sense reasoning for AI as yet.

The AI then is not going to do anything special about there being an active shooter. In the bus driver scenario, it’s likely the AI would have just kept driving forward.

If there was an occupant or various passengers in a self-driving car, the situation might make them into sitting ducks. The AI self-driving car would not realize that something is amiss. It would be driving at the legal speed limit or below, trying to proceed safely down the street. The passengers would need to either persuade the AI to drive differently, or they might need to hide inside the self-driving car and hope the bullets don’t hit them, or they might need to escape from the AI self-driving car.

For the escape from an AI self-driving car, the occupants might try to tell the AI to slow down or come to a stop, allowing them to leap out. If the AI won’t comply, or if it takes too long to do so, the occupants might opt to get out anyway, even while the self-driving car is in-motion. Of course, jumping out of a moving car is not usually a wise thing to do, but if it means that you can avoid possibly getting shot, it probably would be a worthy risk.

Suppose the occupants try to tell the AI what is happening and do so to persuade the AI to drive the self-driving car in a particular manner, differently than just cruising normally down the street. This is not going to be easy to have the AI “understand” and once again brings us into the realm of common sense reasoning (or the lack thereof).

You could try to make things “easy” for the AI by having the human occupants merely tell the AI to stop going forward and immediately go into reverse, proceeding to back-up as fast as possible. This seems at first glance like a simple way to solve the matter. But let’s think about this. Suppose there wasn’t an active shooter. Suppose someone that was in an AI self-driving car instructed the AI to suddenly go in reverse and back-up at a fast rate of speed.

Would you want the AI to comply?

Maybe yes, maybe no. It certainly is a risky driving maneuver. You could argue that the AI should comply, as long as it can do so without hitting anything or anybody. This raises a thorny topic of what kinds of commands or instructions we want to allow humans to utter to AI self-driving cars, and whether those directives should be carried out by the AI obediently and without hesitation.
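One way to frame that conundrum is as an arbitration rule: the AI weighs who asked for the risky maneuver against what its own sensors corroborate. The sketch below is purely illustrative; the function, fields, and policy choices are assumptions of mine, not any deployed system's logic:

```python
# Hypothetical sketch: arbitrating a human's "reverse at speed" command.
# The policy encoded here is an assumption for discussion, not a real
# self-driving control interface.

from dataclasses import dataclass

@dataclass
class ManeuverDecision:
    comply: bool
    reason: str

def arbitrate_reverse_command(rear_path_clear: bool,
                              threat_suspected: bool,
                              human_requested: bool) -> ManeuverDecision:
    # A fast reverse is inherently risky, so it is never granted when
    # the rear path is occupied, regardless of who asked for it.
    if not rear_path_clear:
        return ManeuverDecision(False, "rear path obstructed")
    if threat_suspected:
        # The AI's own sensors corroborate danger: act decisively.
        return ManeuverDecision(True, "threat corroborated: reverse at max safe speed")
    if human_requested:
        # Uncorroborated human request: comply, but cautiously.
        return ManeuverDecision(True, "human request only: reverse at low speed")
    return ManeuverDecision(False, "no request and no threat detected")
```

Notice that the rule never lets a human override the physical safety check, yet still honors an uncorroborated plea at reduced speed, splitting the difference between blind obedience and blind refusal.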

It’s a conundrum.

Controversial Use Of A Fight Option

For an AI self-driving car, suppose there are human occupants, and they are in the self-driving car when it encounters an active shooter setting. I’ve just mentioned the idea that the humans might instruct the AI to escape or run away from the situation.

Imagine instead that the human occupants told the AI to “fight” and run down the active shooter.

Unfortunately, we are now into a very murky area about AI. If the AI has no common-sense reasoning, and it cannot discern that this is a situation of an active shooter, it would be doing whatever the human occupants tell it to do.

What if the human occupants tell the AI to run someone down, even though the person is not an active shooter? Maybe the person is someone the occupants don’t like.

Here’s where things are with the current approach to AI self-driving cars and an active shooter predicament:

  • The AI won’t particularly detect an active shooter situation.
  • The AI correspondingly won’t react to an active shooter situation in the same manner that a human driver might.
  • Furthermore, human occupants inside the AI self-driving car are likely to be at the mercy of the AI driving as though it is just a normal everyday driving situation. This would tend to narrow the options for the human occupants of what they might do to save themselves.

And that’s why I argue that we do need to have the AI imbued with some capabilities that would be utilized in an active shooter setting. Let’s consider what that might consist of.

AI Detecting Active Shooter Situation

How would the AI ascertain that an active shooter and an active shooting is underway in its midst?

The answer would seem to be found in examining how a human driver would ascertain the same matter. It is likely that the bus driver in Seattle was able to see that the gunman was in or near the street and was carrying a gun. The gunman might have appeared to be moving in an odd or suspicious manner, which might have been detected by knowing how pedestrians usually would be moving. There might have been other people nearby that were fleeing. In a manner of speaking, it is the Gestalt of the scene.

An AI system could use the cameras and other sensors to try and determine the same kinds of telltale aspects.

Once we have V2V (vehicle-to-vehicle) electronic communications as part of the fabric of AI self-driving cars, whichever cars or other vehicles first come upon such a scene would potentially be able to send out a broadcast warning to other nearby AI self-driving cars to be wary. If the bus driver had gotten a heads-up before driving onto that part of the Metro Route, he undoubtedly would have taken a different path and avoided the emerging dire situation.
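Such a broadcast warning could be as simple as a small datagram that receiving vehicles use to decide whether to reroute. The message fields below are invented for illustration and do not follow any actual V2V message standard (such as SAE J2735):

```python
# Hypothetical sketch: a minimal V2V hazard warning and a receiver-side
# reroute check. Field names and the message type are made up for
# illustration; they do not match any real V2V standard.

import json
import time

def make_v2v_warning(lat: float, lon: float, confidence: float) -> str:
    """Serialize a hazard warning datagram as JSON."""
    msg = {
        "type": "HAZARD_ACTIVE_SHOOTER_SUSPECTED",
        "lat": lat,
        "lon": lon,
        "confidence": round(confidence, 2),  # sender's suspicion level, 0..1
        "timestamp": int(time.time()),
    }
    return json.dumps(msg)

def should_reroute(raw_msg: str, min_confidence: float = 0.6) -> bool:
    """Receiver-side check: reroute only for sufficiently confident warnings."""
    data = json.loads(raw_msg)
    return (data["type"] == "HAZARD_ACTIVE_SHOOTER_SUSPECTED"
            and data["confidence"] >= min_confidence)
```

The confidence threshold on the receiving side matters: a low-confidence rumor propagating through a fleet could needlessly reroute traffic across a whole neighborhood, so each car would weigh the warning rather than obey it outright.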

I’ve already mentioned that the human passengers, if any, might be able to clue-in the AI about the situation. Such an indication by the passengers would need to be taken with a grain of salt: the passengers might be mistaken, they might be drunk, they might be pranking the AI, or a slew of other reasons might explain why they could be faking or falsely stating what is taking place.

Another element would be the gunshots themselves. Humans would likely realize there is a gunman shooting due to the sounds of the gun going off, even if the humans didn’t see the gun or see the muzzle blast or otherwise could not visually see that a shooting was underway.
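On the audio side, one crude first step is to flag impulsive, high-amplitude events in the microphone stream. A real gunshot classifier would rely on spectral features and a trained model; the toy amplitude gate below is only a sketch of the idea, with made-up parameters:

```python
# Hypothetical sketch: flagging gunshot-like impulsive sounds by
# amplitude alone. A production system would use spectral features and
# a trained classifier; this toy threshold is for illustration only.

def detect_impulsive_sounds(samples, sample_rate, threshold=0.8,
                            min_gap_s=0.25):
    """Return sample indices of amplitude spikes, merging spikes that
    occur within min_gap_s of each other into a single event."""
    events = []
    last = -10**9  # index of the previous accepted event
    min_gap = int(min_gap_s * sample_rate)
    for i, s in enumerate(samples):
        if abs(s) >= threshold and i - last >= min_gap:
            events.append(i)
            last = i
    return events
```

Even this crude gate illustrates why audio helps: a gunshot produces a sharp report the cameras may never see, and a burst of several such events in quick succession is a strong corroborating cue for the other sensors.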

Detection of the active shooting is the key to then deciding what to do next.

If the AI has detected that there is an active shooting, which might be only partially substantiated and therefore just a suspicion, the AI action planning subsystem needs to be ready to plan out what to do accordingly.

The sensors need to be able to detect the situation. The sensor fusion needs to put together multiple clues as embodied in the multitude of sensory data being collected. The virtual world modeling subsystem has to model what the situation consists of. The AI action planner needs to interpret the situation and do what-if’s with the virtual world model, trying to figure out what to do next. The plan, once figured out, needs to be conveyed via the self-driving car controls commands.

What kind of AI action plans might be considered and then undertaken?

Hide-run-fight.

That’s the hallmark of what the AI needs to review.
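As a thought experiment, the hide-run-fight priority ordering for the action planner might be sketched as a simple decision cascade over the virtual world model. Every key in this toy world model is an invented placeholder, and note that the "fight" branch is deliberately absent, for reasons taken up next:

```python
# Hypothetical sketch: a hide-run priority cascade for the AI action
# planner. The world-model keys are invented placeholders, and "fight"
# is intentionally omitted from the options.

def plan_response(world: dict) -> str:
    """Pick a response plan from a toy world model of boolean flags."""
    if not world.get("threat_suspected"):
        return "continue_normal_driving"
    # Hide first: cover protects without requiring a risky maneuver.
    if world.get("cover_available") and world.get("cover_reachable"):
        return "hide_behind_cover"
    # Then run: prefer whichever escape path is clear.
    if world.get("rear_path_clear"):
        return "reverse_and_retreat"
    if world.get("forward_path_clear_away_from_threat"):
        return "drive_forward_away"
    # No viable hide or run option: hold position and summon help.
    return "stop_and_alert_authorities"
```

The cascade mirrors the Seattle incident: no cover was available, forward led toward the gunman, so reversing was the first viable branch.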

Should the AI self-driving car consider the “fight” possibilities?

As mentioned earlier, it’s a tough one to include. If a “fight” posturing would imply that the AI would choose to try and run over the presumed active shooter, it opens the proverbial Pandora’s box about purposely allowing or imbuing the AI with the notion of injuring or killing someone. Some critics would say that it is a slippery slope, upon which we should not get started. Once started, those critics worry how far might the AI then proceed, whether the circumstance warranted it or not.

Conclusion

I’m sure we all would hope that we’ll never be drawn into an active shooter setting. Nonetheless, if a human driver came upon such a situation, it’s a reasonable bet that the driver would recognize what is happening and would try to figure out whether to hide, run, or fight, making use of their car, unless they thought that abandoning the car and fleeing on foot was the better option.

AI self-driving cars are not yet being set up with anything to handle these particular and admittedly peculiar situations. That makes sense in that the focus right now is on nailing down the core driving tasks. As an edge or corner case, dealing with an active shooter is a lot further down on the list of things to deal with.

In theory, maybe we will be living in a Utopian society once AI self-driving cars are truly at a Level 5 and no one will ever be confronted by an active shooter. Regrettably, I doubt that society will have changed to the degree that there won’t be instances of active shooters.

For that reason, it would be wise to have an AI self-driving car that is versed in how to contend with those kinds of life-and-death moments.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot


Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
