What The AI Needs To Do When Humans Inside A Self-Driving Car Panic
Dr. Lance Eliot, AI Insider
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
Wait, change that, go ahead and panic.
Are you panicked yet?
Sometimes people momentarily lose their minds and opt to panic.
This primal urge can be handy, invoking the classic fight-or-flight instinct, though it can also cause people to make poor decisions and land themselves in precarious positions.
Contagion And Panic
When a person panics while in a crowd, the panic can spread like a virus.
When one person panics, others often opt to do the same. This could be a monkey-see, monkey-do kind of reaction.
Or, it could be a follow-the-leader reaction, in which others assume that the panicking person knows something they don’t.
Ranges Of Panic Behavior
There are ranges of panic.
You’ve got your everyday typical panic.
You’ve got severe panic, where the person is truly crazed and out of their head.
You’ve got the person who seems to be continually in a semi-panic mode, no matter the situation.
And so on.
We’ll use these classifications for now:
- No panic
- Mild panic
- Panic (everyday style)
- Severe panic
These forms of panic can be one-time, intermittent, or persistent. Frequency is therefore an added element to consider:
- One-time panic (of any of the aforementioned kinds)
- Intermittent panic
- Persistent panic
We can also add another factor, one that some would fervently debate: deliberate panic versus happenstance panic.
Most of the time, for most people, panic is happenstance. It happens, and they have little or no control over it. It is like an ocean wave that rises, crests, and then dissipates.

Some, though, claim they can consciously use panic to their advantage, wielding it like a tool. If the circumstance warrants, they force themselves to deliberately go into a panic mode, hoping it might give them herculean strength or otherwise get their adrenaline going. Whether you can truly harness panic and use it like a domesticated horse is debatable.
In any case, here are the two factors:
- Happenstance panic (most of the time)
- Deliberate directed panic (rare)
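One way to picture the taxonomy above is as a small data model. This is only an illustrative sketch; the class names, numeric values, and the intervention threshold are my own assumptions, not anything from a production system:

```python
from dataclasses import dataclass
from enum import Enum

# Severity levels from the classifications above (ordered for comparison).
class Severity(Enum):
    NONE = 0
    MILD = 1
    EVERYDAY = 2
    SEVERE = 3

# How often the panic occurs.
class Frequency(Enum):
    ONE_TIME = "one-time"
    INTERMITTENT = "intermittent"
    PERSISTENT = "persistent"

# Happenstance (most of the time) versus deliberate (rare).
class Origin(Enum):
    HAPPENSTANCE = "happenstance"
    DELIBERATE = "deliberate"

@dataclass
class PanicAssessment:
    severity: Severity
    frequency: Frequency
    origin: Origin

    def warrants_intervention(self) -> bool:
        # Illustrative rule: everyday-or-worse panic, or any persistent panic.
        return (self.severity.value >= Severity.EVERYDAY.value
                or self.frequency is Frequency.PERSISTENT)

assessment = PanicAssessment(Severity.MILD, Frequency.PERSISTENT, Origin.HAPPENSTANCE)
print(assessment.warrants_intervention())  # → True (persistent panic merits attention)
```

Separating severity, frequency, and origin keeps the dimensions independent, which matters later when the AI must decide how strongly to weigh a panicked occupant’s input.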
Panic Related To Cars
Let’s consider how panic can come to play when driving a car.
If you watch a teenage novice driver, you are likely to see moments of panic.
When they are first learning to drive, they are often quite fearful about the driving task and the dangers involved (rightfully so!). As long as the drive is going smoothly, they can generally keep their wits about them. This is why it is usually safest to start them in an empty parking lot: there is nothing to be distracted by, and there are fewer things to hit.
Suppose a teenage novice driver is driving in a neighborhood and a dog darts out from behind some bushes.
For more seasoned drivers, this is predictable, something you’ve likely seen before. You might apply the brakes or take other evasive action without much panic ensuing.
In contrast, the novice driver might begin to feel their blood pumping through their body, their heart seems to pound incessantly, their hands grip the steering wheel with a death-like grasp, their body tenses up, they lean forward trying to see every inch of the road, and so on.
Autonomous Cars And Human Panic While Inside The Vehicle
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One important aspect involves considering what humans might do while inside an AI self-driving car and how to cope with their potential panic.
For the case of a dog that darts out into the street, let’s change the scenario and assume that you are in an AI self-driving car.
The AI is driving the car.
You are not driving the car.
There isn’t any provision, let’s say, for you as a human to drive the car. There are no pedals and no steering wheel.
The driving is entirely up to the AI system.
Maybe you were reading the newspaper and enjoying having the AI drive you around the neighborhood. Out of the corner of your eye, you see that a dog has suddenly darted into the street.
What do you do?
For those of us that have grown up in an era of cars that allow humans to drive, I’d bet you’d be sorely tempted to suddenly take control of the car.
You might instinctively reach for where the steering wheel used to be placed, or you might use your leg and jam downward instinctively as though you are slamming on the brakes. But, in this case, none of that is going to do any good. You are not driving the car.
As an aside, if we do ever become a society where only the AI is the driver, and you have people that have never driven a car themselves, I would guess that they won’t react as you do, in that they aren’t going to be tempted to “drive” the car, since they have always accepted the notion that it’s up to the AI to do so. Eerie, kind of.
Anyway, back to that poor dog that’s run into the street and is facing potential injury or death at the hands of the AI.
You can see that the dog is possibly going to get hit.
You are likely hoping or assuming that the AI will detect the dog and take some kind of evasive maneuver. But in those few seconds between your realizing the situation and the AI overtly reacting, you aren’t sure what the AI is going to do.
You don’t even know if the AI realizes that the dog is there.
Perhaps you have blind faith in the AI and so you simply slump back in your seat. You are calm because you know that the AI will make “the right decision,” which might be to avoid the dog, or might be to hit the dog as the lesser of two evils (if swerving might injure or kill you, the AI might choose to hit the dog instead).
I’m betting the odds are high that you’ll be very concerned about the welfare of the dog, and concerned too about what driving action the AI is going to take.
If the AI makes a wild maneuver, maybe it goes off the road and runs into a tree, and you get injured. Perhaps the AI doesn’t recognize that there’s a dog ahead and isn’t going to do anything other than straight out hit the dog. This could harm or kill the dog, and likely damage the car, and you might get hurt too.
Well, in this situation, you might panic.
You could wave frantically in hopes that the dog will see you, but the odds of that working are low, since the car has tinted windows and the windows are all rolled up.
What The AI Should Do
Here’s a question for you to ponder — what should the AI do?
Now, I’m not referring to whether the AI should hit the dog or avoid the dog; I’m asking what the AI should do about you, the human occupant of the self-driving car.
Few of the automakers and tech firms are considering that question right now.
They are so focused on getting an AI self-driving car to handle the everyday driving task that they treat the human occupants as an “edge” problem. An edge problem is one that sits at the periphery rather than the core; it is something you figure you’ll get to when you get to it.
The AI in our scenario is presumably focusing on the dog and what to do about the driving.
Should it though also consider the humans inside the self-driving car?
Should it be observing the humans to see how they are doing?
Should it be listening for the humans to possibly say something that maybe the AI needs to know?
If the AI of the self-driving car is only paying attention to the outside world, it might miss something that a passenger inside the car has noticed. The passenger might provide valuable and timely information, as in my example of the dog running into the street.
As a human driver, you already know that sometimes a passenger in your car might panic. They might see that dog, your passenger yells and screams about the dog, flails their arms, and you meanwhile are trying to keep a cool head.
Would we want the AI to be like that calm driver, allowing the passengers in the self-driving car to provide input that might or might not be useful and might or might not be timely, or do we want the AI to completely ignore the human occupants?
It is our belief that the AI should observe the human occupants and gain their input, but this must be tempered by the situation; the AI cannot simply and obediently do whatever the human might utter.
We also believe that it will be important for the AI to at times explain what it is doing and why. If the AI had told the human occupants that there was a dog in the road and that the AI was going to swerve to avoid it, the human occupants would be at least reassured that the AI realized the dog was there and that the AI was going to take action.
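This kind of explanation could be a simple hook in the planner that pairs each maneuver with a human-facing announcement. A minimal sketch, with entirely hypothetical function names and phrasing:

```python
# Illustrative sketch: pair each evasive maneuver with a spoken explanation
# for the occupants, so they know the AI has seen the hazard and is acting.
def plan_response(obstacle: str, action: str) -> tuple[str, str]:
    """Return the driving action plus a human-facing explanation of it."""
    explanation = f"I see a {obstacle} ahead. I am going to {action}."
    return action, explanation

action, message = plan_response("dog in the road", "swerve to avoid it")
print(message)  # → I see a dog in the road ahead. I am going to swerve to avoid it.
```

The point is not the string formatting but the coupling: the announcement is generated from the same decision the planner actually made, so the reassurance cannot drift out of sync with the driving behavior.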
Complexities Of Handling Human Panic
Just like you aren’t supposed to yell “Fire!” in a crowded theater (unless there is a fire, presumably), the AI cannot blindly do whatever the human might say.
Suppose the human tells the AI to slam on the brakes and come to an immediate halt, yet the self-driving car is going 80 miles per hour on a crowded freeway with a semi-truck right on its heels.
Does hitting the brakes in that scenario make sense?
So, the AI needs to filter and gauge a human occupant’s input to the driving task based on the situation.
Furthermore, if the human seems to be panicked, that is a further reason for caution about whatever the human has to say.
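The gating just described can be sketched as a small decision rule. Everything here is a hypothetical illustration (the command names, speed threshold, and fallback actions are my own assumptions), not a real control-system API:

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    speed_mph: float
    tailgater_close: bool  # e.g., a semi-truck right on our heels

# Illustrative rule: the AI does not obey a panicked "stop now!" verbatim;
# it gauges the command against the situation and, if a full stop is
# unsafe, substitutes a safer action.
def gate_command(command: str, state: VehicleState, occupant_panicked: bool) -> str:
    if command == "emergency_stop":
        # Slamming the brakes at freeway speed with a truck close behind
        # risks a rear-end collision; degrade to a gradual slowdown.
        if state.speed_mph > 50 or state.tailgater_close:
            return "slow_gradually_and_seek_safe_stop"
        return "emergency_stop"
    if occupant_panicked:
        # Any other input from a panicked occupant gets extra scrutiny.
        return "hold_course_and_reassess"
    return command

print(gate_command("emergency_stop", VehicleState(80.0, True), True))
# → slow_gradually_and_seek_safe_stop
```

Note that panic alone does not discard the input; it only raises the bar before the AI acts on it, which matches the tempered-input stance above.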
Whatever underlies the panic, it could be that the panic somehow becomes pertinent to the driving task.
Suppose the human occupant needs to be taken to the hospital because they believe they are having a heart attack (it might just be a panic attack that feels like one).
In essence, the panic of the human occupant could lead to a needed change in the driving task, whether altering where the self-driving car is going or how it is being driven (such as slowing down or speeding up).
It is anticipated that most AI self-driving cars will have cameras pointed not only outward to detect the surroundings of the car, but also inward. These inward-facing cameras will be handy when your children ride in the self-driving car without adult supervision and you want to see how they are doing. Or, if you use the AI self-driving car as a ridesharing service, you’d likely want to see how people are behaving inside the car and whether they are wrecking it. All in all, inward-facing cameras are more than likely coming.
With the use of these inward facing cameras, the AI has the possibility of being able to detect that someone is having a panic moment.
Besides audio detection of the person’s words or noises, the camera could be used in a facial-recognition mode. Today’s facial recognition can generally ascertain whether someone seems happy or sad, or potentially in a panic mode.
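Combining those two cues might look something like the following. The weights, thresholds, and keyword list are purely illustrative assumptions; a real system would use trained models for facial expression and speech, not hand-set numbers:

```python
# Illustrative fusion of an inward-camera cue and a cabin-audio cue into a
# single panic score. All constants here are hypothetical.
PANIC_WORDS = {"help", "stop", "watch out", "look out"}

def panic_score(face_distress: float, utterance: str) -> float:
    """face_distress in [0, 1] from a facial-expression model; utterance is
    transcribed cabin audio. Returns a combined score in [0, 1]."""
    words = utterance.lower()
    audio_cue = 1.0 if any(w in words for w in PANIC_WORDS) else 0.0
    # Weighted blend: the face carries most of the signal in this sketch.
    return min(1.0, 0.7 * face_distress + 0.3 * audio_cue)

def is_panicking(face_distress: float, utterance: str, threshold: float = 0.6) -> bool:
    return panic_score(face_distress, utterance) >= threshold

print(is_panicking(0.8, "watch out, there's a dog!"))  # → True
print(is_panicking(0.2, "nice day for a drive"))       # → False
```

Fusing the two signals guards against either cue alone misfiring, e.g., a shouted "stop!" aimed at a dog rather than at the car.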
Aiding Humans That Are Panicking
The AI could try to aid a person who’s in a panic mode.
For example, the AI system might seek to calm the person and reassure them.
The AI system could offer to connect with a loved one or maybe even 911.
Some believe we’ll eventually have AI systems that act in the manner of a mental therapist, which would then be easy to include among the AI add-ons for the self-driving car.
Of course, this calming effort should not detract from the AI’s core task of operating the self-driving car, and thus any use of processors or system memory for the calming effort would need to be undertaken judiciously.
We assert that the AI needs to be aware of the human occupants and be attentive in case they panic.
The panic might be directly related to some aspect of the AI driving task.
Or, it might not be related, but the AI might end up having to alter the driving task due to the human’s panic.
Furthermore, the AI could potentially try to aid the human in the manner of a fellow passenger or perhaps even a human therapist.
Some believe that the AI does not have any obligation to placate or aid the human occupants.
Maybe the initial versions of the AI would be that simplistic, but it would seem unwise to stop there.
The AI needs to be fully able to contend with all aspects of the driving task, which means not just the pure mechanics of driving down a street and making turns. It means instead to be the captain of the ship, so to speak, and be able to aid the passengers, even when they go into a panic.
Of course, we also need to make sure that the AI doesn’t itself go into a panic mode.
But, that’s a story for another day.
For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website
The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.
For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru
To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot
For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/
For his AI Trends blog, see: www.aitrends.com/ai-insider/
For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot
Copyright © 2019 Dr. Lance B. Eliot