When Self-Driving Cars Become Paralyzed, It’s Not Good

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

I was in the woods with my family one day and it was getting towards nightfall.

We came upon a wolf, standing in the path ahead of us, staring straight at us, poised for action.

It seemed like an eternity as we stopped in our tracks and stared back at the wolf.

Nobody moved.

You could say we were all paralyzed.

Of course, we weren’t paralyzed in the sense that our limbs were not able to function.

If you are uncomfortable with my use of the word paralysis, which I realize many believe should be reserved for when someone is truly physically debilitated, I can instead use the word pseudo-paralysis if that’s more palatable to you.

Suppose we do this: for the rest of this discussion, whenever you see me use the word paralysis, substitute the word pseudo-paralysis.

Hope that’s OK with you all.

In a moment, you’ll grasp why I’ve discussed the topic of paralysis and led you to a juncture of considering paralysis as a circumstance involving coming to a halt, being faced with seemingly difficult choices of what to do next, and remaining in a stopped position for some length of time.

Fortunately, the wolf apparently grew tired of the standoff, and it wandered back into the woods.

AI Autonomous Cars And Paralysis

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. This also includes considering scenarios in which the self-driving car might find itself becoming pseudo-paralyzed due to a predicament or particular situation.

First, let me clarify that I am not referring to a circumstance involving the self-driving car having a malfunction.

Similar to my story, I am referring to a situation for which the AI has to make a decision about which way to go, and there doesn’t seem to be a viable choice at hand.

There will be circumstances in which an AI self-driving car freezes up due to some kind of potential malfunction, often referred to as the Freezing Robot Problem.

Herein, let’s assume that the AI self-driving car is fully operational and can go forward, backward, turn, and the like.

You might be wondering what kind of a situation could arise, then, that would cause a functioning AI self-driving car to become pseudo-paralyzed.

Example Of Self-Driving Car Pseudo-Paralysis

One of the most famous examples involves the early days of AI self-driving cars and their actions when coming to a four-way stop.

An AI self-driving car arrived at a four-way stop sign just as other cars did. The other cars were driven by humans. Even though the proper approach would normally be that whichever car arrives first goes forward first, the other human-driven cars weren’t necessarily abiding by this. It’s a dog-eat-dog world, and I’m sure you’ve encountered other drivers who opted to force themselves forward and abridge your “right” to go ahead before they do.

The AI self-driving car kept waiting for the other cars to come to a full and proper halt.

Those other cars kept doing the infamous rolling stop. Each time that the AI self-driving car perceived that maybe it could start to go, one of the other cars moved forward, which then caused the AI to bring the self-driving car to a halt. You might have seen a teenage novice driver get themselves into a similar bind. They sit at the stop sign, politely waiting for their turn, which never seems to arrive.

You could say that this is a form of paralysis.

School Driving And Paralysis

There’s another example of an AI self-driving car paralysis that was recently reported about the real-world trials being undertaken by Waymo in Phoenix, Arizona.

Reportedly, one of their AI self-driving cars drove to the school of a family that was participating in the trial runs and waited for the school children to be released from the school.

You’ve maybe done this or seen it before: cars sit waiting for the bell to ring, the school children come flying out of the classrooms, and kids pile into the waiting cars.

If you’ve not had an opportunity to be a human driver in the school setting of this kind, I assure you that it can be one of the most memorable times of your driving career (well, maybe not fondly memorable!).

I used to endure the same situation when I was picking up my children from school.

When the cars first arrive at the school, prior to the bell ringing, it is relatively quiet and everyone jockeys to find a place to temporarily park. Some leave their motor running; some turn off the car. Some read a book while waiting; some watch the school intently. Some actually get out of their cars, as though it is a taxi line at the airport, and converse with fellow parents waiting likewise to pick up their children.

That first part of the effort is relatively easy.

The main aspect is that you need to be careful about where you park, and that you don’t cut off someone else or disturb what has become a kind of daily ritual, with everyone seemingly knowing the “rules” about where to park and wait. It can be a veritable death sentence for anyone who decides to squeeze their car in front of everyone else that has already been waiting for the last twenty minutes or so. I’m sure the person would be dragged out of their car and beaten senseless.

Well, apparently, an AI self-driving car from Waymo found itself in such a situation.

The AI self-driving car reportedly became pseudo-paralyzed.

Just like with the wolf, it became a matter of waiting to see what might happen in the environment that would allow for breaking out of the stalemate.

You might be saying that the AI was just trying to be cautious.

It could have run over the children or parents; it could have rammed into the other cars. Let’s concede that indeed it could have moved if it intended to do so.

Fortunately, the AI was apparently well-programmed enough that it realized those were not seemingly viable options in this case. The need to avoid hitting these surrounding objects had kept the self-driving car from moving.

One current criticism of AI self-driving cars is that they are perhaps overly cautious.

They are actually skittish, which can be a limiting factor when driving a car.

Skittish Autonomous Cars

Do we want our AI self-driving cars to be skittish?

This can be a “safe” way to drive, one might argue, but it also means that there will be lots of real-world driving situations that will inhibit the self-driving car, and it could possibly become paralyzed. Imagine the frustration of other human drivers at the skittishly driven car: they honk their horns, blocked by the paralyzed car and unable themselves to move along. Pedestrians can be confused too. Is that self-driving car going to move or not move?

There are even some people who have been playing tricks on AI self-driving cars.

I recently went to a baseball game and parked in a very busy parking lot. The entire time in the parking lot, while driving around to find a parking spot, people were not only super close to my car, many people at times touched my car (transgressions!). When I finally found an open spot, I pulled into it, and was within a scant inch or so of the cars on either side.

Most of the AI self-driving cars would become “paralyzed” with that kind of closeness.

There’s going to be a delicate ratcheting up of the risk aspects to allow for closer movement.
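
To make that ratcheting concrete, here is a minimal sketch, in Python, of a context-dependent safety bubble. The context labels, distances, and scaling rule are purely illustrative assumptions on my part, not any automaker’s actual parameters.

```python
# Illustrative sketch of a context-dependent "safety bubble" (all values
# are assumptions for discussion, not production parameters).

CLEARANCE_BY_CONTEXT = {
    "open_highway": 2.00,     # meters of required lateral clearance
    "urban_street": 0.75,
    "crowded_parking": 0.05,  # roughly the "scant inch" a human accepts
}

def required_clearance(context: str, speed_mps: float) -> float:
    """Minimum lateral clearance, loosened only in crowded low-speed settings."""
    base = CLEARANCE_BY_CONTEXT.get(context, 2.0)  # default to cautious
    # Only ratchet up the risk when nearly stationary: as speed rises, the
    # required clearance scales back up regardless of context.
    return max(base, 0.25 * speed_mps)
```

The design point is that the bubble shrinks only when speed is near zero, so the added risk is confined to the crawling, inch-by-inch situations that otherwise paralyze the car.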

Scenario Analysis

In the case of the AI self-driving car among the school children, what should the AI have done?

Let’s first consider the four-way stop sign scenario.

In that situation, the AI self-driving car likely should have played chicken with the other human-driven cars and opted to move forward, showcasing that it intended to move along. The other human-driven cars would inevitably have backed down and allowed the AI self-driving car to go ahead. It was the absence of a clear-cut indication that the AI self-driving car was going to “aggressively” make its move that led the other human drivers to figure they could simply outdo or outrun it.
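
As a thought experiment, here is a minimal sketch of what a tunable assertiveness setting (akin to the politeness meter discussed next) might look like at a four-way stop. The function, thresholds, and action labels are hypothetical illustrations of the idea, not how any actual self-driving system is coded.

```python
# Hypothetical sketch of an escalating-intent policy at a four-way stop.
# Thresholds and action names are illustrative assumptions only.

def four_way_stop_step(wait_seconds: float, gap_is_clear: bool,
                       assertiveness: float) -> str:
    """Choose the next action while waiting at a four-way stop.

    assertiveness: 0.0 (fully polite) .. 1.0 (fully assertive).
    """
    # The longer the car waits, the lower the bar for signaling intent by
    # creeping forward. A fully polite policy (assertiveness = 0) keeps a
    # high bar, which is exactly what rolling-stop drivers exploited.
    creep_threshold = 8.0 * (1.0 - assertiveness)  # seconds (illustrative)

    if gap_is_clear:
        return "PROCEED"
    if wait_seconds > creep_threshold:
        # Edge forward slightly to make the car's intent unmistakable,
        # rather than resetting to a passive wait each time another car moves.
        return "CREEP_FORWARD"
    return "HOLD"
```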

Some would say that if there’s a politeness meter related to the AI, it’s time to move the needle towards the impolite side of things. Human drivers can be quite impolite. They get used to other drivers being the same way. Therefore, if they see a polite driver, they figure the driver is a sheep. It is worthwhile to be the fox and treat the sheep like sheep, so the impolite driver figures. Right now, AI self-driving cars are perceived as the meek sheep. Easy to exploit.

Does this imply that the AI self-driving car should run amok?

Should it barrel down a street?

Should it try to take possession of the roadway and make it clear that it is the king of the traffic?

No, I don’t think anyone is suggesting this, at least not now.

Also, let’s be frank, it’s harder to go the impolite route when right now all eyes are on AI self-driving cars and how they are driving.

The moment an AI self-driving car bumps or harms a human, or scrapes against another car, this is going to be magnified a thousand fold as a reason why AI self-driving cars are not to be trusted.

Time Factor Is Crucial

This brings up the importance of the time factor when referring to this pseudo-paralysis.

How much time has to be spent sitting still to declare a paralysis?

This is a hard thing to quantify across all circumstances and situations. If I’m in my car waiting at a red light, I’ll need to sit for the time it takes the light to turn green. Are my car and I paralyzed? I don’t think so. I’d suggest we would all agree this is not quite the circumstance we’re referring to when we discuss the paralyzed self-driving car.
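
One way to frame the distinction: a red light is a recognized reason to be stopped, so the paralysis clock should only run when the car is motionless without such a reason. Below is a minimal watchdog sketch; the class, its inputs, and the 30-second threshold are assumptions I’m making for illustration.

```python
import time

# Sketch of a "paralysis watchdog" (illustrative; the threshold and the
# notion of a recognized reason to be stopped are assumptions).

PARALYSIS_THRESHOLD_S = 30.0

class ParalysisWatchdog:
    def __init__(self):
        self.stopped_since = None

    def update(self, is_stopped: bool, has_recognized_reason: bool) -> bool:
        """Return True once a stop starts to look like pseudo-paralysis."""
        if not is_stopped or has_recognized_reason:
            # Waiting at a red light (a recognized reason) resets the clock.
            self.stopped_since = None
            return False
        if self.stopped_since is None:
            self.stopped_since = time.monotonic()
        return time.monotonic() - self.stopped_since > PARALYSIS_THRESHOLD_S
```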

Let’s consider other scenarios that might lead to paralysis of an AI self-driving car, and consider what to do about it.

The AI self-driving car is driving along an open highway, doing 55 miles per hour. A group of motorcyclists gradually comes up to where the AI self-driving car is driving along. The motorcyclists are doing 80 miles per hour to catch up with the AI self-driving car. Upon reaching it, they all slow down to 55 miles per hour and completely surround the AI self-driving car. What should the AI self-driving car do?

If you say that the AI self-driving car should slow down, it takes us to the next step: imagine that the motorcyclists gradually come to a halt. They could essentially force the AI self-driving car to come to a halt, doing so on an open highway. Is that safe? Would you, the human occupant inside the AI self-driving car, want that to happen? Maybe you feel that the motorcyclists are trying to threaten you, and they are readily exploiting the AI to make it happen.

Here’s another similar kind of scenario.

You are in an AI self-driving car.

Unluckily for you, you’ve wandered into an area that has a riot erupting.

The AI self-driving car has come to a halt, paralyzed, because there are rioters completely surrounding the self-driving car. The rioters bang on the self-driving car and are aiming to get in and harm you. What should the AI self-driving car do?

Avoidance Often Not Feasible

Some would say that the AI self-driving car should not allow itself to get into such a situation.

That’s not much of a helpful answer.

Sure, if there’s an obvious situation that can be avoided, it would be handy if the AI could predict it and steer clear.

In the case of the school children, it has reportedly been indicated that the AI developers advised that the AI self-driving car not go into the muddled area to pick up the children, and instead find a less crowded area to park and wait. Though this seems perhaps sensible, I’d suggest it has downsides, such as causing the children to walk further to get to the car, increasing their chances of getting hit or of some other calamity occurring. Also, notably, it was not a solution devised by the AI; instead, it relied upon the AI developers to suggest or devise it.

The point being that having a skittish AI self-driving car that has to avoid situations that can lead to paralysis is certainly something to keep in mind, but it doesn’t seem to fully address the problem.

Also, we’d prefer that the AI is able to “reason” about what to do, rather than hoping or betting that the AI developers can find a workaround. In the real world, the AI self-driving car has to do what a human driver might do, without necessarily being able to “phone a friend” to get out of a jam.

Overall Aspects To Deal With

In quick recap (a sketch tying several of these together follows the list):

  • Try to avoid paralyzing situations, if feasible
  • Seek to learn from paralyzing situations, doing so via OTA updates and cloud-based machine learning
  • Be able to recognize when a paralyzing situation is arising
  • Once in a paralysis, consider ways out of it
  • Keep watch of the clock to gauge how long the paralysis is lasting
  • Lean toward impoliteness or aggressiveness as a possible paralysis buster
  • Reduce the safety-bubble size while simultaneously increasing the driving capability
  • Potentially confer with other AI self-driving cars via V2V about such situations
  • Other
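
Here is a minimal sketch of how several of those items might hang together: watch the clock, then climb an escalation ladder of increasingly assertive but still collision-free actions. The ladder, the pacing, and the names are all hypothetical stand-ins for real subsystems.

```python
# Hypothetical escalation ladder for breaking out of a paralysis; every
# name and number here is an illustrative assumption.

ESCALATION_LADDER = ["WAIT", "SIGNAL_INTENT", "CREEP_FORWARD", "REROUTE"]

def paralysis_recovery_step(seconds_stalled: float,
                            safe_actions: set) -> str:
    """Pick the most assertive currently-safe action, given stall duration.

    safe_actions: the subset of ladder actions a collision checker
    currently deems safe (a stand-in for a real motion-planning module).
    """
    # Climb one rung roughly every 15 seconds of stall time (illustrative
    # pacing), but never attempt an action the safety check has ruled out.
    rung = min(int(seconds_stalled // 15), len(ESCALATION_LADDER) - 1)
    for action in reversed(ESCALATION_LADDER[: rung + 1]):
        if action in safe_actions:
            return action
    return "WAIT"
```

The point of the ladder is that impoliteness is the last resort, entered only after the clock makes clear that patience alone isn’t working.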

Currently, most of the automakers and tech firms aren’t giving much consideration to the paralysis predicament.

They tend to consider this to be an “edge” problem (one that is not at the core of the driving task per se). Many AI developers tell me that if the AI self-driving car has to wait until the school children disperse or the baseball parking lot becomes empty, that’s fine as a driving strategy, and meanwhile the human occupants can enjoy themselves in the car during the waiting time. I don’t think this is reasonable, and furthermore it ignores the often adverse consequences of having the self-driving car sit in a paralyzed state.

It’s time to make sure AI self-driving cars are able to cope with potentially paralyzing situations.

Conclusion

There is a famous saying that oftentimes people fail at a task due to analysis paralysis.

They over-analyze a situation and thus get stuck in doing nothing.

You might claim that when I was in the woods facing the wolf, I was overthinking things and had analysis paralysis. I don’t believe so. I was doing analysis and had ascertained that no action seemed to be the best course of action for the moment, and I remained alert and ready to act when action seemed suitable.

In the case of the pseudo-paralysis for AI self-driving cars that I’ve been depicting here, I have not been focusing on instances where AI self-driving cars get themselves into an analysis infinite loop and suffer analysis paralysis.

Instead, the situation itself is causing the paralysis, as dictated by the desire to avoid injuring others, along with the need to remain alert and ready to make a move whenever suitable.

That’s the kind of paralysis we can overcome with better AI.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot


Dr. Lance B. Eliot is a renowned global expert on AI, a Stanford Fellow at Stanford University, a former professor at USC, a former head of an AI Lab, and a former top exec at a major VC.
