Wrong Way Driving Not Always So Wrong, Self-Driving Cars Need To Deal With It

Dr. Lance Eliot, AI Insider


I sheepishly admit that I have been a wrong way driver.

There have been occasions when I drove the wrong way, luckily without any undesirable outcome, and I certainly regret having mistakenly gone astray. It has happened several times inside parking garages or parking lots.

When you suddenly realize that you are heading in the wrong direction, it can be relatively disorienting.

Should you continue forward, even though there is now a chance that you’ll come head-to-head with a car that is going in the correct direction? Or should you back up, which at least points your car in the proper direction, though a lengthy stretch of backing up has its own dangers?

For most people, I’d bet the usual instinct is to turn around: as soon as practical, get the car turned around and headed in the proper direction.

Deadly Serious Cases Of Wrong Way Driving

Wrong way driving can produce untoward and horrific results.

Just about a month or so ago, a wrong way driver at 2:00 a.m. got onto two of the major freeways here in Southern California, the I-5 and the I-110, and proceeded to drive at speeds of 60 to 70 miles per hour.

The reckless driver sideswiped other vehicles during the ordeal. The police bravely gave chase.

Sadly, the wayward driver rammed into another car and there were several deaths as a result of the wrong way act.

Our Collective Fascination With Wrong Way Driving

As a society, we seem to have a fascination at times with wrong way driving. There are numerous movies and TV shows that depict driving the wrong way.

In terms of why people drive the wrong way, here are some of the reasons:

  • Drunk driving
  • Confused driver
  • Inattentive driver
  • Shortcut driver
  • Thrill-seeker driver
  • Etc.

There has been extensive research on how to design off-ramps and on-ramps to try to prevent confused or inattentive drivers from going the wrong way.

It can be relatively easy to get confused when driving in an area that you are unfamiliar with and inadvertently go up an off-ramp. Going down an on-ramp is usually a less likely circumstance since the car driver would need to make some sizable contortions to get their car positioned to do so.

Signs that say wrong way are oftentimes put up in areas where it is known that drivers can get confused about whether a particular road or street can be driven on in a direction that seems enticing. In spite of the signs, sometimes people go the wrong way anyway.

Statistics About Wrong Way Driving Related Deaths

According to statistics by the NHTSA (National Highway Traffic Safety Administration), there are about 350 or so deaths per year in the United States due to wrong way driving.

Any such number of deaths is regrettable, though admittedly it is a relatively small share compared to other kinds of driving mistakes (there are about 35,000 car-related deaths per year in the U.S., so wrong way driving accounts for roughly 1% of them). There don’t seem to be any reliable numbers about how many wrong way instances occur per year, since such instances usually go unreported unless a death is involved.

The total number of miles driven in the United States is estimated at around 3.2 trillion miles per year. One would guess that wrong way driving happens somewhere every day, even though it amounts to only a tiny sliver of that enormous number of driving miles.

Fortunately, the number of actual accidents due to wrong way driving appears to be quite small, but this is likely because the wrong way driver quickly gets out of the predicament and because right-way drivers react to help avoid a collision. In essence, it is probably not happenstance that wrong way driving doesn’t produce more problems. It seems more likely due to the human behavior of trying to avert trouble when a wrong way instance occurs.

AI Autonomous Cars And The Matter Of Wrong Way Driving

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving driverless autonomous cars. There are two key aspects to be considered related to the wrong way driving matter, namely how to avoid having the AI self-driving car go the wrong way, and secondly what to do if the AI self-driving car encounters a wrong way driver.

Plus, a bonus topic, namely what about having an autonomous car go the wrong way, on purpose, if needed (which, for some, seems outright wrong, since they subscribe to the belief that driverless cars should never “break the law”).

I’d first like to clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too.

Use Case: Autonomous Car Goes The Wrong Way Unintentionally

Returning to the topic of driving the wrong way, let’s first consider the possibility of an AI self-driving car that happens to go the wrong way.

Some pundits insist that there will never be a case of an AI self-driving car that goes the wrong way. These pundits seem to think that an AI self-driving car is some kind of perfection machine that will never make any mistakes. I suppose in some kind of Utopian world this might be the case, or perhaps for a TV or movie plot it might be the case, but in the real world there are going to be mistakes made by AI self-driving cars.

You might be shocked to think that an AI self-driving car could somehow go the wrong way. How could this happen, you might be asking. It seems incredible perhaps to imagine that it could happen.

The reality is that it could readily happen.

Suppose there is an AI self-driving car, dutifully using its sensors and scanning for street signs, but it fails to detect a street sign indicating that the path ahead is a wrong way direction. There are lots of reasons this could occur. Maybe the street sign is not there at all because it has fallen down, or vandals took it down a while ago. Or the street sign might be obscured by a tree branch, or be so banged up and covered in graffiti that the AI system cannot recognize what the sign is. Maybe the sign can be only partially seen and does not present itself sufficiently to get a match by the Machine Learning (ML) model that was trained to spot such signs. Perhaps it is heavily raining and the sign cannot be detected, or it is snowing and a layer of snow is obscuring the signs. And so on.

I assure you, there are lots of plausible and probable reasons that the AI might not detect a street sign that warns that the self-driving car is about to head the wrong way.

You might be thinking that it doesn’t matter if the AI is able to detect a sign, since it would certainly have a GPS and map of the area and would realize that the road ahead is one that would involve going the wrong way.

Though it is certainly handy for the AI to have a map of the area and a GPS capability, you cannot assume that a map will always be available, and GPS by itself does not necessarily warn of a wrong way stretch up ahead. Currently, the focus for the auto makers and tech firms involves developing elaborate maps of localized areas and then running their trials of AI self-driving cars within a geofenced area.
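As a minimal sketch of what such a cross-check might look like (the data structures and function names here are hypothetical, invented purely for illustration), the AI could compare its current heading against the permitted travel direction recorded in the map, and simply report “unknown” whenever no map data exists:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RoadSegment:
    # Hypothetical map record: the permitted travel heading in degrees (0-359),
    # or None when the segment allows two-way traffic.
    one_way_heading_deg: Optional[int]

def heading_matches_map(vehicle_heading_deg: float,
                        segment: Optional[RoadSegment],
                        tolerance_deg: float = 45.0) -> Optional[bool]:
    """True/False when the map can confirm or refute the travel direction,
    None when there is no usable map data and other cues must decide."""
    if segment is None or segment.one_way_heading_deg is None:
        return None  # no map coverage, or a two-way road: nothing to conclude
    diff = abs(vehicle_heading_deg - segment.one_way_heading_deg) % 360
    diff = min(diff, 360 - diff)  # shortest angular difference
    return diff <= tolerance_deg

# Heading 180 degrees on a segment posted as one-way at 0 degrees -> mismatch.
print(heading_matches_map(180.0, RoadSegment(one_way_heading_deg=0)))  # False (wrong way)
print(heading_matches_map(10.0, RoadSegment(one_way_heading_deg=0)))   # True
print(heading_matches_map(90.0, None))                                 # None (no map data)
```

The None branch is the whole point of the argument above: when the map is silent, the wrong way determination has to come from the sensors and other cues instead.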

Once we have widespread AI self-driving cars, I don’t think we should be basing their emergence on having mapped every square inch of the world in which they are driving.

In short, I am claiming that there are going to be circumstances in which an AI self-driving car is going to end-up going the wrong way. This would happen due to the AI not being able to discern the roadway situation and not having a prior map that would otherwise forewarn that a wrong way is up ahead.

I’ll offer one other comment about this notion of going the wrong way.

Are you willing to bet that there will never be a situation involving an AI self-driving car that finds itself going the wrong way? I ask because if the AI self-driving car is not ready to tackle such a predicament, and it is because you are so sure that it will never happen, well, I’d not like to be in or near that AI self-driving car that has gotten itself into such a fix and then is unaware of it or does not know what to do about it.

The auto makers and tech firms are so busy trying to get AI self-driving cars to simply drive the right way on roads that they have generally considered dealing with driving the wrong way to be an edge problem. An edge problem is one that is not considered at the core of what you are trying to solve. We’re not so convinced that this should be considered an edge case per se, and it might well happen more often than you might think.

AI Coping With Going The Wrong Way

A proficient AI self-driving car needs to be able to detect that it has gotten itself into a wrong way driving situation.

The detection can potentially occur via the sensory inputs and the AI’s interpretation of them. Once you are immersed in a wrong way driving situation, there are often telltale clues that something is amiss.

The AI might also detect cars that are coming straight toward the self-driving car, similar to my example earlier of the game of chicken that I had with a wrong way driver. There might be other surroundings related aspects such as the movement of pedestrians, the motion of bicyclists, and other environmental aspects that can be used to detect a wrong way situation.

Sensor fusion is crucial at this juncture since it is often difficult to ascertain from one indicator alone that the AI self-driving car is going the wrong way. It might be a multitude of indications coming from a multitude of sensors, all of which need to be combined and considered during the sensor fusion portion of the AI driving task.
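To illustrate the idea (and only to illustrate; the cue names, weights, and threshold below are made-up values for this sketch, not anything from a production system), sensor fusion for this purpose can be thought of as blending several weak indicators into a single wrong-way belief:

```python
# Illustrative weighted fusion of wrong-way cues; the cue names, weights, and
# threshold are invented for this sketch, not taken from any real system.
def wrong_way_belief(cues: dict) -> float:
    weights = {
        "oncoming_vehicles_in_my_lane": 0.45,  # strongest single indicator
        "sign_backs_facing_me": 0.25,          # seeing the backs of signs and signals
        "lane_arrows_pointing_at_me": 0.20,
        "map_heading_mismatch": 0.10,
    }
    score = sum(weights[name] * float(cues.get(name, 0.0)) for name in weights)
    return min(max(score, 0.0), 1.0)

belief = wrong_way_belief({
    "oncoming_vehicles_in_my_lane": 1.0,
    "sign_backs_facing_me": 1.0,
})
if belief > 0.6:  # the threshold would have to be tuned; 0.6 is arbitrary here
    print(f"Wrong-way driving suspected (belief={belief:.2f}); hand off to the action planner")
```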

It could be too that the AI self-driving car might get alerted via electronic communications. There could be other AI self-driving cars nearby and those self-driving cars might have detected that your AI self-driving car is going the wrong way. They could potentially communicate via V2V (vehicle-to-vehicle communication) and let the AI of your self-driving car know that it is heading in the wrong direction. There might also be V2I (vehicle-to-infrastructure) that could also be alerting the AI of the self-driving car.
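Here is a simplified sketch of how such an incoming alert might be handled; the message fields are entirely hypothetical, and real V2V message sets (such as SAE J2735) are far more involved:

```python
import json

# Hypothetical V2V alert payload; treat this purely as a sketch of the handling logic.
incoming = json.dumps({
    "type": "WRONG_WAY_WARNING",
    "target_vehicle_id": "AV-1234",
    "reporter_id": "AV-9876",
    "confidence": 0.9,
})

MY_VEHICLE_ID = "AV-1234"

def accept_v2v_warning(raw: str) -> bool:
    msg = json.loads(raw)
    if msg.get("type") != "WRONG_WAY_WARNING":
        return False
    if msg.get("target_vehicle_id") != MY_VEHICLE_ID:
        return False
    # A single external report should raise suspicion, not be trusted blindly;
    # it gets folded into the sensor fusion alongside the on-board cues.
    return msg.get("confidence", 0.0) >= 0.5

if accept_v2v_warning(incoming):
    print("V2V warning accepted: re-evaluate our own direction of travel")
```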

If the AI somehow becomes aware of the matter, it then needs to update the virtual world model and prepare an action plan of what to do.

Use Case: Autonomous Car Going The Wrong Way Purposely

I’d like to provide an additional variation on the rationale for wrong way driving by an AI self-driving car.

There might be situations wherein the AI self-driving car is purposely supposed to go the wrong way.

Recently, a car accident had blocked part of a major coast highway and the police were directing traffic to go the wrong way on a diversion street. They were forcing traffic to go up a one-way road in the wrong direction. I admit a bit of hesitation when I abided by the police officer’s instruction, but I figured the police knew what they were doing. In this case, the police had made sure that this was a safe path to take.

I mention this because suppose an AI developer has decided that an AI self-driving car should never go the wrong way, under the naïve belief that since wrong way driving is dangerous and illegal, the car should be restricted from ever doing it. As the police-directed detour illustrates, there are going to be circumstances that require an AI self-driving car to drive “illegally” by doing something such as going the wrong way. This is yet another reason why the AI needs to be prepared to do so.

Use Case: Wrong Way Driving Cars

I think you can probably agree with me that there is a likely chance that an AI self-driving car might ultimately encounter a wrong way driver.

What should the AI do?

It needs to first detect that the wrong way car is headed towards it. Once this detection has occurred, the virtual world model needs to get updated. The AI action planner then can consider various scenarios of how the wrong way situation might play out. This is similar to my harrowing story of being in Hawaii and facing a wrong way driver.

The AI system might decide that it is best to continue forward “as is” or it might decide to take an evasive action. This all depends upon the situation at hand. Is there other nearby traffic? How soon will a collision happen? Which approaches seem to offer the best chances for survival? Etc.
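As a rough sketch of that kind of situational weighing (the thresholds and maneuver names are invented for the example, not an actual driving policy), the action planner might compute a time-to-collision and pick among a few candidate maneuvers:

```python
# Sketch of weighing candidate maneuvers against an oncoming wrong-way vehicle;
# the thresholds and maneuver names are invented for the example, not a real policy.
def time_to_collision(gap_m: float, own_speed_mps: float, oncoming_speed_mps: float) -> float:
    closing = own_speed_mps + oncoming_speed_mps
    return float("inf") if closing <= 0 else gap_m / closing

def choose_maneuver(gap_m: float, own_speed_mps: float, oncoming_speed_mps: float,
                    adjacent_lane_clear: bool, shoulder_available: bool) -> str:
    ttc = time_to_collision(gap_m, own_speed_mps, oncoming_speed_mps)
    if ttc > 8.0:
        return "slow_down_and_monitor"       # ample time: brake gently, keep options open
    if adjacent_lane_clear:
        return "change_lanes_away"           # lateral escape preferred when it is safe
    if shoulder_available:
        return "pull_onto_shoulder_and_stop"
    return "emergency_brake"                 # last resort when no clear escape exists

# Example: 120 m gap, both vehicles near 27 m/s (~60 mph), adjacent lane occupied.
print(choose_maneuver(120.0, 27.0, 27.0, adjacent_lane_clear=False, shoulder_available=True))
```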

The AI might also be able to confer with other nearby AI self-driving cars. Again, via V2V, it could be that the AI of your self-driving car might either become aware of the wrong way driver by being warned by some other AI self-driving car, or it could be that several AI self-driving cars might band together, momentarily, in an effort to deal with the wrong way driver situation.

There are also the ethical aspects involved in the matter of the AI trying to determine what action to take.

As per the famous Trolley problem, an AI system in such a situation might need to make a “decision” that involves trying to minimize loss of life, and yet it is somewhat ambiguous as to how the AI is supposed to do so. Should the AI opt to swerve into the next lane to avoid the head-on collision with the wrong way driver, even though by swerving it might collide with another car that is heading in the correct direction?

It might be that any action taken by the AI, even taking no particular action at all, ends up in an unavoidable crash. What is the basis for making such a decision?
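One way to frame the dilemma, purely as an illustration and not as a settled answer, is an expected-harm comparison over the candidate actions; the probabilities and harm counts below are invented to show the structure of such a computation, and whether this framing is even ethically appropriate is exactly the open question:

```python
# Illustration of an expected-harm comparison across candidate actions; the
# probabilities and harm counts are invented solely to show the structure of
# the computation, not to suggest real values or a settled ethical policy.
candidate_actions = {
    "stay_in_lane_and_brake": [(0.7, 2), (0.3, 0)],   # (probability, people harmed)
    "swerve_into_next_lane":  [(0.4, 3), (0.6, 0)],
    "swerve_onto_shoulder":   [(0.2, 1), (0.8, 0)],
}

def expected_harm(outcomes):
    return sum(p * harm for p, harm in outcomes)

for action, outcomes in candidate_actions.items():
    print(f"{action}: expected harm = {expected_harm(outcomes):.2f}")

best = min(candidate_actions, key=lambda a: expected_harm(candidate_actions[a]))
print("lowest expected harm:", best)  # here: swerve_onto_shoulder (0.20)
```

Even this tidy calculation sidesteps the hard part: where the probabilities come from, whose harm counts, and whether minimizing an expectation is the right ethical frame at all.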

Some final thoughts about the wrong way driver situations.

Suppose we do eventually have only AI self-driving cars and there are no human driven cars. What then? Well, I still contend that there is a possibility of having an AI self-driving car that can end-up going the wrong way. Thus, the AI systems of self-driving cars should have a provision for dealing with such situations.

I would guess though that by the time we would have all and only AI self-driving cars on our roadways, the odds are that we’d have lots of IoT (Internet of Things) and quite sophisticated V2V, V2I, and even V2P (vehicle-to-pedestrian) electronic communications. As such, the odds of an AI self-driving car going the wrong way would be substantially reduced.

Conclusion

I’ll end with this somewhat mind-boggling thought.

In the movies and on TV there are those car chases whereby the spy or crook goes the wrong way and miraculously lives, magically avoiding the oncoming cars. This is unlikely to be realistic in today’s world of human drivers. If you drove the wrong way on a busy freeway or highway, I dare say someone is going to get hurt.

In a world of only AI self-driving cars, I suppose you could say that it would be feasible to go the wrong way. Assuming that you have generally perfect V2V and the AI of the self-driving cars are all working in concert with each other, you could in theory purposely go the wrong way, even on a busy road, and do so without incurring any collisions.

Some might even say that this might be a means to more efficiently use our roadways. You could allow AI self-driving cars to use the same roads for both directions and let them figure out how to make it happen. The Golden Gate Bridge adjusts some of the lanes during the peak traffic times to allow for traffic to go one direction and then later in the day shift to the other direction. Nonetheless, there is still only one direction allowed at a time. Imagine a situation whereby we allowed self-driving cars to go in any direction on our roads and go straight head-to-head with other self-driving cars.

Even if this seems theoretically possible, I’d suggest that if you were a passenger in such a self-driving car, you’d have a tough time watching this occur. Then again, will we be so accustomed to believing in the AI of the self-driving cars by then that we would also readily accept them playing this game of chicken?

Hard to imagine.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/Dr.-Lance-Eliot/e/B07Q2FQ7G4

Copyright © 2019 Dr. Lance B. Eliot

