Drunk Driving Driverless Cars: When the AI Gets Tipsy

Dr. Lance B. Eliot, AI Insider

AI self-driving cars can exhibit driving behaviors akin to those of human DUI drivers

During Memorial Day weekend, drunk drivers come out in droves. I was on the road after attending a beach BBQ and, on Pacific Coast Highway (PCH), saw a car ahead of me weaving back and forth across the lanes of traffic. The driver was also sporadically speeding up and then slowing down, in addition to the scary weaving.

Though I could not see the actual driver, the driving behavior was a clue or tell that they were likely drunk or otherwise DUI (there could be other reasons for such driving, such as the person having a seizure or fighting a bee that has gotten into their car, but more likely they were “lit,” as they say).

I opted to remain a sizable distance behind the car. Meanwhile, other cars around me weren’t willing to stay behind the weaving car and decided to drive past the suspicious driver. As they did so, the madcap driver nearly veered over into them. It was a very dangerous situation that was playing out at speeds of 40 to 65 miles per hour.

Statistics suggest that about 30 people die in car crashes in the United States each day due to drunk drivers. That is sometimes reported as, on average, one drunk-driving death every hour. Sad. Horrific.

Many of the media articles about drunk driving tout that the advent of AI self-driving cars will presumably do away with drunk-driving deaths caused by humans. This makes sense overall, in that we certainly would not expect to see the AI guzzling down beers or having a bit too much wine at dinner. As such, in the case of true Level 5 AI self-driving cars, ones in which the AI is the driver, the AI system should be completely sober.

There is a bit of a potential twist, though. Suppose the AI began to drive in a manner that resembles DUI driving. If so, you might claim that the AI was driving like a drunk driver, at least in terms of the driving behaviors that we see human drunk drivers exhibit.

I realize you might be taken aback by that statement. How can a self-driving car ever drive like a drunk driver? The self-driving car isn’t consuming large quantities of alcohol. It can’t go over to the nearby liquor store and get a fifth of scotch. On the surface, the suggestion that a self-driving car might drive like a drunk driver seems outlandish.

I am not suggesting that the AI gets drunk per se, but instead emphasizing that the AI might drive in a manner that we associate with drunk driving.

For example, in my story about driving down PCH, I mentioned that the car ahead of me was weaving across lanes and sporadically speeding up and slowing down. I could not see the actual human driver, nor could I administer a field sobriety test. All I could detect was that the car was acting in illegal or at least unsafe ways, and I deduced that the driver was likely drunk.

The same can be said of an AI self-driving car. There are circumstances under which the AI might drive the car in a manner that we would infer implies drunk driving.

I realize that some advocates of self-driving cars will get very angry with me about this and will fiercely insist that no self-respecting self-driving car would ever drive amiss. In their Utopian world, all self-driving cars drive perfectly all of the time. This is a false assumption.

Why would a self-driving car drive in a drunken manner? There are several ways in which this could readily arise. Let’s take a look at the most common ways that this can happen.

Faulty Sensors

A self-driving car relies upon the sensors that are mounted on and in the car to be able to sense the world around it. These sensors include cameras, radar, LIDAR (see my column on LIDAR), and other capabilities. Suppose that a sensor becomes faulty. The sensor fusion by the AI might be misled into believing that there is a car next to it that is trying to come into its lane, and so the self-driving car suddenly changes lanes, even though the other car is not really there (it is considered a “ghost” concocted by the faulty sensor).

This kind of lane changing and speeding up and slowing down could be undertaken by a self-driving car that is getting fed faulty data by its sensors.

The AI believes it is doing the right thing and protecting the car and its occupants. Meanwhile, if we were watching the self-driving car, we would think it was drunk driving. We would have no ready way of knowing that the faulty sensors were confusing the AI. This is analogous to the human drunk driver: we don’t know for sure that they are drunk and must infer from their wanton behavior that they might be.
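To make this concrete, here’s a minimal sketch in Python of one way a fusion layer might guard against single-sensor ghosts, namely requiring that an object be corroborated by more than one sensor modality before the AI acts on it. The Detection structure, thresholds, and sensor names are all hypothetical, illustrating the idea rather than any actual self-driving system.

```python
# Minimal sketch (Python): one way a fusion layer might reject a "ghost"
# object reported by a single faulty sensor. The Detection structure,
# thresholds, and sensor names are hypothetical.

from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g., "camera", "radar", "lidar"
    obj_id: str        # tracker-assigned object identifier
    confidence: float  # sensor's confidence in the detection

def corroborated(detections, min_sensors=2, min_confidence=0.6):
    """Accept an object only if distinct sensor modalities agree on it.
    A ghost concocted by one faulty sensor fails this check."""
    sensors = {d.sensor for d in detections if d.confidence >= min_confidence}
    return len(sensors) >= min_sensors

# Only the radar "sees" a car merging into our lane:
ghost = [Detection("radar", "car_42", 0.9)]
# A real car shows up across modalities:
real = [Detection("radar", "car_7", 0.9),
        Detection("camera", "car_7", 0.8),
        Detection("lidar", "car_7", 0.85)]

print(corroborated(ghost))  # False: do not swerve for the ghost
print(corroborated(real))   # True: the car is really there
```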

Sensor Fusion Issues

The sensory data coming into the self-driving car is assembled and analyzed via a process known as sensor fusion. The sensor fusion process consists of piecing together the various sensory data coming from the multiple sensors and then trying to craft a single comprehensive view of the world around the self-driving car. This requires merging together the radar data coming from several radar devices dispersed around the car, merging together the images and video streams coming from cameras mounted all around the car, merging together LIDAR data collected in 360-degree sweeps, and so on.
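As a simple illustration of the merging step, here’s a minimal sketch in Python of inverse-variance weighting, a textbook way to combine several noisy measurements of the same quantity. The sensors and noise figures are invented for the example; real fusion pipelines are vastly more elaborate.

```python
# Minimal sketch (Python): inverse-variance weighting, a textbook way to
# merge several noisy measurements of the same quantity. The sensors and
# noise figures are hypothetical.

def fuse_estimates(estimates):
    """estimates: list of (distance_m, variance) pairs from different
    sensors. Returns the inverse-variance-weighted mean, which trusts
    the more precise sensors more heavily."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * d for (d, _), w in zip(estimates, weights)) / sum(weights)
    return fused

# Camera, radar, and LIDAR each estimate the distance to the car ahead:
readings = [(25.4, 4.0),    # camera: least precise
            (24.8, 1.0),    # radar
            (24.9, 0.25)]   # LIDAR: most precise
print(round(fuse_estimates(readings), 2))  # ~24.9, dominated by LIDAR
```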

Software that is doing the sensor fusion can have bugs in it.

These bugs might mislead the system into believing that the outside world is different from reality, and that mistaken view is then fed into the AI that has to decide how to drive the car. If the sensor fusion tells the rest of the AI that there is debris in the roadway up ahead, a belief that is perhaps false due to a mistake in the sensor fusion algorithm, the AI is going to swerve the car to avoid the non-existent debris.

From an outside perspective, all that we would see is the self-driving car making an unnecessary radical swerve and we would be perplexed since there was no apparent reason to do so. We’d think it was drunk driving.

Machine Learning False Learning

The AI of the self-driving car is often learning how to drive via the use of Machine Learning or Deep Learning. The machine learning is usually based on vast amounts of data that are fed into the system.

Machine learning can be so complex that we don’t know for sure what the system “knows,” nor why it knows what it knows. In a sense, it is like a black box. The behavior of the system is what tells us whether the machine learning is doing a good job or not.

Suppose the machine learning found a pattern in traffic data suggesting that whenever a red colored car ahead is going more than 80 miles per hour, that red colored car is likely to make a rapid lane change into the lane next to it. Based on this pattern, the system might be triggered, upon detecting a red colored car that meets those criteria, to take the “safe” action of preemptively making a lane change so as to avoid having the red colored car merge into it.
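Here’s a minimal sketch in Python of what such a learned rule amounts to once distilled into a trigger. The rule, its threshold, and the function name are hypothetical; the point is that a spurious correlation in the training data quietly becomes a driving action.

```python
# Minimal sketch (Python): the kind of trigger a learning system might
# distill from a spurious pattern in its training data. The rule and
# its threshold are hypothetical.

def preemptive_lane_change(lead_car_color, lead_car_speed_mph):
    """Learned rule: red cars above 80 mph 'tend to' cut over, so change
    lanes preemptively. The correlation may be a mere artifact of the
    training data, yet the car will act on it anyway."""
    return lead_car_color == "red" and lead_car_speed_mph > 80

# To an outside observer, this lane change looks unmotivated:
print(preemptive_lane_change("red", 85))   # True: sudden lane change
print(preemptive_lane_change("blue", 85))  # False: stays put
```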

For those of us observing the self-driving car, we’d have no idea as to why it suddenly opted to change lanes. We might think it was drunk driving.

AI Probabilities and Uncertainties

Any true self-driving car must contend with probabilities and uncertainties. The real world of driving is not a one-hundred percent guaranteed situation.

Will that pedestrian step off the sidewalk and into the path of the self-driving car? Assign a probability to it, and the self-driving car will react accordingly. Will that big truck to my right fail to realize I am in its blind spot and try to change lanes into me? Assign a probability to it. There are lots and lots of probabilities and uncertainties involved.

When dealing with probabilities and uncertainties, the AI of the self-driving car is going to take actions based on various thresholds. If it believes that the pedestrian is going to step off the sidewalk, the AI will take evasive action, such as instructing the self-driving car to come to a sudden halt. Suppose the pedestrian does not attempt to dart into the street. All that we would see is the self-driving car inexplicably coming to a halt. Bizarre, we might think. Drunken driving, we might ascribe.
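Here’s a minimal sketch in Python of that kind of threshold-based reaction, assuming a hypothetical probability estimate and an invented 30 percent cutoff:

```python
# Minimal sketch (Python): threshold-based reaction to an uncertain
# pedestrian prediction. The probability source and the 30 percent
# cutoff are hypothetical.

BRAKE_THRESHOLD = 0.30  # act if P(pedestrian steps out) exceeds this

def plan_action(p_steps_out):
    """Brake hard when the predicted probability crosses the threshold,
    even though most such predictions will not come true."""
    return "emergency_stop" if p_steps_out > BRAKE_THRESHOLD else "continue"

# The predictor says 35 percent, so the car halts; if the pedestrian
# then stays put, observers only see an inexplicable sudden stop.
print(plan_action(0.35))  # emergency_stop
print(plan_action(0.10))  # continue
```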

Computer Processors and Memory Issues

The AI that is driving the self-driving car must rely upon lots of computer processors and lots of computer memory to perform all of its calculations. These processors and their memory are hardware components that always have a chance of becoming faulty or failing. Think about your home PC that runs out of memory and needs a reboot. I am not saying that the processors and memory of an automotive system are the same per se, but merely pointing out that they are hardware and will eventually break down.

If the computer processors or memory go bad, it can impair the AI software. If the AI software is impaired, it might render decisions to the automotive controls of the car that aren’t intended. The next thing you know, the self-driving car is making seemingly strange turns and actions that we can’t readily explain. Drunk driver.
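One common defense, sketched minimally in Python below, is a watchdog that demands a regular heartbeat from the driving software and drops into a safe fallback when the heartbeat is missed, say, because of a memory fault. The timeout and the fallback action are hypothetical placeholders:

```python
# Minimal sketch (Python): a watchdog-style self-check, a common defense
# against silently degrading hardware. The timeout and fallback are
# hypothetical placeholders.

HEARTBEAT_TIMEOUT_S = 0.1  # the planner must check in every 100 ms

def watchdog(last_heartbeat_s, now_s):
    """If the driving software misses its heartbeat deadline (perhaps
    due to a memory fault), fall back to a minimal-risk condition such
    as slowing and pulling over, rather than obeying stale commands."""
    if now_s - last_heartbeat_s > HEARTBEAT_TIMEOUT_S:
        return "enter_minimal_risk_condition"
    return "normal_operation"

print(watchdog(last_heartbeat_s=0.00, now_s=0.05))  # normal_operation
print(watchdog(last_heartbeat_s=0.00, now_s=0.25))  # enter_minimal_risk_condition
```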

External Communications

Most self-driving cars rely upon external communications, often referred to as OTA (Over-the-Air) access, to convey aspects of how they are driving to some kind of centralized system. The centralized system collects the data and uses it for galactic-style machine learning that can be shared back to the individual cars and whatever individualized machine learning they are doing.

Imagine that the external communications feed some kind of instruction into your self-driving car, and the self-driving car opts to believe that it should take some kind of evasive action, erroneously, without realizing the error. For example, the centralized system reports that there is a massive pile-up of cars ahead and that the car should get off the freeway right away to avoid it.

If we were watching the self-driving car and saw it dart to a freeway exit, we might not know why and would wonder whether it is exhibiting drunk driving behavior.
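One plausible mitigation, sketched below in Python, is to treat external advisories as suggestions to be vetted rather than commands to be obeyed, checking authentication and, where possible, corroborating the claim against the car’s own sensing. The message format and checks are hypothetical:

```python
# Minimal sketch (Python): vetting an OTA advisory before acting on it.
# The message fields and checks are hypothetical, for illustration only.

def accept_advisory(advisory, locally_corroborated):
    """Treat external instructions as suggestions, not commands: reject
    anything unauthenticated, and weigh whether the car's own sensing or
    recent traffic data corroborates the claimed reason."""
    if not advisory.get("signature_valid", False):
        return False  # never act on unauthenticated instructions
    if not locally_corroborated:
        return False  # distrust claims the car cannot corroborate at all
    return True

advisory = {"reason": "pileup_ahead", "action": "exit_freeway",
            "signature_valid": True}
print(accept_advisory(advisory, locally_corroborated=False))  # False: stay the course
print(accept_advisory(advisory, locally_corroborated=True))   # True: take the exit
```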

Driverless Car Drunk Driving Behaviors and Possible Mitigation

The aforementioned aspects are all realistic ways in which a self-driving car could be considered to be acting like a drunk driver. The types of actions that we might see include these:

  • Swerving across lanes needlessly
  • Straddling a lane without apparent cause
  • Taking wide turns rather than proper tight turns
  • Driving onto the wrong side of the road
  • Driving onto the shoulder of the road
  • Driving in an emergency lane
  • Driving too slowly for the roadway situation
  • Driving too fast for the roadway situation
  • Nearly hitting another car
  • Cutting off another car
  • Nearly hitting a pedestrian, bicyclist, or motorcyclist
  • Being too close to the car ahead of it
  • Stopping when it seems unnecessary
  • Rolling past stop signs
  • Running a red light
  • Other

Any and all of these are actions that a self-driving car might take.

The self-driving car might do these actions by intent, meaning that it thought the action was warranted given the existing driving conditions, or it might do it by mistakenly invoking a routine that should not have been invoked.

What are we to do about self-driving cars that seem to be drunk driving?

First, we need to make self-driving cars as safe as possible so that they won’t do the drunken driving. This involves ensuring that the sensors have sufficient redundancy and are resilient in the real world. It requires testing the AI systems to make sure that buggy behavior will not crop up. It requires layers of safety systems that check and re-check the actions of the AI, the self-driving car, and its machine learning, double-checking to ensure that there is a bona fide reason for the movements made by the system. And so on.
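As one illustration of such a checking layer, here’s a minimal sketch in Python of a gate that blocks any non-trivial maneuver lacking a corroborated justification. The maneuver names and the two-sensor rule are hypothetical, a sketch of the idea rather than a real safety architecture:

```python
# Minimal sketch (Python): a safety gate that blocks any non-trivial
# maneuver lacking a corroborated justification. The maneuver names and
# the two-sensor rule are hypothetical.

RISKY_MANEUVERS = {"lane_change", "swerve", "emergency_stop"}

def vet_maneuver(maneuver, justification):
    """Require every risky maneuver to cite a cause confirmed by at
    least two sensors; unjustified swerves are blocked and logged, one
    way to keep 'drunken' commands from reaching the wheels."""
    if maneuver in RISKY_MANEUVERS:
        return justification.get("corroborating_sensors", 0) >= 2
    return True  # routine actions pass through

print(vet_maneuver("swerve", {"corroborating_sensors": 1}))  # False: blocked
print(vet_maneuver("swerve", {"corroborating_sensors": 3}))  # True: allowed
```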

Second, we have to acknowledge that drunken behavior can occur by self-driving cars. There are way too many self-driving car makers that are using the head-in-the-sand approach and pretending that this can never happen.

As they say, a key step toward recovery is acknowledging one’s failings.

Third, we have to decide whether or not we want to allow some kind of external control over our self-driving cars. There are some who believe that once we have pervasive V2V (vehicle-to-vehicle) communications, we will have self-driving cars that operate as a collective, communicating with and regulating each other.

Presumably, in this realm, if a self-driving car detected that another self-driving car was driving drunkenly, it could warn that self-driving car and/or even override what it is doing. Do we want that to happen? It has both promise and peril. Suppose the other self-driving car is wrong and mistakenly, or “drunkenly,” tells a non-drunk self-driving car to take a bad action?
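One conservative design, sketched minimally in Python below, is to let peer alerts raise scrutiny without granting any outside party direct control, precisely because the accuser might itself be the “drunk” one. The message format and responses are hypothetical:

```python
# Minimal sketch (Python): a conservative V2V policy in which a peer's
# "you're driving drunkenly" alert can raise scrutiny but never take
# direct control. The message format and responses are hypothetical.

def handle_v2v_alert(alert):
    """Treat peer alerts as one more fallible input: run self-diagnostics
    and drive more cautiously, but never hand control to an outside party
    that might itself be the 'drunk' one."""
    if alert.get("type") == "erratic_driving_observed":
        return "run_self_diagnostics_and_increase_caution"
    return "ignore"

print(handle_v2v_alert({"type": "erratic_driving_observed", "from": "car_99"}))
# -> run_self_diagnostics_and_increase_caution
```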

Conclusion

Should a drunken driving self-driving car get a DUI ticket? As preposterous as that seems, we do need to consider what will happen when a self-driving car acts in a wanton fashion.

The point here is that we need to be realistic about the prospect that self-driving cars will have the potential for acting in a manner that we would perceive as drunk driving. Let’s take precautions to anticipate this outcome. Drive safely out there, whether you are a human driver or an AI self-driving car.

To follow Lance Eliot on Twitter: @LanceEliot

Copyright © 2019 Dr. Lance B. Eliot.

Dr. Lance B. Eliot is a renowned global expert on AI, a Stanford Fellow at Stanford University, a former professor at USC, headed an AI lab, and was a top exec at a major VC.
