Self-Driving Cars Coping With Traffic Havoc And Creating Traffic Havoc

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column.]

Let’s talk about havoc. In sports, one of the more unusual and lesser-known metrics for analyzing football teams consists of their havoc rating.

To calculate a havoc rating, you count up the number of plays on which your defense disrupted the opposing team, such as plays on which your defense intercepted the football, forced a fumble, or tackled the opposing side for a loss of yardage. Next, you divide that count of disrupted plays by the total number of plays run by the opposing team.

The resulting fraction is turned into a percentage, allowing you to readily see what percentage of the time the defense was able to mess up the opposing side’s offense. For example, if the opposing team ran 100 plays and your defense undermined the offense on 25 of them, you would have a havoc rating of 25% (25 divided by 100).
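As a quick sketch, the calculation just described is simple division turned into a percentage. Here is what it might look like in Python (the function name and guard against zero plays are illustrative choices, not anything from an official statistics package):

```python
def havoc_rating(disrupted_plays, total_plays):
    """Percentage of the opposing team's plays that the defense disrupted."""
    if total_plays == 0:
        # No plays run yet, so no havoc to measure.
        return 0.0
    return 100.0 * disrupted_plays / total_plays

# 25 disrupted plays out of 100 total plays yields a 25% havoc rating.
print(havoc_rating(25, 100))  # 25.0
```

The same formula applies to any of the havoc measurements discussed in this piece: count the disruptive acts, divide by the total acts, and express the result as a percentage.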

The offense wants to keep the havoc rating as low as possible; the defense is aiming to get as high a havoc rating as they can, showcasing how often they can cause the offense to slip up. If you had a havoc rating of 100%, it would mean that on every play that was run by the opposing team, you managed to confound their efforts. That would be tremendous as a defense. Of course, if you had a havoc rating of zero, it would suggest that your defense is not doing its job and that the opposing side is making plays without being at all disturbed or undermined.

Havoc ratings can be used in other endeavors too. Perhaps we ought to be using a havoc rating when it comes to the emergence of true AI-based autonomous self-driving cars.

Understanding Havoc And Self-Driving Cars

True self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task. These driverless cars are considered Level 4 and Level 5, while a car that requires a human driver to co-share the driving effort is usually considered Level 2 or Level 3. The cars that co-share the driving task are considered semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).

There is not yet a true self-driving car at Level 5. We don’t yet know if this will be possible or how long it will take to get there.

Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some point out).

Since semi-autonomous cars require a human driver, I’m not going to try to apply a havoc rating to the efforts of a Level 2 or Level 3 car. We could do so, but it would make the havoc aspects murky, because one portion would be attributable to the human driver and another to the automation, likely a blur of the two sources.

Instead, let’s focus on the havoc aspects involving true self-driving cars, ones at Level 4 and Level 5. With the AI being the only driver, the havoc aspects can be assigned to the driving system per se.

There are two ways in which havoc can arise:

1) By the actions of human drivers, with which the AI must contend

2) By the actions of the AI driving system, with which other drivers must contend

I think that we would all agree that human drivers often create havoc in traffic. As such, the AI driving system must be able to cope with havoc instigated by nearby human drivers.

You might be somewhat surprised at the second way in which havoc arises, namely by the actions of the AI driving system. Many pundits claim that AI driving systems will be perfect drivers, but as you’ll see in a moment, this is a false and misleading assumption.

Before I jump into the fray, some pundits also assert that we’ll have only self-driving cars on our roadways and therefore there isn’t a need to deal with human drivers. Only someone living in a dream world would believe that we are only going to have self-driving cars and won’t also have human drivers in other nearby cars.

In the United States alone, there are about 250 million conventional cars. All those cars are not going to suddenly be dispatched to the scrap heap upon the introduction of self-driving cars. For the foreseeable future, and likely a lengthy one, there will be both human-driven cars and self-driving cars mixing together on our highways and byways.

It stands to reason.

Human Drivers Create Havoc

Consider the apparent notion that human drivers can create havoc.

You are driving along, minding your own business, when a car that’s to your left opts to dart in front of your car and make a right turn at the corner up ahead.

We’ve all experienced that kind of panicky, curse-invoking driving situation.

The lout that shockingly performs such a dangerous driving act is creating havoc.

They are likely to disrupt your driving, forcing you to heavily use your brakes, maybe even causing you to swerve to avoid hitting their car. A car behind you might then need to also take radical actions, trying to avoid you, while you are trying to avoid the transgressor.

There might be pedestrians standing at the corner that see the madcap car heading toward them, forcing them to leap away and cower on the sidewalk.

Imagine then a human driver who, throughout a driving journey, undertakes some number of havoc-producing driving actions. Divide the number of havoc acts by the total number of driving actions, and you have a percentage that reveals their havoc rating.

The higher a driver’s havoc rating, the worse a driver they are. For a driver with a low havoc rating, it tends to suggest that they are not creating untoward driving circumstances while on the public roadways.

Are you already thinking about a friend or colleague that you are sure must have a sky-high havoc rating?

I’m sure you know such driving Neanderthals.

Currently, few of the self-driving cars that are being tried out on our roadways are particularly versed in dealing with high havoc-rated human drivers.

Most of the self-driving cars generally assume that the surrounding traffic will be relatively calm and mundane. You can think of those self-driving cars as acting a bit like a timid teenage driver that is just starting to drive a car. Those novice drivers hope and pray that no other driver will do something outlandish.

If other drivers do crazy things, the teenage driver will resort to the simplest possible response, which might be appropriate or might make the situation even worse. In the case of getting cut off by the driver to their left that is darting toward a right turn, the novice driver might jam on the brakes and come to a sudden halt. Doing so might not have been the best choice, and it could end up with the car behind them rear-ending their car.

True self-driving cars need to step up their game and be able to contend with high-havoc human drivers.

This capability can either be hand-programmed into the AI driving system or can be “learned” over time via the use of Machine Learning (ML) and Deep Learning (DL). I don’t want to suggest, though, that ML and DL are equivalent to human learning, which they most decidedly are not. There is no kind of common-sense reasoning involved in today’s ML and DL, nor do I expect to see such a capability anytime soon.

Self-Driving Cars Create Havoc

Now that we’ve covered the obvious use case of human drivers that create havoc, let’s explore the lesser realized aspect that self-driving cars can also generate havoc.

Suppose that a true self-driving car is coming down the street. The self-driving car is moving along at the posted speed limit and obeying all the local traffic laws.

A pedestrian on the sidewalk is looking at their smartphone and not paying attention to the traffic, and not noticing the sidewalk activity since their nose is pointed at their phone.

Oops, the distracted pedestrian nearly walks right into a fire hydrant. At the last moment, the pedestrian side steps around the fire hydrant, moving suddenly onto the curb.

The AI driving system of the self-driving car is using its cameras, radar, LIDAR, ultrasonic sensors, and other detection devices to monitor the traffic and nearby pedestrians.

Upon detecting the pedestrian that seems to be bent on entering the street, and not realizing that the pedestrian was merely avoiding conking into a fire hydrant, the AI calculates that the pedestrian might get into harm’s way and end up in front of the car.

Wanting to be as safe as possible, the AI instructs the car to come to an immediate halt.

Well, it turns out that the self-driving car’s sudden stop then leads the human-driven car behind it to rear-end the driverless car.

The point is that the actions of the AI driving system can be well-intended (though don’t ascribe human intention to the AI system, please, at least until someday the “singularity” happens), and yet the efforts produce havoc.

Similar in some respects to the earlier description of a novice teenage driver, the AI system is going to be performing driving acts that have the adverse consequence of generating havoc.

Thus, perhaps we ought to be measuring the havoc ratings of self-driving cars.


A driverless car that has a high havoc rating should either be prevented from driving around or at least be shunted into specific driving areas where its havoc-producing actions won’t have serious consequences (such as when moving at very low speeds or driving in lanes devoted exclusively to self-driving cars).

I realize that some of you might be exclaiming that a havoc-producing self-driving car can readily be updated with better software via an OTA (Over-The-Air) electronic communication, downloading an improved AI driving system.

Yes, that’s true, but you are also mistakenly assuming that somehow those changes are going to be immediately ready and usable.

Not so.

Gradually, over time, presumably, the AI driving systems will be improved.

Meanwhile, we are going to be somewhat at the mercy of whatever havoc producing AI driving systems are on our roadways.


Allow me to quote from Shakespeare (Act 3, Scene 1): “Cry ‘Havoc!’ and let slip the dogs of war.”

This famous line from the play Julius Caesar is spoken by Mark Antony and indicates that he wanted to go after the assassins that murdered Caesar.

The dogs of war were most likely actual dogs trained for warfare, and he was saying that the killer dogs should be let loose to attack the assassins, though the expression might also mean letting loose the military forces overall.

For the self-driving cars currently being let loose on our roadways, once they no longer have back-up human drivers attending to the AI system’s driving, will we potentially be incurring havoc, and will those AI systems also be able to contend with the havoc created by human drivers?

Nobody knows, and especially nobody knows if we aren’t measuring the havoc producing and havoc handling capabilities of self-driving cars.

Automakers and tech firms might be well-intended in their spirited efforts to get self-driving cars onto our roads, but let’s not also allow ourselves to fall into the trap of unleashing havoc.

I think Shakespeare, if he were alive today, would likely have something to say about that.


Copyright © 2020 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
