How AI System Consistency Is Vital To Self-Driving Car Efficacy

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]



One of the hot topics in the distributed computer systems realm involves notions of consistency.

Simply stated, if you have multiple machines that are each intended to store some particular data, what are the various ways you can keep that data “consistent” across those machines, meaning that the same data appears at every machine whenever you opt to take a look?

This is a lot harder to arrange than it might seem at first glance.
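To make the difficulty concrete, here is a minimal sketch (the class and names are invented for illustration) of a record replicated across several machines. A write applied at one replica is, by itself, invisible at the others, which is exactly the gap a consistency scheme has to close:

```python
# Hypothetical sketch: one data item replicated across several machines.
# Each replica holds its own copy; an update applied at one replica does
# not appear at the others until some propagation mechanism delivers it.

class Replica:
    def __init__(self, name):
        self.name = name
        self.data = {}

    def read(self, key):
        return self.data.get(key)

replicas = [Replica(f"machine-{i}") for i in range(3)]

# Write the balance only at machine-0 for now.
replicas[0].data["balance"] = 100

# Until propagation happens, the machines disagree about the value.
print([r.read("balance") for r in replicas])  # [100, None, None]
```

The whole subject of consistency is about what mechanism fills in those `None` values, and how quickly.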

Let’s use an everyday example to illustrate this theme about consistency.

Example Of Consistency Difficulties

In a distributed system, it’s important to consider the latency aspects, meaning how long will it take for the distributed members to communicate with each other.

If I told you that with your chosen bank you’d need to wait at least 24 hours before your updates at one branch ATM propagated to all the other ATMs of that bank, you might be distressed about the rather excessive delay. You might be so upset that you’ll switch to another bank that can get things done much faster.

Another way to phrase things is to say that we want to ensure that the data is consistent across the distributed members.

Immediate Or Strong Consistency

If possible, you’d likely want immediate consistency, which sometimes is referred to as strong consistency.

Suppose we somehow interconnected all the bank’s ATMs with super-duper fast fiber cable, and within a split second your $60 deposit was communicated to all the other ATMs.

From your perspective, it would seem as though it was instantaneous and utterly consistent.

I think we can all agree, though, that the system would still have been momentarily inconsistent, perhaps just for a fraction of a second while the updates were occurring. For the practical purposes of your going to another ATM to check your balance, however, it sure would seem as if there was no gap in time.
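A sketch of the strong-consistency flavor (class and names are illustrative assumptions, not a real banking API): a deposit is not acknowledged until every copy has applied it, so any subsequent read anywhere sees the new balance.

```python
# Hedged sketch of strong (immediate) consistency: the deposit call does
# not return until every replica (every "ATM") has applied the update,
# so a read at any ATM afterward sees the same balance.

class StronglyConsistentBank:
    def __init__(self, num_atms):
        # One balance copy per ATM.
        self.copies = [0] * num_atms

    def deposit(self, amount):
        # Synchronously update every copy before returning.
        self.copies = [c + amount for c in self.copies]

    def read(self, atm_index):
        return self.copies[atm_index]

bank = StronglyConsistentBank(num_atms=4)
bank.deposit(60)

# Every ATM agrees the moment the deposit call completes.
print([bank.read(i) for i in range(4)])  # [60, 60, 60, 60]
```

The cost of this design is that the writer must wait on the slowest replica, which is why real systems often relax the guarantee.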

The principle of “eventual consistency” can now be considered in our story about distributed systems.

Assume that we cannot achieve pure instantaneous consistency, and there’s going to be some amount of delay involved in ensuring that all the distributed members are updated. I might scare you by saying that our distributed system could be designed such that it will never fully achieve consistency, meaning that some of those ATMs aren’t ever going to get updated about your $100 balance.

That’s ugly, I realize, but it could be a possibility.

Eventual Consistency

Eventual consistency is the notion that, even if it takes a long delay, consistency is ultimately reached.

Here’s a typical semi-formal definition: eventual consistency is a distributed computing model that informally guarantees that, for any given data item, accesses to that item will eventually return the latest updated value.

There are myriad ways to implement this notion of “eventual consistency,” and designs differ on whether the distributed system “guarantees” that consistency will ultimately be achieved.
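One common implementation style is to apply a write locally and queue it for the other replicas. This is a minimal sketch under that assumption (the class and the explicit `propagate_one` step standing in for network delivery are illustrative):

```python
from collections import deque

# Illustrative sketch of eventual consistency: a write is applied locally
# right away and queued for the other replicas; until the queue drains,
# reads elsewhere may return a stale (older) value.

class EventuallyConsistentStore:
    def __init__(self, num_replicas):
        self.replicas = [{} for _ in range(num_replicas)]
        self.pending = deque()

    def write(self, replica_id, key, value):
        self.replicas[replica_id][key] = value
        for other in range(len(self.replicas)):
            if other != replica_id:
                self.pending.append((other, key, value))

    def read(self, replica_id, key):
        return self.replicas[replica_id].get(key)

    def propagate_one(self):
        # Deliver a single queued update (models network delay).
        if self.pending:
            other, key, value = self.pending.popleft()
            self.replicas[other][key] = value

store = EventuallyConsistentStore(num_replicas=3)
store.write(0, "balance", 100)

print(store.read(1, "balance"))  # None -- a stale read before propagation

while store.pending:             # eventually, every replica converges
    store.propagate_one()

print([store.read(i, "balance") for i in range(3)])  # [100, 100, 100]
```

If the pending queue is never fully drained, you get the scary scenario described earlier: a system that never fully achieves consistency.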

You can also characterize a distributed system as having strong consistency versus weak consistency. Which design is appropriate depends on several trade-offs:

How scalable does it need to be?

What kind of availability is expected?

How complex can it be?


Autonomous Cars And Consistency

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI for self-driving cars, which includes designing and crafting the on-board distributed components of the self-driving car.

An AI self-driving car has tons of computer processors and tons of software components, encompassing aspects that entail the running of the car and the running of the AI, along with the numerous sensors and other devices. It’s a distributed system.

Accordingly, it is important to be concerned about the “consistency” of the data that’s within that distributed system.

The AI self-driving car makes use of various sensors, such as cameras, radar, LIDAR, and sonar, to collect data about the world surrounding the self-driving car. Sensor fusion brings together that data to craft a virtual model of the real world. The AI action planner component then needs to figure out what the next moves of the self-driving car should be.
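Because each sensor reports at its own rate, the fusion step has to decide how fresh a reading must be to count. A minimal sketch of that idea (the function, field names, and the 0.2-second threshold are all assumptions for illustration, not any particular self-driving stack):

```python
# Hedged sketch: sensor fusion should discard readings that are too old,
# since each sensor reports at its own rate. Names and the threshold
# below are hypothetical, chosen purely for illustration.

STALENESS_LIMIT = 0.2  # seconds; readings older than this are ignored

def fuse(readings, now):
    """Keep only fresh readings and merge them into one world-model entry."""
    fresh = [r for r in readings if now - r["timestamp"] <= STALENESS_LIMIT]
    if not fresh:
        return None  # no usable data for this object
    # Simple merge: average the estimated distance from each fresh sensor.
    distance = sum(r["distance_m"] for r in fresh) / len(fresh)
    return {"distance_m": distance, "sources": [r["sensor"] for r in fresh]}

now = 10.0
readings = [
    {"sensor": "camera", "timestamp": 9.95, "distance_m": 30.0},
    {"sensor": "radar",  "timestamp": 9.90, "distance_m": 32.0},
    {"sensor": "lidar",  "timestamp": 9.50, "distance_m": 45.0},  # stale
]
print(fuse(readings, now))  # lidar is excluded; distance averages 31.0
```

Even this toy version shows the consistency tension: the fused world model is only as current as the freshest readings it is willing to accept.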

Confusion Due To Inconsistency

A novice driver tends to be confused by inconsistency.

Suppose the car directly ahead of them is braking, but the car to their right is not. Shouldn’t both of those cars be braking?

And, if they are both braking, the novice figures maybe they should hit the brakes too. But, if only one of them is braking, maybe they shouldn’t be braking.

Or, maybe they should.

The narrowness of the novice’s viewpoint of the traffic and roadway makes it difficult to cope with what seems to be inconsistent behavior (or, if we consider the behavior as something perceived by your senses, we might then say that the data seems to be inconsistent).

Stale data becomes relevant here too.

I was sitting in the car of a novice teenage driver who looked over his shoulder to see if it would be safe to make a lane change. The teenager didn’t see any car in the next lane and so mentally decided it would be OK to make the lane change. Upon his gaze coming back forward, he became attentive to the car ahead that was tapping its brakes.

The teenager then decided that he should quickly make the lane change, avoiding possibly riding up upon the now braking car ahead. Unfortunately, in the few seconds of his looking forward, a car from a third lane had come into the lane that he wanted to get into, and now was sitting right where he would make his lane change.

The data he had in his mind was stale.

It no longer reflected the reality of the situation around him. Without realizing that he needed to refresh the data, he would surely have made the lane change and likely cut off the other car. Worse, his car and the other car could have hit each other. I spoke up just as he started to make motions to switch lanes, and gently dissuaded him (it was a gentle caution, since I didn’t want to cause a panic and have him make some dire move!).
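The lesson of the teenager’s stale glance can be reduced to a freshness check before acting. This is an illustrative sketch only; the function name and the 0.5-second threshold are invented, and a real system would re-observe rather than simply refuse:

```python
# Illustrative sketch: treat an observation of the target lane as valid
# only if it is recent enough; otherwise decline to act until the data
# is refreshed. All names and thresholds are assumptions.

MAX_AGE = 0.5  # seconds a lane observation stays trustworthy

def safe_to_change_lanes(lane_clear, observed_at, now):
    age = now - observed_at
    if age > MAX_AGE:
        return False  # stale data: look again before committing
    return lane_clear

# An observation made 3 seconds ago, like the over-the-shoulder glance.
print(safe_to_change_lanes(lane_clear=True, observed_at=7.0, now=10.0))  # False

# A fresh observation of a clear lane can be acted on.
print(safe_to_change_lanes(lane_clear=True, observed_at=9.8, now=10.0))  # True
```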

Let’s use an even larger scope example of how the consistency or inconsistency of data can emerge over time.

Emergent Consistency Aspects Over Time

You are on the freeway, driving along at full speed.

At first, traffic seems wide open.

You then notice that there is intermittent braking taking place in the traffic up ahead. It’s sporadic. Next, the braking becomes more persistent and widespread. Traffic begins to slow down. The slowing progresses to becoming slower and slower. The traffic then becomes bumper to bumper. It’s now stop and go traffic. Overall, traffic is now moving at a crawl.

I’m sure you’ve experienced this kind of traffic before.

Pretty typical, especially for a morning or evening commute.

What do you make of this traffic situation?

If you are a novice driver, perhaps you are not thinking beyond the fact that the traffic is moving at a crawl.

A more seasoned driver is likely to begin speculating about what is causing the slowing of traffic.

Are the roadway and the number of lanes insufficient for the volume of traffic?

Is there a bend in the road ahead that has caused drivers to slow down out of caution because they cannot see what’s beyond it?

Is there perhaps debris on the freeway and cars are slowing to avoid hitting the debris?

Suppose I told you that you could now just barely see some flashing lights up ahead. What would you now guess is happening?

You’d likely be thinking that flashing lights might mean a police car, or a fire truck, or an ambulance. Any of those on the freeway and with their flashing lights on probably suggests an accident up ahead. You can’t say for sure that’s what is occurring, but it’s a reasonable guess.

Next, I tell you that you can now see some flares and red cones on the freeway up ahead.

You are now probably betting that indeed there must have been a car accident. You also are guessing that it must have happened some time ago, in that if it had just happened there wouldn’t yet be cones and flares. The police or other workers that showed up must have put down the flares and cones. All of that would have taken time.

You then see that a fire truck is parked on the freeway, straddling several lanes. At this juncture, without even being able to see beyond the firetruck, you are pretty sure there’s a car accident scene. It makes sense, given the clues so far.

Let’s now revisit what has taken place in this example.

The initial data about the traffic was that it was flowing unimpeded.

Then, the data was that drivers were starting to use their brakes. Some cars were still going fast, some were slowing down.

In a sense, you are getting data that seems “inconsistent” and you are seeking to make it become “consistent” so that you can put together a cohesive indication of what is taking place.

Part of the overarching job of the AI system in an AI self-driving car is dealing with this kind of eventual consistency.

There is a sprinkling of data that at first suggests an inconsistency. From this, a gradual consistency emerges as further data is gathered and time progresses. At any moment in time, the AI system can be in a posture of not being sure of what is going to happen next, but it can be constructing a prediction based on what has occurred so far.

The eventual consistency might gradually be achieved, such as in this scenario that led to the realization that a car accident was up ahead. Or, the eventual consistency might not be resolved. I’m sure you’ve had times that the traffic slowed to a crawl and you thought for sure there must be an accident up ahead, and then once you got further ahead there seemed to be no rhyme or reason why the traffic had slowed.
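The freeway story can be sketched as an accumulating-evidence score: each new clue nudges an “accident ahead” hypothesis toward consistency with the observations. The clue names and weights below are purely invented for illustration; a real planner would use a far richer probabilistic model:

```python
# Hedged sketch: a hypothesis score that grows as clues accumulate,
# mirroring how scattered observations gradually "become consistent."
# Clue names and weights are invented for illustration only.

CLUE_WEIGHTS = {
    "widespread_braking": 0.2,
    "flashing_lights": 0.3,
    "flares_and_cones": 0.3,
    "fire_truck_blocking_lanes": 0.2,
}

def accident_confidence(clues_seen):
    # Unknown clues contribute nothing to the hypothesis.
    return sum(CLUE_WEIGHTS.get(c, 0.0) for c in clues_seen)

clues = []
for clue in ["widespread_braking", "flashing_lights",
             "flares_and_cones", "fire_truck_blocking_lanes"]:
    clues.append(clue)
    print(clue, round(accident_confidence(clues), 2))

# Confidence rises 0.2 -> 0.5 -> 0.8 -> 1.0 as the picture firms up.
```

If the clues stop arriving, the score simply stays low, matching the times the traffic slowed for no discernible reason and the inconsistency was never resolved.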


Some AI developers have a mindset of assuming that the AI of a self-driving car will exist in a perfect world, having all needed information, the right information, and fresh information, whenever needed.

Even a novice teenage driver knows that to not be the case. Driving involves dealing with imperfect information.

Decisions must be made based on sketchy data. Patterns that might eventually arrive at a state of consistency might never do so. These are important aspects that any true AI self-driving car is going to need to cope with.

Eventually, for sure.

Sooner, rather than later.


Copyright © 2019 Dr. Lance B. Eliot


Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
