When AI Self-Imposed Constraints Aren’t Good For Self-Driving Cars

Dr. Lance Eliot, AI Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]


They are everywhere.

Seems like whichever direction you want to move or proceed, there is some constraint either blocking your way or at least impeding your progress.

Per Jean-Jacques Rousseau’s famous 1762 book entitled “The Social Contract,” he proclaimed that mankind is born free and yet everywhere mankind is in chains.

Though it might seem gloomy to have constraints, I’d dare say that we probably all welcome the societal constraint that inhibits arbitrarily deciding to murder someone.

There are thus some constraints that we like and some that we don’t like.

In the case of our laws, we as a society have gotten together and formed a set of constraints that governs our societal behaviors.

In computer science and AI, we deal with constraints in a multitude of ways.

When you are mathematically calculating something, there are constraints that you might apply to the formulas that you are using.

Optimization is a popular arena for constraints.

You might desire to figure something out and want to do so in an optimal way.

You decide to impose a constraint stipulating that if you are able to figure out something, only the most optimum version will do.
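To make the optimization notion concrete, here’s a minimal sketch in Python (my own illustration, not anything from a self-driving car codebase): we pick the best choice from a set of candidates, subject to a hard constraint that filters out infeasible ones. The speed values and the comfort model are entirely hypothetical.

```python
def best_speed(candidates, max_speed_mph=65):
    """Pick the candidate speed with the highest comfort score,
    subject to the hard constraint: speed <= max_speed_mph."""
    def comfort(speed):
        # Hypothetical comfort model: best at 45 mph, worse farther away.
        return -(speed - 45) ** 2

    # The hard constraint: infeasible candidates are simply excluded.
    feasible = [s for s in candidates if s <= max_speed_mph]
    # The optimization: among what remains, only the best one will do.
    return max(feasible, key=comfort)

print(best_speed([30, 45, 60, 80]))  # 80 violates the constraint; 45 wins
```

Note that the constraint and the objective are separate ingredients: the constraint decides what is allowed at all, and the optimization decides which allowed answer is best.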

Hard Versus Soft Constraints

There are so-called “hard” constraints and “soft” constraints.

Some people confuse the word “hard” with the idea that if the problem itself becomes hard, then the constraint that caused it must be a “hard” constraint.

That’s not what is meant though by the proper definition of “hard” and “soft” constraints.

A “hard” constraint is considered a constraint that is inflexible. It is imperative. You cannot try to shake it off. You cannot try to bend it to become softer.

A “soft” constraint is one that is considered flexible and you can bend it. It is not considered mandatory.

This brings us to the topic of self-imposed constraints, and particularly ones that might be undue.

Self-Imposed And Undue Constraints

Problems that are of interest to computer scientists and AI specialists are often labeled as Constraint Satisfaction Problems (CSPs).

These are problems for which there are some number of constraints that need to be abided by, or satisfied, as part of the solution that you are seeking.

Some refer to a CSP that contains “soft” constraints as one that is considered Flexible.

A classic version of CSP usually states that all of the given constraints are considered hard or inflexible.

If you are faced with a problem that does allow for some of the constraints to be flexible, it is referred to as a FCSP (Flexible CSP), meaning there is some flexibility allowed in one or more of the constraints. It does not necessarily mean that all of the constraints are flexible or soft, just that some of them are.
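A common way to handle an FCSP is to keep the hard constraints as strict filters while giving each soft constraint a penalty weight, then choosing the feasible answer with the lowest total penalty. Here is a minimal sketch of that idea (the solver, the lane-choice example, and the weights are all my own illustration, not a standard library API):

```python
def solve(candidates, hard, soft):
    """Return the candidate that satisfies all hard constraints and
    incurs the lowest total penalty from violated soft constraints."""
    # Hard constraints are inflexible: violators are excluded outright.
    feasible = [c for c in candidates if all(h(c) for h in hard)]
    if not feasible:
        return None  # no bending allowed on hard constraints
    # Soft constraints are flexible: violating one just costs its weight.
    return min(feasible, key=lambda c: sum(w for (s, w) in soft if not s(c)))

# Toy example: choose a lane (0 = left, 1 = middle, 2 = right).
hard = [lambda lane: lane in (0, 1, 2)]           # must be an actual lane
soft = [(lambda lane: lane == 2, 5),              # prefer the right lane
        (lambda lane: lane != 0, 2)]              # mildly avoid the left lane

print(solve([0, 1, 2], hard, soft))  # prints 2: zero penalty incurred
```

The key distinction shows up in the code: a hard constraint can make the problem unsolvable (the function returns None), whereas a soft constraint merely makes an answer less attractive.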

Autonomous Cars And Self-Imposed Undue Constraints

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars.

One aspect that deserves apt attention is the self-imposed undue constraints that some AI developers are putting into their AI systems for self-driving cars.

As an example of how driving a car can consist of our own mental “constraints” there’s a story I like to tell.

One day, I was driving down a street that was flooded due to a downpour of rain. Unfortunately, I drove into the flooded street thinking that I could just plow my way through the water, but I began to realize that the water was much deeper than I had assumed.

I had cars behind me that were pinning me in and so I couldn’t readily try to back out of the situation.

If I went any further forward, my car was going to get so deep into the water that it might pour into the cabin and likely also stop the engine.

What to do?

There was a raised median in the middle of the road that had grass and was normally off-limits to cars.

I would have never thought to drive onto the median, but I saw another car do so. This allowed the car to stay high enough in the water to make its way down the street.

After a moment’s hesitation, I decided that driving on the median made sense in this situation, and I did likewise.

As a law-abiding driver, I would never have considered driving up on the median of a road.

It was a constraint that was part of my driving mindset.

Was it a “hard” constraint or a “soft” constraint?

In my mind, it was originally a “hard” constraint, but I now realize that I should have classified it as a “soft” constraint.

AI Dealing With Constraints

Let’s now recast this constraint in light of AI self-driving cars.

Should an AI self-driving car ever be allowed to drive up onto the median and drive on the median?

I’ve inspected and reviewed some of the AI software being used in open source for self-driving cars and it contains constraints that prohibit such a driving act from ever occurring. It is verboten by the software.

I would say it is a self-imposed undue constraint.

Sure, we don’t want AI self-driving cars willy-nilly driving on medians.

That would be dangerous and potentially horrific.

Does this mean that the constraint though must be “hard” and inflexible?

Does it mean that there might not ever be a circumstance in which an AI system would “rightfully” opt to drive on the median?

I’m sure that in addition to my escape of flooding, we could come up with other bona fide reasons that a car might want or need to drive on a median.

I assert that there are lots of these kinds of currently hidden constraints in many of the AI self-driving cars that are being experimented with in trials today on our public roadways.

The question will be whether ultimately these self-imposed undue or “hard” constraints will limit the advent of true AI self-driving cars.

Machine Learning And Deep Learning Aspects

For AI self-driving cars, it is anticipated that via Machine Learning (ML) and Deep Learning (DL) they will gradually improve their driving skills over time.

You might say that I learned that driving on the median was a possibility and viable in an emergency situation such as a flooded street.

Would the AI of an AI self-driving car be able to learn the same kind of aspect?

The “hard” constraints inside much of the AI systems for self-driving cars are embodied in a manner that typically does not allow them to be revised.

The ML and DL take place for other aspects of the self-driving car, such as “learning” about new roads or new paths to take when driving. Doing ML or DL on the AI action-planning portions is still relatively untouched territory. It would pretty much require a human AI developer to go into the AI system and soften the constraint against driving on a median, rather than the AI itself doing some kind of introspective analysis and changing itself accordingly.

There’s another aspect regarding much of today’s state-of-the-art on ML and DL that would make it difficult to have done what I did in terms of driving up onto the median. For most ML and DL, you need to have available lots and lots of examples for the ML or DL to pattern match onto. After examining thousands or maybe millions of instances of pictures of road signs, the ML or DL can somewhat differentiate stop signs versus say yield signs.


The nature of constraints is that we could not live without them, nor at times can we live with them, or at least that’s what many profess to say. For AI systems, it is important to be aware of the kinds of constraints that are hidden or hard-coded into them, along with understanding which of the constraints are hard and inflexible, and which ones are soft and flexible.

To achieve a true AI self-driving car, I claim that the constraints must nearly all be “soft” and that the AI needs to discern when to appropriately bend them. This does not mean that the AI can do so arbitrarily. This also takes us into the realm of the ethics of AI self-driving cars. Who is to decide when the AI can and cannot flex those soft constraints?

Let’s at least make sure that we are aware of the internal self-imposed constraints embedded in AI systems and whether the AI might be blind to taking appropriate action while driving on our roads.

That’s the kind of undue that we need to undo before it is too late.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot
