Free-Will, Real Or Not, Humans and AI Might Need It, Driverless Cars Too

Dr. Lance Eliot, AI Insider

Is there such a thing as free-will, and what about free-won’t?

Perhaps one of the oldest questions asked by humans is whether or not there is free-will.

We generally associate free-will with the notion that you are able to act on your own, making your own decisions, and that there isn’t any particular constraint on which way you might go.

Things get muddy quite quickly when we begin to dig deeper into the matter.

As I dig into this, please be aware that some people get upset about explanations of the existence or absence of free-will, typically because they have already come to a conclusion about it, and so any discussion of the matter can get pretty heated. I’m not intending to get the world riled up herein.

If I were to suggest that the world is being controlled by an unseen and undetected third party, and that we are all participants in a play being staged by this third party, it becomes hard to prove that you have free-will, and just as hard to prove that you don’t.

You can make this into a combo deal by suggesting that the play is only an outline, and you still have some amount of free-will, as exercised within the confines of the play. The problem though with this viewpoint is that someone else might contend that there is either free-will or there is not free-will, and if you are ultimately still under the auspices of the play, you don’t have true free-will.

Another viewpoint is that maybe everything that seems to be happening is already predetermined, as though it was a script and we are simply carrying out the script.

Another viewpoint on the free-will underpinnings relates to cause-and-effect.

Perhaps everything that we do is like a link in a very long chain, each link connecting to the next. Any decision you make at this moment is actually bound by the decision that was made moments earlier, which is bound to the one before that, and so on, tracing all that you do back to some origin point.

In the philosophy field, the concept known as determinism suggests that we are bound by this cause-and-effect aspect. You can find some wiggle room to suggest that you might still have free-will under determinism, and so there’s a variant known as hard determinism that closes off that loophole, claiming that given this chain of cause-and-effect there is no such thing as free-will.

Some are worried that if you deny that free-will exists, it implies that whatever we do is canned anyway, and so it apparently makes no difference to try to think things through; you could presumably act seemingly arbitrarily.

This kind of thinking tends to drive people toward a type of fatalism.

One additional twist is the camp that believes in free-won’t.

Maybe you do have some amount of free-will, as per my earlier suggestion that there could be a kind of loosey-goosey version, but the manner in which it is exercised involves a veto-like capability.

Here’s how that might work. Your non-free-will aims to get you to wave your arm in the air, which accordingly you would undertake to do, since we’re saying for the moment that you don’t have the free-will to choose otherwise.

The free-won’t viewpoint is that you do have a kind of choice, a veto choice. You could choose to not do the thing that the non-free-will dictated, and therefore you might choose to not wave your arm.

The binary types will quickly say that if all you have is free-won’t, you obviously don’t have true free-will; you have this measly veto power, a far cry from an unburdened free-will.

Can Free-Will Be Detected Via Neuroscience

One means to try to break this logjam might be to find one “provable” instance of the existence of free-will, since then you could at least argue that free-will exists, though maybe not all the time, nor everywhere, nor with everyone.

Likewise, some say that if you could find one “provable” instance that there is the existence of non-free-will, you could argue the case the other way.

One key study in neuroscience that sparked quite a lot of follow-on effort was undertaken by Benjamin Libet, Curtis Gleason, Elwood Wright, and Dennis Pearl in 1983.

In the study, researchers attempted to detect cerebral activity and per their experiment claimed that there was brain effort that preceded conscious awareness of performing a physical motor-skilled act by the human subjects, as stated by the researchers:

“The recordable cerebral activity (readiness-potential, RP) that precedes a freely voluntary, fully endogenous motor act was directly compared with the reportable time (W) for appearance of the subjective experience of ‘wanting’ or intending to act. The onset of cerebral activity clearly preceded by at least several hundred milliseconds the reported time of conscious intention to act.”

Essentially, if you were told to lift your arm, presumably the conscious areas of the brain would activate and send signals to your arm to make it move, which all seems rather straightforward. This particular research study suggested that there was more to this than meets the eye. Apparently, there is something else that happens first, hidden elsewhere within your brain, and then you begin to perform the conscious activation steps.

You might be intrigued by the conclusion reached by the researchers:

“It is concluded that cerebral initiation of a spontaneous, freely voluntary act can begin unconsciously, that is, before there is any (at least recallable) subjective awareness that a ‘decision’ to act has already been initiated cerebrally. This introduces certain constraints on the potentiality for conscious initiation and control of voluntary acts.”

Bottom-line, this study was used by many to suggest that we don’t have free-will. It is claimed that this study shows a scientific basis for the non-free-will position.

Not everyone sees this study in the same light. For some, it is a humongous leap of logic to go from the presumed detection of brain activity occurring prior to other brain activity that one assumes is “conscious” activity, to the conclusion that the forerunner activity had anything at all to do with either non-free-will or free-will.

Many would contend that there is such a lack of understanding about the operations of the brain that making any kind of conclusion about what is happening would be treading on thin ice.

There have been numerous other related neuroscience studies, typically trying to further expound on this W and either confirm or disconfirm via related kinds of experiments. You can likely find as many opponents as proponents about whether these neuroscience studies show anything substantive about free-will.

For those of you who are intrigued by this kind of neuroscience pursuit, you might keep your eye on the work taking place at the Institute for Interdisciplinary Brain and Behavioral Sciences at Chapman University. Dr. Uri Maoz is the project leader for a multi-million dollar non-federal research grant, announced in March 2019, on the topic of conscious control of our decisions and actions as humans, along with Dr. Amir Raz, professor of brain sciences and director. Participants in the effort include Charité Berlin (Germany), Dartmouth, Duke, Florida State University, Harvard, Indiana University Bloomington, NIH, Monash University (Australia), NYU, Sigtuna (Sweden), Tel Aviv University (Israel), University College London (UK), University of Edinburgh (UK), and researchers at UCLA and Yale.

Stepwise Actions and Processes

Consider a human who is supposed to move their arm. The end result of the effort involves the arm movement, and presumably there is some kind of conscious brain activity that gets the arm to move.

We have this:

Conscious effort -> Movement of arm

According to some of the related neuroscience research, those two steps are actually preceded by an additional step, and so I need to include the otherwise hidden or unrealized step into the model we are expanding upon herein.

As such:

Unconscious effort -> Conscious effort -> Movement of arm

Let’s add labels to these, based on what some believe each step represents:

Unconscious effort (non-free-will) -> Conscious effort (free-will that’s free-won’t) -> Movement of arm
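To make the chain a bit more concrete, here is a minimal sketch of my own (not anything from the neuroscience literature) that models the three steps as functions, with free-won’t acting as a veto over the proposed action; all of the names and the sample action are hypothetical:

```python
# Illustrative model: unconscious effort proposes, conscious effort
# can only veto (free-won't), and movement carries out the result.

def unconscious_effort():
    # The hidden precursor step: proposes an action before any awareness.
    return "wave_arm"

def conscious_effort(proposed_action, veto=False):
    # Free-won't: this step cannot originate an action of its own,
    # but it can veto the one the unconscious step proposed.
    return None if veto else proposed_action

def move(action):
    # The motor step simply carries out whatever survived the veto.
    return f"performed: {action}" if action else "no movement"

proposal = unconscious_effort()
print(move(conscious_effort(proposal)))             # performed: wave_arm
print(move(conscious_effort(proposal, veto=True)))  # no movement
```

Note that in this toy model the conscious step never chooses among alternatives; its only power is the veto, which is exactly the binary-types’ complaint.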

Here’s a bit of a question for you: does the conscious effort realize that there is an unconscious effort (namely the unconscious effort that precedes the conscious effort), or is the conscious effort blissfully unaware of the unconscious effort (which presumably launched the conscious effort)?

Maybe the conscious effort is blind to the unconscious effort, and perhaps is acting as though it is under free-will, yet it is actually not.

Or, one counter viewpoint is that maybe the conscious and unconscious work together, knowingly, and are really one overall brain mechanism and it is a fallacy on our part to try and interpret them as separate and disjointed.

How can we make this more concrete?

Notice that I’ve referred to the unconscious effort and the conscious effort as essentially each being a process. If we shift this discussion into a computer-based model of things, we might say that we have two processes running on a computer; one might precede the other, or not, and they might interact with each other, or not.

These are processes happening in real-time.

It could be that either of the two processes knows about the other. Or, it could be that the two processes do not know about each other.

For anyone who designs and develops complex real-time computer-based systems, you have likely dealt with these kinds of circumstances. You have one or more processes, operating in real-time, some of which will have an impact on other processes, at times running before some other process, at other times running after it, and all of which might or might not be directly coordinated.

Consider a modern-day car that has a multitude of sensors and is trying to figure out the roadway and how to undertake the driving task.

You could have a process that involves collecting data and interpreting the data from cameras that are on the car. You might have another process that does data collection and interpretation of radar sensors. The process that deals with the cameras and the process that deals with the radar could be separate and distinct, neither one communicates with the other, neither one happens before or necessarily after the other. They operate in parallel.
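A minimal sketch of that arrangement, using ordinary threads as stand-ins for the real-time sensor processes (the function names and findings are illustrative, not an actual self-driving stack):

```python
# Two independent "sensor" processes that run in parallel and never
# communicate with each other; each reports only to a shared consumer.
import threading
import queue

detections = queue.Queue()  # thread-safe output channel; no shared state

def camera_process():
    # Stand-in for collecting and interpreting camera frames.
    detections.put(("camera", "car ahead"))

def radar_process():
    # Stand-in for collecting and interpreting radar returns.
    detections.put(("radar", "object at 30m"))

threads = [threading.Thread(target=camera_process),
           threading.Thread(target=radar_process)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Neither process ran "before" the other in any guaranteed sense;
# the consumer simply gathers whatever each one reported.
results = dict(detections.queue)
print(results)
```

The point of the sketch is the lack of ordering: neither thread knows the other exists, mirroring the camera and radar processes described above.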

AI Free-Will Question and Self-Driving Cars Too

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. The AI system is quite complex and involves thousands of simultaneously running processes, which is important for purposes of undertaking needed activities in real-time, but also offers potential concerns about safety and inadvertent process-related mishaps.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.

AI Systems With Or Without Free-Will

Can an AI system have free-will?

This is a somewhat hotly debated topic these days. There are some who are worried that we are in the midst of creating AI systems that could presumably become sentient, and as a result, maybe they would have free-will.

Some are suggesting that an AI that has free-will might not toe the line in terms of what we humans want the AI to be or do. It could be that the free-will AI decides it doesn’t like us and using its own free-will opts to wipe us from earth or enslave us.

There are all sorts of twists and turns in that debate.

For today’s AI, even tossing into it the best that anybody in AI can do right now in terms of Machine Learning and Deep Learning, along with deep Artificial Neural Networks, it would seem that this is really still a Turing Machine in action. I realize this is a kind of proof-by-reduction, in which I am saying that one thing reduces to the equivalent of another, but I think it is fair game.

Would anyone reasonable be willing to assert, and genuinely believe, that a Turing Machine can somehow embody or exhibit free-will?

I dare say it just seems over-the-top to think it has or could have free-will.

Now, I realize that also takes us into the murky waters of what is free-will. Without getting carried away here and having to go on and on, I would shorten this to say that a Turing Machine has no such spark that we tend to believe is part of human related free-will.

I’m sure that I’ll promptly get comments criticizing me for saying or implying that we cannot ever have AI that might have free-will (if there is such a thing), which is not at all what I’ve said or implied, I believe. For the kinds of computer-based systems that we use today, I believe I’m on safe ground about this, but I quite openly say that there are future ways of computing that might well go beyond what we can do today, and whether those might have a modicum of free-will, well, who’s to say.

AI Self-Driving Cars and Lessons Based on Free-Will Debate

Let’s assume that we are able to achieve Level 5 self-driving cars. If so, does that mean that AI has become sentient? The answer is not necessarily.

Some might say that the only path to a true Level 5 self-driving car involves having the AI be able to showcase common-sense reasoning. Likewise, the AI would need to have Artificial General Intelligence (AGI). If you start cobbling together those aspects and they are all indeed a necessary condition for the advent of Level 5, one supposes that the nearness to some kind of sentience is perhaps increasing.

It seems like a fairly sound bet that we can reach Level 5 without going quite that far in terms of AI advances. The AI driving perhaps won’t be the same as human driving, yet it will be sufficient to perform the Level 5 driving task.

I’d like to leverage the earlier discussion herein about processes and relate it to AI self-driving cars. This will give us a chance to cover some practical day-to-day ground, after the otherwise lofty discussion so far about free-will, which was hopefully interesting and leads us to consider some everyday perfunctory matters too.

Let’s start with a use case that was brought up during a recent Tesla event known as their Autonomy Investor Day, which involved a car and a bicycle and how the capabilities of automation might detect such aspects (the Tesla event took place on April 22, 2019 at Tesla HQ and was live-streamed on YouTube).

Use Case of The Bike On Or Off The Car

Suppose you have an AI self-driving car that is scanning the traffic ahead. Turns out that there is a car in front of the self-driving car, and this car has a bike that’s sitting on a bike rack, which is attached to the rear of the car. I’m sure you’ve seen this many times.

The variability of these bike racks and mountings can be somewhat surprising.

There are some bike racks that can hold several bikes at once. Some bike racks can only handle one bike, or maybe squeeze in two, and yet the owner has mounted, say, four bikes onto it. I’ve seen some mounted bikes that were not properly placed into the rack and looked as though they might fall out at any moment. A friend told me that one time she saw a bike come completely off the bike rack while a car was in motion, which seems both frightening and fascinating to have seen.

Suppose you were driving a car and came upon such a madcap bike; it creates difficult choices. A small dropped item like a hubcap you might be willing to simply run over, rather than making a radical and potentially dangerous driving maneuver, but a bike is a sturdier and larger object, one that, if struck, could do a lot of damage to both the car and the bike.

In any case, let’s consider that there is a process in the AI system that involves trying to detect cars that are nearby to the AI self-driving car. This is typically done as a result of Machine Learning and Deep Learning, involving a deep Artificial Neural Network getting trained on the images of cars, and then using that trained capability for real-time analyses of the traffic surrounding the self-driving car.

You might have a second process that involves detecting bicycles. Once again, it is likely the process was developed via Machine Learning and Deep Learning and consists of a deep Artificial Neural Network that was trained on images of bikes.

For the moment, assume then that we have two processes, one to find cars in the camera images and video streaming while the self-driving car is underway, and a second process to find bicycles.

At the Tesla industry event, an image was shown of a car with a bike mounted on a rear bike rack. It was demonstrated that the neural network automation was detecting both the car and the bike, each as independent objects.
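Here is a sketch of that situation, with two stub detectors standing in for the trained neural networks; the labels and bounding boxes are made up for illustration and are not Tesla’s actual output:

```python
# Two separately trained detectors each report what they see, so a
# bike mounted on a car shows up as its own independent object.
# Boxes are (x1, y1, x2, y2) in pixel coordinates.

def detect_cars(image):
    # Stand-in for a neural network trained on images of cars.
    return [{"label": "car", "box": (100, 200, 400, 500)}]

def detect_bikes(image):
    # Stand-in for a neural network trained on images of bikes.
    # This bike's box falls inside the car's box: it is mounted on a rack.
    return [{"label": "bike", "box": (300, 250, 390, 450)}]

frame = "camera_frame"  # placeholder for real image data
# Naive fusion: just concatenate, so the car and the mounted bike
# are treated as two distinct things doing their own thing.
objects = detect_cars(frame) + detect_bikes(frame)
print([o["label"] for o in objects])  # ['car', 'bike']
```

Nothing in this naive fusion step records that one object is attached to the other, which is precisely the disconcerting aspect discussed next.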

Now, this could be disconcerting in one manner, namely if the AI is under the belief that there is a car ahead of the self-driving car, and there is also a bike ahead of the self-driving car, each of which is doing their own thing. You might be startled to think that these would be conceptually two different matters. As a human, you know that the bike is really mounted on the car and not under its own sense of motion or actions. The bike is going along for the ride, as it were.

I guess you could say that the bike has no free-will at this moment and is under the non-free-will exerted control of the car.

If the AI though is only considering the car as a separate matter, and the bike as a separate matter, it could get itself tied into a bit of a knot. The bike is facing in some particular direction, depending upon how it was mounted, so let’s pretend it is mounted with the handlebars on the right side of the car. The AI might be programmed to assume that a bicycle will tend to move in the direction of its handlebars, as it normally would.

The standard assumption would be that the bike will be moving to the right, and thus it would be a reasonable prediction to anticipate that the bike will soon end-up to the right.

One viewpoint of this matter from an AI systems perspective is that the car ahead should be considered as a large blob that just so happens to have this other thing on it, but that it doesn’t care what that thing is.

So, we have two processes, one finding cars, one finding bikes, and the bike-finding process is potentially misleading the rest of the AI system by clamoring that there is a bike ahead of the self-driving car.

One reaction by the AI developers involves “fixing” the AI system to ignore a bike when it is seemingly mounted on the back of a car. There is presumably no need to detect such a bike.
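One way such a “fix” might be sketched, assuming each detection carries a bounding box, is to suppress any bike whose box sits inside a detected car’s box; the containment test and tolerance here are my own illustrative choices, not any vendor’s actual method:

```python
# Illustrative "fix": drop a bike detection when its bounding box lies
# almost entirely inside a detected car's box, on the assumption that
# it is mounted on the car. Boxes are (x1, y1, x2, y2).

def contained(inner, outer, tolerance=5):
    # True if `inner` fits within `outer`, allowing a few pixels of slop.
    return (inner[0] >= outer[0] - tolerance and
            inner[1] >= outer[1] - tolerance and
            inner[2] <= outer[2] + tolerance and
            inner[3] <= outer[3] + tolerance)

def suppress_mounted_bikes(cars, bikes):
    # Keep only the bikes not contained in any car's box.
    return [b for b in bikes
            if not any(contained(b["box"], c["box"]) for c in cars)]

cars = [{"label": "car", "box": (100, 200, 400, 500)}]
bikes = [{"label": "bike", "box": (300, 250, 390, 450)},   # mounted on the car
         {"label": "bike", "box": (600, 300, 700, 480)}]   # free-standing bike
kept = suppress_mounted_bikes(cars, bikes)
print([b["box"] for b in kept])  # only the free-standing bike remains
```

Note that the moment the suppressed bike comes loose and leaves the car’s box, it reappears as a detection, but by then the system has discarded any history of it being a bike in particular.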

I certainly grasp this approach, yet it also seems somewhat worrisome.

A human knows that a bike is a bike. A bike has wheels and it can roll around. A human knows that a bike mounted on the back of a car can come loose. A bike that comes loose can possibly fall onto the roadway like a wooden pallet, making a thud and not going anywhere, or it could potentially move more freely due to the wheels.

This all ties into the topic of how much AI systems should be undertaking defensive driving tactics, which most are not yet doing.


When discussing the topic of free-will, it can become quite abstract and tilt towards the theoretical and the philosophical side of things. Such discussions are worthwhile to have.

I’ve tried to also bring some of the topic to a more day-to-day realm. You can think of the free-will and non-free-will discussion as being about control or lack-of-control over processes (in a more pedantic, mundane way, perhaps).

When developing real-time AI systems, such as AI self-driving autonomous cars, you need to be clearly aware of how those processes are running and what kind of control they have, or lack thereof.

For free podcast of this story, visit:

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

More info about AI self-driving cars, see:

To follow Lance Eliot on Twitter: @LanceEliot

For my blog, see:

Copyright © 2019 Dr. Lance B. Eliot


Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
