An AI Viewpoint Of Mental Disorders, Plus Leveraging For Self-Driving Cars

Dr. Lance Eliot, AI Insider

An estimated 1 in 5 adults will experience a mental illness or mental disorder in a given year (based on U.S. statistics, that’s about 20%, or around 44 million adults so impacted).

Generally, those adults are still able to function sufficiently and continue to operate seemingly “normally” in society. A more serious and life-altering mental disorder or mental illness, one that is substantially debilitating, will occur in about 1 in 25 American adults during their lifetime (that’s about 4%, or nearly 10 million adults).

That is a lot of people.

These are rather staggering numbers when you consider the sheer magnitude of the matter and how many humans are being impacted. Not only are those individuals themselves impacted, so too are the other people around them. The odds are that a particular individual’s mental disorder or mental illness has a sizable spillover, impacting loved ones and even strangers too.

There’s a well-known guide that describes various mental disorders and mental illnesses, known as the DSM (Diagnostic and Statistical Manual of Mental Disorders). I mention the DSM because I sometimes get a reaction from people who seem to think the topic of mental illness or mental disorder is merely when you don’t feel like going to work that day or maybe are in a foul mood. It’s a lot more than that.

The types of mental disorders or mental illnesses that I’m referring to consist of schizophrenia, dementia, bipolar disorder, PTSD (Posttraumatic Stress Disorder), anorexia nervosa, autism spectrum disorder, and so on. These are all ailments that can dramatically impact your cognitive capabilities. In some instances the illness or disorder might be relatively mild, while in other cases it can be quite severe. You can also at times swing into and out of some of these disorders, appearing to have gotten over one and yet it still lingers and can resurface.

Evolutionary Psychologists Help Trace The History Of Human Minds

Evolutionary psychologists ask a fundamental and intriguing question about these mental disorders and mental illnesses, namely, why do they exist?

An evolutionary psychologist specializes in the study of how the mind has evolved over time. As with other lines of evolutionary inquiry, it is interesting and useful to consider how the brain and the mind have evolved. We know from Darwin’s theory of evolution that humans and animals have presumably evolved based on a notion of survival of the fittest.

For whatever traits you might have, if they give you a leg up on survival, you will tend to procreate and pass along those traits, while others that aren’t as strong a fit to the environment will tend to die off and thus not pass along theirs. It is not necessarily that the physically strongest people per se will survive; instead, how good a fit they have to the environment they confront dictates survival.

This aspect of fit involves not just the physical matters of your body and limbs, but your mental capacities too.

Someone who is very physically strong could be a poor fit for an environment where being cunning is a crucial element of survival. Suppose I am able to figure out how to make an igloo and can withstand harsh cold weather, while someone much physically stronger is not as clever and tries to live off the snowy landscape without any protective cover or housing. The physically stronger people are likely to die off, while the clever igloo makers won’t, and therefore those traits of cleverness would be passed along from generation to generation.

Did we at an earlier time period have a body that was fatter or thinner, maybe shorter or taller, perhaps fingers with more dexterity or less? Did we have a brain that was larger or smaller, did it have more neurons or fewer, and was it physically the same shape as or different from the shape of our brains today? These are primarily physical manifestations of evolution.

What about our minds?

Did we think the same way in the past as we do today? Were we able to think faster or slower?

In any case, you might have always assumed that the thinking we do today is the same as the thinking of earlier humans, but we don’t know for sure that’s the case.

Explaining The Basis For Mental Disorders

Why do we have mental disorders or mental illnesses?

Tying this to the aspects of evolution, one might assert that if mental illnesses and mental disorders are a bad thing, which I would guess most people would agree is likely the case, shouldn’t we have mentally evolved in a manner that those mental disorders or mental illnesses would no longer exist today?

Gradually, the population should no longer exhibit mental disorders, one would theorize. It’s an evolutionary psychological phenomenon, we might suppose. Yet, as I mentioned earlier, around 20% of adults will have a mental disorder in a given year, and around 4% will have a debilitating and substantive mental disorder in their lifetime. It doesn’t seem like evolution has led to the eradication of mental disorders.

You could potentially argue that we need to have mental disorders or mental illnesses, since they might be a helpful sign and we just don’t realize it. Perhaps it is like a mental alarm clock. The mental disorder forewarns that the mind of the person is having difficulties. It is like a fever that shows up when your body is starting to get sick. The fever gets your attention and you then take other efforts to help fight the bodily infection.

Implications Of Mental Disorders As a Mind Sign

Does a mental disorder imply that our minds are fragile and brittle?

Some would say that it is such a sign. Others might claim that it is actually a robust kind of signal, allowing the mind to let us know when something is amiss. We just don’t know today whether it is that kind of signal, nor what to do about it.

Perhaps as a population, as a society, we need to have some percentage of humans that have a mental disorder.

We don’t know what society would be like if we eliminated them. You could claim that society would be better off, and we’d no longer have members of the population that are seemingly abnormal in comparison to the mental status of the rest of the population. Or maybe we need a certain proportion of society to have a mental disorder or mental illness, and without it, society becomes worse off. Our societal capacity might be undermined if we eliminated all mental disorders, some might argue.

Should AI Embody Mental Disorders

If you believe that mental disorders or mental illness is an essential ingredient of thinking, and if AI is hoping to create a form of automation that is the equivalent of human thinking, should AI be incorporating “mental disorders” into AI systems?

When I pose this question, there are some AI developers that immediately gag and start to upchuck their lunch or midday snacks. Say, what? Are you serious, they ask?

These AI developers are striving mightily to make their AI systems as “perfect” as possible. Their vaunted goal is flawlessness. That’s the sacred quest for nearly every AI developer and software engineer on this planet. The system they develop needs to work without errors. It isn’t easy to achieve. It is very hard to achieve. We don’t even know if it is possible to have flawless AI systems.

The radical notion that AI systems should intentionally have “mental disorders” is a kind of high-treason statement. It is the antithesis of what developers are trying to do. Oh, so we can not only allow errors to accidentally creep into our systems, they say, but we are now supposed to actually build into those systems an on-purpose dysfunctional aspect? It is truly a sign of the apocalypse, some AI developers would lament.

Well, not so fast with those cries of foul.

Perhaps to reach true intelligence we might need to mix both the good and the bad of human mental processing. Suppose those two are inextricably linked. You might not be able to have the good, if you don’t also have the bad.

Mental Disorders As Highlighting AI Error Handling

I’ll try to make this even more seemingly “sensible” by going the route of error handling in AI systems.

Do you believe that your AI system is utterly error free?

Hopefully, most reasonable AI developers would acknowledge that there is a chance that an error exists within their AI system. A reasonable chance, and not a zero chance. In theory, any well-built and well-engineered AI system should have a robust error-detecting capability.

So, here’s where I am taking you. If we can agree that an AI system ought to have some definitive and robust error detection capabilities, we might dovetail into this notion and say that if “mental disorders” are needed to achieve truly intelligent systems, we can abide by that assertion and still hopefully be protected, by ensuring that the otherwise already-needed error detection capability can cover for whatever untoward actions the “mental disorder” portion might cause.
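To make this a bit more tangible, here’s a minimal sketch in Python of the kind of error-detection guard I’m describing, sitting alongside an AI subsystem and eyeing its outputs. Everything here, including the names Detection, plausibility_check, and run_with_guard, is hypothetical and invented for illustration; a production self-driving stack would have far more elaborate fault-detection machinery.

```python
# A minimal sketch of runtime error detection wrapped around an AI subsystem.
# All names are hypothetical, invented purely for illustration.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g., "car", "pedestrian"
    confidence: float  # classifier confidence, expected in [0.0, 1.0]

def plausibility_check(detections: list[Detection]) -> list[str]:
    """Return warnings for outputs that look internally suspect."""
    warnings = []
    for d in detections:
        if not (0.0 <= d.confidence <= 1.0):
            warnings.append(f"confidence out of range for {d.label}")
        if d.label not in {"car", "truck", "motorcycle", "pedestrian", "bicycle"}:
            warnings.append(f"unexpected label '{d.label}' for a roadway scene")
    return warnings

def run_with_guard(subsystem_output: list[Detection]) -> list[Detection]:
    """Flag suspect output; a real system would trigger degraded-mode handling."""
    issues = plausibility_check(subsystem_output)
    if issues:
        print("error detector flagged:", issues)
    return subsystem_output

# Usage: a "dog" label in freeway traffic gets flagged rather than trusted.
run_with_guard([Detection("dog", 0.91), Detection("car", 0.88)])
```

The point of the sketch is simply that the same guard rails we’d want anyway for ordinary bugs could also cover for an intentionally included “mental disorder” portion.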

Revealing Of Tops-Down Versus Bottoms-Up AI Approaches

Here’s another twist for you.

First, be aware that there are two major camps of how we’ll achieve true AI.

One camp is the bottoms-up approach that tends to emphasize the Machine Learning or Deep Learning ways of developing an AI system. Typically using a large-scale or deep artificial neural network, this approach is essentially trying to mimic how the brain physically seems to be composed.

For the other camp, referred to often as the tops-down or symbolist group, the approach consists of pretty much programming our way toward true AI.

Suppose the bottoms-up camp discovers that mental disorders or mental illnesses emerge as part of the Machine Learning or Deep Learning neural networks approach. It just happens. This goes along with the notion that possibly our mental processing involving the “good” is inextricably connected with the “bad” (if we are going to label mental disorders as such).

If that “surprising” emergence happens, it would be quite interesting and would force us to reconsider what to do about the mental disorders and mental illnesses, which would then be ascribed as artificial mental disorders and artificial mental illnesses (artificial meaning as arising in the AI).

Meanwhile, let’s assume that the other camp, the tops-down advocates, either stumble upon the use of artificial mental disorders, perhaps inadvertently arising from the logics of their AI systems, or decide to purposely include mental disorders, in hopes of seeing whether it boosts the overall attainment of true AI. They too might need to cope with the nuances of artificial mental disorders and artificial mental illnesses.

That’s some food for thought about the evolution of AI.

Mental Disorders And Aspects Of AI Autonomous Cars

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars.

Returning to the topic of mental disorders and mental illnesses, let’s see how a focus on cognitive impairments might be useful when trying to build robust and reliable AI self-driving cars.

Core Of ABCDEFG Comes Into Play

I refer to this as the ABCDEFG, based on the one-word indications that are used to describe each of the seven circumstances.

Let’s start with the letter A and the word Amaurotic.

You might not be familiar with the word amaurotic, which means having lost one’s vision (from the Greek for “obscured”). This is an apt description of an AI self-driving car that might have some kind of “mental disorder” involving the sensors and their data collection.

The sensors of the self-driving car are the means by which the AI detects what is taking place around the vehicle.

An artificial mental disorder or artificial mental illness, to which I’m appending the word “artificial” to connote that it is something happening within the automation, could cause the sensors to act incorrectly or be interpreted incorrectly.

Imagine that the image processing starts to hallucinate or become delusional. I am using those words in a loose manner and don’t necessarily mean them in a proper clinical psychological way. In the case of the AI subsystem, let’s suppose it has some kind of error or bug that causes it to categorize the car in the opposing lane as a dog rather than a car. This seems plausible as a result of some internal error.

The AI subsystem that has the error is, in a manner of speaking, delusional in that it is now reporting that an upcoming car is actually a dog. We can add the hallucination aspect by suggesting that the AI subsystem error also causes it to report that there is a cow and a horse there too, running next to the dog. There isn’t any other moving object adjacent to the upcoming car, but the errors inside the automation are so out-of-whack that it is adding objects into the scene that aren’t actually there at all.
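One hedged way to catch this kind of “delusion” is to cross-check a camera label against an independent cue, such as a LIDAR size estimate, so a car-sized object labeled “dog” gets flagged rather than trusted. The size table and function names below are illustrative assumptions, not any production design.

```python
# A toy cross-check: is the LIDAR-measured extent plausible for the camera label?
# The length ranges are rough, assumed values for illustration only.

TYPICAL_LENGTH_M = {
    "car": (3.5, 5.5),
    "motorcycle": (1.8, 2.5),
    "dog": (0.3, 1.2),
    "cow": (2.0, 2.8),
    "horse": (2.0, 3.0),
}

def label_consistent_with_size(label: str, measured_length_m: float) -> bool:
    """True if the measured object extent is plausible for the given label."""
    low, high = TYPICAL_LENGTH_M.get(label, (0.0, float("inf")))
    return low <= measured_length_m <= high

# A 4.6-meter object labeled "dog" fails the check, hinting that the
# classifier may be "hallucinating" in the loose sense used above.
assert not label_consistent_with_size("dog", 4.6)
assert label_consistent_with_size("car", 4.6)
```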

This provides an example of how an artificial mental disorder or artificial mental illness could impact the AI self-driving car.

I want to therefore make sure to distinguish that the AI suffering from a kind of “mental disorder” is not necessarily doing so in the same underlying manner that the human brain and mind do. Instead, we’re focusing herein on behavioral results that are similar. By using the word “artificial” I am trying to forewarn that we should not make the logical leap that an AI-based mental disorder is necessarily the same as a human mental disorder in terms of its underlying roots; the resemblance is only in the behavioral results.

Sensor Fusion And Mental Disorder Aspects

Let’s now consider what would happen to the AI self-driving car if the sensor fusion portion suffered from an artificial mental disorder.

I’d say that the result would be a Bewildered system.

When the sensor fusion is fouled up, it might falsely claim that the sensors are in disagreement when they actually all agree as to what is outside of the self-driving car. Or, the sensor fusion might falsely claim that all the sensors are in agreement when in fact the sensors differ in what they have each detected. You might characterize this as a kind of being bewildered, unsure of what the surrounding scene contains.
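Here’s a minimal sketch of the kind of agreement test a sound sensor fusion stage would compute, and that a “bewildered” one would get wrong. The sensor names and the tolerance are illustrative assumptions.

```python
# A crude agreement test across sensors: do their object counts roughly match?
# Sensor names and the tolerance value are assumed for illustration.

def sensors_agree(camera_count: int, lidar_count: int, radar_count: int,
                  tolerance: int = 1) -> bool:
    """True if each sensor's detected-object count is within tolerance of the others."""
    counts = [camera_count, lidar_count, radar_count]
    return max(counts) - min(counts) <= tolerance

print(sensors_agree(3, 3, 2))  # True: counts are within tolerance
print(sensors_agree(3, 0, 3))  # False: LIDAR sees nothing the others see
```

A bewildered fusion stage, in effect, reports the opposite of what a test like this would honestly say.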

The next word is Chaotic.

If the virtual world model is suffering from an artificial mental disorder, it won’t be able to properly denote where objects in the real world are. The model is intended to keep track of where objects exist outside of the self-driving car, along with predictions about where those objects are heading. It is kind of like an air traffic control subsystem, wanting to monitor the status of nearby objects.
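To give a feel for what the world model keeps track of, here’s a toy sketch: tracked objects plus a simple constant-velocity prediction of where each is heading. The field names are invented for illustration and aren’t taken from any production stack.

```python
# A toy virtual world model: tracked objects with positions, velocities,
# and a constant-velocity prediction of where each object is heading.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    object_id: int
    x: float
    y: float    # current position, meters
    vx: float
    vy: float   # current velocity, meters/second

    def predict(self, dt: float) -> tuple[float, float]:
        """Predicted position dt seconds from now (constant-velocity model)."""
        return (self.x + self.vx * dt, self.y + self.vy * dt)

# An oncoming car 30 meters ahead, closing at 12 m/s.
world_model = {1: TrackedObject(1, x=0.0, y=30.0, vx=0.0, vy=-12.0)}
print(world_model[1].predict(dt=1.0))  # (0.0, 18.0): closing fast
```

A “chaotic” world model is one where these positions and predictions no longer correspond to where anything actually is.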

The word I’d like to cover next is Dysfunctional.

If the AI action-planning subsystem is suffering from an artificial mental disorder, you are going to witness a dysfunctional AI self-driving car. Suppose the sensors are working just fine, the sensor fusion is working just fine, and the virtual world modeling is working just fine. Meanwhile, when the AI action planner inspects the virtual world model, the action planner messes up, having some form of error in it, and proposes driving maneuvers that make no sense given what the world model plainly shows.
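A hedged sketch of catching such dysfunction: sanity-check the planner’s proposal against the world model it claims to have consulted. The function name and the two-second headway rule are assumptions for illustration only.

```python
# Sanity-check a proposed action against the world-model facts it should honor.
# The 2-second headway rule is an assumed, illustrative safety heuristic.

def plan_is_consistent(proposed_action: str, gap_to_lead_car_m: float,
                       own_speed_mps: float) -> bool:
    """Reject an 'accelerate' plan when headway is already under ~2 seconds."""
    headway_s = gap_to_lead_car_m / max(own_speed_mps, 0.1)  # avoid divide-by-zero
    if proposed_action == "accelerate" and headway_s < 2.0:
        return False  # a dysfunctional planner might propose this anyway
    return True

print(plan_is_consistent("accelerate", gap_to_lead_car_m=15.0,
                         own_speed_mps=20.0))  # False: only 0.75s of headway
```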

The next word is Errant.

For the car-controls command issuance, this subsystem of the AI is intended to generate instructions to the car as to what it is supposed to physically do next, such as accelerating, braking, and steering. Suppose the sensors detected an opposing car that was going to pass alongside safely, the sensor fusion concurred, the virtual world model concurred, and the AI action planner concurred, so up until this point there is no action specified to take. An errant commands-issuance subsystem might nonetheless issue an abrupt braking or swerving command, physically jerking the car around even though none of the upstream subsystems called for any such maneuver.
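A minimal sketch of guarding this last stage: clamp commands to physical limits and reject commands that no upstream stage actually requested. The limits and parameter names are illustrative assumptions.

```python
# Guard the controls-issuance stage: reject out-of-range commands, and reject
# maneuvers that no upstream subsystem requested. Limits are assumed values.

def validate_command(steering_deg: float, throttle: float, brake: float,
                     action_requested: bool) -> bool:
    """True only if the command is in range and was actually called for."""
    if not action_requested and (abs(steering_deg) > 1.0 or brake > 0.05):
        return False  # errant: issuing a maneuver nobody asked for
    return (-35.0 <= steering_deg <= 35.0 and
            0.0 <= throttle <= 1.0 and
            0.0 <= brake <= 1.0)

# The scenario above: everything upstream agreed no action was needed,
# yet an errant stage emits a hard swerve. The guard rejects it.
print(validate_command(steering_deg=20.0, throttle=0.0, brake=0.0,
                       action_requested=False))  # False
```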

The next word is Flailing.

For the strategic AI elements of the self-driving car, suppose that an artificial mental disorder arose. For example, maybe the AI self-driving car is supposed to be headed to downtown Los Angeles. An error in the strategic AI elements gets things messed up, and the AI is led toward Las Vegas, Nevada, instead. Maybe the strategic AI is so error-laden that it keeps changing where the destination is supposed to be. The self-driving car seems to be switching from one direction to another, with no rhyme or reason apparent as to why.
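One way to spot this flailing, sketched under assumed names and thresholds, is a watchdog that flags the mission planner when the destination keeps changing without a fresh user request.

```python
# A toy watchdog for the strategic layer: count destination changes that
# no user requested, and flag the planner once a limit is exceeded.

class DestinationWatchdog:
    def __init__(self, max_changes: int = 2):
        self.destination = None
        self.unrequested_changes = 0
        self.max_changes = max_changes

    def update(self, new_destination: str, user_requested: bool) -> bool:
        """Return False once unrequested destination changes exceed the limit."""
        if self.destination is not None and new_destination != self.destination:
            if not user_requested:
                self.unrequested_changes += 1
        self.destination = new_destination
        return self.unrequested_changes <= self.max_changes

wd = DestinationWatchdog()
wd.update("downtown Los Angeles", user_requested=True)
wd.update("Las Vegas", user_requested=False)
wd.update("downtown Los Angeles", user_requested=False)
print(wd.update("Las Vegas", user_requested=False))  # False: flailing detected
```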

The last word to cover is Garbled.

If the self-aware AI aspects aren’t able to properly track how well the rest of the AI system is working, perhaps due to an artificial mental disorder, it could lead to a garbling of what the AI self-driving car is going to do. One moment the self-aware AI is informing the rest of the AI that all is well, and the next moment it is warning that one element or another is fouled up.
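To close the loop, here’s a hedged sketch of a health monitor for that self-aware layer: it records each subsystem’s self-reported status and flags the garbled flip-flopping just described. The class name, window size, and flip threshold are all invented for illustration.

```python
# Track subsystem health reports and flag "garbled" flip-flopping, where the
# reported status keeps oscillating between OK and faulty.

from collections import deque

class HealthMonitor:
    def __init__(self, window: int = 6, max_flips: int = 3):
        self.history: dict[str, deque] = {}
        self.window, self.max_flips = window, max_flips

    def report(self, subsystem: str, ok: bool) -> bool:
        """Record a status report; return False if recent reports flip-flop."""
        h = self.history.setdefault(subsystem, deque(maxlen=self.window))
        h.append(ok)
        flips = sum(1 for a, b in zip(h, list(h)[1:]) if a != b)
        return flips < self.max_flips

mon = HealthMonitor()
for ok in [True, False, True, False, True]:  # garbled: constant flip-flops
    stable = mon.report("sensor_fusion", ok)
print(stable)  # False: the self-monitoring reports themselves look unreliable
```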

Conclusion

Mental disorders and mental illnesses are a substantial part of the human experience.

Why?

Evolution might suggest that we should be rid of those aspects by now. Maybe though it is something still being worked out by evolution and we are merely in the middle of things, and therefore cannot say for sure whether those disorders and illnesses will continue or gradually be diminished based on a survival of the fittest path.

Will AI need to include mental disorders or mental illnesses if indeed those facets are inextricably tied into human intelligence, and perhaps the only means to reach true intelligence is to include those factors? If so, what does it mean about how we are developing AI systems today? Including artificial mental disorders or artificial mental illnesses seems quite counter-intuitive to the usual belief that AI systems need to be free of any such potential downfalls.

It could be that the case for including artificial mental disorders or artificial mental illnesses either has merit on its own, or that we can use it to be more circumspect about how AI systems need to cope with internal “cognitive impairments” or internal errors that might arise in the “thinking” elements of the AI system.

Regardless of whether you think it might be preposterous to consider mental disorders or mental illnesses in the context of building AI systems, you might at least be open to the notion that it brings up the importance of making sure AI systems are as error detecting and correcting as they can be.

I’d say there’s no mental confusion on that key point.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/Dr.-Lance-Eliot/e/B07Q2FQ7G4

Copyright © 2019 Dr. Lance B. Eliot
