Using Artificial Pain As Motivator For AI Systems, Including Self-Driving Cars

Dr. Lance Eliot, AI Insider

[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

The word “pain” is from the Latin poena, meaning a type of penalty or punishment.

The downside of pain is that, well, it is painful.

Some take the viewpoint that pain is a necessity, such as William Penn’s famous statement made while he was being kept in the Tower of London: “No pain, no palm; no thorns, no throne; no gall, no glory; no cross, no crown” (see his book entitled No Cross No Crown, published in 1669).

Today’s version is the oft-repeated shortened assertion that if there’s no pain, there’s no gain, or simply stated (with dramatic emphasis) “no pain, no gain,” while some prefer to take a slightly different tack and say “no guts, no glory.”

Darwin was a staunch believer that pain is a vital element of survival, for both humans and animals.

The logic he used was that pain serves as a means to forewarn when something might undermine your survival. It is a defense mechanism that gets you to respond and alerts you that something untoward is happening to you. As much as we might dislike pain, Darwin was suggesting that we’d be dead before we knew it, if we didn’t have an ability to experience pain.

He had an additional corollary that those aspects that would be most likely to lead to death would be typically tied to fostering the greater pain.

You might find it of idle interest that there is a great deal of debate about the nature of pain in animals.

Up until modern times, many asserted that animals could not “feel” pain in the same manner that humans do. Sure, an animal might wince and react to pain, but supposedly they were not mentally equipped to experience the feelings of pain that we thinking humans do. I’m not going to wade into that debate herein. All I’ll say is that I’ve had pet dogs and cats, and it certainly seemed like they could experience pain in a manner akin to how humans do. Or, was I anthropomorphizing my pets?

Anyway, let’s consider that pain can be physically manifested, and it could be said that pain can also be emotionally or perhaps mentally manifested.

The physical manifestation is the most obvious occurrence of pain.

Noxious Stimuli And Pain

Here’s an example. I was moving a box filled with some aged AI books (they were published in the 1990s, yikes, the AI stone age!), and I accidentally dropped the box onto my big toe.

Yes, it was painful.

In more rigorous language, we could say that I had an unpleasant sensory experience.

A noxious stimulus occurred to my big toe.

The heavy box heaved down upon my skin, bone, and other biological elements. Various specialized nerve-type detectors registered the impact and relayed it through my overall nervous system, which carried the signal to my brain. My brain then triggered a reaction that included my effort to retract my toe from under the offending box, and my brain activated my vocal cords to let out an exclamation.

Notice that I carefully tried to trace the point of origin of the pain and walked it back to my brain. There is another kind of fascinating debate going on about pain, namely we might question what “pain” actually feels like and how much the brain is involved in that determination.

Does your toe really experience pain?

Or, is it merely reacting by sending signals that tell the brain that something is afoot (pun!), and the brain takes the signal and makes us believe there is a thing we call pain?

Pain Can Be Useful

I earlier mentioned that pain is intended to be a survival mechanism.

We normally attempt to protect ourselves from pain.

There is also a “lesson learned” element about pain that boosts our survival too.

This brings up the point that there are individual differences regarding pain.

There are also ways to try to train yourself to cope with pain. Some believe that your DNA dictates a foundational reaction to pain. From that foundation, you can then adjust it based on training that you might undertake. One might argue that pain detection and reaction are a combination of nature and nurture.

Pain can occur for a split-second and then disappear. It can last much longer and be persistent. Longer bouts of pain are often described as being chronic. Shorter bursts are considered acute. The chronic version does not necessarily mean that you are continuously experiencing pain.

The pain could be intermittent, and yet be occurring over a longer period of time.

Physical Pain And Mental Pain

Recall that I had earlier indicated that pain can be a physical manifestation, and that it can be an emotional or mental manifestation.

Does a physical manifestation of pain have to result in an emotional or mental manifestation of pain?

That’s a dicey question to answer.

Some might say that whenever you experience physical pain, you will by necessity experience emotional or mental pain, though you can potentially control the emotional or mental aspects and thus mitigate the mental side of it.

If you have a physical manifestation of pain and it routes signals to your brain to let it know, does that constitute mental “pain” or is there only mental pain when the mind overtly recognizes the incidence of the physical pain?

One might argue that the delivery of a message is not the same as reacting or acting upon the message.

Can you have emotional or mental manifestation of pain and yet not have a physical manifestation?

You are likely to say right away that of course you can have mental pain that has no origination in a physical pain. I recall a chemistry class that I took as an undergraduate in college, in which I got a disturbingly low grade.

I was in mental anguish from it! Did the chemistry professor bop me on the head, or did the grade sheet cause me to suffer a dire paper cut?

No, I was in pain mentally without any physical origin.

Does my brain “hurting” constitute a physical manifestation of pain?

We usually consider only our limbs and the rest of our body as being subject to physical pain. It seems that we usually discount that the physical elements of the brain can count toward a physical manifestation of pain.

Famous Case Of Phineas Gage

One of the most famous cases of “brain pain” would be the so-called American Crowbar Case involving Phineas Gage.

Phineas was a railroad construction foreman. Before I tell you what happened to him, if you are squeamish then I suggest you skip the next paragraph.

While working on the railroad in 1848, an iron rod shaped like a javelin, measuring 1 ¼ inches in diameter and over 3 feet long, rocketed into and through his head, passing through the left part of his brain and exiting out of his skull. A physician was sought right away, and within 30 minutes a medical doctor arrived.

At that time, Phineas was seated in a chair outside a hotel, and he greeted the doctor with what many consider one of the greatest understatements in medical history, reportedly saying: “Doctor, here is business enough for you.” Miraculously, Phineas lived a somewhat normal life until his death in 1860, some dozen years after the astounding incident, and became a medical curiosity still discussed to this day.

In any case, I’m not going to get mired herein about the matter of whether a mental pain must also be associated with a physical pain.

Let’s agree for now that you can have a physical manifestation of pain, you can have a mental or emotional manifestation of pain, and you can have a combined physical-mental manifestation of pain.

The point at which you begin to feel pain is often referred to as your pain perception threshold.

The point at which you begin to react or take action due to the pain is often referred to as your pain-tolerance threshold.

AI And Leveraging The Concept Of Pain

I dragged you through this background about pain to introduce you to the notion of pain in the field of Artificial Intelligence (AI).

I’m not talking about pain as in pain-in-the-neck and maybe being annoyed or finding it “painful” to study or make progress in AI research.

I’m referring to the use of “pain” as a form of penalty or punishment, doing so as a technique in AI.

Let’s suppose you are in the process of training a robot to maneuver in a room.

There are objects strewn throughout the room. The robot has cameras to act as the “eyes” of the robot. Via the images and video streaming into the cameras, the robot is using various image processing algorithms to figure out what objects are in the room, where the objects are, and so on.

If you put a human child into the room, and asked the child to navigate the room, what would the child do?

A toddler might wander into objects and fall over them. I remember when my children were learning to advance from a crawling stage to a walking stage, they would often stand up, wobbly and unstable, take a step, trip or fall over something, and plop to the ground. Ouch, they’d utter. They’d look at what they fell over, and you could see their mind calculating to avoid that object in the future.

In short, I am suggesting that at times we need to include “pain” as a factor in an AI system such as a robot that we might be trying to train to sensibly make its way in a room of objects.

The Sentience Question

Now, I’m going to get some AI sentience believers into a bit of a roar about this topic.

Am I claiming that this robot is or can be made to experience pain?

Maybe in the far future we’ll have robots of the type that you see in science fiction stories, ones that can “feel” pain because they have some kind of mysterious elaborated biological mechanisms that have been grafted from humans. In fact, there is research of that kind taking place today.

There are special robotic gloves for example that have sensors to detect “pain” in that they detect pressure, they detect heat, and so on, trying to detect noxious stimuli that would cause a human to have pain.

I don’t want to get stuck herein in the trap of debating whether this kind of “pain” is the same as human-experienced pain.

In AI, for the time being, let’s use the human meaning of “pain” to refer to a metaphorical type of “artificial pain” that we will simulate in an AI system.

What do I mean by experiencing some kind of artificial pain?

Suppose we let the robot roam throughout the room and at first provide no indication whatsoever about how to deal with objects in the room. The robot rams into an unmovable object. The robot is stuck and cannot proceed forward. It wants to keep moving, but it cannot, as it is blocked from moving forward.

The robot’s Machine Learning (ML) or Deep Learning (DL) might score a “pain point” for having hit something that was unmovable.

We’ll make this a high number or score as a pain factor.

The robot backs away from the unmovable object.

Turning to the left, the robot proceeds several feet in an open area. It comes upon a lightweight box. The robot rams into the box, which slides out of the way. In this case, the Machine Learning or Deep Learning opts to register another “pain point” though it is a low number or score since the object was readily moved and the robot was able to continue its journey.

Essentially, over time, after making many trial runs throughout the room, the robot will begin to adjust toward avoiding objects that are unmovable, while retaining a willingness to bump into objects that are movable. This will be due to the “learning” that occurs as a result of assigning pain points. Those unmovable objects carry a sizable number of pain points, and we’ll set up the robot to want to reduce or minimize the number of pain points it might accrue.
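
To make the bookkeeping concrete, here is a minimal Python sketch of such pain-point scoring; the names, scores, and grid-cell scheme are illustrative assumptions for this article, not code from any particular robot platform.

```python
# A minimal sketch of "pain point" bookkeeping for the room-roaming robot.
# The names and scores (PAIN_UNMOVABLE, PAIN_MOVABLE) are illustrative
# assumptions, not code from any particular robot platform.
import random
from collections import defaultdict

PAIN_UNMOVABLE = 10.0  # high pain: ramming an object that will not budge
PAIN_MOVABLE = 1.0     # low pain: nudging a lightweight box out of the way

# Accumulated pain points per (x, y) grid cell of the room.
pain_map = defaultdict(float)

def record_collision(cell, movable):
    """Score a pain point when the robot rams an object in a given cell."""
    pain_map[cell] += PAIN_MOVABLE if movable else PAIN_UNMOVABLE

def choose_step(candidate_cells):
    """Pick the neighboring cell with the least accumulated pain.

    Ties are broken randomly so that early trial runs still explore.
    """
    least = min(pain_map[c] for c in candidate_cells)
    return random.choice([c for c in candidate_cells if pain_map[c] == least])
```

Over many trial runs, cells holding unmovable objects accrue large pain scores and get avoided, while cells with movable boxes remain cheap to pass through.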

I’ve mentioned that the word pain comes from a Latin root meaning penalty or punishment.

We will apply a type of penalty function to the robot AI learning aspects. The penalties are associated with “pains” that we might define for the robot.

Penalty And Reward Functions

For those of you who have dealt with the mathematics of solving constrained optimization problems, you are certainly well familiar with the use of penalty functions.

As a mathematical algorithm tries to find an optimal path, you apply some form of penalty to the steps chosen. The more a chosen step worsens the optimization, the higher the penalty score assigned to that choice; the more a step improves it, the lower the penalty score assigned at the time.
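
As a hedged illustration, here is a tiny Python sketch of the classic quadratic-penalty method; the objective, the constraint, and the crude grid search are all made-up placeholders standing in for a real optimizer.

```python
# A tiny sketch of the classic quadratic-penalty method for a constrained
# problem: minimize f(x) subject to g(x) <= 0. The objective, constraint,
# and crude grid search below are all illustrative placeholders.
def penalized(f, g, mu):
    """Fold the constraint into the objective as a penalty term.

    Steps that violate the constraint (g(x) > 0) are charged a penalty
    growing with the size of the violation; feasible steps pay nothing.
    """
    return lambda x: f(x) + mu * max(0.0, g(x)) ** 2

f = lambda x: (x - 3.0) ** 2   # unconstrained optimum at x = 3
g = lambda x: x - 1.0          # g(x) <= 0 encodes the constraint x <= 1
obj = penalized(f, g, mu=100.0)

best = min((i / 100.0 for i in range(-500, 501)), key=obj)
print(best)  # approximately 1.02, close to the constrained optimum x = 1
```

Raising mu drives the penalized answer ever closer to the true constrained optimum; the penalty score is doing the work of “pain” here, discouraging choices that make matters worse.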

I probably should also mention the “reward” function if I am mentioning penalty types of functions.

You might say that a reward function is akin to accruing “happiness” points.

When I used to teach courses on AI as a university professor in computer science, I would sometimes get a puzzled look from the students, and they would ask me whether they should be using a reward function or a penalty function. They got themselves into the classic binary world of assuming those two functions are mutually exclusive of each other.

That’s a false notion.

In our everyday world, we are continually trying to maximize our rewards and minimize our penalties.

You can further simplify this rewards-versus-penalties (or pain) framing as a form of the “carrot and the stick” approach to doing things. Using the carrot is the rewards side. Using the stick is the penalty side. You can use just a carrot. You can use just a stick. Most of the time you are likely to employ both the carrot and the stick.
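
A toy Python snippet can show the carrot and the stick living in the same objective; the numbers are, of course, made up.

```python
# A toy illustration that the carrot and the stick are not mutually
# exclusive: the learner maximizes a single net score of rewards minus
# penalties. All numbers are made up for illustration.
def net_score(rewards, penalties):
    """Carrot and stick combined: maximize rewards, minimize pain."""
    return sum(rewards) - sum(penalties)

# Reached the goal (+10) and crossed open floor (+1), but rammed an
# unmovable object (-10 pain) and nudged a light box aside (-1 pain).
print(net_score(rewards=[10.0, 1.0], penalties=[10.0, 1.0]))  # 0.0
```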

I know that some don’t like referring to the penalties as a “pain” or an “artificial pain” because it perhaps gives an anthropomorphic glow to the use of a penalty function. Isn’t human pain much more complex than this mathematical calculus of rewards and penalties? If so, the use of the word “pain” might overstate the power of the algorithm and the approach in an AI context.

I’d say that the counter-argument is that we are ultimately trying to get AI to become more and more intelligent in the same manner in which we consider humans to be intelligent. Do humans require the inclusion of “pain” as a mode or feature of their physical and mental manifestation in order to be intelligent beings?

If you could make the case that we could entirely strip out “pain,” in all its manners, from humans, and be left with the same intelligence of humans that we have today, it might imply that in AI we don’t need to concern ourselves with the notions and capabilities of having “pain” for our AI creations.

Another view is that maybe we ought not to be trying to model AI after the likes of humans, and that we can arrive at the equivalence of human intelligence without having an AI system that is like that of a human. In that case, we can potentially dispense with the inclusion of “pain” in the AI systems that we are devising.

For the moment, I’d vote that we consider trying to do what we can to model “pain” into AI systems and see how far we get by abiding by what humans seem to do regarding pain. This is merely one path. It does not preclude those that want to dispense with the “artificial pain” and opt to pursue a different course.

As I mentioned earlier, there are efforts at constructing physical “artificial pain receptors” for robots, such as the gloves that I mentioned, and otherwise outfitting a robot with ways to detect some form of “pain,” depending upon how you want to define pain.

That’s a physical manifestation of pain for AI systems.

We can also have a “mental” manifestation of pain for AI systems, akin to what I described earlier about the robot in the room that learns as it hits objects, or to a chess-playing AI that learns as it weighs its chess moves.

That’s a “mental” manifestation of pain for AI systems (I put the word mental into quotes to distinguish that I am not referring to the human mental but instead to an artificial or automation mental).

The two can be combined, of course.

The robot in the room might have sensors that detect when it collides with an object, the physical manifestation, which gets relayed to the AI system running the robot (the “mental” manifestation). The robot AI system then commands the robot to turn back from the object and move another way. In a crude manner, this might be akin to a child detecting the heat from a stove top and opting to move their hand away from it.
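
In code form, that combined loop might look like the following sketch, in which the sensor fields and motion commands are hypothetical stand-ins rather than any real robot API.

```python
# A minimal sketch combining the physical and the "mental" manifestations:
# a contact sensor fires (physical), the planner scores a pain point (the
# "mental" side), and the robot is commanded to retreat. The sensor fields
# and motion commands are hypothetical stand-ins, not a real robot API.
def on_collision(reading, pain_map, robot):
    if reading.contact:
        # "Mental" registration: charge pain to the cell where contact
        # occurred, so future runs steer clear of it.
        pain_map[reading.cell] += reading.impact_force
        # Reaction: back away and pick a new heading, akin to a hand
        # pulled back from a hot stove top.
        robot.reverse(distance_m=0.3)
        robot.turn(degrees=90)
```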

Example Of Artificial Pain As Applied To AI Self-Driving Cars

This discussion about artificial pain can be further explored via its application to the field of AI self-driving driverless autonomous cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect involves the inclusion of “artificial pain” as a means to advance the AI systems that are used to drive a self-driving car.

Returning to the topic of “artificial pain,” let’s take a look at how this approach can be used in advancing the various AI systems for AI self-driving cars.

I’ll start with perhaps the most obvious question that I am frequently asked on this topic: what in the world does “pain” have to do with driving a car?

To answer this question, let’s use our earlier notion that there can be a physical manifestation of pain and a mental or emotional manifestation of pain.

For the physical manifestation, you might at first glance point out that a car does not physically experience pain. It might get dented. It might get nicked or scratched. It might get crumpled by a blow from another car. Throughout any of those physical encounters and results, we would be hard-pressed to suggest that the car felt any pain per se.

The car has little to almost no built-in capability that we could convincingly argue is a pain system of some kind.

There is essentially no detection by the car that it has experienced anything akin to pain. The car does not tell you that it just got dented by a shopping cart that rolled into it. Nor does the car react to the shopping cart by emitting a bleating horn that might be saying ouch. The car also doesn’t choose to move away from the shopping cart, perhaps realizing that other shopping carts might soon descend upon the car and cause further injury or damage to the car.

Admittedly, there are some ways that you could stretch the definition of pain detection to claim that a conventional car has some means to realize that a painful moment is possibly going to occur.

Some cars have curb feelers, which I’m guessing many of you might not recognize: small wire whiskers mounted near the wheels that scrape audibly when the car edges too close to a curb. In more modern times, motion sensors and sound sensors became popular items to add to your car.

Some of these systems would emit a loud voice telling you to get away from the car.

More On Pain Detection And Reaction

I suppose you could try to suggest that the curb feeler was an artificial pain detection device, seeking to alert you when the car was getting overly close to a curb, and you might also say that the motion sensors and sound sensors likewise were a form of pain detection based on the physical presence of another object or person. This does seem a mild stretch, if not more so.

Alright, let’s say that these were at best a simplistic and faraway reach toward a pain detection and reaction system. Does that mean that, since there hasn’t been a true effort to date to seek out an artificial pain detection and reaction system for a car, we ought not to have one?

There are some who believe future cars should have a better sensory capability to detect when something untoward might happen to the car. Perhaps there should be an outer layer of the car that can detect pressure.

If you could have sensors throughout the car, both on an outer layer and an inner layer, it might allow the AI to be more self-aware of what the physical status of the car consists of. It seems unlikely that the other sensory devices of the AI self-driving car would be suited to providing that kind of indication about the physical status of the self-driving car in the same comprehensive and all-encompassing manner.

We’ll next shift our attention to the mental or emotional manifestation of pain and how it might apply to an AI self-driving car.

When AI developers are crafting image processing capabilities for a self-driving car, they are often already making use of various penalty functions, which earlier I’ve likened to the use of “pain” as a means for guiding learning during a Machine Learning or Deep Learning system setup. A deep artificial neural network might be trained via the use of a penalty function when analyzing images.

Suppose we are training a Deep Learning system to identify street signs such as stop signs and roadway caution signs. You can collect thousands of images of such signs and feed them into a large-scale neural network. The closer the neural network gets to correctly identifying a stop sign as a stop sign, the more a reward function increases the weights and other factors, numerically boosting the DL for doing a good job.

You might also have a penalty function. The further off the neural network is, such as when mistaking a yellow caution sign for a red stop sign, the more the weights and other factors get deductions or penalties. These could be regarded as the “pain” for having been off-target.
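
For the mechanically inclined, here is a minimal sketch of that penalty side as a PyTorch-style training loop, assuming a model, data loader, and optimizer already exist; the cross-entropy loss plays the role of the “pain” score.

```python
# A minimal PyTorch-style sketch of the penalty side of training a street
# sign classifier. The model, data loader, and optimizer are assumed to
# exist; cross-entropy loss plays the role of the "pain" score, large when
# a yellow caution sign is mistaken for a red stop sign.
import torch.nn as nn

def train_one_epoch(model, loader, optimizer):
    loss_fn = nn.CrossEntropyLoss()      # the penalty ("pain") function
    for images, labels in loader:        # thousands of labeled sign images
        optimizer.zero_grad()
        logits = model(images)
        pain = loss_fn(logits, labels)   # high when predictions are off-target
        pain.backward()                  # deductions flow back to the weights
        optimizer.step()                 # adjust weights to reduce future pain
```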

There is another realm of “pain” that few AI self-driving cars encompass as yet, and which holds promise as a means to boost AI self-driving capabilities.

When a human driver is driving a car, they might feel pain when the car takes a curve too sharply or when they are driving really fast and make a rapid swerve. In self-driving cars, the use of an IMU (Inertial Measurement Unit) already provides some indications about these kinds of movements of the car. The AI ought to be doing more with the IMU than it does now.

This has been classified by most of the automakers and tech firms as an “edge” problem, and so it sits further down on the list of matters to fully address.
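
To suggest what doing more with the IMU might look like, here is a minimal Python sketch that converts IMU readings into pain points for harsh maneuvers; the thresholds and weightings are illustrative guesses, not calibrated values.

```python
# A minimal sketch of turning IMU readings into "pain points" for harsh
# maneuvers. The thresholds and weightings are illustrative guesses, not
# calibrated values from any production system.
HARSH_LATERAL_G = 0.4  # lateral acceleration beyond this feels like a hard curve
HARSH_YAW_DPS = 30.0   # yaw rate (degrees/sec) beyond this suggests a swerve

def maneuver_pain(lateral_g, yaw_rate_dps):
    """Return a pain score for one IMU sample; zero means comfortable."""
    pain = 0.0
    if abs(lateral_g) > HARSH_LATERAL_G:
        pain += (abs(lateral_g) - HARSH_LATERAL_G) * 10.0
    if abs(yaw_rate_dps) > HARSH_YAW_DPS:
        pain += (abs(yaw_rate_dps) - HARSH_YAW_DPS) * 0.5
    return pain

print(maneuver_pain(0.2, 10.0))  # 0.0: a gentle lane change scores no pain
print(maneuver_pain(0.7, 45.0))  # 10.5: a hard, fast swerve racks up pain
```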

Human drivers also anticipate pain.

One might suggest that a driver realizes that if they crash into another car or sideswipe a wall, it is potentially going to cause them pain. They are most likely thinking about the injury or harm that might occur to them as a driver inside the car, the bleeding and the broken bones that would result. I’m going to lump that into the pain bucket. Those are all physical outcomes for which pain is quite likely to occur.

Is a driver also worried about the cost to potentially repair a damaged car if getting into a car accident? Are they worried too about their car insurance rates going up due to an accident? Yes, and you might be willing to agree those are all “pain points” associated with getting into a car accident.

Remember, we are now focusing on the mental manifestation of pain. In that case, these concerns and qualms about getting physically harmed and financially harmed could be connected to a type of mental pain.

The mental pain therefore guides the driver toward avoiding those anticipated actions and results that might produce pain.

Let’s recast this into the AI action planning aspects of a self-driving car.

The AI is trying to avoid getting the self-driving car into an accident. Why? Because the AI developers have presumably developed the AI code to do so. There might be code in the AI system that states, in effect: do not run into the car ahead of you; stay back at least a car length for each 10 miles per hour, allowing for a buffer to avoid hitting it.

Another augmented approach involves using Machine Learning and Deep Learning to guide the AI in figuring out the AI action planning. If the self-driving car gets too close to the car ahead, apply a penalty function that takes away points. Or, if you like, administer some mental pain to numerically discourage the behavior of getting overly close to the car ahead.
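
Here is a minimal Python sketch of that car-length rule recast as a penalty function; the car-length constant and the one-point-per-foot scale are illustrative assumptions.

```python
# A minimal sketch of the car-length rule recast as a penalty function:
# keep at least one car length (roughly 15 feet) per 10 mph, and charge
# "mental pain" in proportion to how far inside that buffer the car gets.
# The constants are illustrative assumptions.
CAR_LENGTH_FT = 15.0

def following_pain(speed_mph, gap_ft):
    """Penalty points for tailgating; zero when the buffer is respected."""
    required_gap_ft = (speed_mph / 10.0) * CAR_LENGTH_FT
    shortfall_ft = max(0.0, required_gap_ft - gap_ft)
    return shortfall_ft  # one pain point per foot inside the safety buffer

print(following_pain(60.0, 100.0))  # 0.0: ample buffer at 60 mph
print(following_pain(60.0, 50.0))   # 40.0: 90 ft required, 40 ft short
```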

This also will aid in the rather untouched as-yet area of AI self-awareness for self-driving cars. Most of the AI systems for self-driving cars are not anywhere close to being self-aware. I’m not trying to take us down the sentience route; I’m only bringing us to the notion that there needs to be a part of the AI that oversees the AI system. We humans do the same. We oversee our own behavior, gauging and adjusting as we perform a task, such as driving a car.

Conclusion

Pain. It is a key force of our existence. Songs are written about it. Many of our greatest literary works are about pain. Many of the greatest paintings ever made depict pain.

Shall we exclude pain from AI systems? If so, are we potentially losing out on what might be an essential ingredient in the formation and emergence of intelligence? Some might say that pain and intelligence are inextricably connected. Darwin would have us believe that pain is a survival mechanism and crucial to why we humans have lasted.

It pains me to say that we might indeed need to do more about pain for AI systems to further advance. The mysteries of pain in the human body are still being figured out. Likewise, we could consider how to apply whatever we do know about pain into the advancement of AI systems. For AI self-driving cars, we already use a pain-like aspect involving penalties and penalty functions.

More pain might be the remedy to get us toward more human-like driving.

No pain, no gain, as they say.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2020 Dr. Lance B. Eliot

