Oozing Into Noble Cause Corruption: AI Driverless Cars Might Get Crowned

Dr. Lance B. Eliot, AI Insider

Being aware can stop noble cause corruption in its tracks

Here’s an age-old question for you: do the ends justify the means?

Some trace the origins of this thorny question to the Latin collection Heroides, attributed to Ovid (Publius Ovidius Naso), which contains the line “Exitus acta probat,” loosely translated as “the outcome justifies the deed,” in essence asking whether the ends prove or justify the means.

I am assuming that some of you would decry the entire notion that the ends justify the means. The implication is that you can do any darned untoward thing you want, as long as the ends you are targeting are somehow noble.

I’d mildly object to the assumption that the ends being sought are of necessity noble. The person arguing that the ends justify the means may want to convince themselves, or us, that the end is noble, when it might not really be. It could be a charade to make the means seem viable.

What sometimes happens is that people intending to do bad things will cleverly mask their upcoming bad deeds by wrapping them in a seemingly noble ending. This generally allows them to get away with the bad things, since others will believe in, and at times rally behind, those bad things, having bought into the noble ending that will presumably someday be reached.

Sometimes there are people who don’t intend to do bad things, but fall into the pit of doing them along the way to achieving what seems like a noble ending. Of course, it could also be that the noble ending was a facade all along.

The problem with all of this ends-and-means stuff is that you might not know what is true versus what is blarney.

Maybe the ends hoped for are true and good, and the means are good too. Or the ends are true and good, while the means are rotten. Or the ends are terrible but made to seem good, while the means are good. Or, in the worst combination, the ends are terrible but made to seem true and good, while the means are rotten as well.

It can be confusing.

Noble Cause Corruption Explained

There is a phrase for those who believe they have a noble end and yet seemingly diverge from a proper means to reach it, namely the “noble cause corruption” phenomenon.

What happens is that when someone has an end they think is noble, they can become corrupted in the pursuit of that noble end. This can include carrying out unlawful acts, immoral acts, and whatever else might be needed to reach the desired ends.

In the news these days there is a colossal business example of a presumed noble cause corruption case: Theranos. If you read any business-related news, you likely already know some aspects of the case. It is well-documented in many big-time media outlets, and especially in an exposé written by John Carreyrou of the Wall Street Journal and later further elaborated in his book “Bad Blood: Secrets and Lies in a Silicon Valley Startup.”

A Stanford University dropout, Elizabeth Holmes, started the biotech firm Theranos at the age of 19, with the stated goal of being able to run a multitude of blood diagnostic tests on a tiny drop or so of blood obtained via a single finger-prick device. Her claims of being able to achieve these “ends” were a bold proclamation.

She got hundreds of millions of dollars in backing from some heavyweight investors, though few at the time seemed to realize that these investors were not biotech savvy, which likely helped the subterfuge. It turned out that the technology did not work as claimed and, in key respects, did not exist.

What makes the story especially notable is that Theranos did a deal with Walgreens and began actually performing the service for real people in selected cities in the United States. Sadly, many of the blood tests done turned out to be wrong. Indeed, over a million blood tests had to be voided and redone.

Some say that Elizabeth Holmes was a true believer in her cause and perchance got caught up in not being able to achieve the ends she desired. Others say that it was all a scam from day one.

What can the Theranos case tell us?

The bigger the noble ends, the easier it likely is to justify the means. The means can also get further out of hand without causing too much of a ruckus, because you just point back to the ends and everyone starts smiling again.

Pursuing the Noble AI Quest At Any Cost

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. There are some industry critics that are concerned that there is a chance for some of the auto makers and tech firms to fall into the noble cause corruption basket as it applies to AI self-driving cars.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.
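
As a minimal sketch of how those levels might be represented in software, consider the following Python fragment; the enum names and the helper function are my own illustrative choices, not any standards body’s official code.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Driving automation levels, 0 (no automation) up to 5 (full automation)."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1
    PARTIAL_AUTOMATION = 2
    CONDITIONAL_AUTOMATION = 3
    HIGH_AUTOMATION = 4
    FULL_AUTOMATION = 5

def requires_human_driver(level: AutonomyLevel) -> bool:
    # Below Level 5, a human driver must be present and is the responsible party.
    return level < AutonomyLevel.FULL_AUTOMATION

print(requires_human_driver(AutonomyLevel.CONDITIONAL_AUTOMATION))  # True
print(requires_human_driver(AutonomyLevel.FULL_AUTOMATION))         # False
```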

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too.

Returning to the matter of the noble cause corruption and how it might apply to the AI self-driving car industry, let’s consider some of the ways in which this might happen.

Suppose an AI developer is under the gun to get a Machine Learning (ML) or Deep Learning (DL) system working that can analyze visual images and find posted street signs in them, for example, using a convolutional neural network to try and detect a stop sign or a speed limit sign. The AI developer amasses thousands of images that are used to train the deep or large-scale neural network. Feeding those images into the budding AI system, the AI developer tweaks it to try and ensure that it is able to spot the various posted street signs.
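
To make that concrete, here is a hypothetical, heavily simplified PyTorch sketch of training such a convolutional neural network; the random tensors stand in for the amassed street-sign images, and the layer sizes and class labels are illustrative assumptions, not anyone’s production setup.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in data: 64 RGB "images" (32x32) with 3 sign classes,
# e.g., 0 = no sign, 1 = stop sign, 2 = speed limit sign.
images = torch.randn(64, 3, 32, 32)
labels = torch.randint(0, 3, (64,))

# A tiny convolutional classifier, purely illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 32x32 -> 16x16
    nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),                            # 16x16 -> 8x8
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 3),                   # 3 sign classes
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# The "tweaking" loop: feed the images in, nudge the weights to spot the signs.
for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```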

As I’ve stated in my writings and presentations at conferences, oftentimes these ML or DL systems are quite brittle. This brittleness means that there will be circumstances in which a visual image captured while an AI self-driving car is underway might not be properly examined by the ML or DL that’s been implemented and placed into the on-board car AI system.

The sensor data interpretation might state that there isn’t a stop sign in the image, even though there really is one there, known as a false negative. Or, the sensor data interpretation might state that there is a stop sign in the image, even though there isn’t one there, known as a false positive. These false indications can have a daunting and scary impact on the AI’s efforts to drive the self-driving car.
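
In code, those two failure modes are just the off-diagonal cells of a confusion matrix. The short sketch below uses made-up detector outputs to show how the false negatives (missed real stop signs) and false positives (phantom stop signs) would be tallied.

```python
# Hypothetical detector outputs: 1 = "stop sign present", 0 = "no stop sign".
ground_truth = [1, 1, 0, 0, 1, 0, 1, 0]
predicted    = [1, 0, 0, 1, 1, 0, 0, 0]

# False negative: a real stop sign the detector missed.
false_negatives = sum(1 for gt, p in zip(ground_truth, predicted) if gt == 1 and p == 0)
# False positive: a stop sign reported where none exists.
false_positives = sum(1 for gt, p in zip(ground_truth, predicted) if gt == 0 and p == 1)

print(f"false negatives (missed real stop signs): {false_negatives}")  # 2
print(f"false positives (phantom stop signs):     {false_positives}")  # 1
```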

Imagine that you are driving along and, for whatever reason, fail to see a stop sign, running right through it without any hesitation. I’ve seen this happen a few times during my years of driving, and it takes your breath away. The odds are that such a driver might plow into someone or something, injuring or killing someone, which is very frightening. You look on in amazement, unable to believe what you just saw, and if by luck no one gets hurt, it seems a miracle that nothing adverse occurred.

The other case, falsely believing a stop sign exists when it does not, can also potentially create a car crash or similar adverse event. If a car suddenly and seemingly inexplicably comes to a stop, there is a solid chance that a car behind might ram into the stopped car. I suppose if you had to choose between a car that doesn’t stop at a real stop sign and a car that stops at an imaginary one, you’d feel “better” about the latter, though it all depends upon the specifics of the traffic situation at the moment.

Pinched for Time to Fully Test the Convolutional Neural Network

The AI developer crafting the convolutional neural network is pinched for time in terms of fully testing the system and has not yet vetted ways to avoid the false positives and false negatives. The AI developer was given a deadline and told that the latest iteration of the ML or DL needs to be pushed into the on-board self-driving car system right away, which can be done via OTA (Over-The-Air) electronic updating of the AI self-driving car.
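
One plausible guardrail against this very pressure is an automated release gate, where the OTA push only proceeds if the model clears validation thresholds that were agreed upon before any deadline crunch. The sketch below is a hypothetical illustration; the metric names and threshold values are my assumptions, not any automaker’s actual pipeline.

```python
# Hypothetical validation metrics for the candidate sign-detection model.
candidate_metrics = {
    "false_negative_rate": 0.04,   # fraction of real signs the model missed
    "false_positive_rate": 0.02,   # fraction of phantom signs the model reported
}

# Thresholds the team committed to *before* the deadline pressure hit.
release_thresholds = {
    "false_negative_rate": 0.01,
    "false_positive_rate": 0.01,
}

def cleared_for_ota(metrics: dict, thresholds: dict) -> bool:
    # Every metric must be at or below its threshold, with no deadline exceptions.
    return all(metrics[name] <= limit for name, limit in thresholds.items())

if cleared_for_ota(candidate_metrics, release_thresholds):
    print("Push the update over-the-air.")
else:
    print("Hold the release: model not ready for prime time.")
```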

This AI developer believes earnestly and with all their heart in the importance of AI self-driving cars. It is a noble end to ultimately be able to achieve true AI self-driving cars.

Why?

Because it is believed that AI self-driving cars will save lives. People being killed daily in human-driven car crashes are dying needlessly, since if we had AI self-driving cars there would not be such deaths, or so the pundits say (I refer to this as the “zero fatalities, zero chance” myth).

It is also a noble cause because of the mobility that will be spread throughout the world. People who do not have access to a car, and thus struggle to get around, will be able to simply summon an AI self-driving car and be on their way. Some refer to this as the democratization of mobility.

There are other stated noble cause outcomes for the advent of AI self-driving cars and I won’t go into all of them here. Generally, it is rather well-publicized that there are claimed noble ends to be had.

The AI developer has to choose whether to proceed with the release of the convolutional neural network into the active AI self-driving car on-board system, despite knowing it is not ready for prime time. The AI developer is faced with the urgency of a deadline and has been told that failure to download the latest version will hold up progress on the budding AI self-driving car being trial fielded.

What should the AI developer do?

The target end is a noble one. Being the inhibitor of reaching the noble end, well, that’s a tough pill to swallow. In this case, the AI developer decides it is best to proceed with something rather than hold up the bus, so to speak, and opts to let loose the not-yet-ready convolutional neural network. Accordingly, the AI developer makes it download-ready and pushes it along.

Noble cause corruption.

The AI developer felt compelled by the noble cause to proceed with something they knew wasn’t ready and felt that the means was ultimately justifiable by the highly desirable ends. And though I’ve mentioned the instance of a visual image analyzer that fell under this spell, you should enlarge the scope and realize that any of the numerous AI subsystems could be equally pushed along and yet not be appropriately ready.

It could be the sensor elements involving the cameras, the radar, the ultrasonic, the LIDAR, and so on. This can also apply to the sensor fusion portion of the AI system. It could readily apply to the virtual world model updating portion. There is an equal chance that the same fate might befall the AI action planning portion, and likewise could happen with the car controls commands subsystem.
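
As a rough illustration of how those subsystems chain together, and thus how a prematurely released component in any one stage taints everything downstream, here is a hypothetical, heavily stubbed Python pipeline; all function names and data are mine, for illustration only.

```python
def read_sensors():
    # Stub: camera, radar, ultrasonic, and LIDAR readings would arrive here.
    return {"camera": "frame_0421", "radar": [12.5], "ultrasonic": [0.8], "lidar": "cloud_0421"}

def sensor_fusion(raw):
    # Stub: reconcile the separate sensor streams into one coherent picture.
    return {"obstacles": [{"range_m": 12.5, "bearing_deg": 3.0}]}

def update_world_model(fused):
    # Stub: fold the fused view into the virtual world model.
    return {"ego_speed_mps": 13.0, "obstacles": fused["obstacles"]}

def plan_action(world):
    # Stub: decide what the car should do next.
    return "slow_down" if world["obstacles"] else "maintain_speed"

def issue_car_controls(action):
    # Stub: translate the plan into throttle/brake/steering commands.
    print(f"car controls command: {action}")

# One pass through the pipeline; a flaw pushed into ANY stage propagates downstream.
issue_car_controls(plan_action(update_world_model(sensor_fusion(read_sensors()))))
```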

The advent of AI self-driving cars carries such a tremendous notion of noble cause that it is tempting for some to justify otherwise untoward actions to try and make sure that AI self-driving cars come to fruition. If you are creating an AI system that does something more pedestrian, such as helping you play a video game or aiding you in shopping for groceries, the cause is not nearly as noble.

AI self-driving cars have the drop-the-microphone noble cause. These are AI systems that are about saving lives. These are the AI systems about changing the world and making lives better.

There aren’t many AI systems that can claim that kind of double-whammy.

As earlier mentioned, the greater the noble ends, the greater the chances of being slippery about the means.

Conclusion

There is a clear and present danger that the alluring noble ends of reaching a true AI self-driving car can be corruptive toward the efforts involved in developing and fielding AI self-driving cars.

AI developers involved in AI self-driving car efforts are not necessarily plotting evil deeds (some conspiracy theorists believe they are); instead, they can simply find themselves confronted with seemingly tough decisions about the work they are doing, perhaps having to decide whether their choices are justifiable as balanced against the desired ends.

I hope that AI developers and AI managers, along with all of those working at the various auto makers and tech firms that are devising AI self-driving cars, will take a moment to reflect upon whether there are any noble cause corruptive aspects involved in the efforts at their firm. If so, it is important to take the first step of recognizing the noble cause phenomenon. Without realizing that the phenomenon has taken hold, you are less likely to be able to confront it.

Consider carefully the ends justifying the means, and make sure that you don’t fall into the trap of believing that any means is acceptable as long as the goal of producing a true AI self-driving car is reached. I could translate that into Latin but having it in English seems sufficient.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

Copyright 2019 Dr. Lance Eliot

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
