Way Overworked AI Developers Hamper Driverless Car Progress And Can Be Hazardous

Dr. Lance Eliot, AI Insider


You have your sleeping bag at the office for those overnight non-stop coding deadlines that require you to work on the AI system until you get the particular component working right.

Turns out, you’ve been using the sleeping bag quite a bit lately.

Welcome to the typical workplace conditions for AI developers working on self-driving car systems.

Now, I’m not suggesting that there aren’t lots of other computer system developers in lots of other lines of work doing the same. There are.

For driverless cars, the question arises whether excessive overworking lends itself to properly developing AI-based autonomous cars.

Overworked AI Developers And AI Systems Development

Keep in mind that AI self-driving cars are real-time systems that make life-or-death determinations.

If we were to compare a normal everyday kind of system to the AI system of a self-driving car, I think it would be fair to claim that the self-driving car system carries a higher onus in terms of being built for high reliability and safety.

I know many high-tech CEOs who brazenly tell everyone that they intentionally work their people the hardest of any company in town. It’s a source of pride for them to say that they don’t allow their team members to take vacation. These CEOs are proud to exclaim that they work their people to the bone.

Some even smirk that the appearance of providing perks at the office, such as an in-house chef and ping pong tables, is actually a “trick” to keep their employees working at the office, done under the guise of showing regard for their people. The minimal cost of providing food at the office is well worth the added productivity of the developers.

Keeping those developers coding by throwing them a bone of one kind or another is easy enough to do, or so some sadly seem to think.

Plus, the advent of smartphones and the Internet has made things even “better” for those companies desiring to wring every ounce out of their developers. An employee who leaves the office is still on-the-hook for answering questions and remaining engaged in work activities, via their smartphone and tablet.

One company had encouraged their team members to “take a break” by heading to Yosemite (a wilderness area in Northern California, about a 3 ½ hour drive from Silicon Valley). Well, it turns out that the company had arranged for the team members to stay in lodges at a camp area outfitted with Internet access. Yes, you guessed right if you deduced that the team members ended up spending most of their time “in the wild” sitting at the campground and working on their laptops.

So, is it sensible to go ahead and work your people non-stop or not?

Some would say that it has become the default practice.

There are many executives who believe utterly in the code-until-you-drop type of work environment. Furthermore, if you do drop, the viewpoint is that it shows your weakness. It shows that you aren’t suitable for the big time.

Sometimes these same executives will momentarily open their eyes to the situation and maybe, just maybe, concede that things are a bit over-the-top in terms of the workplace demands. Sadly, some of those will then do the Yosemite kind of trip under the false belief that it will “refuel, recharge, and reconnect” their developers. They are either deluded into believing that a working “vacation” does the trick, or they know it won’t but want to at least seem to be doing something about workplace complaints or qualms by those being overworked.

Overwork Impacts On Development Of Autonomous Cars

What does this have to do with AI-based autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars, and as such we keep tabs on what other like firms are doing. Many of their AI developers have eyed coming to us, based on the excessive overwork taking place at those other firms and the more measured tone that we take.

This brings up some important points about excessive overwork.

Though at first glance it might seem “wise” to overwork your AI developers, since it would appear to yield efficiency and productivity gains, the surface-level perspective can differ from what actually plays out.

Let’s consider some of the downsides of the excessive overwork situation.

First, let’s agree that people can get burned out at work. This is a dangerous thing to foment when you are developing AI software for self-driving cars.

I challenge you to demonstrate that a burnt-out AI developer is somehow highly efficient and productive, which is what the excessive overwork pundits might claim. Sure, you are getting those developers to work longer hours than someone in a less obsessed work-hours environment, but going toe-to-toe, do the additional hours really translate into being more efficient and productive? I’d say they do not.

Burnout Consequences And Churn Too

So, the first principle here is that by excessively overworking your AI developers you are burning them out, which will undermine productivity. Thus, you are falsely believing that by having your people work more hours than other firms you are somehow getting ahead of those firms. It’s a myth.

Second, the odds are pretty high that those AI developers will seek to leave your firm, doing so in hopes of finding a more measured work environment. This seems especially true for the millennial generation, which aims to have a more balanced work/life portfolio.

I realize you might be thinking that having AI developers leave due to excessive overwork is just fine because they were the “weak” ones anyway. You might want to reconsider that belief. There’s lots of really good AI talent that knows it can command what the market will bear. The supply of such talent is low, and the demand for it is very high.

Third, if you have a high churn rate on your AI team, it will definitely adversely impact the systems being developed for an AI self-driving car.

Each time one of your AI developers leaves, the odds are that whatever portion they were working on will fall behind or suffer other maladies. Plus, whoever you hire has to initially climb the learning curve of whatever is being worked on. Therefore, you are going to take a big productivity hit from the time the AI developer leaves the firm until the new hire gets fully up-to-speed.

These can be significant chunks of time. From the moment an AI developer leaves your team until you’ve found a suitable replacement, brought that replacement on-board, and given them a reasonable amount of time to figure out what their predecessor was working on, I’d dare say it could be many weeks and in some cases months.

You also need to consider the impact on the AI development team. The odds are that you’ll use some of them to aid in the interviewing process; count that as lost productivity toward their AI development tasks. Once you hire the replacement, the odds are that other members of the team will need to help bring that person up-to-speed. More lost productivity for them. Imagine too if the replacement turns out not to be a good fit for the rest of the team; you might have harmed the overall productivity of the entire team for a long time.
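To make the churn cost concrete, here is a minimal back-of-the-envelope sketch. All of the figures and the breakdown (vacancy time, ramp-up discount, interviewer and mentor time) are illustrative assumptions on my part, not measurements from any firm:

```python
# Hypothetical model of the lost productivity from one AI developer departing.
# Every number below is an illustrative assumption, not real data.

def churn_cost_weeks(vacancy_weeks, ramp_up_weeks, ramp_up_efficiency,
                     interviewer_weeks, mentor_weeks):
    """Rough lost-productivity estimate (in developer-weeks) for one departure."""
    lost_from_vacancy = vacancy_weeks                             # the seat sits empty
    lost_from_ramp_up = ramp_up_weeks * (1 - ramp_up_efficiency)  # new hire below full speed
    lost_from_team = interviewer_weeks + mentor_weeks             # teammates pulled off their work
    return lost_from_vacancy + lost_from_ramp_up + lost_from_team

# Assumed scenario: 6 weeks to fill the role, 8 weeks of ramp-up at 50%
# effectiveness, 1 week of team interviewing time, 2 weeks of mentoring time.
print(churn_cost_weeks(6, 8, 0.5, 1, 2))  # 13.0 developer-weeks
```

Even under these modest assumptions, a single departure costs roughly a quarter of a developer-year, which is why high churn on an AI team compounds so quickly.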

Error Rates And Higher Likelihood Of Shortchanging Validations

Another consequence of excessive overwork involves error rates, which can severely undermine the safety and reliability of AI self-driving car systems.

Suppose I can produce 100 lines of code per hour. A colleague, Joe, also produces 100 lines of code per hour. We seem to have the same productivity rate. Now imagine that my code is completely error free (yay!), while Joe has 1 error per every 20 lines of code. To deal with those errors, time will be needed to find them and correct them. And that presumes you can even find the errors.

This illustrates that you need to consider error rates and other such factors, and not fall into the trap of relying on some simplistic measure of productivity. AI self-driving cars need highly reliable systems. There needs to be an overt, ongoing, and insistent effort toward making the AI system as reliable and safe as feasible. Not doing so will likely produce AI systems that are more error prone, leading to possibly disastrous results for all.
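The arithmetic behind the Joe example can be sketched in a few lines. The half-hour-per-bug fix time is my own illustrative assumption; the point is only that raw lines-per-hour overstates the productivity of error-prone work:

```python
# Hypothetical effective-productivity model for the Joe example.
# The 0.5-hour fix time per error is an assumed figure for illustration.

def effective_loc_per_hour(raw_loc_per_hour, errors_per_line, hours_to_fix_error):
    """Net productive lines per hour once debugging time is accounted for."""
    errors_per_hour = raw_loc_per_hour * errors_per_line
    fix_hours = errors_per_hour * hours_to_fix_error
    # Each hour of coding drags along `fix_hours` of debugging,
    # so spread the raw output over the total time spent.
    return raw_loc_per_hour / (1 + fix_hours)

me  = effective_loc_per_hour(100, 0.0,    0.5)  # error free
joe = effective_loc_per_hour(100, 1 / 20, 0.5)  # 1 error per 20 lines

print(round(me, 1))   # 100.0 effective lines per hour
print(round(joe, 1))  # 28.6 effective lines per hour
```

Under these assumptions, Joe’s apparent 100-lines-per-hour pace collapses to under a third of mine once his debugging burden is counted, and that’s before considering the errors that are never found at all.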

I’d also toss into the excessive overwork scenario that it can lead to bad decisions about crucial design elements of an AI system. It can cause members of the team to become so stressed that they take out their frustrations by purposely undermining the AI system, maybe even seeding something dastardly into it. Some at times have turned to drugs to try to maintain the non-stop work efforts, which can then undermine their lives both inside and outside of work.

Dilbert-Like Reaction Can Be Faking Overwork

Here’s another sometimes shocking surprise to those leaders that think they are doing the right thing by promoting a company culture of excessive overwork, namely the fake work approach.

An AI developer can be clever enough to appear to be making progress when they are really just doing fake work. If you come to me and ask for an estimate of how long it will take to set up the ML portion of a particular component, I might sandbag you by giving you a super high estimate. I do so to secretly protect myself from the overwork. This seems sensible to the worker: if the firm is being unfair to them, why not be unfair in return?

I am guessing that some of you might be thinking that this discussion about excessive overwork is a plea to go toward being underworked. Lance, are you saying that my people should lounge near the pool during the workday, drinking margaritas, having a good old time, and punching the clock once in a while? No, I’m not saying that. If that’s what you think I’m saying, please take off those rose-colored glasses about the fantastic advantages of excessive overwork that you seemed to believe in. Time to smell the coffee and wake up to what’s really happening with your approach.

Notice that I’ve tried to carefully phrase the nature of the overwork as “excessive” in the sense that I am saying taking overwork to an extreme is the problem.


As I earlier mentioned, doing overwork is often a needed element when facing particular deadlines. This though is typically temporary in nature and the AI developers can stretch to cope with it, knowing that they will not be mired in it permanently.

For those high-tech leaders who romanticize excessive overwork, I think if they really looked closely at the impact it is having on their teams, they might reconsider whether their high-level perspective matches reality. Others in the firm will often act as “yes men” and go along with the excessive overwork philosophy, since the top leader won’t consider anything else. For firms headed by a take-no-prisoners leader demanding excessive overwork, it’s hard to get that kind of personality to see anything other than the advantages of insisting upon it.

Let’s just hope that the AI self-driving cars under their tutelage don’t come back to harm us all if those AI systems are “wimpy” in comparison to the stronger and safer such systems developed in a more measured work environment.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
