Bug Bounties To Make You Rich (Maybe): Arriving For AI Driverless Cars Too

Dr. Lance B. Eliot, AI Insider

Bug bounties provide cash to helpful hackers

Bounty hunter needed to find a copper pot that went missing from a small shop. Reward for recovery of the copper pot will be 65 bronze coins. So said a message during the Roman Empire in the city of Pompeii.

In more modern times, you might be aware that in the 1980s there were some notable bounties offered to find bugs in off-the-shelf software packages, and in the 1990s Netscape notably offered a bounty for finding bugs in its web browser. Google and Facebook each opted into bounty hunting for bugs starting in 2010 and 2013, respectively, and in 2016 even the U.S. Department of Defense (DoD) got into the act with a “Hack the Pentagon” bounty effort (note that the publicly focused bounty was for bugs found in various DoD-related websites and not in defense mission-critical systems).

According to statistics published by the entity HackerOne, the monies paid out in 2017 toward bug bounty discoveries totaled nearly $12 million, and for 2018 it sized up to be more than $30 million. For bugs that are considered substantive issues by a software maker, the usual everyday bounty is around $2,000 per bug (once it is confirmed that the bug exists). Bounties, though, are decided in the eye of the beholder, in the sense that whoever is offering the bounty might go lower or higher, and in some cases there have been bounties in the six-figure range, typically around $250,000 or so.

Some are puzzled that any firm would want to offer a bounty to find bugs in their software.

On the surface, this seems like a “you are asking for it” kind of strategy. If you let the world know that you welcome those who might try to find holes in your software, it seems tantamount to telling burglars to go ahead and try to break into your house.

Those who favor bounty hunting for software bugs are prone to saying that such programs make sense. Rather than pretending there aren’t any holes in your system, why not encourage holes to be found, doing so in a “controlled” manner? Without such a bounty effort, you could just hope and pray that by random chance no one will find a hole. If instead you offer a bounty and tell those who find a hole that they will be rewarded, you get the chance to shore up the hole on your own and prevent others from secretly finding it at some later point in time.

Well-known firms such as Starbucks, GitHub, AirBnB, American Express, Goldman Sachs, and others have opted to use the bounty hunting approach. Generally, a firm wishing to do so will put in place a Vulnerability Disclosure Policy (VDP). The VDP indicates how the bugs are to be found and reported to the firm, along with how the reward or bounty will be provided to the hunter. Usually, the VDP will require that the hunter end up signing a Non-Disclosure Agreement (NDA) such that they won’t reveal to others what they found.

The notion of using an NDA with the bounty hunters has some controversy. Though it perhaps makes sense to the company offering the bounty to want to keep mum the exposures found, it also is said to stifle overall awareness about such bugs.

Some VDPs stipulate that the NDA is only for a limited time period, allowing the firm to first find a solution to the apparent hole and then afterward to allow for wider disclosure about it. Once the hole has been plugged, the firm then allows a loosening of the NDA so that the rest of the world can know about the bug. The typical time-to-resolution for bounty-hunted bugs is usually around 15–20 days when a firm wants to plug it right away, while in other cases it might stretch out to 60–80 days. In terms of paying the bounty hunter, the so-called time-to-pay after the hole has been verified as actually existing tends to be within about 15–20 days for the smaller instances and around 50–60 days for the larger instances.

White Hat Hackers Try to Do Some Kind of Good

Who are these bounty hunters? They are often referred to as white hat hackers. “White hat hacker” is the phrase used for hackers trying to do some kind of good. We normally think of hackers as cybersecurity thieves that hack their way into systems to steal and plunder. Those are usually considered black hat hackers. Consider that hacking is akin to the days of the Old West, wherein the good gunslingers wore white hats and the evil ones wore black hats (well, that’s what TV and movies suggest).

This brings us to the topic of what kinds of software bugs the bounty efforts are looking for. Generally, the bounty program excludes things like social engineering. It’s more about having identified an actual bug in the system. The bounty hunter normally has to be relatively clever and try all sorts of potential exploits to find a hole. It can be a laborious process. There is no guarantee that the bounty hunter will find any holes. This doesn’t mean that there aren’t any holes, it just means that the bounty hunter couldn’t find them.

A firm might feel better about its software if dozens or perhaps hundreds or thousands of bounty hunters have tried to find software bugs and have not been able to do so. Again, this is not any kind of proof that no such bugs exist.

What if a bounty hunter finds a bug but decides not to tell the firm? That’s the classic conundrum.

If the firm provides a “safe harbor” protection via their VDP, meaning that they will not try to go after the bounty hunter for finding a bug, and if the firm offers enough of a monetary incentive, the bounty hunter is hopefully swayed toward reporting the bug to the firm.

On the other hand, the bounty hunter might be both a white hat and a black hat kind of hacker, such that if the bug is an exposure that could be exploited to steal or plunder, the value of the bounty might be insufficient and so the hunter keeps the bug under wraps.

Often, for bounty efforts, more than one bounty hunter finds the same bug. The firm that is undertaking the bounty effort needs to figure out which of the bug reports are duplicative. They also need to figure out which bounty hunter should get the credit for having found the bug. In many cases, the bounty hunters use some kind of reporting system set up by the firm to indicate the bugs being found, and as a result the logging keeps track of which bounty hunter first reported the bug.
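The first-reporter logging just described can be sketched as a small program. This is a toy illustration only; the class name, the fingerprinting scheme, and the idea of hashing a component name plus description to deduplicate reports are my own assumptions, not how any particular firm's triage platform actually works.

```python
import hashlib
from datetime import datetime, timezone

class BountyLog:
    """Toy first-reporter log for a bug bounty program (illustrative only).

    Reports are deduplicated by a fingerprint of the affected component
    and the vulnerability description; the first hunter to file a given
    fingerprint gets the credit, and later filings are marked duplicates.
    """

    def __init__(self):
        # fingerprint -> (hunter who gets credit, UTC timestamp of report)
        self._first_reports = {}

    @staticmethod
    def _fingerprint(component: str, description: str) -> str:
        # Case-insensitive so trivially reworded duplicates still collide.
        raw = f"{component.lower()}|{description.lower()}".encode()
        return hashlib.sha256(raw).hexdigest()

    def file_report(self, hunter: str, component: str, description: str) -> str:
        fp = self._fingerprint(component, description)
        if fp in self._first_reports:
            first_hunter, _ = self._first_reports[fp]
            return f"duplicate (credit: {first_hunter})"
        self._first_reports[fp] = (hunter, datetime.now(timezone.utc))
        return "credited"
```

In practice, deciding that two free-text reports describe the same underlying bug is a human triage judgment, not a hash comparison; the sketch only captures the bookkeeping of who reported first.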

Unfortunately, determining which of the reported bugs are valid and which are not will take a lot of laborious effort by your highly skilled software engineers. It means that they will be taken away from whatever else they should be doing.

If you are pondering what kind of bugs might be found, you can take a look at the Common Vulnerability Scoring System (CVSS) to see how bugs are labeled as either low, medium, high, or critical, along with seeing examples of such bugs. One example that is easy to describe is labeled as CVE-2009-0658 and involves the Adobe Acrobat buffer overflow vulnerability (which has since been fixed).
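The low/medium/high/critical labels mentioned above map onto numeric base scores. As a sketch, here is that mapping using the score bands from the CVSS v3.x qualitative severity scale (the function name is mine; the thresholds are the published ones):

```python
def cvss_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative label."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "none"      # no impact
    if base_score <= 3.9:
        return "low"
    if base_score <= 6.9:
        return "medium"
    if base_score <= 8.9:
        return "high"
    return "critical"      # 9.0-10.0
```

Note that CVE-2009-0658 itself predates CVSS v3 and was originally scored under the older v2 scale, which uses different bands.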

Essentially, if you tried to open a PDF document that contained a malformed picture (one likely purposely malformed), it would cause an overflow in the Adobe software buffer and allow a remote attacker to execute code on your system.
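The general pattern behind bugs of this class is a parser that trusts an attacker-controlled size field. As a hedged sketch (the record format here is entirely made up for illustration, and Python cannot itself suffer memory corruption the way the original C code could), the length check below is exactly the check a vulnerable native parser omits before copying data into a fixed-size buffer:

```python
import struct

def parse_image_record(blob: bytes) -> bytes:
    """Parse a toy length-prefixed image record from untrusted input.

    Hypothetical format: a 4-byte big-endian length, then pixel data.
    The validation below is what a vulnerable parser skips: it trusts
    the declared length and copies past the end of its buffer.
    """
    if len(blob) < 4:
        raise ValueError("truncated header")
    (declared_len,) = struct.unpack(">I", blob[:4])
    payload = blob[4:]
    if declared_len > len(payload):  # the missing bounds check behind many overflows
        raise ValueError("declared length exceeds actual data")
    return payload[:declared_len]
```

In the malformed-picture scenario, the attacker sets the length field far larger than the data actually supplied, and the unchecked copy tramples adjacent memory.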

In some cases, the firm doing the bounty program will make it open to the public. Anyone who wants to have at it, please do so. These are usually time-bounded. The firm will declare that the bounty program starts, say, a month from now and will last for 60 days. This helps to spark interest and get those bounty hunters looking. There are also time-unbounded bounty programs, wherein a firm will at any time welcome a bounty hunter offering a proposed found bug.

There are also private-oriented bounty efforts. In the private instances, the firm will tend to seek out specific known white hat hackers and arrange for them to get access to the software that is going to be put through the wringer. This also hopefully reduces the chances of a black hat hacker getting involved.

Debate ensues in leadership circles about whether it is better to use a bounty approach or to instead hire a bug-finding firm to do the work.

Whether Internal Team Should Do Bounty Hunting is a Discussion

Some would even argue that your own internal software team should be doing the bounty hunting.

One argument against using your own team to find bugs is that they are too familiar with the software to spot the bugs. They wrote the software and so might make all sorts of assumptions that blind them to finding bugs.

There are bounty hunters that are interested in selling their find to the highest bidder. If the bounty provided by a firm does not seem sufficient, the hunter with a found bug could be tempted to find someone else willing to pay more. There is a black-market for the purchase of bugs, a marketplace somewhat readily found on the so-called Dark Web (these are parts of the Internet known for notorious or nefarious activity).

That being said, the effort to get a firm to pay you for the bug can be painfully slow and the firm might not ever opt to pay you, even if they have a bona fide bug bounty program in place. I would not suggest you quit your day job to become a software bounty hunter bent on making a fortune by finding bugs. There might be gold in them thar hills, but you will likely starve before you can find enough to make a living and put food on your table.

Bug Bounties Applied to AI Self-Driving Car Efforts

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. Besides our own efforts to find and eliminate any potential bugs, we also are able to aid other tech firms and auto makers by being private “bounty hunters” when requested, focusing on specifically AI self-driving car systems.

A macroscopic question, though, is whether or not the auto makers and tech firms should use bounty hunter efforts.

Similar to my earlier points, you might at first say that of course the auto makers and tech firms that are making AI self-driving cars should not undertake public-oriented bounty hunter programs. Why would they allow hackers to try and find bugs in AI self-driving car systems? Isn’t this tantamount to having your home examined closely by burglars? In fact, it’s scarier than that. It’s like having an entire neighborhood of homes closely examined by burglars, and they might not just be interested in your jewels and money but might be a threat to your personal safety too.

When you consider that AI self-driving cars are life-or-death systems, meaning that an AI self-driving car can go careening off the road and kill the human occupants or humans nearby, it would seem like the last thing you would want to do is invite potential black hat hackers to find holes.

The counter-argument is that if the auto makers or tech firms don’t do a bounty-type program, will they end up putting on the roads an AI self-driving car that has unknown bugs, for which the black hat hackers will ultimately find the holes anyway? And once those holes are found, the dastardly results, if exploited, could be life-and-death for those using the AI self-driving cars and those nearby.

Some say that it would be dubious and actually dangerous for the auto makers and tech firms to consider doing a public oriented bounty program for finding bugs in AI self-driving cars. If those entities want to do a private oriented bounty program, involving carefully selected white hat hackers, it would seem more reasonable given the nature of the life-and-death systems involved.

Run a Private Bounty Program, Hire a Firm, Handle Internally — All Options

It falls then on the heads of the auto maker or tech firm whether using a private bounty program is best, whether to instead hire a firm to do the equivalent, or whether to try some kind of internal bounty effort. The presumption is that the auto maker or tech firm needs to decide what will most likely reduce the chances of bugs existing in the AI self-driving car systems.

There are some who believe that the auto makers and tech firms might not take seriously the need to find bugs, and thus the assertion is made that regulations should be adopted accordingly. Perhaps the auto makers and tech firms should be forced by regulatory laws to undertake some kind of bounty efforts to find and eliminate bugs. This is open to debate, and for some it seems a bit of an overreach on the auto makers and tech firms. It is likely, though, that if AI self-driving cars appear to be exhibiting bugs once they are on our streets, regulatory oversight will begin to appear.

One view is that there’s no need to do a large-scale casting call for finding bugs.

Instead, the AI self-driving cars themselves will be able to presumably report when they have a bug and let the auto maker or tech firm know via Over The Air (OTA) processing. The OTA is a feature for most AI self-driving cars that allows the auto maker or tech firm to collect data from an AI self-driving car, via electronic communication such as over the Internet, and then also be able to push data and programs into the AI self-driving car.

It is assumed that the auto makers and tech firms will dutifully and rapidly send out updates via OTA to their AI self-driving cars, shoring up any bugs that are found. Though this is supposed to be the case, there will still be a time delay between when the bugs are discovered and then a bug patch or update is prepared for use. There will be another time delay between when those patches get pushed out and when the AI self-driving cars involved are able to download and install the patch.

I mention these elapsed time periods because some pundits seem to suggest that if a bug is found on a Monday morning at 8 a.m., by 8:01 a.m. the bug will have been fixed and the fix sent to the AI self-driving car. Not hardly. The auto maker or tech firm will need to first determine whether the bug is really a bug, and if so, what is causing it. They will need to find a means to plug or overcome the bug. They will need to test this plug and make sure it doesn’t adversely harm something else in the system. Etc.

Even once the patch is ready, sending it to the AI self-driving cars will take time. Plus, most of the AI self-driving cars are only able to do updates via the OTA when the AI self-driving car is not in motion and in essence parked and not otherwise being active. If you are using an AI self-driving car for a ridesharing service, the odds are that you’ll be running it as much as you can, nearly 24×7. Thus, trying to get the OTA patch will not be as instantaneous as it might seem.

We also need to consider the severity of the bug. If the bug is so severe that it causes the AI to lose control of the car, such as if the AI freezes up, you are looking at the potential of an AI self-driving car that rams into a wall, or slams into another car, or rolls over and off the road. The point being that you cannot think of this as finding bugs in, say, a word processing package or a spreadsheet package. These are bugs in a real-time system, one that holds in the balance the lives of humans.

For those of you that pay attention to the automotive field, you likely already know that General Motors (GM) was one of the first auto makers to formally put in place a VDP, doing so in 2016. For their public bounty efforts, the focus has tended to be the infotainment systems on-board their cars or other supply chain related systems and aspects.

Overall, it has been reported that GM from 2016 to the present has been able to resolve over 700 vulnerabilities, and has done so in coordination with over 500 bounty hunters and hackers. Under the GM umbrella, this effort includes Buick, Cadillac, Chevrolet, and GMC. Currently, an estimated seven of the Top 50 auto makers have some kind of bounty program.

This overarching focus to date, though, is different from dealing with the innermost AI aspects of the self-driving car capabilities. Recently, GM announced that they would be digging deeper via the use of a private bounty program. Apparently, they have chosen a select group of perhaps ten or fewer white hat hackers that had earlier participated in the VDP and will now be getting a closer look into the inner sanctum.

I’ve had AI developers ask me if they can possibly “get rich” by being a bounty hunter on AI self-driving cars. I wish that I could say yes, but the answer is a likely no. It might seem like an exciting effort, being a bounty hunter wandering the hills looking for a suspect. It’s not as easy as it seems. The odds of finding a bug are likely not so high, and how much you’d get paid is a key question too.

Consider too that you would need access to the AI self-driving car and its systems to even look for a bug. Right now, there aren’t true AI self-driving cars that are readily and openly available on our roadways. Instead, the auto makers and tech firms are carefully watching over the AI self-driving cars that are on the public roadways. About the only means for you to get access would be to become a white hat hacker that gets invited into a private bounty hunter program for an auto maker or tech firm.


When the outlaw Jesse James was sought during the Old West, a “Wanted” poster was printed that offered a bounty of $5,000 for his capture (stating “dead or alive”). It was a rather massive sum of money at the time. One of his own gang members opted to shoot Jesse dead and collect the reward. I suppose that shows how effective a bounty can be.

Bounty programs have existed since at least the time of the Romans and thus we might surmise that they do work, having successfully endured as a practice over all of these years. For AI self-driving cars, I hope you will ponder carefully whether the use of a bounty program is worthwhile or not. The key overall aspect is that we don’t want AI self-driving cars on our roadways that have bugs. I’ll put up a Wanted poster right now for that goal.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

Copyright 2018 Dr. Lance Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
