Dr. Lance Eliot, AI Insider
(Ed. Note: For readers interested in Dr. Eliot's ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/)
Bounty hunter needed to find a copper pot that went missing from a small shop. Reward for recovery of the copper pot will be 65 bronze coins. So read a notice posted in the city of Pompeii during the time of the Roman Empire.
In more modern times, you might be aware that in the 1980s there were some notable bounties offered to find bugs in off-the-shelf software packages, and in the 1990s Netscape notably offered a bounty for finding bugs in its web browser. Google and Facebook each opted for bug bounty hunting starting in 2010 and 2013, respectively, and in 2016 even the U.S. Department of Defense (DoD) got into the act with a “Hack the Pentagon” bounty effort (note that the publicly focused bounty was for bugs found in various DoD-related websites and not in defense mission-critical systems).
According to statistics published by the entity HackerOne, the monies paid out toward bug bounty discoveries totaled nearly $30 million last year.
For bugs that are considered substantive issues by a software maker, the usual everyday bounty is around $2,000 per bug (once it is confirmed that the bug exists).
Bounties, though, lie in the eye of the beholder, in the sense that whoever is offering the bounty might go lower or higher, and in some cases there have been bounties in the six-figure range, typically around $250,000 or so.
Some are puzzled that any firm would want to offer a bounty to find bugs in their software.
On the surface, this seems like “you are asking for it” kind of a strategy.
Those who favor bounty hunting for software bugs are prone to saying that it makes sense to offer such programs. Rather than trying to pretend that there aren’t any holes in your system, why not encourage the holes to be found, doing so in a “controlled” manner?
Well-known firms such as Starbucks, GitHub, Airbnb, American Express, Goldman Sachs, and others have opted to use the bounty hunting approach.
Generally, a firm wishing to do so will put in place a Vulnerability Disclosure Policy (VDP).
The VDP indicates how the bugs are to be found and reported to the firm, along with how the reward or bounty will be provided to the hunter. Usually, the VDP will require that the hunter end up signing a Non-Disclosure Agreement (NDA) so that they won’t reveal to others what they found.
White Hat Hackers Try to Do Some Kind of Good
Who are these bounty hunters?
They are often referred to as white hat hackers.
A white hat hacker is the phrase used for hackers who are trying to do some kind of good.
For anyone who knows much about hacking, such as trying to break into a system, it is somewhat frustrating that the mass media will often confuse true hacking with marginal hacking. If someone uses a social engineering technique to get your password, perhaps calling you on the phone, claiming to be with tech support, and asking you for your password, few “genuine” hackers would consider that to be a form of hacking. The culprit merely tricked someone into giving up their password.
This brings us to the topic of what kinds of software bugs the bounty efforts are looking for.
Generally, the bounty program excludes things like social engineering. It’s more about having identified an actual bug in the system.
Often, in bounty efforts, more than one bounty hunter finds the same bug.
The firm undertaking the bounty effort then needs to figure out which of the bug reports are duplicates, and which bounty hunter should get the credit for having found the bug.
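How duplicates get adjudicated varies by program, but a common policy is that the first valid report of a given bug earns the credit and later matching reports are marked as duplicates. A minimal sketch of that policy, using a hypothetical report format in which each report carries a “fingerprint” identifying the underlying bug:

```python
from datetime import datetime

def triage_reports(reports):
    """Group bug reports by fingerprint; the earliest report of each bug wins.

    `reports` is a list of dicts with hypothetical keys: 'hunter',
    'fingerprint' (e.g., a hash of the offending code path), and
    'submitted' (a datetime). Returns {fingerprint: credited hunter}.
    """
    winners = {}
    for report in sorted(reports, key=lambda r: r["submitted"]):
        # setdefault keeps the first (earliest) hunter per fingerprint;
        # later reports of the same bug are effectively duplicates.
        winners.setdefault(report["fingerprint"], report["hunter"])
    return winners

reports = [
    {"hunter": "alice", "fingerprint": "overflow-42", "submitted": datetime(2019, 3, 2)},
    {"hunter": "bob", "fingerprint": "overflow-42", "submitted": datetime(2019, 3, 5)},
    {"hunter": "bob", "fingerprint": "auth-bypass-7", "submitted": datetime(2019, 3, 1)},
]
print(triage_reports(reports))  # {'auth-bypass-7': 'bob', 'overflow-42': 'alice'}
```

In practice the hard part is computing the fingerprint, since two hunters rarely describe the same bug identically; the sketch assumes that matching has already been done.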
If you are pondering what kind of bugs might be found, you can take a look at the Common Vulnerability Scoring System (CVSS) to see how bugs are labeled as either low, medium, high, or critical, along with seeing examples of such bugs.
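For a concrete sense of those labels, the CVSS v3.x specification maps a numeric base score from 0.0 to 10.0 onto the qualitative ratings. A minimal sketch of that mapping (the function name is my own):

```python
def cvss_severity(base_score: float) -> str:
    """Map a CVSS v3.x base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "none"
    if base_score <= 3.9:
        return "low"
    if base_score <= 6.9:
        return "medium"
    if base_score <= 8.9:
        return "high"
    return "critical"

# A remotely exploitable code-execution flaw typically scores near the top.
print(cvss_severity(9.8))  # critical
```

Bounty payouts are often tiered against exactly these bands, with the six-figure rewards reserved for the critical end of the scale.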
Debate ensues in leadership circles about whether it is better to use a bounty approach or to instead hire a bug-finding firm to do the work.
There are plenty of firms that will do security threat analyses and do the same kind of work that bounty hunters would do. You can establish an hourly rate or a fixed price for them to assess your systems and try to find bugs. They can then work hand-in-hand with your software team, and it is all handled as a rather confidential matter.
Whether an Internal Team Should Do the Bounty Hunting Is Debated
Some would even argue that your own internal software team should be doing the bounty hunting.
One argument against using your own team to find bugs is that they are often too familiar with the software to readily spot the bugs.
There are bounty hunters that are interested in selling their find to the highest bidder.
If the bounty provided by a firm does not seem sufficient, the hunter with a found bug could be tempted to find someone else willing to pay more. There is a black-market for the purchase of bugs, a marketplace somewhat readily found on the so-called Dark Web (these are parts of the Internet known for notorious or nefarious activity).
AI Self-Driving Cars And Bounty For Bugs
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. Besides our own efforts to find and eliminate any potential bugs, we also are able to aid other tech firms and auto makers by being private “bounty hunters” when requested, focusing on specifically AI self-driving car systems.
A macroscopic question, though, is whether or not the auto makers and tech firms should use bounty hunter efforts.
Similar to my earlier points, you might at first say that of course the auto makers and tech firms that are making AI self-driving cars should not undertake public oriented bounty hunter programs.
Why would they allow hackers to try and find bugs in AI self-driving car systems?
Isn’t this tantamount to having your home examined closely by burglars?
In fact, it’s scarier than that. It’s like having an entire neighborhood of homes closely examined by burglars, and they might not just be interested in your jewels and money but maybe be a threat to your personal safety too.
When you consider that AI self-driving cars are life-or-death systems, meaning that an AI self-driving car can go careening off the road and kill the human occupants or humans nearby, it would seem like the last thing you would want to do is invite potential black hat hackers to find holes.
The counter-argument is that if the auto makers or tech firms don’t do a bounty-type program, will they end up putting on the roads an AI self-driving car that has unknown bugs, for which the black hat hackers will ultimately find the holes anyway? And, once those holes are found, the dastardly results if exploited could be a matter of life-and-death for those using the AI self-driving cars and those nearby them.
Some say that it would be dubious and actually dangerous for the auto makers and tech firms to consider doing a public oriented bounty program for finding bugs in AI self-driving cars. If those entities want to do a private oriented bounty program, involving carefully selected white hat hackers, it would seem more reasonable given the nature of the life-and-death systems involved.
Run a Private Bounty Program, Hire a Firm, Handle Internally — All Options
It falls on the heads of the auto maker or tech firm, then, to decide whether a private bounty program is best, whether to instead hire a firm to do the equivalent, or whether to try some kind of internal bounty effort.
The presumption is that the auto maker or tech firm needs to decide what will most likely reduce the chances of bugs existing in the AI self-driving car systems. In fact, the auto maker or tech firm might try all of those avenues, doing so under the notion that given the importance of such systems and their critical nature, the more the merrier in terms of finding bugs.
There are some that believe that the auto makers and tech firms might not take seriously the need to find bugs and thus the assertion is made that regulations should be adopted accordingly.
Perhaps the auto makers and tech firms should be forced by regulatory laws to undertake some kind of bounty efforts to find and eliminate bugs. This is open to debate, and for some it is a bit of an overreach into the affairs of the auto makers and tech firms. It is likely, though, that if AI self-driving cars appear to be exhibiting bugs once they are on our streets, regulatory oversight will begin to appear.
One view is that there’s no need to do a large-scale casting call for finding bugs.
Instead, the AI self-driving cars themselves will presumably be able to report when they have a bug and let the auto maker or tech firm know via Over-The-Air (OTA) processing.
It is assumed that the auto makers and tech firms will dutifully and rapidly send out updates via OTA to their AI self-driving cars, shoring up any bugs that are found. Though this is supposed to be the case, there will still be a time delay between when the bugs are discovered and then a bug patch or update is prepared for use. There will be another time delay between when those patches get pushed out and when the AI self-driving cars involved are able to download and install the patch.
Plus, most AI self-driving cars are only able to perform an OTA update when the car is not in motion, in essence parked and not otherwise active.
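Abstractly, that “parked only” constraint amounts to a gate on the vehicle’s state before a patch is allowed to install. A minimal sketch, with entirely hypothetical field names and checks (real fleets would apply far stricter conditions, such as battery level, connectivity, and signed-image verification):

```python
from dataclasses import dataclass

@dataclass
class VehicleStatus:
    """Hypothetical snapshot of a self-driving car's current state."""
    speed_mph: float
    gear: str       # e.g., "park" or "drive"
    occupied: bool  # anyone currently in the vehicle

def can_install_ota_patch(status: VehicleStatus) -> bool:
    """Gate an OTA install: proceed only while parked and inactive.

    A sketch of the policy described in the text, not any vendor's
    actual update logic.
    """
    return (
        status.speed_mph == 0.0
        and status.gear == "park"
        and not status.occupied
    )

parked = VehicleStatus(speed_mph=0.0, gear="park", occupied=False)
driving = VehicleStatus(speed_mph=35.0, gear="drive", occupied=True)
print(can_install_ota_patch(parked), can_install_ota_patch(driving))  # True False
```

The gate is what creates the extra delay discussed above: a car that stays in active use simply keeps deferring the install until it is safely idle.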
We also need to consider the severity of the bug.
If the bug is so severe that it causes the AI to lose control of the car, such as if the AI freezes up, you are looking at the potential of an AI self-driving car that rams into a wall, slams into another car, or rolls off the road. The point being that you cannot think of this as akin to finding bugs in a word processing package or a spreadsheet package. These are bugs in a real-time system, one that holds the lives of humans in the balance.
For those of you that pay attention to the automotive field, you likely already know that General Motors (GM) was one of the first auto makers to formally put in place a VDP, doing so in 2016.
For their public bounty efforts, the focus has tended to be the infotainment systems on-board their cars or other supply chain related systems and aspects.
Overall, it has been reported that from 2016 to the present GM has been able to resolve over 700 vulnerabilities, doing so in coordination with over 500 bounty hunters and hackers. Under the GM umbrella, this effort includes Buick, Cadillac, Chevrolet, and GMC. Currently, an estimated seven of the Top 50 auto makers have some kind of bounty program.
This overarching focus to-date, though, is different from dealing with the innermost AI aspects of the self-driving car capabilities.
I’ve had AI developers ask me if they can possibly “get rich” by being a bounty hunter on AI self-driving cars.
I wish that I could say yes, but the answer is likely no. It might seem like an exciting effort, being a bounty hunter wandering the hills looking for a suspect, but it’s not as easy as it seems. The odds of finding a bug are likely not so high, and how much you’d get paid is a key question too.
Consider too that you would need access to the AI self-driving car and its systems to even look for a bug.
Right now, there aren’t true AI self-driving cars that are readily and openly available on our roadways. Instead, the auto makers and tech firms are carefully watching over the AI self-driving cars that are on the public roadways. About the only means for you to get access would be to become a white hat hacker that gets invited into a private bounty hunter program for an auto maker or tech firm.
When the outlaw Jesse James was sought during the Old West, a “Wanted” poster was printed that offered a bounty of $5,000 for his capture (stating “dead or alive”). It was a rather massive sum of money at the time. One of his own gang members opted to shoot Jesse dead and collect the reward. I suppose that shows how effective a bounty can be.
Bounty programs have existed since at least the time of the Romans and thus we might surmise that they do work, having successfully endured as a practice over all of these years. For AI self-driving cars, I hope you will ponder carefully whether the use of a bounty program is worthwhile or not. The key overall aspect is that we don’t want AI self-driving cars on our roadways that have bugs. I’ll put up a Wanted poster right now for that goal.
For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website
The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.
For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru
To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot
For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/
For his Medium blog, see: https://firstname.lastname@example.org
For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot
Copyright © 2019 Dr. Lance B. Eliot