Clever Decoys Via Chaff Bugs To Ward Off Self-Driving Car Cyber-Hackers

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]

In the movie remake of The Thomas Crown Affair, the main character opts to go into an art museum to ostensibly steal a famous work of art, and does so attired in the manner of The Son of Man artwork (a man wearing a bowler hat and an overcoat).

Spoiler alert: he arranges for dozens of other men to come into the museum dressed similarly to him, thus confounding the efforts of the waiting police, who had been tipped that he would come there to commit his thievery. By having many men serve as decoys, he pulls off the heist, and the police are exasperated at having to check the numerous decoys yet unable to nab him (he sneakily changes his clothes).

This ploy was a clever use of deception.

During World War II, there was the invention of chaff, which was also a form of deception.

Radar had just emerged as a means of detecting flying airplanes and thus shooting them down more accurately. The radar device would send out a signal that bounced off the airplane and returned to the device, revealing where the airplane was.

It was hypothesized that there might be a means to confuse the radar device by putting something into the air that would seem like an airplane but was not an airplane. At first, the idea was to have something suspended from an airplane or maybe have balloons or parachutes that could contain a material that would bounce back the radar signals.

A flying airplane could potentially release the parachutes or balloons that had the radar reflecting material.

After exploring this notion, it was further advanced by the discovery that strips of metal foil dropped from the airplane were an easier way to create the deception.

The first versions were envisioned as acting in a double-duty fashion, being the size of a sheet of paper and containing propaganda written on them. This would provide a twofer (two-for-one): the sheet would confuse the radar, and then, once landed on the ground, it would serve as a propaganda leaflet.

Turns out that strips of aluminum foil were much more effective.

This nixed the propaganda element of the idea. With the strips, you could dump out hundreds or even thousands all at once, bundled together but intended to float apart from each other once airborne. The strips would flutter around and the radar would ping off of them. With a cloud of them floating in the air, the radar would be overwhelmed and unable to identify where the airplane was.

Of course, there are many other ways in life that you might come across these kinds of ploys.

One relatively recent such use applies to computer software.

Use Of Deception And Chaff Bugs In Software

Researchers at NYU published an innovative research paper about their use of chaff bugs in software (work done by Zhenghao Hu, Yu Hu, and Brendan Dolan-Gavitt).

This reuse of the old “decoys trickery” is an intriguing modern approach to trying to bolster computer security.

We already know that computer hackers (in this case, the word “hackers” is being used to imply thieves or hooligans) will often try to find some exploitable aspect of a software program so that they can get the program to do their bidding or otherwise act in an untoward manner.

Famous exploits include forcing a buffer overflow, which can let an attacker reach areas of memory the program normally should not access.
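To make the buffer overflow idea concrete, here is a minimal sketch in C; the function names are invented for illustration, and the unsafe variant is shown only to highlight the pattern, not to be executed with hostile input:

```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of the classic stack buffer overflow pattern.
   copy_name() uses strcpy(), which copies until the terminating NUL with
   no bounds check: input longer than 15 characters writes past buf,
   clobbering adjacent stack memory (other locals, the saved return
   address), which is exactly what an exploit leverages. */
void copy_name(const char *name) {
    char buf[16];
    strcpy(buf, name);                 /* no length check: the exploit surface */
    printf("hello, %s\n", buf);
}

/* The standard fix is a bounded copy that truncates instead of overflowing: */
void copy_name_safe(char *dst, size_t dstlen, const char *name) {
    snprintf(dst, dstlen, "%s", name); /* never writes more than dstlen bytes */
}
```

The bounded version can be handed an overlong string safely; the result is simply truncated to fit the destination buffer.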

These potential exploits are usually small stretches of code that can be turned to the hacker’s evil doings.

Let’s get back to the researchers and what they came up with.

They were trying to develop software that would find exploits in software. Indeed, there are various tools you can use to find potential exploits. Hackers use these tools. Such tools can also be used by those who want to scan their own software and try to find exploits, hopefully before they actually release that software. It would be handy to catch the exploits beforehand, rather than having them arise at a bad time, or allowing someone nefarious to find and use them.

To properly test a tool that seeks to find exploits, you need a test-bed of software that contains potential exploits, so that you can run your detective tool against it.

Presumably, the tool should be able to find the exploits, which you know are in the test-bed. This helps verify that the tool apparently works as hoped. If the tool cannot find exploits that you know are embedded in the test-bed, you’d need to take a closer look at the tool and try to figure out why it missed them. This cycle is repeated over and over, until you believe that the tool is catching all of the exploits that you purposely seeded into the test-bed.

So, you need to create a test-bed that has a lot of potential exploits. It can be laborious to think up and write a ton of such exploits. You can find many of them online and copy them, but it is still quite a labor-intensive process. Therefore, it would be handy to have a tool that would generate exploits, or potential exploits, which you could then insert or “inject” into a software test-bed.

With me so far?

Here’s the final twist.

If you had a tool that could create potential exploits for seeding into a test-bed, you could also consider using that same generation capability to create decoys for use in real software.

Think of each of the potential exploits as akin to a strip of foil for the World War II chaff.

Explaining How Chaff Bugs Work

For the WWII chaff, you’d have lots and lots of the strips, so as to overwhelm the enemy radar.

Why not do the same for software? Generate, say, hundreds or maybe even thousands of potential exploits, clips of code, and then embed those clips into the software that you are otherwise developing.

This could then serve to trick any hacker that is aiming to look into your code.

They would find tons of these potential exploits.

Now, you’d of course want to make sure that these seeded exploits are non-exploitable.

In other words, you’d be shooting yourself in the foot if you generated true exploits. You want ones that look like the real thing, but that are in fact not exploitable.
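A hedged sketch of what such a non-exploitable bug might look like, in the spirit of the NYU work: the overflow below is genuine, but it is engineered so that the spilled bytes can only land in dead padding that nothing ever reads, and the one live field they could reach is rewritten immediately afterward. All names here are invented for illustration.

```c
#include <string.h>

struct record {
    char data[8];
    char dead[24];      /* dead bytes: never read anywhere in the program */
    int  checksum;      /* over-constrained: reset after every store      */
};

void store(struct record *r, const char *src, size_t n) {
    /* Looks like a textbook unchecked-length overflow of data[8]... */
    if (n > sizeof r->data + sizeof r->dead)
        n = sizeof r->data + sizeof r->dead;   /* ...but the spill is
                                                  confined to dead[]   */
    memcpy(r->data, src, n);
    r->checksum = 0;    /* any clobbered value is immediately restored */
}
```

An exploit-hunting tool (or a hacker) sees a length-controlled write past an 8-byte buffer; proving that the overwrite is harmless takes real analysis effort, which is the whole point of the decoy.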

The hacker would then be faced with finding a needle in a haystack. Even if there is a real exploit in your code, presumably introduced unintentionally and not caught beforehand, the hacker now confronts thousands of potential exploits, and the odds of finding the true one are lessened. This raises the barrier to entry, so to speak, in that the hacker must spend an inordinate amount of time and effort to possibly find the true exploit, even if it exists, which it might not.

Would it scare off a hacker looking for exploits?

Maybe yes, maybe no.

On the one hand, if the hacker looked at the overall code and right away found a potential exploit, they might get pretty excited and think it is their lucky day. They might then expend a lot of attention on the found exploit, which, if truly non-exploitable, is a waste of their time.

Would they then give up, or would they look for another one?

If they give up, great, the decoy did its thing. If they look for more, they’ll certainly find more because we know that we’ve purposely put a bunch of them in there.

After finding numerous such potential exploits, and after discovering that they are non-exploitable, would then the hacker give up?

Quite possibly. It all depends on how important it is to them to find a potential exploit. It also depends on how “good” the non-exploitable exploits are at looking like true exploits.

You could consider randomly scattering the non-exploitable exploits throughout the software.

This cuts both ways.

On the one hand, the randomness hopefully prevents a hacker from identifying a pattern to where the decoys were planted.

At the same time, you might end up putting a decoy in a part of the code where it should not go.

AI Autonomous Cars And Chaff Bugs

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As part of that effort, we are also exploring ways to try and protect the AI software, particularly once it is on-board a self-driving car and could potentially be hacked by someone with untoward intentions.

One approach to help make the AI software harder to figure out for an interloper involves making use of code obfuscation. This is a method in which you purposely make the source code difficult to logically comprehend.
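A small, hypothetical illustration of what source-level obfuscation means in practice: both functions below cap a requested speed at a limit, but the second hides its intent behind meaningless names and a branch-free algebraic rewrite. (The signed right shift relies on the common arithmetic-shift behavior of mainstream compilers; this is a sketch, not portable production code.)

```c
/* Clear version: intent is obvious from the name and the comparison. */
int clamp_speed(int requested, int limit) {
    return requested > limit ? limit : requested;
}

/* Obfuscated version: same behavior, much harder to comprehend. */
int q9(int a, int b) {
    int m = (a - b) >> 31;        /* all ones when a < b, else zero  */
    return (a & m) | (b & ~m);    /* selects min(a, b) with no branch */
}
```

Multiply such rewrites across an entire codebase and an interloper’s job of figuring out what the software actually does gets considerably harder.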

Another means of trying to undermine an interloper would be to use chaff bugs in the AI software.

This has advantages and disadvantages.

It has the potential to boost security via a security-by-deception approach and might discourage hackers trying to delve into the system. A significant disadvantage is that the decoys could undermine the system itself, given its real-time nature. The AI must work under tight time constraints and make computations that ultimately control a moving car, and the “decisions” made by the software are of a life-or-death nature.

A decoy placed in the wrong spot of the code, one that chews up on-board processing cycles, could put the AI and human lives at risk.

Consider that these are the major tasks of the AI for a self-driving car:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action plan updating
  • Car controls command issuance

Where would it be “safe” to put the decoys?

Safe in the sense that the execution of the decoy does not delay or interfere with the otherwise normal real-time operation of the system.
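One plausible placement tactic, sketched here as a hypothetical (every name is invented for illustration): gate the decoy behind a flag that the real-time loop never sets, so the per-cycle cost is a single predictable branch and the decoy never executes on the hot path, while still appearing reachable to someone reading the source.

```c
#include <string.h>

static volatile int opaque_flag = 0;     /* never set anywhere in the program;
                                            volatile deters dead-code removal */

static void decoy_routine(const char *cfg) {
    char buf[12];
    strcpy(buf, cfg);                    /* looks exploitable; never reached */
    (void)buf;
}

/* A stand-in for one pass of the real-time control loop: */
double control_cycle(double requested_speed, double speed_limit,
                     const char *cfg) {
    if (opaque_flag)                     /* always false at runtime */
        decoy_routine(cfg);
    /* ... normal sensor fusion / planning / actuation work here ... */
    return requested_speed > speed_limit ? speed_limit : requested_speed;
}
```

The trade-off is visible even in this toy: the guard keeps the decoy off the timing-critical path, but a hacker who notices that opaque_flag is never set may learn to discount such decoys, so real systems would need less obvious gating.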

It’s dicey wherever you might be thinking to place the decoys.

We also need to consider the determination of the hacker.

A hacker who happens upon some software might be curious enough to probe it, yet give up if they aren’t sure the software does anything significant enough to be worth the effort of cracking. In the case of the AI software for a self-driving car, there is a lot of incentive to break into the code. So even if the decoys are present, even if the hacker notices them, and even if they are somewhat discouraged, it seems less likely they’d give up, since the prize at the end has such high value.

Anyway, one of the members of our AI development team is taking a closer look at this potential use of chaff bugs. My view is that team members should be able to set aside a small percentage of their time for innovation projects that might or might not lead to something of utility. It has gradually become popular among high-tech firms to allow their developers some “fun” time on projects of their own choosing. This boosts morale, gives them a break from their other duties, and might just land upon a goldmine.

Machine Learning And Chaff Bugs

One approach that we’re exploring is whether Machine Learning (ML) can be used to help generate the non-exploitable exploits and to make those decoys appear as realistically integral to the code as we can.

By analyzing the style of the existing source code, the ML tries to take templates of non-exploitable exploits and see if they can be “personalized” as befits the source code.

This would make those decoys even more convincing.

At an industry conference I mentioned the chaff bugs work, and I was asked about whether to hide them or whether to make them more obvious in some respects.

The idea is that if you hide them well, the hacker might not realize they are faced with poring through purposely seeded non-exploitable exploits, and so they blindly plow away, using up a lot of their effort needlessly.

On the other hand, if you make at least some of them more apparent, it might serve as a warning to the hacker that they are facing software whose makers went to the trouble of making true exploits very hard to find. You might consider this equivalent to putting a sign outside your house saying the house is protected by burglar alarms. The sign alone might scare off a lot of potential intruders, whether or not you have actually planted the decoys (some people put up a burglar alarm sign merely as a scare tactic).

For AI software that runs a self-driving car, I’d vote that we all ought to be making it as hard to crack into as we can.


The auto makers and tech firms aren’t paying as much attention to the security aspects as they perhaps should, since right now AI self-driving cars are pretty much kept in their hands as they do testing and trial runs. Once true AI self-driving cars are sold openly, the chances for hackers to spend whatever amount of time they want cracking into the system go up.

We need to prepare for that eventuality. If AI self-driving cars become prevalent and yet they get hacked, it’s going to be bad times for everyone: the auto makers, the tech firms, and the public at large.

Chaff bugs, and whatever other novel ideas arise: we’re going to take a look and kick the tires to see whether they’ll be viable as a means of protecting the AI systems of self-driving cars.

For free podcast of this story, visit:

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see:

To follow Lance Eliot on Twitter:

For his blog, see:

For his AI Trends blog, see:

For his Medium blog, see:

For Dr. Eliot’s books, see:

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
