Pranking Driverless Cars, Dangerously Foolish Acts

Dr. Lance B. Eliot, AI Insider

Pedestrians pranking self-driving cars is on the rise, and it is dangerously foolish

When you pull a prank on someone, it is hopefully done in jest and without any particular adverse consequences.

Some would say that pranks are fun, interesting, and not a big deal. Even those who pull pranks would likely, if reluctantly, admit that you can take things too far. A good friend of mine got hurt when someone at a bar suddenly pulled their chair away from the table after they had gotten up; upon trying to sit back down, my friend unknowingly and shockingly went all the way to the floor. The fall hurt their back and neck and nearly caused a concussion, and my friend went to the local emergency room for a quick check-up. A seemingly “fun” joke that was meant to be harmless turned out to have serious consequences.

Of course, pranks can be purposely designed to be foul.

Let’s shift attention from the notion of pranks to something similar to a prank, but we’ll recast it in different terms.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. One of the current concerns about AI self-driving cars is that some people are trying to prank them.

The AI of today’s self-driving cars is still quite crude in comparison to where we all hope to be down-the-road. Generally, you can consider the AI to be a very timid driver.

You need to be aware that there are various levels of AI self-driving cars. The topmost level is referred to as Level 5. A Level 5 self-driving car is one that is supposed to be able to drive the car without any human driver needed. Thus, the AI needs to be able to drive the car as though it was as proficient as a human driver. This is not easy to do.

For self-driving cars less than a Level 5, it is assumed and required that a human driver be present. Furthermore, the human driver is considered responsible for the driving task, even though the driving is shared between the human driver and the AI system.

Early AI Self-Driving Cars Cautious Like Teenage Drivers

So, imagine that you’ve got an AI system that drives a Level 5 self-driving car in the simplest of ways, being at times akin to a teenage driver (though, don’t over ascribe that analogy; the AI is not at all “thinking” and thus not similar to a human mind, even that of a teenager!). The AI is driving the self-driving car and taking lots of precautions. This makes sense in that the auto makers and tech firms don’t want an AI self-driving car to be driving in a manner that could add risk to having an incident occur.

You’ve got a kind of grand convergence in that some people have figured out how timid these AI self-driving cars are, and of those people, there are some that have opted to take advantage of the circumstances. As I’ll emphasize in a moment, more and more people are going to similarly opt to “prank” AI self-driving cars.

When AI self-driving cars first started appearing, they were considered a novelty and most people kept clear of them. They did so because they were surprised to even encounter one.

In addition, most of the time, the AI self-driving cars were being tried out on public roads in relatively high-tech areas, such as Sunnyvale and Palo Alto in California, geographical areas dominated by tech firms and tech employees.

One of the early-on stories about how the AI reacts in a timid manner consisted of the now famous four-way stop tale. It is said that an AI self-driving car would come to a four-way stop, and do what’s expected, namely come to a full and complete stop. Do humans also come to a full and complete stop? Unless you live in some place that consists of rigorously law-abiding human drivers, I dare say that many people do a rolling stop. They come up to the stop sign and if it looks presumably safe to do so, they continue rolling forward and into the intersection.

In theory, we are all supposed to come to a complete stop and then judge as to which car should proceed if more than one car has now stopped at the four-way stop.

Well, the AI self-driving car detected that other cars were coming up to or at the stop sign on the other sides of the four-way stop. The AI then calculated that it should identify what those other cars are going to do. If a human inadvertently misreads a stop sign and maybe doesn’t even realize it is there, and therefore barrels into an intersection, you would certainly want to have the AI self-driving car not mistakenly enter into the intersection and get into an untoward incident with that ditsy human driver. Crash, boom.

But, suppose those human drivers weren’t necessarily ditsy and were just driving as humans do. They came up to the stop sign and did a traditional rolling stop. The AI of the self-driving car would likely interpret the rolling stop as an indicator that the other car is not going to keep the intersection clear. The right choice then would be for the AI to keep the self-driving car at the stop sign, waiting until the coast is clear.
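
As a thought experiment, the timid wait-your-turn policy described above can be sketched in a few lines of code. The speed threshold and function names here are illustrative assumptions, not any auto maker’s actual logic.

```python
STOPPED_SPEED_MPS = 0.1  # below this, we consider a car fully stopped

def other_car_will_yield(speed_mps: float) -> bool:
    """Judge whether another car at the four-way stop will keep the
    intersection clear. Only a full and complete stop counts; a rolling
    stop is treated as the car claiming the intersection."""
    return speed_mps <= STOPPED_SPEED_MPS

def should_proceed(other_car_speeds) -> bool:
    """A timid policy: enter the intersection only when every other
    car has demonstrably come to a full stop."""
    return all(other_car_will_yield(s) for s in other_car_speeds)

# All other cars fully stopped: take our turn.
print(should_proceed([0.0, 0.05]))   # True
# A single rolling-stop car keeps the timid AI waiting at the sign.
print(should_proceed([0.0, 1.5]))    # False
```

Under this kind of policy, a stream of human drivers doing rolling stops keeps the condition perpetually false, which is exactly the patient-waiting behavior the four-way-stop tale describes.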

Suppose though that human driven cars, one after another, all did the same rolling stop. The AI, being timid or cautious, allegedly sat there, waiting patiently for its turn.

Was it coincidental that the other cars, the human driver cars, proceeded each to do a rolling stop? It could be. But, it could also be that they noticed that the other car waiting at the stop sign was an AI self-driving car.

Is this a prank then by those human drivers upon an AI self-driving car?

I suppose you might argue with the use of the word “prank” in this case. Were those humans trying to pull away the seat of someone before they sat down? Were these humans messing with someone’s experiment as a means to get revenge? In one sense, you could argue that they were pranking the AI self-driving car, and doing so to gain an advantage over the AI self-driving car (didn’t want to wait at a four-way stop). You could also argue that it wasn’t a prank per se, but more like a maneuver to keep traffic flowing (in their minds they might have perceived this), and perhaps it is like a feint in a sport.

Imagine if a human driver coming up to the four-way stop tried to do a rolling stop, but meanwhile another human driven car did the same thing. You’d likely end up with a game of chicken. Each would challenge the other. If you dare to move forward, I will too. The other driver is thinking the same.

The four-way stop example showcases the situation of an AI self-driving car and its relationship to human driven cars. There’s also the circumstances of a pedestrian messing around with an AI self-driving car.

Pedestrians Likely to Mess with Self-Driving Cars

Depending upon where you live in the world, you’ve probably seen pedestrians that try to mess with human drivers of conventional cars. In New York, it seems an essential part of life to stare down human drivers when you are crossing the street, especially when jaywalking.

There is no ready means to do a traditional stare down with an AI self-driving car. Presumably, a pedestrian would then be mindful to be less risky when trying to negotiate the crossing of a street. They can no longer make the eye contact that says don’t you dare drive there and get in my pedestrian way. Instead, the AI self-driving car is possibly going to do whatever it darned well pleases.

I’d gauge that most pedestrians right now are willing to give an AI self-driving car a wide berth, but this is only if they even realize it is an AI self-driving car. Many AI self-driving cars are easily recognizable because they have a conehead shape on top (usually containing the LIDAR sensor). Once again fitting into the “amazement” category, pedestrians are in awe when they see one drive past them. Give it room, is the thought most people likely have.

But, if you see them all the time and instantly recognize them, the awe factor is gone.

In a manner similar to the four-way stop, you can often get an AI self-driving car to halt or change its course by some simple trickery. The AI is likely trying to detect pedestrians that appear to be a “threat” to the driving task. If you are standing still on the sidewalk a few feet from the curb, you would likely be marked as a low threat or non-threat. If you instead were at the curb, your threat level increases. If you are in motion and heading toward the street where the AI self-driving car is going, your threat risk increases further.
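
To make the idea concrete, here is a hypothetical sketch of such a threat-scoring scheme. The features, weights, and thresholds are invented for illustration; real perception systems are far more sophisticated.

```python
def pedestrian_threat(distance_from_curb_m: float,
                      moving_toward_street: bool,
                      speed_mps: float) -> float:
    """Return a 0..1 threat score: higher means the pedestrian is more
    likely to enter the car's path. Weights are illustrative only."""
    score = 0.1  # baseline: any nearby pedestrian gets some attention
    if distance_from_curb_m < 0.5:
        score += 0.3  # standing right at the curb raises the threat level
    if moving_toward_street:
        # heading toward the street raises it further, scaled by speed
        score += 0.3 + min(speed_mps / 10.0, 0.3)
    return min(score, 1.0)

# Standing still a few feet back from the curb: low threat.
print(pedestrian_threat(1.5, False, 0.0))   # 0.1
# Striding toward the street from the curb: high threat (roughly 0.9).
print(pedestrian_threat(0.2, True, 2.0))
```

The prank described next exploits exactly this kind of scoring: a feint toward the street spikes the score, and the timid AI brakes or swerves in response.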

Knowing this, you can potentially fool the AI into assuming that you are going to jaywalk. Given the timid nature of the AI, it will likely then calculate that it might be safer to come to a stop and let you do so, or maybe swerve to let you do so. If one pedestrian tries this and it works to halt the AI self-driving car, and there are more pedestrians nearby that witness it, they too will likely opt to play the same trick.

You don’t need an AI self-driving car to see this same kind of phenomenon occurring. Drive to any downtown area that is filled with pedestrians. If those pedestrians sense that you are a sheep, they will take advantage of the situation. As we increasingly have more AI self-driving cars on our roadways, I’ll predict that the “amazement” category will fade and instead will be replaced with the “prank an AI self-driving car” mindset.

Some AI developers that are working on AI self-driving cars are dumbfounded when I make such a prediction. They believe fervently in the great value to society that AI self-driving cars will bring. They cannot fathom why people would mess around with this.

Why would people mess around with AI self-driving cars in this “prank” kind of way?

Here are some of the likely reasons:

  • Why did the chicken cross the road? To get to the other side. If humans, whether as drivers or pedestrians, perceive that an AI self-driving car is essentially in their way, some of those humans are going to find a means to keep it from getting in their way. These humans will simply outmaneuver the AI self-driving car; one of the easiest means will be to do a feint, and the AI self-driving car will do the rest for them.

Some Likely to Envision Their Pranks Are Helpful

  • There could be some people that will ponder whether they might somehow help AI self-driving cars by purposely trying to prank them. If you come up behind your friend and say boo, the next time someone else does it, they’ll hopefully be better prepared. Some people will assume that if they prank an AI self-driving car, it will learn from it, and then no longer be so gullible. They might be right, or they might be mistaken to believe that the AI will get anything out of it (this depends on how the AI developers developed the AI for the self-driving car).

There you have it, a plethora of reasons that people will be tempted to prank an AI self-driving car. I can come up with more reasons, but I think you get the idea that we are heading toward a situation wherein a lot of people will be motivated to undertake such pranks.

What’s going to stop these pranksters?

Some auto makers and tech firms, and especially some AI developers, believe that we should go to the root of the problem. What is that root? In their minds, it’s the pesky and bothersome human that’s the problem.

As such, these advocates say that we should enact laws that will prevent humans from pranking AI self-driving cars.

In this view, if you have tough enough penalties, whether monetary fines or jail time, it will make pranksters think twice and stop their dastardly ways. Overall, it’s not clear that a regulatory means of solving the problem will be much help in the matter. I’m sure that law abiding people will certainly abide by such a new law. Lawbreakers would seem less likely, unless there’s a magical way to readily catch them at their crime and prosecute them for it.

If we go down that rabbit hole, how exactly are we to ascertain that someone was carrying out a prank? Maybe the person was waving their arms or making their way into the street, and they had no idea an AI self-driving car was there. Also, are we going to outlaw the prankster doing the same thing to a human driver? If not, could the prankster claim they were making the motions toward a human driver and not the AI?

I’d say that the legal approach would be an untenable morass.

There are some though that counter-argue that when trains first became popular, people eventually figured out to not prank trains. It was presumably easy for someone to stand in the train tracks and possibly get an entire train to come to a halt. But, this supposedly never took root. Some say there are laws against it, depending upon which geographical area you are in. Certainly, one could also say that there are more general laws that could apply in terms of endangering others and yourself.

Some say that we should have a consumer education campaign to make people aware of the limitations of AI self-driving cars. Perhaps the government could sponsor such a campaign, maybe even making it mandatory viewing by government workers. It could be added into school programs. Businesses maybe would be incentivized to educate their employees about fooling around with pranking of AI self-driving cars.

Some are a bit more morbid and suggest that once a few people are injured by having pranked an AI self-driving car, and once some people are killed, it will cause people generally to realize that doing a prank on an AI self-driving car has some really bad consequences. People will realize that it makes no sense to try to fool AI self-driving cars, since it can cause lives to be lost.

These and similar arguments are all predicated on the same overarching theme, namely that the AI is the AI, and that the thing that needs to be changed is humans and human behavior.

I’d be willing to wager that people will not accept an AI system that can be so readily pranked.

I know that this disappoints many of those AI developers that are prone to pointing the finger at the humans, and in their view it’s better, easier, faster to change human behavior. I would suggest that we ought to be looking instead at the AI and not delude ourselves into believing that mediocre AI will carry the day and force society to adjust to it.

I realize there are some that contend that people won’t somehow figure out that they can prank AI self-driving cars. Maybe only a few people here or there will do so, but it won’t be a mainstream activity.

I’d like to suggest we burst that bubble. I assure you that once AI self-driving cars start becoming prevalent, people will use social media to share readily all the ways to trick, fool, deceive, or prank an AI self-driving car. Word will spread like wildfire.

You know how some software systems have hidden Easter eggs? In a manner of speaking, the weaknesses of the AI systems for self-driving cars will be viewed in the same light. People will delight in finding these eggs. Say, did you hear that the AI self-driving car model X will pull over to the curb if you run directly at it while the self-driving car is going less than 5 miles per hour?

Look Forward to Being Duped by a False Prank

This though is also going to create its own havoc. The tips about how to prank an AI self-driving car will include suggestions that aren’t even true. People will make them up out of the blue. You’ll then have some dolt that tries one on an AI self-driving car, and when the self-driving car nearly hits them, they’ll maybe realize they were duped into believing a false prank.

It could be that true prank tips might also no longer work on an AI self-driving car. As mentioned earlier, there is a chance that the Machine Learning (ML) of the AI might catch onto a prank and then be able to avoid falling victim to it again. There’s also the OTA (Over-The-Air) updating of AI self-driving cars, wherein the auto maker or tech firm can beam various updates and patches into the AI self-driving car. If the auto maker or tech firm gets wind of a prank, they might be able to come up with a fix and have it sent to the AI self-driving cars.

This though has its own difficulties. People may not yet realize that AI self-driving cars are not homogeneous and that the nature of the AI systems differs by auto maker or tech firm.

In short, though I am not one to say that a technological problem must always have a technological solution, I’d vote in this case that there should be more attention toward having the AI be good enough that it cannot be readily pranked.

We need to focus on anti-pranking capabilities for AI self-driving cars.

I say this realizing that in so doing I am throwing down a gauntlet for others to pick up and run with. We are doing the same.

This is a difficult problem to solve and not one that lends itself to any quick or easy solution. I know that some of you might say that an AI self-driving car needs to shed its tepidness. By being more brazen, it would be able not only to overcome most pranks, it would create a reputation that says don’t mess with me, I’m an AI that’s not to be played for a fool.

Here’s an example of why that’s not so easy to achieve. A pedestrian walks into the middle of the street, right in front of where an AI self-driving car is heading. Let’s assume we don’t know whether it’s a prank. Maybe the pedestrian is drunk. Maybe the pedestrian is looking at their smartphone and is unaware of the approaching car. Or, maybe it is indeed a prank.

What would you have the AI do? If it were a human driver, we’d assume and expect that the human driver would try to stop the car or maneuver to avoid hitting the pedestrian. Is this because the human is timid? Not really. Even the most brazen of human drivers is likely to take evasive action. They might first honk the horn, maybe flash their headlights, and do anything they can to get the pedestrian out of the way, but if it comes down to hitting the pedestrian, most human drivers will try to avoid doing so.

Indeed, for just the same reasons, I’m a strong proponent of having AI self-driving cars become more conspicuous in such circumstances. My view is that an AI self-driving car should use the same means that humans do when trying to warn someone or draw attention to their car. Honk the horn. Make a scene. This is something that we’re already working on and urge the auto makers and tech firms to do likewise.

Nonetheless, if that human pedestrian won’t budge, the car, whether human driven or self-driving, will have to do something to try to avoid hitting the pedestrian.

That being said, some humans play such games with other humans, by first estimating whether they believe the human driver will back-down or not. As such, there is some credence to the idea that the AI needs to be more firm about what it is doing. If it is seen as a patsy, admittedly people will rely upon that. This doesn’t take us to the extreme posture that the AI needs to therefore hit or run someone down to intentionally prove its mettle.

In the case of the four-way stop situation, I’ve commented many times that if the other human drivers realized that the AI self-driving car was willing to play the same game of doing a rolling stop, it would make those human drivers less prone to pulling the rolling-stop stunt to get the AI self-driving car into a bind. I’ve indicated over and over that AI self-driving cars are going to, from time to time, be “illegal” drivers. I know this makes some go nuts, since they are living in a Utopian world whereby no AI self-driving car ever breaks the law, but that’s not so easily applied in the real world of driving.

Some say too that two illegal acts, one by the human driver and one by the AI, do not make a right. I’d agree with that overall point, but I would also note that “small” illegal driving acts happen every day by virtually every human driver. I know it’s tempting to say that we should hold AI self-driving cars to a higher standard, but this does not comport with the realities of driving in a world of mixed human and AI driving. We are not going to have only and exclusively AI self-driving cars on our roadways, with no human driven cars, for a very long time.

There’s also the viewpoint that AI self-driving cars can team-up with each other to either avoid pranks or learn from pranks on a shared basis. With the advent of V2V (vehicle-to-vehicle communications), AI self-driving cars will be able to electronically communicate with each other. In the case of a prank, it could be that one self-driving car detects a prankster trying a prank on it, and then the AI shares this with the next self-driving cars coming down that same street. As such, then all of those AI self-driving cars might be ready to contend with the prank.

Unfortunately, there’s also another side of that coin. Suppose the AI of a self-driving car inadvertently misleads another AI self-driving car into anticipating a prank when in fact there isn’t one coming up. It’s a false positive. This could readily occur. The forewarned AI self-driving car has to be savvy enough to determine what action to take, or not take, rather than acting simply on the word of another AI self-driving car.
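
One way to picture that savviness: treat a V2V prank alert as a hint to be corroborated, not a command. This sketch is purely illustrative; the message fields and thresholds are assumptions, not part of any actual V2V standard.

```python
from dataclasses import dataclass

@dataclass
class PrankAlert:
    """A hypothetical V2V message warning of a prankster on a street."""
    street_id: str
    reporter_confidence: float  # 0..1, how sure the reporting car was

def heightened_caution(alert: PrankAlert,
                       own_detection_score: float,
                       current_street: str) -> bool:
    """Raise caution only when the alert is relevant and either our own
    sensors corroborate it or the reporter was quite confident."""
    if alert.street_id != current_street:
        return False  # the alert is about somewhere else
    if own_detection_score > 0.5:
        return True   # our own sensors agree something is afoot
    return alert.reporter_confidence > 0.8  # otherwise demand a confident report

alert = PrankAlert(street_id="elm_st", reporter_confidence=0.4)
# Low-confidence alert with nothing on our own sensors: don't overreact.
print(heightened_caution(alert, own_detection_score=0.1,
                         current_street="elm_st"))  # False
```

The point of the sketch is the filtering step: a shared alert shifts the AI’s prior, but the receiving car still weighs its own evidence before changing its driving behavior.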

If you introduced a slew of teenage drivers into our roadways, doing so all at once, what would happen? Presumably, you’d have tons of timid human drivers that would not take the kinds of shortcuts that more seasoned human drivers have learned over time. Some hope or believe that the AI self-driving cars will do the same. In essence, over time, with the use of Machine Learning and via OTA updates by the AI developers, the AI self-driving cars will get better at more “brazen” driving aspects.

Depending upon the pace at which AI self-driving cars are adopted, some think that maybe the initial small population of AI self-driving cars will take the brunt of the pranking and this will then be overcome by those AI self-driving cars getting us to the next generation of AI self-driving cars. It will be a blip that people at one time pranked the AI self-driving cars in their early days of roadway trials (remember when you could stick out your leg and pretend you were kicking toward an AI self-driving car, and it would honk its horn at you — what a fun prank that was!).

I’d suggest we need to take a more overt approach to this matter and not just hope or assume that the “early day” AI self-driving cars will come through on getting better at dealing with pranks. We need to be building anti-pranking into AI self-driving cars. We need to be boosting the overall driving capabilities of AI self-driving cars to be more human-like. AI self-driving cars that can too easily fall for a feint attack or a feint retreat are going to potentially spoil the public’s interest in having AI self-driving cars on our roadways. There will always be human pranksters; it’s likely in the human DNA. Face reality and let’s make sure the “DNA” of AI self-driving cars is anti-prank encoded.

For a free podcast of this story, visit:

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see:

To follow Lance Eliot on Twitter: @LanceEliot

Copyright 2018 Dr. Lance Eliot

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
