Using Virtual Spike Strips To Stop Errant AI Autonomous Cars

Dr. Lance Eliot, AI Insider


In many parking lots where you must pay to enter, the exits often have a strip of metal spikes pointed toward any car that might be tempted to sneak into the lot through the exit. That small strip of spikes is enough to puncture the tires of a trespassing car whose driver is intent on avoiding the parking fees. Cars that are properly exiting the lot can readily roll over the spikes, since the spikes are angled away from that direction and are usually spring-mounted, so they retract under the weight of the car and the pressure of its tires.

There’s another use for these metal teeth.

Here in Southern California, we are somewhat known as the car chase capital of the United States.

The police usually catch the driver of the car that’s leading the chase, but this fact doesn’t seem to overly discourage people from launching into car chases.

You might be aware that there is controversy about whether or not the police should even undertake such chases. A wild car chase through populated areas can be highly dangerous for everyone involved. The person being sought is often a desperate driver, willing to ram other cars, hit pedestrians, drive on sidewalks, drive the wrong way on the roads, and otherwise do anything to get away.

In some jurisdictions, the police will at times stay back from the frantic driver, trying not to pressure the driver into especially dangerous maneuvers.

Some might wonder why the police need to undertake a car chase when presumably they can simply try to disable the car involved.

Indeed, you’ve probably seen that the police often will try to lay down a strip of metal spikes in the hopes that the wayward driver will drive over them. These traffic spikes are also sometimes called tire shredders; other names include stop sticks, jack rocks, and stingers.

These strips function in the same way that the parking lot strips do. The notion is that a car will roll over the spikes, the spikes will puncture the tires, and the tires will either deflate or be torn apart. Without viable tires, the car chase should, in theory, come to an end.

AI Driverless Cars And Virtual Spike Strips

What does this have to do with AI self-driving driverless autonomous cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that is a rather “thorny” topic, so to speak, involves the matter of externally stopping an AI self-driving car.

Allow me to elaborate.

First, let’s clarify that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

Let’s consider two separate overall use cases: one involving a Level 5 self-driving car, which as I’ve mentioned is a self-driving car driven entirely and only by the AI, and the other involving a less than Level 5 self-driving car, for which there is a co-sharing of the driving task between the human driver and the AI.

In a Level 5 self-driving car, there will be some form of conversational dialogue between the AI and the human occupants that involves the humans making requests of the AI that’s driving the car. You get into a Level 5 self-driving car and tell it you want to be driven to the baseball stadium. The AI proceeds accordingly.

What kind of latitude do you have as the human directing the AI self-driving car?

Can you tell it to drive illegally?

Maybe you are late getting to work one day, and so you tell the AI to exceed the posted speed limit on the highway that you take to get to work. The posted speed is 45 miles per hour, but you tell the AI to go 55 miles per hour, in hopes of getting to work on time.

Should the AI obey such a command?

I’m betting that you are tempted to say that no, the AI should not obey a command to undertake an illegal driving act. Your wanting to get to work on time is not much of a reason to have the AI perform an illegal driving action, and it could be dangerous too, since the rest of the traffic might be going 45 mph while the AI self-driving car swerves around other cars to reach the desired 55 mph.

But suppose you are in the Level 5 self-driving car and bleeding profusely because you somehow got cut, or suppose a passenger is pregnant and about to deliver a baby. Under those circumstances, would it make sense to allow the AI to go ahead at 55 mph rather than 45 mph, even though it would be an illegal driving act? I’m assuming you are more sympathetic in such instances to allowing the AI to “break the law” as part of its driving efforts.
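
To make this tension concrete, here is a minimal sketch, in Python, of what a command-arbitration check might look like. The function name, the emergency flag, and the 10-mph override cap are all hypothetical assumptions made for illustration, not any automaker’s actual policy:

```python
# Hypothetical sketch of a command-arbitration check for a Level 5 AI.
# The names, thresholds, and emergency-override policy are assumptions
# invented for illustration; no automaker's actual rules are shown here.

POSTED_LIMIT_MPH = 45        # assumed posted speed on the commute highway
EMERGENCY_OVERRIDE_MPH = 10  # assumed maximum allowed excess in an emergency

def arbitrate_speed_request(requested_mph: float, emergency: bool) -> float:
    """Return the speed the AI will actually target for a passenger request."""
    if requested_mph <= POSTED_LIMIT_MPH:
        return requested_mph  # legal request, honor it as given
    if emergency:
        # Permit a bounded "break the law" margin for medical urgency, etc.
        return min(requested_mph, POSTED_LIMIT_MPH + EMERGENCY_OVERRIDE_MPH)
    # Ordinary lateness is no justification; cap at the posted limit.
    return POSTED_LIMIT_MPH

print(arbitrate_speed_request(55, emergency=False))  # -> 45
print(arbitrate_speed_request(55, emergency=True))   # -> 55
```

The open design question, of course, is precisely what belongs in that override branch, and who gets to decide.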

The point is that we as a society have yet to wrestle with the range of legal and “illegal” acts that the AI of a self-driving car is going to be “allowed” to perform as part of the driving task.

So far, I think you can see that there’s going to be a fine line between the strict legal kind of driving that we assume an AI self-driving car will do and the potential need for the AI to be permitted to go beyond the normally stated constraints.

Let’s consider the car chase predicament.

If you were to tell your AI self-driving car to proceed to lead a car chase, telling it to go at very high speeds and try to be evasive as it drives, should it do so?

I’m sure you are saying that even the suggestion that the AI would abide by such a command is absurd on the face of things. Have an AI self-driving car that starts a car chase? Nuts! This should never happen.

Reasons You May Tell Your AI Self-Driving Car to Engage in Evasive Action

Suppose that you have just survived an attempted carjacking and are desperate to get away from attackers who are trying to get you. Maybe you are a celebrity or a very wealthy person being sought by some bad people. Maybe it’s a gang that just wanted your wallet and your car. Indeed, in terms of cars, some believe we’ll begin to see “robojacking” of AI self-driving cars, an obviously undesirable trend that might arise as self-driving cars become more prevalent.

Would it be OK then for the AI self-driving car in this instance to proceed as though being chased and therefore be at the forefront of the car chase?

I’m guessing you are now more sympathetic to the notion.

If the AI was trying to do a getaway because the person had committed an illegal act, such as robbing a bank, I’m sure we would all agree this kind of a car chase effort should not be condoned.

For the moment, can we agree that there might be valid cases of the AI undertaking evasive driving action for which we could construe the driving to be the rudiments of a car chase? The AI would be driving the car at a fast pace, attempting to elude followers, and likely committing “illegal” driving maneuvers as it does so.

If you agree with that aforementioned notion, we then need to figure out how far we are willing to have the AI go on this matter.

Can the AI drive the self-driving car on the wrong side of the street? Can it drive on sidewalks? Can it swerve around other cars? These are all dangerous acts that could potentially harm others.

Here’s your conundrum. If you say that the AI cannot do any kind of driving that might harm others, I challenge you to then explain what exactly the AI can do during this evasive driving. It isn’t much of an evasive effort if the self-driving car drives along just as a normal car does, and I’d dare say that’s not evasive driving at all.

At this juncture of the discussion, I hope you are at least open to the notion that we might end up with AI self-driving cars that are driving evasively, doing so with a presumed purpose, whether or not that purpose is considered legitimate by society.

Though the focus herein involves an AI self-driving car that is driving at a fast clip and undertaking dangerous driving tactics, I’ll point out that my next comments could even apply to a situation of an AI self-driving car that appears to be driving in a perfectly normal everyday way. Please keep that in mind.

Trying To Stop An AI Autonomous Car

Suppose that an AI self-driving car is driving in a manner that we don’t want it to be driving, and we want to essentially stop the self-driving car.

How could that be accomplished?

You could use those metal spike strips. In other words, regardless of whether we are trying to stop a human driven car or an AI self-driving car of a Level 5, the use of the tire shredders could still be invoked.

Toss those strips in front of an oncoming Level 5 self-driving car. What happens? Assuming that the AI is not able to avoid rolling over the metal teeth, the tires would presumably get punctured. The AI would still be trying to drive the self-driving car, but doing so is rough at this juncture. The self-driving car is going to lurch and have the same difficulties that any normal car would have when its tires have been punctured.

Few of the auto makers and tech firms are working on how to have the AI deal with circumstances such as having the tires shredded and still be able to safely drive the car. They consider this kind of problem to be an “edge” problem.

Do we need the AI to ultimately be able to deal with other problems associated with the physical aspects of the self-driving car? Absolutely. A self-driving car is like any other car in that it will have physical breakdowns and problems. A human driver would need to accommodate such difficulties, and so should the AI.
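
As a rough illustration of what such an accommodation might involve, here is a minimal sketch of a fault handler that reacts to a sudden loss of tire pressure. The sensor readings, the 20-psi threshold, and the mode names are all invented for this example; a production self-driving stack would differ considerably:

```python
# Hypothetical sketch of an on-board fault handler reacting to sudden tire
# pressure loss. The sensor interface, thresholds, and mode names are
# invented for illustration; real self-driving stacks will differ.

MIN_SAFE_PSI = 20.0  # assumed threshold below which a tire is compromised

def handle_tire_fault(tire_psi: dict) -> str:
    """Pick a degraded driving mode based on per-tire pressure readings."""
    blown = [pos for pos, psi in tire_psi.items() if psi < MIN_SAFE_PSI]
    if not blown:
        return "NORMAL_DRIVING"
    if len(blown) == 1:
        # One compromised tire: slow down, keep the lane, seek the shoulder.
        return "LIMP_TO_SHOULDER"
    # Multiple compromised tires: hazard lights and a controlled stop.
    return "CONTROLLED_STOP"

readings = {"FL": 33.0, "FR": 8.5, "RL": 32.0, "RR": 31.0}
print(handle_tire_fault(readings))  # -> LIMP_TO_SHOULDER
```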

Besides the usual physical ways to stop a car, which would seem to apply to an AI self-driving car too, would we possibly have other means to try and stop an AI self-driving car?

Virtual Spike Strip Alternatives for AI Self-Driving Cars

Perhaps we might use a virtual spike strip.

By this, I mean that we could somehow convince the AI that it should bring the AI self-driving car to a stop.

We could possibly do so then without necessarily tossing a physical metal strip of spikes in front of the self-driving car. In lieu of that rather more blunt approach, we could make the AI bring the self-driving car to a halt simply because we told it to do so. In a sense, it is like a virtual kind of spike strip.

Presumably, this would be a lot safer too than a physical spike strip. The self-driving car is still intact and thus rather than shredding the tires and hoping that the self-driving car doesn’t do a barrel roll and injure anyone including bystanders, the AI could presumably bring the self-driving car to a safe halt instead.

As an aside, we could even have the AI self-driving car take some other action. For example, some have suggested that if an AI self-driving car is being used as a getaway vehicle by criminals that just robbed a bank, we might instruct the AI to bring them to the nearest police station. Wouldn’t it be nice if the AI could wrap up those dastards in a nice tight bow and deliver them directly to the police? Well, I have my doubts about the practical nature of this suggestion, but that’s something I’ll tackle for you another day.

Let’s then pursue the notion that we, whoever the “we” is, might commandeer the AI of the AI self-driving car and have the AI do something other than what the human occupants have told it to do.

How could we control the AI in this manner, doing so externally of the AI self-driving car?

One somewhat obvious way might be to use the OTA (Over-The-Air) capability of the AI self-driving car. The OTA is normally used to get data from the self-driving car, such as sensory data, and also to provide updates to the self-driving car. When a new version of the AI software is needed, the self-driving car can connect via OTA to the cloud set up by the auto maker or tech firm and receive the update via electronic communication. No need to take your car into the auto shop for such updates.

Suppose the police see an AI self-driving car that is rocketing down the freeway, presumably being chased or potentially a car that the police will want to chase. The police might instead make contact with the auto maker or tech firm that set up the cloud OTA for this particular brand of AI self-driving car and have them send an electronic command that instructs the self-driving car to come to an immediate safe stop.

Lest you think this might take a long time to arrange, it could all be orchestrated beforehand. It could happen in seconds.
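
For such pre-orchestration to be even remotely palatable, the halt command would presumably need to be authenticated and resistant to replay. Here is a minimal sketch of how an on-board verifier might check a signed halt command; the message format, the shared key, and the freshness window are all assumptions invented for this example, not a description of any real OTA system:

```python
# Hypothetical sketch of on-board verification of an OTA halt command.
# The message format, shared key, and freshness window are assumptions
# invented for this example, not a description of any real OTA system.

import hashlib
import hmac
import json
import time

SHARED_KEY = b"demo-key-provisioned-at-factory"  # placeholder key
FRESHNESS_WINDOW_SECS = 30  # reject stale or replayed commands

def verify_halt_command(message: bytes, signature: str) -> bool:
    """Accept the command only if the signature matches and it is fresh."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return False  # forged or corrupted command
    payload = json.loads(message)
    return abs(time.time() - payload["issued_at"]) < FRESHNESS_WINDOW_SECS

cmd = json.dumps({"action": "SAFE_HALT", "issued_at": time.time()}).encode()
sig = hmac.new(SHARED_KEY, cmd, hashlib.sha256).hexdigest()
print(verify_halt_command(cmd, sig))  # -> True
```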

Problem solved! Or, should I say, problem solved?

You can imagine that this raises all sorts of societal entanglements.

Should just any police officer be able to issue such a command, doing so at any time? That seems a bit Big Brother-like. Suppose too that someone somehow got ahold of the police capability and opted to use it for their own devious purposes. Furthermore, there’s now the chance of a security breach: if you could hack this system, you might be able to stop an AI self-driving car. Maybe you could stop thousands of them all at once.

We’re also somewhat overlooking a technological aspect: suppose the OTA is not able to send a signal to the AI self-driving car, perhaps because the self-driving car is not in an area where it has connectivity.

Another approach might be to have the police physically display something that the AI self-driving car would “see” and therefore invoke the halt command in that manner. This is handy since it does not rely on any electronic communication over the airwaves. Forget about the OTA, and instead just physically present something to the attention of the AI self-driving car. This presumes that you are physically near to the AI self-driving car, which of course if you were using actual metal strips you would need to be anyway.

Keeping in mind that the AI self-driving car has cameras for visual processing of the surroundings, you might have agreed beforehand with the auto maker or tech firm that if a certain kind of image is seen by the sensors, the AI self-driving car will come to an immediate safe halt. Thus, the police arrange to get in front of where the AI self-driving car is heading, and they hold up a sign that has this special image on it (actually, since there are cameras pointing to the rear of the car too, the police could do this from behind the self-driving car and don’t even need to try to get in front of it; that’s a “nice” difference in comparison to actual physical metal strips, wherein you need to get in front of a speeding car!).

Let’s assume then that the police hold up the special image, the camera sensors detect the image, and during the sensor data collection and interpretation the AI system realizes it is indeed the special image. The virtual world model, which is used to keep track of the surroundings in which the AI self-driving car is driving, is then consulted by the AI to try to identify a safe place to come to an immediate halt. Maybe it determines that a quarter mile ahead would be the safest spot, allowing for a gradual reduction in speed rather than slamming on the brakes. The AI action planning routine then devises the driving tasks to do so and sends those commands to the car controls.
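
Here is a minimal sketch of that sense-plan-act flow. The detector label, the candidate stopping distances, and the braking figure are placeholders I’ve made up for illustration; they do not correspond to any real self-driving stack:

```python
# Hypothetical sketch of the sense-plan-act flow described above. The
# detector label, distances, and braking figure are placeholders invented
# for illustration; they do not correspond to any real self-driving stack.

COMFORT_DECEL_MPS2 = 3.0  # assumed gentle braking rate

def plan_safe_halt(speed_mps: float, clear_spots_m: list) -> float:
    """Pick the nearest clear spot reachable with gradual braking."""
    needed_m = speed_mps ** 2 / (2 * COMFORT_DECEL_MPS2)  # v^2 / (2a)
    for spot in sorted(clear_spots_m):
        if spot >= needed_m:
            return spot
    return max(clear_spots_m)  # fall back to the farthest known clear spot

def on_camera_frame(detections: list, speed_mps: float) -> None:
    if "POLICE_HALT_SIGN" in detections:  # assumed trained detector label
        # World model supplies candidate clear stretches of road (meters).
        spot = plan_safe_halt(speed_mps, clear_spots_m=[120.0, 400.0])
        print(f"Halt trigger seen: begin gradual stop over {spot:.0f} m")

on_camera_frame(["POLICE_HALT_SIGN"], speed_mps=29.0)  # ~65 mph
```

At roughly 65 mph, a gentle stop needs about 140 meters of roadway, which is why the sketch passes over the nearer 120-meter gap and settles on the quarter-mile spot mentioned above.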

Voila, the AI self-driving car now comes to a safe halt.

I mentioned that the sign might be a special visual image. Since most AI self-driving cars also have radar sensors, sonic sensors, and LIDAR, we don’t necessarily need to even use a physical visual sign. It could be something else that might trigger the same kind of response. Perhaps an electronic signal might be used. Or a shape of some kind that might be detected by the radar.

One advantage of this approach would be that there’s no need to rely upon an electronic communication that is remotely being beamed to the AI self-driving car. Instead, if you are within physical proximity, you can trigger it to abide by the signal.

We’re once again, though, finding ourselves facing the issue of who can rightfully make use of these special images or signals. Can any police or authority do so? Suppose it gets leaked how these triggers work, and someone opts to use them for devious purposes. And so on.

On this overall topic, there are some AI developers who worry that even having a special program or routine embedded in the AI system, one that can be triggered and would bring the AI self-driving car to a halt, is itself a dangerous “hole.”

In essence, once such a routine or program exists within the AI that’s on-board the AI self-driving car, it means that one way or another it can somehow potentially be invoked. This could be done by someone that is considered authorized to do so, or by someone that is not (or, one supposes, it could even be accidentally invoked by the AI itself, bringing the car to an unexpected and erratic halt for no apparent sensible reason).

Some would argue that the AI should not have a specific routine or program for this purpose per se, and instead would merely take as input the “suggestion” of coming to an immediate safe halt.

This might even involve the AI asking the human occupants whether or not they want the AI to bring the self-driving car to a safe halt.
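
One way to picture that design difference is the following sketch, in which the external halt request is merely advisory and the occupants get the final say whenever they are present. The function names and the empty-car default are assumptions for illustration only:

```python
# Hypothetical sketch of the "suggestion" approach: the external halt
# request is advisory input the AI weighs, not a hardwired kill routine.
# The function names and the empty-car default are invented assumptions.

def handle_halt_suggestion(ask_occupants, occupants_present: bool) -> bool:
    """Return True if the AI should bring the car to a safe halt."""
    if not occupants_present:
        return True  # empty car: no one to consult, default to compliance
    # With occupants aboard, treat the external request as advisory.
    return ask_occupants("Authorities request a stop. Pull over safely?")

def always_agree(prompt: str) -> bool:
    return True  # stand-in for a real spoken-dialogue exchange

print(handle_halt_suggestion(always_agree, occupants_present=True))  # -> True
```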

Admittedly, using physical metal strips to stop a car is a bit more straightforward. Most of the time, those strips are only in the possession of the police. Most of the time, the police only use the strips when the circumstances generally seem to merit it. And this is usually done in public view.

You might note that I have not yet covered in this discussion the question of what to do when the car is a less than Level 5 self-driving car.

I’ll leave it to you to ponder this, but the mainstay of the issue would be whether the AI in a co-sharing driving task should be able to exert control over the self-driving car such that even if the human driver does not want to come to a halt, the AI would force it to happen anyway.

We are also making an assumption throughout all of this discussion that the AI would readily be able to bring the AI self-driving car to an “immediate” and “safe” halt.

What timeframe constitutes “immediate”? Is the halt to be executed within seconds, within minutes, or over some longer permitted period? In terms of “safe,” that’s also a somewhat tenuous notion.
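
Some back-of-the-envelope arithmetic shows why the word matters. Assuming a car at roughly 65 mph and a comfortable deceleration of about 3 m/s² (both figures merely illustrative):

```python
# Back-of-the-envelope arithmetic on "immediate." Assumed figures: a car
# at ~65 mph (29 m/s) and a comfortable deceleration of 3 m/s^2.

speed_mps = 29.0   # ~65 mph
decel_mps2 = 3.0   # gentle braking; hard braking might reach ~7-8 m/s^2

stop_time_s = speed_mps / decel_mps2             # v / a    ~= 9.7 seconds
stop_dist_m = speed_mps ** 2 / (2 * decel_mps2)  # v^2/(2a) ~= 140 meters

print(f"~{stop_time_s:.1f} s and ~{stop_dist_m:.0f} m for a gentle halt")
```

So even a cooperative, gently braking “immediate” halt consumes roughly ten seconds and well over a football field of roadway; at freeway speeds, immediate is never instantaneous.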

Right now, most of this kind of discussion is not yet taking place overtly. As mentioned, it is generally considered an edge problem for now.

Once we have prevalent AI self-driving cars, the topic will become more mainstream. Do we need some kind of virtual spike strip for AI self-driving cars? If so, what would it consist of? How would it be used? These and other such questions are not just technological, but also societal, and will need some very thought-provoking consideration. In the meantime, please do watch out for those metal strips!

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/Dr.-Lance-Eliot/e/B07Q2FQ7G4

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, a Stanford Fellow at Stanford University, a former professor at USC, the former head of an AI lab, and a top exec at a major VC.
