Driverless Car At An Accident Scene: Will It Crash Into The Crash?

Dr. Lance B. Eliot, AI Insider

Self-driving cars might get confused when coming upon a crash scene and could make it worse

Yesterday, I was driving on the freeway and the traffic was especially bogged down. Most mornings the traffic is slow, but on this occasion, it was really slow, pure stop-and-go kind of driving.

Here in Southern California, you can never predict what causes traffic patterns to change. Sometimes you guess that there must be an accident up ahead, yet you end up never seeing any indication that an accident was keeping traffic bottled up. Other times you eventually crawl up on an accident scene and think, aha, I knew that's what was making me late for work today.

This might seem callous, since our first thoughts should go toward whoever was involved in the accident and hoping that they are safe, but with the high volume of accidents we have, it is possible to become numb to them and perceive them merely as irritating stoppages when you've got to get someplace on time.

Example of Driving at an Accident Scene

In this case, I did eventually crawl up upon the accident scene.

At first, I could just barely see the scene up ahead. From maybe a quarter mile away, I could see some emergency vehicles stopped on the freeway with their flashing lights on. They were tall enough to stand out above the surrounding cars. They were askew, parked at odd angles, occupying the two leftmost lanes of the freeway. The car traffic itself was like water that flows wherever it finds an opening, gradually streaming to the right of the accident scene.

This meant that roughly five lanes of traffic were trying to merge into the two lanes that were the only way around the accident scene. The traffic was constricted by having to squeeze down into just two lanes, slowing everything down, and drivers gawking at the accident scene slowed things even further.

After seemingly forever, I finally managed to squeeze over into the rightmost lanes. I had been in the leftmost lane, so it was quite a chore to work my way over to the right. Even though I had my turn indicator blinking, nobody in the other lanes wanted to let anyone in. It is a cruel world out there on the freeways, and there is often no civility about letting others into your lane.

In a circumstance like this, each car was edging forward, wanting to get through the morass as soon as possible, so drivers weren't interested in letting other cars merge over or get ahead of them. At times, some cars tried to force themselves into the rightmost lanes an inch at a time, and there were "bumper wars" between cars daring to get within a fraction of an inch of another car to push their way into the next lane over, while the other cars desperately and doggedly tried to keep them out.

When my turn came to be in those two scrawny lanes, I then could see the accident scene clearly as I slowly drove past it.

The emergency vehicles had formed a kind of cocoon around a motorcycle and a motorcyclist that were both splayed onto the freeway asphalt. There was a medic tending to the motorcyclist. Police were standing nearby and seemed to be taking notes about the accident scene. An ambulance was waiting to take the motorcyclist to the nearest hospital. A fire truck was there, helping to block the freeway traffic and create the cocoon; a fire truck is often one of the first emergency vehicles to respond. This is partly because fires sometimes erupt from a car accident and the fire department needs to be there to put out the fire, or to prevent one from starting, since there is usually fuel spilled onto the freeway that can easily ignite.

Within maybe ten seconds, I had finally driven past the accident itself.

I had been in line behind the accident scene for nearly an extra twenty minutes of driving time. Now, finally past it, the traffic had thinned out because of the constriction and the accident scene blockage. As such, cars that had squeezed past the accident scene were now gunning their engines and racing ahead at breakneck speeds. It was as though the accident scene were the starting line of an Indy car race, and the cars that managed to get past it were eager to hit the gas and see how fast they could accelerate from near zero to the top allowed speed on the freeway.

Most drivers don't realize that this is actually one of the more dangerous aspects of an accident scene. Namely, traffic just past the scene drives much too fast and can recklessly create another new accident.

It is not unusual for an initial accident to spawn a second accident within about a tenth of a mile ahead of the first. This is due to cars that misuse the sudden freedom of an open passage to speed ahead and end up colliding with other cars. You can imagine how drivers who have suffered the long wait of getting up to and past an accident scene are in a state of mind of wanting to get going, so they throw caution to the wind and push the accelerator pedal to the floor. They do so because they are late as a result of being stalled in the wake of the initial accident, and because they are frustrated that a car that can go 80 miles per hour has only been going 5 miles per hour throughout the accident scene area. This potent combination of stressed-out drivers and the anxiety of wanting to get going tends to lead to unsafe driving past the scene and results in another accident.

AI Self-Driving Driverless Cars and Accident Scene Traversal

At the AI Cybernetics Self-Driving Car Institute, we have been creating AI-based capabilities that allow self-driving cars to traverse these kinds of accident scenes.

Traversing an accident scene is not as easy as you might at first think it would be.

Let’s take a look at what a self-driving car needs to do when confronted with an accident scene.

We humans are pretty used to dealing with accident scenes, so handling them is ingrained in our driving practices. If you watch a teenage first-time driver come upon an accident scene, you'll see that they are unsure of what to do. Most of today's self-driving cars are in the same boat. The self-driving car does not have any special AI algorithms for accident scene traversal. It instead relies upon overall driving practices, but those practices are not tuned and honed to the specifics of what occurs at an accident scene. Thus the need for a specialty component in the AI of the self-driving car, one that provides particular expertise about accident scenes and how to traverse them.

First, the self-driving car needs to realize that there is an accident up ahead. It can do so by detecting that traffic is slowing down and shifting into a stop-and-go pattern, via the cameras, LIDAR, radar, and other sensors that the AI uses to detect traffic. Of course, traffic that slows down does not necessarily indicate an accident up ahead. As I mentioned earlier in this piece, I often find that there wasn't any accident and that the traffic slowed for some other reason, so a slowdown is just one potential indicator.
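To make that first clue concrete, here is a minimal sketch in Python, using a made-up function name and illustrative thresholds, of turning the speeds of tracked surrounding vehicles into a slowdown score. As noted above, a slowdown by itself is only one indicator, not a verdict.

# Minimal sketch (hypothetical names and thresholds): convert tracked vehicle
# speeds from the sensor data into a 0-to-1 slowdown score.
from statistics import mean

def traffic_slowdown_indicator(tracked_speeds_mph, free_flow_mph=65.0):
    """Return a 0..1 score for how far observed traffic is below free flow."""
    if not tracked_speeds_mph:
        return 0.0  # no vehicles tracked, so no evidence either way
    avg = mean(tracked_speeds_mph)
    slowdown = max(0.0, (free_flow_mph - avg) / free_flow_mph)
    return min(1.0, slowdown)

# Example: surrounding traffic creeping along at about 5 mph on a 65 mph freeway.
print(traffic_slowdown_indicator([4.0, 6.0, 5.0]))  # roughly 0.92, a strong slowdown hint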

Next, the self-driving car needs to detect that there are emergency vehicles involved. In the case of the downed motorcyclist, the flashing lights of the emergency vehicles could be seen from quite a distance away from the accident and can be detected via the cameras on the self-driving car. Also, the emergency vehicles were slightly taller than the rest of the traffic, so they could be picked out of the visual images of the traffic and matched to known images of emergency vehicles by the AI sensor-fusion software. This is similar to, say, doing facial recognition on Facebook, except it is matching images of vehicles to what the cameras on the self-driving car are feeding into the AI system that is driving the car.
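Here is a minimal sketch of that matching step, assuming (hypothetically) that a vision model has already produced labeled detections with an estimated height and a flashing-light flag; the labels and thresholds are illustrative, not any particular vendor's API.

# Minimal sketch (hypothetical detection structure): pick emergency vehicles
# out of camera-based object detections.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str             # e.g., "car", "fire_truck", "ambulance", "police_car"
    height_m: float        # estimated height from camera/LIDAR fusion
    flashing_lights: bool  # strobe pattern detected across successive video frames

EMERGENCY_LABELS = {"fire_truck", "ambulance", "police_car"}

def find_emergency_vehicles(detections):
    """Return the detections that look like emergency vehicles on the scene."""
    return [
        d for d in detections
        if d.label in EMERGENCY_LABELS and (d.flashing_lights or d.height_m > 2.5)
    ]

# Example: a fire truck with lights on, plus an ordinary passenger car.
scene = [Detection("fire_truck", 3.4, True), Detection("car", 1.5, False)]
print([d.label for d in find_emergency_vehicles(scene)])  # ['fire_truck']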

The AI now has several crucial clues in hand, including that the traffic has slowed down, there are emergency vehicles ahead, they are parked askew, and they have flashing lights on.

Taken together, these clues imply a high probability that there is an accident scene ahead. The AI software uses probabilities to assign odds to what is taking place on the roadway. This is similar to how humans "guess" or estimate what is taking place on the roadways. You don't necessarily know things for sure, so you gauge the chances that something is or is not taking place. The self-driving car uses probabilities to make those same kinds of guesses and hunches.
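One simple way to express that combining of clues, purely as a sketch with illustrative weights rather than production-tuned values, is a log-odds accumulation over whichever clues have been observed:

import math

# Illustrative evidence weights (log-odds contributions), not tuned values.
CLUE_LOG_ODDS = {
    "traffic_slowdown": 0.8,
    "emergency_vehicles_ahead": 2.0,
    "vehicles_parked_askew": 1.2,
    "flashing_lights": 1.5,
}
PRIOR_LOG_ODDS = -3.0  # an accident on any given stretch of road is rare a priori

def accident_probability(observed_clues):
    """Combine observed clues into a probability that an accident scene is ahead."""
    log_odds = PRIOR_LOG_ODDS + sum(CLUE_LOG_ODDS.get(c, 0.0) for c in observed_clues)
    return 1.0 / (1.0 + math.exp(-log_odds))

print(accident_probability({"traffic_slowdown"}))  # about 0.10, still unlikely
print(accident_probability({"traffic_slowdown", "emergency_vehicles_ahead",
                            "vehicles_parked_askew", "flashing_lights"}))  # about 0.92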

A self-driving car that is not wise to accident scenes might just continue in its lane (let's assume it was in the leftmost lane, as I was) and creep forward until it reaches the actual accident scene, not realizing that once it gets there the freeway is effectively blocked off. Today's self-driving cars would normally then bluntly shove control of the car back to the human, since the car has gotten itself into a pickle it cannot figure out how to get out of. Once the human driver took over, presumably he or she would get the car over to the right, drive past the accident, and once past it re-engage the self-driving capabilities.

This is what levels 0 to 4 of self-driving cars tend to do for accident scene traversal, i.e., just hand control to the human, but a level 5 by definition cannot hand the car over to a human driver. There isn't supposed to be a human driver needed by a level 5. So a level 5 for sure needs some kind of "smarts" about driving through accident scenes. It would be handy for this to also exist at the levels below 5, but it must exist at level 5.

I realize that some self-driving car pundits will be screaming that if we had exclusively self-driving cars on the roadway, this accident scene traversal problem would be "solved" and there would be no need to worry about it. In their view, the utopia of all self-driving cars also implies that the self-driving cars and their AI would be communicating via V2V (vehicle-to-vehicle) and would orchestrate their own dance about how to handle the accident scene. Self-driving cars would collaborate and let each other over into the rightmost lanes in an orderly and efficient manner. The ones closest to the accident would be telling the other self-driving cars what is taking place.

What a wonderful world it will be. If that ever does happen, it will be decades and decades from now. The point is that this utopian viewpoint is not going to materialize anytime soon, so we need self-driving cars that work independently of each other and can individually traverse accident scenes.

Back to the matter at hand: once the self-driving car suspects that an accident scene is up ahead, it moves into the specialty realm of how to cope with an accident scene.

So, first we needed detection to ascertain that an accident scene is confronting the self-driving car. Next, the self-driving car and its AI invoke the specialty routines for dealing with accident scene traversal. These algorithms undertake the planning needed to traverse the accident scene. Some of the plans are based on templates created by AI developers, while others are the outcome of machine learning. Via machine learning, the system has identified various kinds of accident scenes and used large volumes of data about accident scenes and their traversal to try to identify the best ways to get past one.
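A minimal sketch of that planning step, with made-up maneuver names and a stubbed-in learned planner, might look like the following; it is meant to show the template-first, learning-as-fallback idea, not the actual implementation.

# Developer-authored templates keyed by a coarse description of the scene.
PLAN_TEMPLATES = {
    # (side of freeway blocked, scene already stabilized?) -> ordered maneuvers
    ("left", True):  ["signal_right_early", "merge_right_gradually",
                      "pass_scene_slowly", "resume_speed_cautiously"],
    ("right", True): ["signal_left_early", "merge_left_gradually",
                      "pass_scene_slowly", "resume_speed_cautiously"],
}

def plan_accident_traversal(blocked_side, scene_stabilized, learned_planner=None):
    """Return an ordered list of maneuvers for getting past the accident scene."""
    template = PLAN_TEMPLATES.get((blocked_side, scene_stabilized))
    if template is not None:
        return list(template)
    if learned_planner is not None:
        # Fall back to a plan proposed by a machine-learned model (stubbed here).
        return learned_planner(blocked_side, scene_stabilized)
    # Unfamiliar scene and no learned plan available: stay cautious and reassess.
    return ["reduce_speed", "increase_following_distance", "reassess_scene"]

print(plan_accident_traversal("left", True))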

Key Factors for the AI Self-Driving Driverless Car Traversal

There are a number of key factors that impact what a self-driving car should and should not do when confronted with an accident scene.

Has the accident just happened or has it become stabilized? In the case of the downed motorcyclist, it was an accident scene that was greatly stabilized. There were already emergency vehicles there. The emergency vehicles had already made their way to the accident scene and were parked there. They had created a cocoon to protect the scene. The human responders were already walking around on the freeway, rendering aid, and otherwise handling the accident scene.

If an accident has just happened, the accident scene itself will be much more chaotic and less predictable.

Anyway, the point is that when an accident has just occurred, there aren't any emergency vehicles there yet. There isn't a protective cocoon set up. There aren't any flashing lights. And so on. A self-driving car needs to be able to detect accident scenes as they evolve over time. There is the it-is-just-happening part of the time continuum, then the stabilized portion, and so on. We classify these into: emerging accident, happened accident, stabilizing accident scene, rescue accident scene, recovery accident scene, clean-up accident scene, and re-opened accident scene. For each of these classifications, we have trained the AI to plan and act accordingly.
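As a sketch of that classification, using the phase names from above but with invented observable cues and thresholds, the AI could map a few scene observations onto a phase and then plan differently for each:

from enum import Enum, auto

class AccidentPhase(Enum):
    EMERGING = auto()      # a collision is happening or is imminent
    HAPPENED = auto()      # vehicles have just come to rest, no responders yet
    STABILIZING = auto()   # responders arriving, protective cocoon forming
    RESCUE = auto()        # medics actively aiding the injured
    RECOVERY = auto()      # tow trucks and equipment removing vehicles
    CLEAN_UP = auto()      # debris and spilled fluids being cleared
    RE_OPENED = auto()     # lanes released back to normal traffic

def classify_phase(emergency_vehicles_on_scene, responders_on_foot, lanes_blocked):
    """Very rough phase guess from a few observable cues; a real classifier
    would draw on many more sensor-derived features."""
    if lanes_blocked == 0:
        return AccidentPhase.RE_OPENED
    if emergency_vehicles_on_scene == 0:
        return AccidentPhase.HAPPENED
    if responders_on_foot:
        return AccidentPhase.RESCUE
    return AccidentPhase.STABILIZING

# The downed-motorcyclist scene described earlier: responders on foot, lanes blocked.
print(classify_phase(emergency_vehicles_on_scene=3, responders_on_foot=True, lanes_blocked=2))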

Besides the phases or stages of an accident, there are other aspects for the self-driving car to be concerned about.

What is the traffic situation?

Is there almost no traffic or heavy traffic?

This is important since it can either open up options or constrain them as to how the self-driving car should react. If there isn't much traffic, the self-driving car has more lanes to shift into and other evasive actions available to avoid or go around the accident scene. There are also special cases, such as a toxic spill or a fire at the scene.

Another consideration is that the self-driving car itself might become part of the accident scene.

For example, when I mentioned before that I had seen a motorcyclist get hit, you could argue that I was now part of the accident scene and that I should either stop to render aid or stop to serve as a witness to what happened. Will we expect our self-driving cars to do likewise? In other words, the self-driving car "saw" an accident with its cameras, LIDAR, etc., so it should potentially stop to provide that info for purposes of analyzing the accident and how it happened. Also, what if the occupants inside the self-driving car also saw the accident? Shouldn't the self-driving car make them available as witnesses?

Or suppose the human occupants want to stop and render aid; they somehow have to tell the self-driving car to do so (with a level 5, it is still an open question how the occupants will communicate with the self-driving car).

I have so far focused on accidents occurring on a freeway, but accidents happen in all kinds of driving contexts.

The severity of the accident is another crucial factor.

There are also aspects of obeying whatever is taking place at the scene, such as when a police officer directs traffic as part of aiding accident scene traversal.

Conclusion

There are an estimated 6 million accidents per year in the United States.

The odds are pretty high that on any daily driving journey of any substance you will come upon an accident, whether it is one that is just occurring or the aftermath of one that has already occurred.

Self-driving cars need to know what to do when they come upon an accident scene. Most self-driving cars of today just kind of give up and hand the driving over to the human driver. This is not only dangerous, since the human driver has to suddenly become engaged in driving, it also undercuts the hope of someday getting to a level 5 true self-driving car that does not rely upon having a human driver available.

By developing specialized AI algorithms for self-driving cars that are faced with accident scenes, we are improving the capabilities of self-driving cars. Even if by some miracle we begin to see fewer accidents once self-driving cars are readily available, there will still be a mixture of human drivers and self-driving cars, and there will still be accidents.

Self-driving cars need to know what to do.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

For my Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, a Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, and was a top exec at a major VC.
