Here’s Why AI Self-Driving Cars Need To Cope With Anomalies

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

An anomaly is something considered out of the ordinary; the term is often used to describe things or events that are peculiar, rare, abnormal, or at times difficult to even classify.

Sometimes an anomaly is unwanted and can be bothersome when performing a task, while in other instances an anomaly might shed new light on something that no one previously gave due attention. It can be hard to know whether an anomaly will ultimately be seen as desirable or undesirable.

In the late 1800s, Wilhelm Roentgen was working in his lab on experiments with cathode rays. After repeated trials with various cathode ray tubes, he noticed that a barium platinocyanide-coated screen at the edge of his table had become fluorescent.

This was an oddity.

He could have shrugged it off as an anomaly that was not worthy of further attention.

It turns out that Roentgen opted to study this oddity, and it led him to the discovery of X-rays.

Roentgen’s anomaly provides an example of a situation in which detecting and acting upon an anomaly paid off. Sometimes, though, an anomaly can be a fluke that provides no added value to what is being examined or studied.

It could be random noise that happened to be encountered while you were doing something else and thus had no true bearing on the phenomenon you were studying. If you then pursue the anomaly to figure out whether it has merit, you might be wasting valuable attention and resources on something that offers little or no benefit in the end.

When I used to teach university classes on statistics and AI, I would cover the various “exclusion” techniques that can be used to deal with suspected anomalies. One obvious approach is to simply discard the anomaly. This, though, can create issues, since it leaves a somewhat unexplained hole or gap in your research.
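To make the simple-discard approach concrete, here is a minimal sketch in Python; the z-score test and the cutoff of 2.0 are illustrative choices for deciding what counts as a suspected anomaly, not a universal rule:

```python
import numpy as np

def exclude_outliers(data, z_cutoff=2.0):
    """Discard points lying more than z_cutoff standard deviations
    from the mean; the simplest 'exclusion' technique, which leaves
    an unexplained gap in the data set."""
    data = np.asarray(data, dtype=float)
    z_scores = np.abs((data - data.mean()) / data.std())
    return data[z_scores < z_cutoff]

readings = [9.8, 10.1, 9.9, 10.0, 10.2, 42.0]  # 42.0 is the suspect point
print(exclude_outliers(readings))              # [ 9.8 10.1  9.9 10.  10.2]
```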

Another approach to dealing with an anomaly in your data involves Winsorizing it.

The Winsorizing Technique

Winsorizing is a statistical technique in which you substitute the anomaly with the nearest other data value that is considered not an anomaly (the nearest non-suspected data). But this can be a questionable practice, since it implies that you actually obtained “real data” that further supported your other data, when instead you essentially made up or manufactured data to your liking. The same can be said for any other method that substitutes concocted data for the actual data.
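For contrast with outright discarding, here is a minimal sketch of classic Winsorizing, in which the k most extreme values at each end are replaced by the nearest remaining observations; the function and the data are hypothetical:

```python
def winsorize(data, k=1):
    """Replace the k smallest and k largest values with the nearest
    remaining (non-suspected) observations. Every point is retained,
    but the extremes are, in effect, manufactured to taste; that is
    exactly why the technique draws criticism."""
    s = sorted(data)
    lo, hi = s[k], s[-k - 1]
    return [min(max(x, lo), hi) for x in data]

readings = [9.8, 10.1, 9.9, 10.0, 10.2, 42.0]
print(winsorize(readings))  # [9.9, 10.1, 9.9, 10.0, 10.2, 10.2]
```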

One criticism of scientific studies, and especially those in the medical domain, is that at times the scientists performing a life-critical study will opt to toss out an anomaly that appears in their research.

If you are trying to show that a new drug will save lives and prevent some dastardly malady from spreading, it can be tempting to disregard anomalies that might arise. By tossing out the anomaly or hiding it via a form of substitution, you could inadvertently be concealing something very telling. Perhaps the drug only works in certain situations, and the anomaly could have revealed those crucial boundary conditions.

You don’t have to be a scientist to encounter anomalies.

We experience anomalies in our daily lives.

Driving Journey And An Anomaly

As an example, I was driving my car the other day on a lengthy journey along a major highway. For hours on end, the traffic situation was relatively predictable and monotonous. The highway had two lanes in the northbound direction and passed through the central region of California that is considered our state’s agricultural belt. Regular cars would drive in the leftmost lane, the “fast lane,” and the lumbering trucks filled with agricultural products such as oranges, onions, and so on kept to the slower rightmost lane.

If a lumbering truck was going excessively slow, the other trucks behind it would try to go around the slowpoke truck and do so by briefly getting into the fast lane. Regular cars in the fast lane hated to have this happen.

On the occasions that I opted to let trucks in front of me, I could see via my rear-view mirror the pained expressions of the drivers in the cars just behind me. They were exasperated that I was allowing the snail-paced trucks into the fast lane. How rude of me!

In any case, this was a routine matter and happened from time to time.

Most of the time, nearly all the time, the trucks were in the slow lane. I got used to passing truck after truck, all of them as though at a standstill, though that was just the perception created by the rapidly moving fast lane versus the much slower moving slow lane.

Towards the end of my drive, I saw up ahead that the trucks were in the fast lane.

This was curious.

An anomaly!

What should I have done?

I could just stay in the fast lane, cruising at the slower 55 mph, and follow the lead of the trucks. Or I could switch entirely into the slow lane and zip ahead of the lengthy line of trucks in the fast lane. This is the reverse of what you might normally expect, in that usually you zip past via the fast lane, but if the trucks wanted to hog the fast lane, it seemed like they were nearly begging me to go ahead and use the slow lane (at least that’s what I would have explained to a highway patrol officer who might have later stopped me for speeding past the trucks in the slow lane!).

I wondered whether those other car drivers would even take a moment to ponder why all the trucks were in the fast lane. I’d bet that many of the drivers would not have given any thought to it.

Well, I decided I would just stay in the “fast” lane and see how the matter evolved.

After an agonizing five to ten minutes of being part of the lumbering herd (a distance of about eight miles), it finally became apparent what was going on.

Up ahead there was an accident in the slow lane, marked off with cones and flares. I am assuming that the truck drivers either spread the word among themselves or that the accident had somehow gotten marked onto a GPS mapping system, though mine did not seem to know about it. This was a rather remote location, so it was unlikely that anyone had rushed to map an accident that had only recently occurred.

The trucks had wisely gotten into the fast lane in advance of coming upon the accident scene.

Let’s revisit the story.

Why would I claim that this was an example of an anomaly?

Because the trucks normally were in the slow lane and only briefly got into the fast lane. If I had been collecting data during my driving journey, a plot would have shown that 99% of the time the trucks were in the slow lane and maybe 1% of the time used the fast lane (for passing purposes). The fast-lane use at the accident scene fell within that 1%, but I could logically discern that the trucks weren’t passing each other, which was their customary reason for using the fast lane.
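If an AI had been collecting that same data, a toy version of this reasoning might look like the sketch below; the 1% rarity threshold, the rolling window size, and the notion of an observable “passing” flag are all illustrative assumptions:

```python
from collections import deque

class LaneUsageMonitor:
    """Toy monitor that flags observations whose empirical frequency
    is low AND that lack the usual explanation (passing)."""

    def __init__(self, window=1000, rarity_threshold=0.01):
        self.history = deque(maxlen=window)       # rolling record of observations
        self.rarity_threshold = rarity_threshold  # the ~1% figure from the story

    def observe(self, truck_in_fast_lane: bool, truck_is_passing: bool) -> bool:
        """Record one observation; return True if it looks anomalous."""
        self.history.append(truck_in_fast_lane)
        freq = sum(self.history) / len(self.history)
        # Rare event, and not explained by the customary reason (passing):
        return truck_in_fast_lane and freq < self.rarity_threshold and not truck_is_passing
```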

AI Self-Driving Autonomous Cars Aspect

Suppose that I was an AI system that had been driving my car. Would the AI have been able to discern that this seemed indeed to be an anomaly?

On the one hand, you might say that no, the AI would have not been able to do so. The trucks were legally in the fast lane, and they had been using the fast lane from time-to-time, so nothing about this would at first glance seem odd or untoward.

The AI would presumably not especially care that those were trucks ahead of it rather than regular cars.

Furthermore, the AI would want to drive “legally” (presumably) and so the idea of switching into the slow lane to pass the trucks would not likely have been something that the automaker or tech firm had even included into the AI action plans for driving the car.

Overall, the AI of a self-driving car would probably not have noticed anything particularly unusual going on and would have simply stayed in the fast lane, following along with the traffic.

This is unfortunate in that it could be important for the AI to be watching for and possibly acting upon anomalies that it might encounter.

Here’s another example that might better illustrate the matter.

I was on the freeway the other day and the traffic was light and moving along rather quickly (a rarity in and of itself here in traffic-snarled Southern California). I noticed up ahead that a man was walking along on the edge of the freeway.

Allow me to explain that most of our freeways here are relatively well blocked off from any pedestrians getting onto the freeway.

The only time you would normally see a person walking along the freeway would most likely be if their car broke down. They might then be walking to the nearest ramp so they could get off the freeway. But this doesn’t happen very often either, since there are numerous dedicated call boxes along the freeway that stranded people can use to summon assistance.

Thus, the moment I saw a man walking on the freeway, I looked at the side of the freeway to see if a car was broken down. I had not seen one behind me, and looking up ahead I could not see one there either. This man, even if he was walking away from a broken-down car, appeared to be quite a lengthy distance from it. I right away doubted that this was a broken-down-car situation.

I moved over into the leftmost lane of the freeway, trying to create as much separation as possible between me and the walking man for when I would zip past him. I also kept my eye on him.

I had in mind that the walking man might not be content with walking along the side of the freeway. Perhaps he might opt to suddenly dart into traffic.

I would say his presence was an anomaly.

Should I have just ignored what I considered to be an anomaly?

I opted to give the anomaly some credence.

I took action by moving over to the fast lane and by keeping my eye on the matter.

What would AI do?

With today’s AI, the odds are that the walking man would have been detected and marked as such in the virtual world model that the AI uses to grasp the nature of the surrounding driving environment. The AI would certainly already be generally programmed for detecting and monitoring the movement of pedestrians.

Would the AI though have taken any action?

Perhaps not.

The pedestrian did not appear to be a threat to the AI self-driving car. He wasn’t running into the lanes. He wasn’t making wild motions. There was nothing obvious about any dangers associated with the pedestrian. If you didn’t know any better, you would have classified the walking man as you would any person that might be walking on the sidewalk on any street that you might be driving on. In that sense, this seemed perfectly normal. At least it might seem so on the surface and without any deeper kind of assessment or analysis.

And so we now reach the crux of my theme, namely, as a human driver, I would classify this walking man as an anomaly. And, I would then consider whether to give merit to the anomaly or shrug it off.

Here was my thinking:

  • If I shrugged it off, I would presumably continue unabated and pretty much ignore the anomaly.
  • If I thought the anomaly had merit, I would investigate further, hoping to ascertain its validity. If the anomaly seemed sufficiently valid, I would then decide whether my course of action should be altered, knowing that I had a genuine anomaly in hand (see the sketch after this list).
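A hypothetical sketch of that decision flow follows; the rarity and corroboration scores and the cutoff values are invented for illustration and would in practice come from the kinds of sensor analyses discussed later:

```python
from enum import Enum, auto

class Response(Enum):
    IGNORE = auto()       # shrug it off and continue unabated
    INVESTIGATE = auto()  # spend resources trying to ascertain validity
    ACT = auto()          # alter the course of action

def handle_anomaly(rarity: float, corroboration: float) -> Response:
    """Hypothetical decision flow mirroring the two bullets above.
    rarity: how unusual the observation is (0 = routine, 1 = unheard of).
    corroboration: supporting evidence gathered so far (0 to 1).
    The 0.05 and 0.5 cutoffs are illustrative, not tuned values."""
    if rarity < 0.05:
        return Response.IGNORE        # not unusual enough to bother with
    if corroboration < 0.5:
        return Response.INVESTIGATE  # unusual, but validity still unproven
    return Response.ACT              # a genuine anomaly in hand; change course
```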

I assert that any well-qualified AI should be able to do the same, especially the AI of self-driving cars, which involves life-and-death matters; indeed, an anomaly can determine the fate of the humans inside the self-driving car or nearby it.

AI For Autonomous Cars Needs To Cope With Anomalies

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for autonomous self-driving cars. One important aspect of the AI is its capability to identify, detect, interpret, analyze, and determine a course of action related to anomalies.

The first part of anomaly handling deals with detection.

The sensors of the self-driving car will likely already have various programs that examine the collected sensory data to try to find patterns.

These include visual processing routines that handle the data collected via the cameras, encompassing both video and still images. There is software that does likewise for the radar, and for the ultrasonic sensors, and for the LIDAR (if so equipped), and so on.

Many of these pattern-matching algorithms for examining the sensory data were likely trained via Machine Learning (ML). This gets us to the first area of concern about anomaly detection by a self-driving car. If the Machine Learning training data was scrubbed and contained no anomalies, the appearance of an anomaly out of the blue during actual use of the system might go completely unnoticed. The sensory data interpretation programs might just shrug off the outlier data and consider it part of the noise and other transients that one is going to get when using sensors.

That’s a tough aspect to overcome, namely, trying to figure out what is the usual kind of noise and transient data versus something that is a genuine anomaly worth considering. Suppose the AI was trained on all sorts of traffic signs, and then in the real-world a traffic sign that was not used in training is detected. The AI might opt to conclude that the traffic sign is not a traffic sign since it is outside the pattern of what constitutes a traffic sign.
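One common mitigation, sketched below under assumed class names and scores, is to refuse to force a low-confidence output into a known class and instead flag the detection as a candidate anomaly; the 0.6 confidence floor is an illustrative choice:

```python
import numpy as np

KNOWN_SIGNS = ["stop", "yield", "speed_limit", "merge"]  # hypothetical class list

def classify_sign(softmax_scores, confidence_floor=0.6):
    """Return a sign label, or flag the detection as a candidate anomaly.
    A network trained on scrubbed data will still emit a 'best' class
    for an unfamiliar sign; the floor keeps the oddball from being
    silently mislabeled."""
    best = int(np.argmax(softmax_scores))
    if softmax_scores[best] < confidence_floor:
        return None, "candidate_anomaly"  # e.g., a hand-crafted road-crew sign
    return KNOWN_SIGNS[best], "known_sign"

print(classify_sign(np.array([0.30, 0.28, 0.22, 0.20])))  # (None, 'candidate_anomaly')
print(classify_sign(np.array([0.90, 0.05, 0.03, 0.02])))  # ('stop', 'known_sign')
```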

I experienced this the other day when there was a hand-written sign that a roadway crew had put up to forewarn about a hole or divot in the street up ahead. They tried to make it look like a regular traffic sign, but it was obvious to the human eye that it was a quickly crafted ad hoc sign.

What would the AI do about it?

I would guess that the sensors would certainly have detected the presence of the sign. But after trying to match it to the signs it had learned before, the odds are that it would be classified as just a generic sign and not given its due as a roadway and traffic warning (in contrast, for example, during political elections there are tons of signs put up all around town, none of which have anything to do with traffic, and thus it makes sense for a self-driving car to ignore those signs).

The sensor data interpretation needs to be robust enough to give anomalies some attention. At the same time, if the anomaly is not relevant, trying to ferret out its merits consumes on-board processing cycles and could starve some other crucial driving process. It is like a chess match in which you must decide how many levels deep, called ply, to carry your analysis. The deeper you consider the moves ahead, the better the odds of making a good move now, but it chews up time and attention that might be needed for other purposes (not so in a chess match, I realize, but certainly so when driving a car).
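In the spirit of the chess-ply analogy, a time-budgeted analysis might be sketched as follows; the evaluate callback and the budget values are hypothetical stand-ins for whatever deeper checks the AI would actually run:

```python
import time

def investigate_anomaly(evaluate, max_depth=8, budget_s=0.010):
    """Iteratively deepen the analysis of an anomaly until the time
    budget is spent, returning the best assessment found so far.
    Like picking a ply depth in chess, deeper analysis is better, but
    the cycles spent here are unavailable to other driving tasks."""
    deadline = time.monotonic() + budget_s
    best = ("unknown", 0.0)
    for depth in range(1, max_depth + 1):
        if time.monotonic() >= deadline:
            break  # hand control back to the main driving loop
        verdict, confidence = evaluate(depth)  # hypothetical deeper check
        if confidence > best[1]:
            best = (verdict, confidence)
    return best

# Hypothetical usage: each extra level of depth raises confidence a bit.
print(investigate_anomaly(lambda d: ("genuine_anomaly", min(1.0, 0.2 * d))))
```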

Overall, there are some anomalies that do not genuinely exist even though the data suggests they do, and any pursuit of them is like going down a rabbit hole.

There are other anomalies that have a genuine origin and so warrant pursuit. One means of gauging whether an anomaly has legs, so to speak, involves doing a kind of cross-triangulation on the anomaly.

In the case of sensor fusion, when the various sensory devices have provided their interpretations, it is up to the sensor fusion portion of the AI to aid in figuring out what might be a bona fide anomaly versus what might not be. By comparing the results of interpretations from each of the different sensors, the sensor fusion has the unenviable task of trying to figure out the real truth of what is surrounding the AI self-driving car.

Suppose the cameras have detected a shadowy image of something at the side of the road. The image is so hazy that it is not readily possible to classify it as being a pedestrian versus being, say, a fire hydrant or a street post (or maybe it is a false reading of some kind). Meanwhile, suppose the radar has picked up a somewhat stronger set of signals and can present a more shaped outline of the object. And let’s suppose the LIDAR has done the same in terms of providing a clearer shape. By triangulating the multiple sensors, the sensor fusion might be able to discern that it is something that does exist and not just noise, and furthermore that it is a pedestrian and not just an inanimate object.
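A minimal sketch of that cross-triangulation, with made-up sensor labels and illustrative confidence thresholds:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str       # "camera", "radar", or "lidar"
    label: str        # best-guess classification, e.g., "pedestrian"
    confidence: float

def fuse(detections, min_sensors=2, min_conf=0.5):
    """Treat an object as real only when enough independent sensors
    report the same label with adequate confidence."""
    votes = {}
    for d in detections:
        if d.confidence >= min_conf:
            votes.setdefault(d.label, set()).add(d.sensor)
    return [label for label, sensors in votes.items() if len(sensors) >= min_sensors]

# The hazy-camera scenario from the text:
print(fuse([Detection("camera", "pedestrian", 0.3),   # too hazy to count
            Detection("radar",  "pedestrian", 0.7),
            Detection("lidar",  "pedestrian", 0.8)]))  # ['pedestrian']
```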

The sensor fusion then passes this along to the virtual world model portion of the AI system. Within the virtual world model, there is now a numeric marker placed at the position of the suspected object in the overall model, and it is furthermore categorized with a probability that it is a pedestrian. The AI action planning program now examines the virtual world model to figure out what action, if any, the driving of the car should undertake given this news that there might be a pedestrian at the side of the road.
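That hand-off might be sketched as follows; the class names, the position convention, and the probability threshold are assumptions for illustration rather than any particular automaker’s design:

```python
from dataclasses import dataclass, field

@dataclass
class WorldObject:
    category: str       # e.g., "pedestrian"
    position: tuple     # (x, y) in the car's frame, in meters
    probability: float  # confidence that the category is correct

@dataclass
class VirtualWorldModel:
    objects: list = field(default_factory=list)

    def place_marker(self, obj):
        self.objects.append(obj)

def plan_action(world, pedestrian_threshold=0.7):
    """Toy action planner: if a probable pedestrian sits near the
    roadside, create lateral separation; otherwise keep the lane."""
    for obj in world.objects:
        if obj.category == "pedestrian" and obj.probability >= pedestrian_threshold:
            return "change_lane_away"  # the move I made on the freeway
    return "keep_lane"

world = VirtualWorldModel()
world.place_marker(WorldObject("pedestrian", (80.0, 4.5), 0.85))
print(plan_action(world))  # change_lane_away
```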

Trickiness Of Coping With Anomalies

Here’s the really tricky part that many AI systems are not yet considering.

It is somewhat easy to consider the role of anomalies at the sensor data analysis aspects. The same can be said about detecting anomalies at the sensor fusion portion. It gets more complex once you are considering the virtual world model and the AI action planning portions.

Let’s use my example about the walking man on the freeway.

I’m relatively confident that the AI self-driving car would be able to detect the walking man and determine that the object is a pedestrian. Sure, there could be issues trying to make this determination and it would depend on factors such as whether there is line-of-sight to the walking man for the sensors, and whether there is any weather that might be disrupting the sensor data such as rain or snow, etc.

Once the walking man gets placed into the virtual world model, would the AI realize that a walking pedestrian on the freeway is unusual? Would it be able to also extend that line of consideration and then look for other clues that might confirm the validity of the walking man being there, such as looking for a disabled car?

I’d dare say that most AI systems for self-driving cars are unlikely at this time to have that kind of anomaly-seeking mindset.
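For concreteness, the kind of contextual check I have in mind might look like this sketch, where the base rates are invented numbers meant only to capture that a pedestrian is routine on a city street yet rare on a freeway:

```python
# Hypothetical context-conditioned base rates: how often each object
# class is observed on each road type. All values are invented.
BASE_RATES = {
    ("pedestrian", "city_street"): 0.30,
    ("pedestrian", "freeway"):     0.0005,
    ("disabled_car", "freeway"):   0.002,
}

def is_contextual_anomaly(category, road_type, rarity_cutoff=0.01):
    """The same detection judged against context rather than in
    isolation: a pedestrian is ordinary on a sidewalk and anomalous
    on a freeway. Unseen combinations default to anomalous."""
    return BASE_RATES.get((category, road_type), 0.0) < rarity_cutoff

def explaining_clues(world_objects, category):
    """Seek clues that would explain away the anomaly, such as a
    disabled car that a walking pedestrian might have come from."""
    if category == "pedestrian":
        return [obj for obj in world_objects if obj == "disabled_car"]
    return []

print(is_contextual_anomaly("pedestrian", "freeway"))      # True
print(is_contextual_anomaly("pedestrian", "city_street"))  # False
print(explaining_clues(["truck", "cone"], "pedestrian"))   # [] -> unexplained
```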

In case you want to argue that the walking man was another example of no-harm no-foul, in the sense that an AI system would have suffered no consequence for not becoming concerned about him (similar to my story about the trucks that got into the fast lane), I was waiting to tell you the end of the story about the walking man.

After I passed him, doing my 100 feet per second speed and at a distance of about two lanes (let’s say about 15 to 20 feet from him), he subsequently ran out into traffic. Many of the oncoming cars were moving so fast that one of them ended up striking him (I heard about it on the news; I didn’t see it happen directly). This happened in the slow lane.

I know that there are some AI pundits that will claim that had the AI self-driving car been in the slow lane it would not have hit the walking man because it would have miraculously made an evasive maneuver. I don’t think it makes any sense to say that in this circumstance. Physics prevents being able to avoid someone that suddenly darts in front of a car that is going 100 feet per second. You are just not going to be able to brake fast enough to avoid hitting that person.

Where would you swerve to? Into other lanes of traffic? Or, maybe into the ditch next to the freeway, but perhaps kill the occupants of the car?

There are even some AI developers and AI pundits who would say that if a human was stupid enough to run into traffic, the person gets what they deserve. This is an even dumber thing to say. Suppose the car driver had swerved into the ditch and died, thus keeping the walking man alive. Is that a “deserved” death in the estimation of this idea that you get your just deserts? I think not.

Conclusion

Those same pundits might also argue that the walking man should not have gotten onto the freeway to begin with.

As mentioned earlier, there are fences and brick walls that block pedestrians from the freeway. Yes, it is possible to climb over those walls. Should we put up barbed wire and maybe gun posts, making it seemingly impossible to get onto the freeway (a kind of modern-day Berlin Wall), doing so because the AI is insufficient to figure out when a pedestrian is there and should be avoided?

I think not.

Those of us developing AI self-driving cars should be aiming to have the AI take the right kinds of actions, such as the action that I took, which I believe was a sound course of action. I had moved into lanes away from the walking man and kept alert as to what he was doing. There are other actions that could possibly have been taken, such as trying to block and slow down traffic, or calling 911, but in any case, all of those actions rely on the realization that there was an anomaly afoot.

Robust AI for self-driving cars needs to give credence to anomalies. The AI needs to be overtly seeking out anomalies and giving them their due. This cannot though be done in a wanton fashion.

There is only so much processing and bandwidth that the AI on-board system can undertake on a timely basis. The AI needs to be watching out for false positives and not take action that is otherwise unwarranted and might carry its own risks. Nor should the AI be taken in by false negatives. Anomalies: love them or hate them, either way you need to deal with them.

That’s the rub.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

