AI Machine Learning Is Confounded By Visual Object Transplants And Can Foul Up AI Self-Driving Cars

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Have you ever seen the videos that depict a scene in which there is some kind of activity going on such as people tossing a ball to each other and then a gorilla saunters through the background?

If you’ve not seen such a video, it means either that you’re perhaps watching too many cat videos or that you’ve not yet been introduced to the concept of inattentional blindness, which some consider an example of selective attention.

The gorilla is not a real gorilla, but instead a person in a gorilla suit (I mention this in case you were worried that an actual gorilla was invading human gatherings and that the planet might be headed to the apes!).

Overall, the notion is that you become focused on the other activity depicted in the video and fail to notice that a gorilla has ambled into the scene.

Brittleness Of Neural Networks

In a moment, I’ll explain how the gorilla appearance is similar or analogous to brittleness issues in artificial neural networks.

Sometimes, even small perturbations can confuse a neural network.

Consider a contrived example: suppose an image is doctored so that a squirrel has a turtle’s leg, or a turtle has a squirrel’s tail.

If a neural network were keying on, say, only the legs of turtles, and it detected a turtle leg on the squirrel, it might conclude that the squirrel is also a turtle.

Likewise, if the neural network had been homing in on the tail to determine whether a turtle is a turtle, the fact that the turtle had a squirrel’s tail would cause the network to no longer believe that the turtle is a turtle.

This is one of the known dangers, or let’s say inherent limitations, of using machine learning and neural networks.
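To make this brittleness concrete, below is a minimal sketch of the classic fast gradient sign method for perturbing an image, assuming Python with PyTorch and a pretrained torchvision classifier; the random input tensor is merely a stand-in for a real photograph (and skips the usual normalization), so treat it as an illustration of the technique rather than a reproduction of any particular study.

```python
# A minimal sketch: nudge each input pixel slightly in the direction that
# increases the classifier's loss (fast gradient sign method). With a real,
# properly normalized photo, such a tiny perturbation can flip the label.
import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a photo
label = model(image).argmax(dim=1)                      # current prediction

# Gradient of the loss with respect to the input pixels.
loss = F.cross_entropy(model(image), label)
loss.backward()

epsilon = 0.03  # perturbation budget: small enough to be near-invisible
perturbed = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

new_label = model(perturbed).argmax(dim=1)
print("prediction flipped:", bool(new_label != label))
```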

AI Autonomous Cars And Inattentional Blindness

Various studies have shown how easy it can be to confuse a machine learning algorithm or neural network of the kind that might be used by an AI self-driving car.

Such systems are often used when the AI is examining the visual images collected by the cameras of the self-driving car.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As such, we are actively developing and applying these AI systems using machine learning and neural networks. It’s important for the automakers and tech firms to be using such tools carefully and wisely.

Object Transplants

There’s another somewhat similar potential difficulty involving what is sometimes called object transplants.

An interesting research study undertaken by researchers at York University and the University of Toronto provides an insightful analysis of the concerns related to object transplanting (the study was cleverly entitled “The Elephant in the Room”).

Object transplanting can be likened to my earlier comments about the gorilla in the video, though with a slightly different spin involved.

Imagine if you were watching the video that had a gorilla in it.

Suppose that you actually noticed the gorilla when it came into the scene.

If the scene consisted of people tossing a ball back and forth, would you be more likely to believe that it was a real gorilla or more likely to believe it was a fake gorilla (i.e., someone in a gorilla suit)?

Assuming that the people kept tossing the ball and did not get freaked out by the presence of the gorilla, I’d bet that you’d quickly deduce it must be a fake gorilla.

Your mental context for the video would remain pretty much the same as before the introduction of the gorilla.

The appearance of the gorilla did not substantially alter what you thought the scene consisted of.

Would the introduction of the gorilla cause you to suddenly believe that the people must be in the jungle someplace?

Probably not.

Would the gorilla cause you to start looking at other objects in the room and begin to think those might be gorilla-related objects?

For example, suppose there was a yellow-colored stick in the room.

Before the gorilla appeared, you noticed the stick and just assumed it was nothing more than a stick. Once the gorilla arrived, if you were now shifting mentally and thinking about gorillas, maybe the yellow stick would seem like it might be a banana. You know that gorillas like bananas. Therefore, something that somewhat resembles a banana might indeed be a banana.

Visual object transplanting can impact the detection aspects of a trained machine learning system such as a convolutional deep neural network in a potentially similar way.

In the research study, the neural network sometimes failed to detect the elephant in the picture at all, presumably not even noticing that it was there (thus the clever titling of the research study as dealing with the elephant in the room!).

Depending upon where the elephant was positioned in the picture, the neural network at one point reported that the elephant was actually a chair.

In another instance, the elephant was placed near other objects that had earlier been identified, such as a cup and a book, and yet the neural network no longer reported having found the cup or the book. There were also instances of switched identities, wherein the neural network had identified a chair and a couch, but with the elephant nearby to those areas of the picture, it then reported that the chair was a couch and the couch was a chair.
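To give a feel for how such an experiment might be run, here is a rough sketch in the spirit of the study, assuming Python with a pretrained torchvision detector (not the researchers’ actual setup) and placeholder image files; the idea is simply to compare what the detector reports before and after a foreign object is pasted into the scene.

```python
# A rough object-transplant experiment: detect objects in a scene, paste in
# a foreign object patch, then detect again and compare the two reports.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_resnet50_fpn, FasterRCNN_ResNet50_FPN_Weights)

weights = FasterRCNN_ResNet50_FPN_Weights.DEFAULT
detector = fasterrcnn_resnet50_fpn(weights=weights).eval()

def detect(img, threshold=0.5):
    """Return (label, score) pairs above the confidence threshold."""
    with torch.no_grad():
        out = detector([img.float() / 255.0])[0]
    names = weights.meta["categories"]
    return [(names[lbl], round(score.item(), 2))
            for lbl, score in zip(out["labels"], out["scores"])
            if score >= threshold]

scene = read_image("room.jpg")       # placeholder indoor scene
patch = read_image("elephant.jpg")   # placeholder transplanted object

before = detect(scene)

# Transplant: overwrite a region of the scene with the foreign object
# (assumes the patch is smaller than the scene).
h, w = patch.shape[1], patch.shape[2]
scene[:, :h, :w] = patch

after = detect(scene)
print("before:", before)
print("after: ", after)  # objects may vanish, flip labels, or lose confidence
```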

AI Self-Driving Cars And Object Transplants

On a related note, I’ve previously mentioned that in the realm of AI self-driving cars there has been an ongoing debate related to the same notion of object transplanting, specifically the scenario of a man on a pogo stick who suddenly appears in the street near an AI self-driving car.

Some AI developers have argued that it’s understandable that the AI of a self-driving car might not recognize a man on a pogo stick that’s in the street.

By recognizing, I mean that the AI examines the visual images captured by the self-driving car and discerns that the object in the street consists of a man on a pogo stick. In the debated scenario, the AI detected that an object was there, and that it had a rather irregular shape, but it was not able to discern that the shape consisted of a person and a pogo stick (in this instance, the two are combined, since the man was on the pogo stick and pogoing).

Why would it be useful or important to discern that the shape consists of a person on a pogo stick?

As a thinking human being, assuming you’ve seen a pogo stick in use before, you likely know that it involves going up and down and also moving forward, backward, or side to side. If you were driving along and suddenly saw a person pogoing in the street, you’d likely be cognizant that you should watch out for the potentially erratic moves of the pogo stick and its occupant. You could even predict which way the person was going to go by watching their angle and how hard they were pogoing.

An AI system that merely construes the pogoing human as a blob would not readily be able to predict the behavior of the blob. Predictions are crucial when you drive a car.
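To see why the classification matters for prediction, consider this simplified sketch; the motion models, speeds, and class names are invented for illustration and not drawn from any production self-driving stack.

```python
# Simplified sketch: a recognized object class gets a tailored motion model,
# while an unclassified "blob" forces a worst-case assumption that it could
# move in any direction, which makes planning far more conservative.
def predict_positions(kind, x, y, vx, vy, horizon_s=2.0, dt=0.5):
    """Crude forward simulation: (center_x, center_y, uncertainty_radius)."""
    predictions = []
    steps = int(horizon_s / dt)
    for i in range(1, steps + 1):
        t = i * dt
        if kind == "person_on_pogo_stick":
            # Pogoing: roughly continues forward, with growing lateral hops.
            predictions.append((x + vx * t, y + vy * t, 1.5 * t))
        else:
            # Unknown blob: could move anywhere at some assumed max speed.
            max_speed = 5.0  # meters/second, a conservative guess
            predictions.append((x, y, max_speed * t))
    return predictions

print(predict_positions("person_on_pogo_stick", 0.0, 10.0, 0.5, -0.2))
print(predict_positions("unclassified_blob", 0.0, 10.0, 0.5, -0.2))
```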

The range of potential problems associated with object transplanting includes:

  • The transplanted object might not be detected at all, even though it might normally have been detected in some other context.
  • Or, the confidence level or probability attached to the object’s identification might be lessened in comparison to what it otherwise would have been. In the case of the elephant added into the picture and the subsequently missing cup and book, it could be that the neural network had detected the cup and the book but had assigned a very low probability to their identities, and so reported that they weren’t there, based on some threshold level required to be considered present in the picture (a toy illustration of this threshold effect appears after this list).
  • The detection of the transplanted object, if detection does occur, might lead to misidentification of other objects in the scene.
  • Other objects might no longer be detected.
  • Or, those other objects might have a lessened probability assigned to their identities; these effects can be both local and non-local relative to the transplanted object.
  • Other objects might get switched in terms of their identities, due to the introduction of the transplanted object.
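Here is the promised toy illustration of the threshold effect; the object names and scores are invented for illustration.

```python
# Toy illustration: the detector may still "see" the cup and the book after
# the transplant, but if their confidence falls below the reporting
# threshold, they are reported as absent from the picture.
before = {"cup": 0.81, "book": 0.77, "chair": 0.92}
after = {"cup": 0.31, "book": 0.28, "chair": 0.90, "elephant": 0.55}

THRESHOLD = 0.5  # only objects at or above this score get reported

def reported(detections):
    return sorted(name for name, score in detections.items()
                  if score >= THRESHOLD)

print(reported(before))  # ['book', 'chair', 'cup']
print(reported(after))   # ['chair', 'elephant'] -- the cup and book vanish
```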

Conclusion

For AI self-driving cars, there are a myriad of sensors that collect data about the world surrounding the vehicle. These include cameras that capture pictures and video, along with radar, ultrasonic sensors, LIDAR, and so on. The AI needs to examine the data and try to ferret out what it indicates about the surrounding objects.

Are those cars ahead of the self-driving car or are they motorcycles?

Are there pedestrians standing at the curb or just a fire hydrant and a light post?

These are crucial determinations for the AI self-driving car and its ability to perform the driving task.

AI developers need to take into account the limitations and considerations that arise due to object transplanting. The AI systems of the self-driving car need to be shaped so that they can sufficiently and safely deal with object transplantation, and do so in real-time while the self-driving car is in motion. The scenery around the self-driving car will not always be pristine and devoid of unusual or seemingly out-of-context objects.

When I was a professor, each year a circus came to town and the circus animals arrived via train, which happened to get parked near the campus for the time period that the circus was in town. A big parade even occurred involving the circus performers marching the animals from next to the campus and over to the nearby convention center. It was quite an annual spectacle to observe.

I mention this because among the animals were elephants, along with giraffes and other “wild” animals. Believe it or not, on the morning of the annual parade, I would usually end up driving my car right near the various animals as I was navigating my way onto campus to teach classes for the day. It was as though I had been transported to another world.

If I had been using an AI self-driving car, one wonders what the AI might have made of the elephants and giraffes that were next to the car. Would the AI have suddenly changed context and assumed I was now driving in the jungle? Would it have gotten confused and believed that the light poles were actually tall jungle trees?

I say this last bit about the circus in some jest, but I do want to be serious about the point that it is important to realize the existing limitations of various machine learning algorithms and artificial neural network techniques and tools. AI self-driving car makers need to be on their toes to prepare for and contend with object transplants.

And that’s no elephant joke.

That’s the elephant in the room and on the road ahead for AI self-driving cars.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

