Seeing Around Corners with Computer Periscopy: Handy for Driverless Cars

Dr. Lance B. Eliot, AI Insider


The Shadow knows! That was the famous line used in the popular pulp novel series, comic book series, and radio series about a fictional character known as The Shadow.

We tend to not particularly notice our own shadow. How often do you glance around to see your own shadow? Probably not very frequently.

Introducing Computational Periscopy

I’d like to introduce the topic of computational periscopy, which as you’ll learn herein involves leveraging shadows.

The notion of computational periscopy involves using a computer-based approach to effectively devise a kind of periscope. We all know that a periscope is normally a physical device that you can use to look around a corner or over the top of an object, hopefully without being seen yourself. Perhaps you had one when you were a child; those toy versions had quite cheap optics and let you play pretend army soldier.

In computational periscopy, one key area of interest is figuring out what you cannot directly see, namely when you have non-line-of-sight (NLOS) of something, by using other clues to guess at what might be there. Think of playing hide-and-seek: how could I figure out that my child, acting as a seeker, might be on the other side of the wall and standing at the corner? I had NLOS of my offspring at that moment. My trick was to use the shadow as a surrogate for what might be on the other side of the wall.

Computational periscopy can try to use that same shadow trick. I forewarned you, the shadow knows!

For those of you interested in this topic of computational periscopy, please be aware that there is more to it than just shadows, though shadows are certainly significant. Other elements encompass capturing radiated light that comes off an object, either from a natural lighting source or, often, via ultrafast laser pulses used to bounce light off the object. Furthermore, one aspect of periscopy is to try to refrain from revealing the periscope itself: with a normal periscope, you would typically put it into line-of-sight (LOS), which means the periscope can potentially be seen, and that might be something you don't want, or it might simply be prohibitive to position the periscope there.

Herein, let's focus on the shadow aspects.

Robot Meandering Around a Room

Suppose you have a robot that is meandering around a room. It is trying to navigate the room and do so without bumping into things.

Imagine that the room is a kitchen, there is sufficient lighting that shadows are being cast, and a refrigerator blocks part of the robot's view. The image processing applied to the camera images streaming into the robot's "eyes" could analyze the scene and try to determine whether any shadows are being cast beyond the edge of the refrigerator. If so, the robot could try to figure out what kind of object might be on the other side of the refrigerator.

Besides "seeing" objects directly, the robot can try to guess at the nature and position of objects not seen, provided it can detect the shadows of those objects. Suppose that a human is standing on the other side of the refrigerator, out of sight of the robot (this is the NLOS case). Via the lighting in the room, it turns out that the human is casting a shadow, and the shadow is visible to the robot. The shadow of this human extends beyond the refrigerator, out in front of it, and falls onto the floor area that the robot is about to navigate.

Based on the shadow, the robot, using computational periscopy algorithms and techniques, would "reverse engineer" the characteristics of the shadow and estimate that there is a person standing out of view on the other side of the refrigerator.

Or, maybe the shadow shape is poor, due to the stance of the object and the lighting aspects of the room, and perhaps the robot cannot discern that it might be a human casting the shadow, but it is pretty sure there is something there casting it. The periscopy algorithm might suggest that it is some kind of object standing about six feet in height with a width of about a foot or two. That's enough of a guess to permit the robot to be cautious when going around the refrigerator, anticipating that something is standing there which will need to be navigated around as well.
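To make this tangible, here's a minimal sketch of the simplest kind of "reverse engineering" involved, assuming a single distant light source at a known elevation angle and a flat floor; the function and the sample numbers are purely illustrative, not from any production periscopy system:

```python
import math

def estimate_object_height(shadow_length_m: float,
                           light_elevation_deg: float) -> float:
    """Estimate the height of an unseen object from the length of its
    cast shadow. Assumes a single distant light source at a known
    elevation angle and a flat ground plane (illustrative only).

    Geometry: height = shadow_length * tan(elevation_angle)
    """
    return shadow_length_m * math.tan(math.radians(light_elevation_deg))

# Example: a 1.3 m shadow under a light source 55 degrees above the
# horizon suggests an object roughly 1.86 m (about six feet) tall.
print(round(estimate_object_height(1.3, 55.0), 2))
```

Real periscopy algorithms go far beyond this, of course, since they must cope with unknown light positions, multiple light sources, and irregular surfaces.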

Computational periscopy provides another means to collect sensory data and try to make something useful out of it.

Let's not kid ourselves though and assume that shadows are an easy matter to analyze. If you walk around later today and start looking carefully at shadows, you'll realize there is a tremendous variation in how a shadow is being cast. Trying to reverse engineer a shadow to deduce what cast it, well, that can be tough to do. Plus, you are usually going to end up with probabilities about what might or might not be there, rather than pure certainties.

The other "killer" (downside) aspect right now is that computational periscopy tends to require humongous amounts of computer processing to undertake. Much of the work to date has soaked up supercomputer time to try to figure out the shadow-related aspects. It can be costly to purchase such premium computing power.

There are also the real-time aspects that are daunting too.

In short, computational periscopy is handy, yet it still needs faster algorithms and improved techniques so that it can readily be used in near real-time situations, along with finding a means to cut back on the computing power needed so that this kind of processing can be done on more everyday hardware.

A recent study at Boston University provides a glimpse at how computational periscopy might ultimately be advanced for prime time and become amenable to more mass usage. If you are interested in that particular study, they've posted their research data and details on GitHub at https://github.com/Computational-Periscopy/Ordinary-Camera. Some critics would say this is interesting but a far distance from being usable in a real-world setting. Others would say you need to crawl before you walk, and walk before you run.

It’s a healthy sign that we are hopefully going to be able to move computational periscopy toward being practical and usable for everyday purposes, though the road ahead is still long.

Shadows Useful for Navigating Traffic and AI Self-Driving Cars

When you are driving a car, you are unlikely to be consciously noticing the shadows around you and your car. As humans, and as car drivers, we typically take shadows for granted. I would even say that there might be some kind of mental processing taking place about shadows that we don't even realize we are doing. It is like breathing air; you don't give it direct thought.

You are so used to shadows that your mind is likely processing them but most of the time deciding either that it isn't worthwhile to put much mental effort toward them, or that it will only do so when it becomes necessary.

Have you ever been driving your car on a sunny day, and all of a sudden, a large cloud formation goes in front of the sun? This casts a large shadow onto your car and the road. I'd bet that your mind noticed that something light-related just happened. You might even turn to someone else in your car and say, hey, did you notice that? It all of a sudden got dark. This suggests that your mind is on the alert for shadows, giving them low priority most of the time, until or unless something happens to pump the priority up.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect about the visual processing of images coming from the cameras on the self-driving car is that we can potentially boost the AI driving capabilities by making use of computational periscopy, including detecting and analyzing shadows.
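As a flavor of what that visual processing might involve, here is a minimal sketch of one common shadow-detection heuristic, assuming OpenCV is available: shadow pixels tend to be darker than the local background while keeping roughly the same chromaticity. The thresholds are illustrative guesses, not tuned values from any deployed system:

```python
import cv2
import numpy as np

def detect_shadow_candidates(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary mask of candidate shadow pixels (rough heuristic,
    not a full computational-periscopy pipeline)."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    saturation = hsv[:, :, 1].astype(np.float32)
    value = hsv[:, :, 2].astype(np.float32)

    # Estimate local background brightness with a heavy blur.
    background = cv2.GaussianBlur(value, (51, 51), 0)

    # Candidate shadows: notably darker than the background, and not too
    # saturated (deeply colored objects can masquerade as darkness).
    darker = value < 0.6 * background
    mildly_colored = saturation < 120
    mask = (darker & mildly_colored).astype(np.uint8) * 255

    # Clean up speckle noise with a morphological opening.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```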

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car.

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too.

Returning to the topic of computational periscopy, let's consider how this innovative approach can be leveraged by AI, especially in the case of AI self-driving cars. There are various research studies on shadow detection and usage for AI self-driving cars that go back a number of years, and it is an ongoing field of study that will continue to mature over time.

If the use of computational periscopy could aid the AI in being a better driver, we’d certainly want to give this approach a solid chance of being utilized.

Admittedly, the odds that periscopy via shadow detection and interpretation will make a dramatic difference in improving driving are somewhat questionable, at least right now. Thus, many AI developers for AI self-driving cars would likely put periscopy onto an "edge" problem list, rather than a mainstay problem list.

An edge problem is one that is regarded as sitting at the edge or far corner of the core problem you are trying to solve. Right now, AI developers are focused on getting an AI self-driving car to fundamentally drive the car, doing so safely, and otherwise covering a rather hefty checklist of key elements involved in achieving a fully automated self-driving car. Dealing with shadows would be interesting and would have some added value, but devoting resources and attention to it is not as vital as covering the fundamentals first.

I often disagree with pundits about what they consider to be edge problems for AI self-driving cars. For once, in this case of the periscopy, I would tend to agree that it indeed should be considered an edge problem (they’ll be happy to know this!).

AI Self-Driving Car Being Loaded Down with Computer Processing

Would it be worthwhile to devote processing power to doing the shadow detection and analysis?

Would it be worthwhile to include the shadow analyses into the sensor fusion that is already trying to connect the dots on the other sensory analyses?

If this addition meant that time delays might occur between sensor data collection, sensor fusion, and ultimately the AI action planner, we'd need to weigh whether that time delay was worth the benefits of doing the shadow analyses. It might not be.
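One simple way to reason about that trade-off is a per-cycle time budget that gates optional analyses. Here's a hedged sketch; the budget number and both pipeline callables are hypothetical stand-ins, not part of any real self-driving stack:

```python
import time

CYCLE_BUDGET_S = 0.050  # hypothetical 50 ms sensor-to-planner budget

def run_perception_cycle(frame, core_pipeline, shadow_pipeline):
    """Run the mandatory perception work, then spend leftover budget on
    optional shadow analysis (both callables are illustrative)."""
    start = time.monotonic()
    result = core_pipeline(frame)  # must always run

    elapsed = time.monotonic() - start
    if elapsed < CYCLE_BUDGET_S * 0.6:  # enough slack left this cycle?
        result["shadow_hints"] = shadow_pipeline(frame)
    else:
        result["shadow_hints"] = None  # skip gracefully, no delay added
    return result
```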

Also, if we are limited to how much computer processing power we can pack into the AI self-driving car, and if the shadow analyses occurred at the sacrifice of using processing power for other efforts, we wouldn’t want that to be a consequence either, unless we knew that the shadow analyses had a substantive enough payoff.

You might argue that we can just add more computer processing on-board the self-driving car, but doing so raises the cost of the self-driving car, increases the complexity of the AI system, and adds weight and potential bulk to the car. These are factors that need to be weighed on an ROI (Return on Investment) basis against whatever the shadow detection can likely provide.

Optimize Periscopy Algorithms To Consume Less Processing Power

Let's set aside for a moment the concerns about on-board processing and other related factors. It might be helpful to consider the difficulties involved in shadow detection and analysis. This might also inspire those of you sparked by this problem to help find ways to improve the periscopy algorithms and techniques. It would be handy to get them optimized to be faster and better, and to consume less computer processing power and memory. Well, of course, that's just about always a goal for any computer application.

Close your eyes and imagine a shadow, whichever one comes to mind. Or, if you are in a place where you can easily create a shadow, please do so.

What did the shadow fall onto? That's important. If the shadow is cast onto a flat surface like a floor or a wall, it's likely easier to detect. Once the shadow appears on an irregular surface, or if the shadow spreads across a multitude of differing surfaces, detecting it becomes harder to do.

Another aspect is whether you have two objects that each cast a shadow and the shadows intersect or merge with each other. You have to assume that you cannot see the original objects that are casting the shadow. This means that when you are looking at the merged shadow, you cannot readily figure out which portion of the shadow refers to which of the original objects.

Another facet of a shadow involves motion and movement. It is going to be more challenging to decipher a dancing shadow. A stationary shadow already poses challenges; add the aspect that the shadow is moving, along with the possibility that the object casting it is twisting and turning, and you've got yourself quite a shadow detection task.

I'll make things even more intriguing, or shall I say more complex and arduous. We are going to have cameras mounted in the AI self-driving car that are capturing images or video of what is outside the self-driving car. The self-driving car can be standing still, such as at a stop sign or red light, but it is more likely to be in motion during a typical driving journey.

You now have a series of streaming images, generated while the self-driving car is in motion, and meanwhile you are trying to detect shadows, of which the objects casting those shadows are likely moving too. I hope this impresses upon you the underlying hardness of solving this problem.
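One way to cope with a moving camera, sketched below under the simplifying assumption of a roughly planar road scene, is to compensate for ego-motion before looking for change: estimate a homography between consecutive frames from feature matches, warp the older frame, and difference the two, so that regions which still change are candidates for independently moving shadows or objects. This uses standard OpenCV calls and is an illustration, not a production pipeline:

```python
import cv2
import numpy as np

def motion_compensated_diff(prev_gray: np.ndarray,
                            curr_gray: np.ndarray) -> np.ndarray:
    """Warp the previous frame onto the current one via a feature-based
    homography (a rough proxy for ego-motion over a planar road), then
    difference the frames to surface independently moving regions."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2),
                     key=lambda m: m.distance)[:200]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = curr_gray.shape
    warped_prev = cv2.warpPerspective(prev_gray, H, (w, h))
    return cv2.absdiff(curr_gray, warped_prev)
```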

I would be remiss in not also emphasizing the role of light in all of this. The light source that is casting the shadows can also be in motion. The light source can be blocked, temporarily, while the AI is in the midst of examining a series of images. The light source can get brighter or dimmer. All of the effects of the lighting will consequently impact the shadows.

I mentioned earlier that we've all had moments while driving a car on a sunny day when a set of clouds momentarily blocks the sun, altering the shadows being cast. Let's combine that aspect with a scenario: suppose I'm stopped behind a double-parked delivery truck and want to ascertain whether the driver, hidden from my view, is heading back to his truck. Imagine that the moment the driver got to the truck, a cloud floated along, blocking the creation of his shadow.

No Shadow Does Not Mean No Object Is There

Just because there is no shadow does not ergo mean there is no object there. Shadow detection has to take this aspect into account. Likewise, an object that casts a seemingly unmoving shadow is not necessarily rooted in place. The shadow of a street sign is likely to be motionless, which makes sense because the sign is presumably rooted in place. But the truck driver might have gotten to the front of his truck and frozen in place for an instant, which might allow me to detect his shadow, yet the stationary aspect of the shadow cannot be used to assert that the object itself will remain stationary.

Shadows have gotten a lot of intense attention from the entertainment industry for purposes of developing more realistic video games. For those of you who remember the bygone days, you know that there was a period of time whereby animated characters in a video game lacked shadows. It was a somewhat minor omission and you could still enjoy playing the game.

Nonetheless, it was well known within the video gaming industry that game players were subtly aware that there weren't shadows. This made the characters in the game less lifelike. A lot of research on shadows and computer graphics was poured into being able to render them. The early versions were "cheap" in that the shadow was there but you could easily discern that it wasn't like a real shadow. Sometimes the shadow would magically disappear when it shouldn't. Sometimes the shadow stayed put even though the character had moved along, which was kind of funny to see if you happened to notice it.

Another area of intense interest in shadows involves analyzing satellite images. When you are trying to gauge the height of a building, the building might be partially blocked from view by trees or camouflage. Meanwhile, the shadow might be a telltale clue that is not also obscured. The same goes for people who are standing, sitting, or crouching: you can potentially figure out where the people are by looking at their shadows.

I mention this other work about shadows to highlight that the shadow efforts are not solely for doing computational periscopy. There are a lot of good reasons to be thinking about the use of computers for analyzing shadows.

Pretend that you are in a Level 5 AI self-driving car. It is coming up to an intersection. The light is green. The cross-traffic has a red light. The AI assumes that it has right-of-way and proceeds forward under the assumption that the self-driving car can continue unabated into and across the intersection.

There are tall buildings at each of the corners of this intersection. The AI cannot see what's on the other sides of those buildings. This means that there could be cross-traffic approaching the intersection, but the AI could not yet detect that traffic; it could only do so once those cars come into view at their respective red-light stopping areas.

This might be a handy case for potentially detecting the shadow of a speeding car in the cross-traffic that is not going to stop at the red light. It all depends on the lighting and other factors, but it is a possibility. I already gave another possibility with the truck driver, a pedestrian for a moment in time, trying to step out from behind a large obstacle, his double-parked truck.
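As a sketch of how that cross-traffic case might be monitored, suppose the planner designates a patch of road surface just beyond the occluding building and watches whether a dark region is growing across successive frames; sustained growth could warrant extra caution. Everything here (the ROI choice, the thresholds) is a hypothetical illustration:

```python
import numpy as np

def shadow_encroachment_alert(roi_frames,
                              darkness_thresh: int = 80,
                              growth_thresh: float = 0.02) -> bool:
    """Given recent grayscale crops of the road patch beyond an occluding
    corner, flag caution if the dark-pixel fraction keeps growing, which
    may indicate the shadow of an unseen vehicle sweeping into view."""
    fractions = [float(np.mean(f < darkness_thresh)) for f in roi_frames]
    growth = np.diff(fractions)
    # Require sustained growth across the two most recent frame pairs.
    return len(growth) >= 2 and bool(np.all(growth[-2:] > growth_thresh))
```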

One approach to doing a faster or better job of analyzing shadows with an AI system, assuming a shadow can be found, involves the use of Machine Learning (ML) and Deep Learning (DL).

Conventional computational periscopy algorithms tend to use arcane calculus-based equations to try to decipher shadows. Another potential approach involves collecting together tons of images that contain shadows and getting a Deep Learning convolutional neural network to find patterns in those images. Perhaps shadows of a fire hydrant are more readily discerned by pattern matching than by calculating the nature of the shadow and reverse engineering back to the shape of a fire hydrant.

The neural network would need to catch onto the notion that the lighting makes a difference in the shadow cast. It would need to catch onto the aspect that the surface where the shadow is cast makes a difference. And so on. These presumably could become part of the neural network's pattern matching, ultimately enabling it to do a quick job of inspecting a shadow to stipulate what it might be and what it might portend for the AI self-driving car.
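To illustrate the shape of that pattern-matching idea, here is a toy convolutional network that maps a lighting-normalized shadow crop to a handful of object classes. It is a minimal, untrained sketch using PyTorch, not a model anyone has validated for driving:

```python
import torch
import torch.nn as nn

class ShadowNet(nn.Module):
    """Toy CNN mapping a 64x64 grayscale shadow crop to object classes
    (e.g., pedestrian, vehicle, fire hydrant); illustrative only."""
    def __init__(self, num_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, 64, 64) shadow crops, lighting-normalized upstream
        return self.classifier(self.features(x).flatten(1))

# One hypothetical crop in, one row of class scores out.
model = ShadowNet()
scores = model(torch.randn(1, 1, 64, 64))
print(scores.shape)  # torch.Size([1, 4])
```

Training such a network would itself demand the lighting- and surface-varied examples described above, which is part of why the data collection is nontrivial.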

Conclusion

We can come up with a slew of ways in which shadow detection and analysis could be meaningful while driving a car.

Some human drivers overtly use shadows to their advantage. Most of the time, shadows are quietly there, and the odds are that a human driver is not especially paying attention to them. There can also be crucial moments, a key moment in time, during which a shadow can provide an added clue about a roadway situation that could spell a life-or-death difference.

Recent efforts to forge ahead with computational periscopy are encouraging and illustrate that we might someday be able to get a shadow detection and analysis capability that can function well in real-time, doing so without hogging the computing power available in a self-driving car, nor requiring the Hoover Dam to power it.

Still, all in all, we have a bumpy and complicated way yet to go.

This shadow detection “trickery” isn’t a silver bullet for AI self-driving cars.

Does the shadow know? I assert that sometimes the shadow does know. Maybe we can use the shadow to avoid the evils of car accidents that lurk on our roadways and await our every move. Bravo, computational periscopy.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

Copyright 2018 Dr. Lance Eliot

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
