Augmented Reality (AR) Making Inroads for AI Driverless Car Reality
Dr. Lance B. Eliot, AI Insider
When I was initially showing my teenagers how to drive a car, we would go over to the local mall after-hours and use the nearly empty parking lot as an area to do some test driving. Round and round we would go, circling throughout the vast parking lot. Having a novice teenage driver at the wheel can be rather chilling due to their newness at steering and guiding a multi-ton vehicle that can readily crash into things. Fortunately, the parking lot had very few obstacles and so the need to be especially accurate in where the car went was not as crucial in comparison to being on a conventional street (or, imagine being in the mall during open hours, which we later tried too, and it was a near heart attack kind of moment).
Once they got comfortable with driving any which way in the empty mall parking lot, I would up the game by asking them to pretend that there were cars in the parking stalls. Each stall was marked by white painted lines on the asphalt, so it was relatively easy to imagine where the parked cars would be. While driving up and down the rows of pretend parked cars, if they veered over a painted white line, I’d then tell them that they just hit a car. At first, I was repeatedly having to say this. Hey, you just hit a Volvo. Ouch, you just rammed into a Mercedes. And so on.
Eventually, they were able to navigate the mall parking lot rather cleanly. No more ramming of pretend cars. I then had them practice parking in a parking stall or slot. I’d insist that they pretend that there was a car to the left and a car to the right of the parking spot and thus they would need to enter into the spot without scratching against those adjacent cars. We did this for a while and gradually they were able to pull into a parking stall and back-out of it without touching any of the pretend cars.
Having perfected driving throughout the essentially empty mall parking lot, and being able to park at will, I then asked them to pretend that there were other obstacles to be dealt with. We were near a Macy’s department store and I explained that they were to pretend that shoppers were flowing out of the store into the parking lot to get to their cars, and likewise people were parking their cars so they could go into the store. I would point with my fingers and tell them that there was a person here and there, and over there, and one that is walking next to the car. Etc.
This was a much harder kind of pretend. I would tell them they just hit a pedestrian that was trying to get quickly to their car, but we’d have a debate about where the “person” really was. I was accused of magically making people appear in places as though they just instantaneously were beamed to earth, rather than having had a chance to spot a person walking slowly through the parking lot as would happen in real-life. This attempt to create a more populated and complex virtual world was becoming difficult for both me and the learning drivers, so I gave up trying to use that method for their test driving.
Let’s now shift in time and cover a seemingly different topic, but I think you’ll catch-on as to why I am doing so.
Unless you were living in a cave in July 2016, you likely knew then or know now about the release of Pokémon Go. The Pokémon game had long been popular and especially my own kids relished the Pokémon merchandise and shows. Pokémon Go is an app for your smartphone that makes use of Augmented Reality (AR) to layer Pokémon characters onto the real-world. You hold-up your smartphone and turn-on the camera, and lo-and-behold you are suddenly able to see your favorite Pokémon strutting in front of you, or standing over next to a building, or climbing up a pole.
You are supposed to try and find and capture the various virtual characters. This prompted many people to wander around their neighborhoods searching the real-world and the virtual-world to locate prized Pokémon. Some suggested that this was a boon for getting especially younger people off-their-duffs and getting outdoors for some exercise. Rather than sitting in a room and continually playing an online game, they now had to walk around and be immersed in the outdoors. It also was a potential social energizer due to getting multiples of people to gather together to jointly search for the virtual characters.
Unfortunately, it also has had some downsides. Some players of the game got themselves into rather questionable situations. If they saw a character perched on the edge of a cliff, they might be so hot in pursuit of the character that they themselves fell off the cliff. There were stories of players wandering into bad neighborhoods and getting mugged, and supposedly in some cases muggers lay in wait, knowing that people would come to them in pursuit of the virtual characters.
There were reports too that some people became so transfixed looking at their smartphones to spot the virtual characters that they would accidentally walk into obstacles. You might be so riveted on chasing a Pokémon that you failed to see the fire hydrant ahead of you, and thus you tripped over it.
Or, some pursued a Pokémon out into the street and got nearly run over by a car. What makes this particularly vexing is that the car driver does not know why you are suddenly running into the street in front of their car. It would be one thing if you had a dog and it got loose, and you opted to chase after the dog into the street. The car driver could likely see the dog and have predicted that someone might try to run after it. In the case of the virtual world on your smartphone, the car driver has no idea that you are avidly pursuing a Pikachu (a popular Pokémon character) and therefore the driver might be taken aback that someone has blindly stepped into the path of the car.
I’ll now tie together my first story about the mall parking lot driving with the story about Pokémon Go.
Heads-Up Display With Augmented Reality
Back when I was helping my teenagers learn to drive, Augmented Reality was still being established and it was relatively crude and a computer cycles hog. The advent of having AR on a smartphone that could update in near real-time was a sign that AR was finally becoming something that could touch the masses and not be only relegated to very expensive goggles.
During my teaching moments about driving a car, I had dreamed that it would be handy to have a Heads-up Display (HUD) on the car that would make use of a virtual-world overlay on the real-world so that I could do more than just pretend in our minds that there were various obstacles in the parking lot. I would have liked to have the entire front windshield of the car act like a portal that would continue to show the real-world, and yet also allow an overlay of a virtual world.
If I could have done so, I would have then had a computer portray people walking throughout the parking lot. It could also have presented cars in the parking stalls. Since the virtual world would involve animation and movement, I would have virtualized “pretend” cars that were backing out of parking spots, some might be trying to pull into parking spots, others might be meandering around the mall searching for a parking spot.
Just think of how rich an experience this could have been. There we would be in a nearly empty parking lot, and yet by using the windshield to also portray the made-up virtual world, my teenager drivers would actually see pedestrians, other cars, perhaps shopping carts, and a myriad of other objects that would be in a real-world real parking lot.
Furthermore, I would presumably be able to adjust the complexity of the virtual portrayals. I might start by having just a few pedestrians and a few cars, and then after my teenage drivers got used to this situation, I could have made the parking lot seem like the crazed shopping day of Black Friday encompassing zillions of people and cars filling the mall parking lot. With just a few keystrokes the surrounding driving environment could be adjusted and allow for a wide variety of scenarios and testing circumstances.
The beauty too of this virtual world overlay would be that if the novice driver happened to hit an AR portrayed car or pedestrian, no one was actually injured or killed. I’m not saying that their hitting any such AR presented artifact would be good, but at least it is better to have it happen in a virtual sense and presumably avert ever doing so in the real-world sense.
I might even have purposely used a no-win scenario wherein they would be forced into hitting something or someone, doing so to get them to a realization of the seriousness of driving a car. It is one thing to generally know that you could hit someone or something but doing it even in a virtual sense would seem to hammer home the dangers involved in driving. By the way, allow me to clarify that my kids have always been serious and thoughtful drivers and I’m quite thankful for that!
The use of Augmented Reality related to cars has increasingly become a thing, as they say. There are indeed prototype and experimental windshields that will now do the kind of virtual world overlay that I’ve been depicting. These tend to be expensive and more so a research effort than something deployed into everyday use. Nonetheless, great strides are being made in this realm.
Why would you use a HUD in your car with AR? If you are worried that this might lend itself to playing Pokémon Go while actively driving a car, let’s hope that’s not what emerges. The idea instead is that the car itself might use its own sensors to help you with comprehending the driving scene ahead of you. If the car is equipped with cameras it might be able to identify in the scene ahead where cars are, where pedestrians are, where the street signs are, and so on. The windshield would then have virtualized circles and outlines that would point out those real-world objects.
I’m sure you’ve been driving and tried to find a street sign so that you would know the name of the street you are on, or maybe to see what the speed limit is. Sometimes those signs can be hard to quickly spot, especially when you are driving the car and mainly trying to watch the street traffic. Via the car sensors, a computer might be able to find the street signs and when you are looking out your windshield it would have say red outlines that surround each of the nearby street signs. This would then give you a quick visual nudge as to where the street signs are.
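The sign-highlighting idea can be sketched in a few lines of code. The snippet below is a minimal illustration and not a production HUD: it assumes a hypothetical sign detector has already produced bounding boxes, and it simply paints red outlines onto a copy of the camera frame (here a blank stand-in image) that the windshield display would then show.

```python
import numpy as np

def overlay_sign_outlines(frame, sign_boxes, color=(255, 0, 0), thickness=3):
    """Draw colored outlines around detected street signs on a camera frame.

    frame: H x W x 3 uint8 RGB image (the real-world view).
    sign_boxes: list of (x, y, w, h) boxes from a hypothetical sign detector.
    Returns a copy of the frame with the AR outlines painted in.
    """
    out = frame.copy()
    for (x, y, w, h) in sign_boxes:
        t = thickness
        out[y:y + t, x:x + w] = color          # top edge
        out[y + h - t:y + h, x:x + w] = color  # bottom edge
        out[y:y + h, x:x + t] = color          # left edge
        out[y:y + h, x + w - t:x + w] = color  # right edge
    return out

# Example: a blank 100x100 "camera frame" with one detected sign box
frame = np.zeros((100, 100, 3), dtype=np.uint8)
hud_frame = overlay_sign_outlines(frame, [(20, 30, 40, 25)])
```

A real system would of course run a detector on live video and composite the outlines optically onto the windshield, but the principle is the same: the real-world view is untouched and the virtual layer is drawn over it.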
Another aspect could be the computer predicting where traffic is going to go next. Suppose you are driving your car and have come up to an intersection. You are waiting to make a left turn. Another car is approaching from the other side of the street. The AR could show a visual arrow on your windshield pointing to where that car is going to go, and you would then be aided by the computer having forewarned you about the upcoming car. It might make a difference in that you could have possibly not realized the car would intercept your intended path, and yet via the windshield HUD it is now portrayed right there in front of your eyes.
One significant criticism of the AR overlay onto a windshield is that it could be as much a distractor as a helper. Maybe when the AR overlay is showing you where the street signs are, it causes your attention to shift toward looking at the street signs and you miss seeing the bicyclist coming up from your left. The use of a HUD can be both a blessing and a curse. In many respects it could boost your driving capabilities and help make you a safer driver. In other ways it could undermine your driving capabilities and cause you to take your eye off the ball. This is an open debate and still being argued about.
There are other emerging AR uses for cars too.
Remember the car owner’s manual that presumably came with your car? It probably sits in your glove compartment and you rarely take it out to look at it. Some auto makers are using AR to make your owner’s manual more engaging and hopefully more useful. You download their app on your smartphone and then open the camera and point it at the owner’s manual. When you turn the pages of the owner’s manual, the AR will overlay additional information and animation.
Suppose that the owner’s manual explains how to adjust the settings on your complicated in-car stereo and radio entertainment system. The manual might have a series of pictures and a narrative explaining how to make adjustments to the entertainment system. This can be confusing though as you look at the manual and look at your actual car, trying to figure out how the flat and unmoving pictures in the owner’s manual are equivalent to what you see in front of you as you are seated in the driver’s seat. Via the AR, the owner’s manual might “come alive” and show animation of adjusting the entertainment system. This could make things easier for you to understand what to do.
Even more immersive is the use of the AR to hold-up your smartphone and aim it at the dashboard where your entertainment system controls reside. The owner’s manual is now overlaid to the real-world dashboard. It can then show you exactly where the controls are and how to adjust them. In that sense, you don’t even need a paper-based owner’s manual per se and can just use the online version that also has the AR capability included too.
Another use of AR by auto makers involves trying to sell you a car.
You are at the car dealership and looking at the car model they have sitting in the dealership showroom. It is red in color and has conventional tires. You download an app and turn-on the AR, and upon holding up your smartphone to point it at the car, you indicate to the app to “change” the color from red to blue. Voila, suddenly the car in the showroom is blue instead of red. You also are considering the super-traction tires in lieu of the conventional tires, and so you instruct the AR to “change” the tires accordingly. You are now looking at your desired car and can feel more comfortable that it will be what you actually want to purchase.
For the marketing of cars, you could download an AR app from an auto maker and hold-up your smartphone to look at the street in your neighborhood, and by doing so you suddenly see their brand of car driving down your street. It is a virtual depiction of their brand of car. You then think of yourself behind the wheel and driving down your street, the envy of your neighbors. Might just entice you to go ahead and buy that car (well, buy it for real, not an imaginary version).
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One emerging means to try and test AI self-driving cars involves the use of Augmented Reality.
Allow me to elaborate.
I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved.
For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task.
Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too.
How AR Can Help in Testing of AI Self-Driving Cars
Returning to the topic of Augmented Reality and AI self-driving cars, let’s consider the matter of testing of AI self-driving cars and see how AR might be of help.
Testing of AI self-driving cars is one of the most worrisome and controversial topics in the AI self-driving car arena.
Here’s the fundamental question for you: How should AI of self-driving cars be tested?
I’ll help you answer the question by providing these various ways that you could do the testing of the AI of a self-driving car:
· Test the AI software in the absence of the actual car, doing what some people refer to as bench testing.
· Test the AI software via simulations that act as though the AI is driving in a real-world setting.
· Test the AI software “in silico” (a Latinized term meaning in silicon, i.e., while actually on-board the self-driving car) on a closed test track purposely established for testing cars.
· Test the AI software while on-board and on public roads in some constrained manner, such as a particular geofenced portion of a town or city.
· Test the AI software while on-board and on public roads in an unconstrained manner, such that the AI self-driving car travels anyplace that a conventional car might travel.
One quick answer to my question about how an AI self-driving car should be tested is that you could say “All of the Above,” since it would seem prudent to try each of the aforementioned approaches. There is no one particular testing approach that is the “best” per se, and each of the approaches has trade-offs.
There are some critics of the latter ways of testing involving putting the AI self-driving car onto public roads. It is viewed by some that until AI self-driving cars are “perfected” they should not be allowed onto public roads at all. This seems sensible in that if you are putting an untested or shall we say partially tested AI self-driving car onto public roads, you are presumably putting people and anything else on the public roads into greater risk if the AI self-driving car goes awry.
Proponents of public road testing argue that we will never have any fully tested AI self-driving cars until they are allowed to be on public roads. This is due to the wide variety of driving situations that can be encountered on public roads, which the other testing methods generally cannot equally match. A test track can only undertake so many differing kinds of tests. It is akin to driving in a mall parking lot, of sorts, though of course much more extensive; in comparison to the public roads approach it is considered quite constrained and limited.
Do you confine AI self-driving cars to being tested solely via only non-public roads testing and wait until this has tested every possible permutation and combination (which many would argue is not especially feasible), or do you let AI self-driving cars onto the public roads to try and make ready progress toward the advent of AI self-driving cars? This would be a kind of risk-reward proposition. Some say that if you don’t allow the public roads option, you might either not have AI self-driving cars for many decades to come, or you might not ever be satisfied with AI self-driving cars even via their other testing method and thus doom AI self-driving cars to never seeing the light of day, as it were.
To try and reduce the risks associated with putting AI self-driving cars onto public roads for testing, the auto makers and tech firms have opted to usually include a human back-up driver in the AI self-driving car. In theory, this implies that the risks of the AI self-driving car going awry are minimized due to the notion that the back-up driver will take over the controls when needed. I’ve mentioned many times that the human back-up driver should not be construed as a silver bullet solution to this matter and that human back-up drivers are merely an added layer of protection but not a foolproof instrument.
Thinking about the testing of AI self-driving cars in terms of the number of miles driven is one means to grapple with the magnitude of the testing problem.
Various studies have tried to identify how many miles an AI self-driving car would need to drive to be able to presumably encounter a relatively complete range and diversity of driving conditions, and also do so without presumably getting involved in any incidents so as to suggest that it has now become sufficiently capable to be considered “safe” (depending on your definition of the word “safe”).
This also raises other questions, such as what constitutes an incident. If an AI self-driving car bumps against another car but there is no material damage, and no one was hurt, does that constitute an incident or do we let it slide? Should we only consider incidents to be those that involve human injury? What about fatalities, and how should those be weighed versus incidents involving injuries that are non-fatal?
There is also the matter of whether or not the AI self-driving car is “learning” during the time that it is driving. If so, you then have a somewhat moving goalpost in terms of the number of driving miles needed. Suppose the AI self-driving car goes X number of miles without any incident, it then has a serious incident, but it presumably “learns” or is somehow adjusted so that it will not once again get involved in such an incident. Do you now restart the clock, so to speak, and scrap the prior miles of driving as now water under the bridge, and say that the AI self-driving car has to now go Y number of miles to prove itself to be somehow error free?
I’d like to also clarify that this prevalent notion in the media of “zero fatalities” once we truly have AI self-driving cars is a rather questionable suggestion. If a pedestrian suddenly steps into the street and in front of an oncoming car, whether driven by a human or by AI, and if there is insufficient stopping distance, there is nothing magical about the AI that will make the car suddenly disappear or leap over the pedestrian. We are going to have fatalities even with the Utopian world of only AI self-driving cars.
In any case, on the driving miles question, some studies suggest we might need to have AI self-driving cars that have driven billions of miles, perhaps 5 to 10 billion as a placeholder, before we might all feel comfortable that sufficient testing has taken place. That suggests we would have some number of AI self-driving cars on public roads for billions of miles. Keep in mind that this road-time is during the “testing” phase of the AI self-driving car, and not once the testing is already completed.
Meanwhile, Google’s Waymo is way out in front of the other AI self-driving car makers, having accumulated, by their own reported numbers, somewhere around 10 million driving miles. For those of you that are statistics minded, you might realize that 10 million is only 1% of 1 billion, which makes evident that if billions of miles are the goal, even the front runner is a far cry from reaching that number.
With the use of simulations, which I had mentioned earlier as a potential testing method, it is obviously relatively easy to do large-scale driving miles since there is not any actual rubber meeting the road. You can crank up the computer cycles and do as many miles of simulations as you can afford on your computer. There are some that are using or intend to use super-computers to ramp-up the complexity and the driving volumes of their simulations.
Waymo’s Experience in Simulation Testing
Waymo has variously reported that they have surpassed around 5 billion miles of simulation testing. They continue to crank away at the use of the simulations while also having their self-driving cars on the roadways.
This illustrates my earlier point that doing testing is likely to involve using some or all of the testing methods that I’ve listed. I would also add that some view the testing methods as being serial and to be done in a set sequence. Thus, you would presumably do all of your simulation testing, finish it, and then move toward putting your self-driving car on the roads. Others point out that this is a less effective method and that you need to undertake the various testing approaches simultaneously. This particularly arises regarding how to best setup the simulation, which I’ll further describe in a moment.
There are some that say that simulated miles are not all equal. By this they mean that it all depends upon how you’ve setup your simulation and whether it is truly representative of a real-world driving environment. Someone could setup a simulation involving driving around and around in a tight circle and then run it for billions of miles of a simulated AI self-driving car trying to drive in that circle. Besides the AI self-driving car maybe getting dizzy, it would give us little faith that the AI self-driving car has been sufficiently tested.
I don’t think any of the serious auto makers or tech firms developing AI self-driving cars are setting up their simulations in this rudimentary circling-only way. But it does bring up the valid point that the simulation does need to be complex enough to likely match to the real-world. This is also why doing more than one testing method at a time can be handy. If your AI self-driving car encounters a situation in the real-world, you can use that as a “lesson learned” and adjust your simulation to include that situation and other such situations that are sparked by the instance.
The biggest and easiest criticism or considered weakness of simulation as a testing method is that it is not the same as having a real self-driving car driving on real roads. A lot of people would be hesitant to have full faith that a simulated run is sufficient all on its own. How do you know that the simulation even accurately modeled the AI self-driving car? The odds are that the simulation does not have the AI running on the same actual hardware as found in the self-driving car; more likely the AI is running as part of the simulation. The simulation is regrettably not the same as the actual AI sitting on-board the self-driving car and “experiencing” the driving environment as it is experienced when an actual car is on actual roads.
We then ought to take a look at the test track approach. It involves the actual AI self-driving car on an actual road. The rub is that the closed tracks are only so many acres in size. They can only offer so many variations of driving situations. Furthermore, if you want the testing to involve real people acting as pedestrians or driving other cars near the AI self-driving car, you need to hire people to do so, and they are potentially put into harm’s way if you are going to try risky maneuvers, such as a pedestrian darting into the street in front of the AI self-driving car or a human-driven car dangerously cutting off the AI self-driving car.
A test track would need to be well-equipped with street lights, intersections, bike lanes, traffic signals, sidewalks, and a slew of other infrastructure and obstacles that we face on public roads. The question then arises as to how many testing situations can you devise? What is the cost to setup and have an actual AI self-driving car undertake the test? You are not going to be seeking to drive millions or billions of miles on the closed track and so instead need to setup specific scenarios that come to mind.
Another factor is the familiarity aspects that an AI self-driving car might “learn” on a closed track. If the AI self-driving car is used repeatedly in the same confined space, it will presumably over time begin to “memorize” aspects of it. This might color the nature of the testing. Will the AI self-driving car when confronted anew with variants of the setup, once released onto public roads, be able to adequately cope with the fresh settings of the public roads in comparison to the repeated settings of the closed track?
It is like a baby duckling that imprints on a dog rather than an adult duck. What will the duckling be able to do when wandering in the larger world of other ducks?
I’ll also mention as an aside that the same question about repeated runs is similarly mentioned about the public roads efforts of testing in constrained ways. If you geofence an AI self-driving car to a set of city blocks and it repeatedly drives only in those city blocks, you are getting hopefully really good proficiency in that geofenced area, but you have to ask whether this is then going to be truly generalized to other locales. It could be that the AI self-driving car only is able to sufficiently drive in the geofenced area, but once allowed to roam further will get confused or not be able to respond as quickly due to being in a fresh area. This is often referred to as prevalence-induced behavior.
We are faced with the conundrum that each of the testing methods has its own respective upsides and downsides. As mentioned, you can still aim to try each of the methods, though you would want to be aware of their respective limitations and act accordingly. Furthermore, you would want to make sure that whatever is learned from one method is fed into the other methods. I want to emphasize I am not just saying that you would adjust or improve the AI self-driving car based on what the testing reveals; you would also want to adjust or improve each testing method based on what is learned from the others.
If a public roads testing in a constrained setting revealed something of interest, besides potentially adjusting or improving the AI for the on-board self-driving car, you would likely also want to adjust the simulation accordingly too. And, if you were doing closed track testing, you might want to hone in on the public roads reveal to then use it in the closed track setting. They each would infuse the other.
Adding Augmented Reality to Closed-Track Testing
What role might Augmented Reality play in this?
Suppose we could add Augmented Reality into the closed track testing. The twist is that we don’t need to do a Heads-up Display (HUD) approach per se since there isn’t a human driver in a Level 5 self-driving car (I’m excluding for the moment a potential back-up human driver). Instead, what we could do is try to convince the AI on-board the self-driving car that there are things in the test track that aren’t really there. We would merge together a virtual world with the real-world of the test track.
The cameras on the AI self-driving car are receiving images and video that depict what the self-driving car can “see” around it. Suppose we intercepted those images and video and added some virtual world aspects into it. We might put an image of a pedestrian standing at the crosswalk and waiting to cross at the test track intersection. This is not an actual human pedestrian. It is a made-up image of a pedestrian. This made-up imaginary pedestrian is overlaid onto the real-world scene that the AI self-driving car is being fed.
The AI self-driving car is essentially “fooled” into getting images that include a pedestrian, and therefore we can test to see if the AI is able to interpret the images and realize that a pedestrian is standing there. There is no risk to an actual human pedestrian because there is none standing there. There is no cost involved in hiring a person to stand there. We dispense with the logistics of having to deal with getting someone to come and pretend to be a pedestrian on the test track.
Keep in mind that we are not doing a simulation of the AI self-driving car at this point — the AI is running on the actual AI self-driving car which is actually there on the actual test track. The only “simulated” aspects at this juncture would be the pedestrian at the corner. They are the simulated aspect which has now been merged into the “perceived” real-world environment.
Here’s how the AI self-driving car would normally do things:
· Camera captures images and video (the AI has not yet seen it)
· It is fed to the AI
· The AI analyzes the captured images and video to see what’s there
· The AI updates the internal model of what is around the self-driving car accordingly
· The AI assesses the internal model to determine what actions to take in driving the car
Here’s the way it might work with the AR included:
· Camera captures images and video (the AI has not yet seen it)
· NEW: The captured images and video are fed into the AR special app
· NEW: The AR special app analyzes the images and video and inserts a pedestrian at the corner
· NEW: The AR special app now feeds the AR-augmented images and video into the AI
· The AI analyzes the captured images and video to see what’s there
· The AI updates the internal model of what is around the self-driving car accordingly
· The AI assesses the internal model to determine what actions to take in driving the car
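The two pipelines above differ only in the interception step. Here is a minimal sketch of that difference; all function names and the list-of-labels stand-in for camera frames are hypothetical illustrations, not a real self-driving car API:

```python
# Minimal sketch of the AR interception step described above.
# All names here are hypothetical illustrations, not a real API.

def ai_analyze(frame):
    """Stand-in for the AI's perception: it simply reports what is in the frame."""
    return sorted(frame)

def ar_inject(frame, virtual_elements):
    """The AR 'interloper': overlays virtual elements onto the raw camera
    frame. The AI downstream cannot tell real from virtual."""
    return frame + virtual_elements

# Raw frame from the real test track (empty road, one traffic cone)
raw_frame = ["road", "traffic_cone"]

# Without AR: the AI sees only what is physically there
print(ai_analyze(raw_frame))

# With AR: a virtual pedestrian is overlaid before the AI ever sees the frame
augmented = ar_inject(raw_frame, ["virtual_pedestrian"])
print(ai_analyze(augmented))
```

The key property is that `ai_analyze` is identical in both calls — the AI itself is unmodified, only its input stream changes.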
The AR becomes an interloper that grabs the images and videos, adds the virtual world elements, and then feeds the result into the AI of the self-driving car. From the perspective of the AI in the self-driving car, there is no indication that the images and videos were not collected raw from the sensors. This allows for proper testing of the AI, since if we had to change the AI to cope with the AR augmentation, we would then have a “different” version of the AI than would normally be in the AI self-driving car that we intend to put onto public roads (which, I might point out, could be another way to do this, though with my caveat as mentioned that it would then differ from what presumably is on the roadways).
What could you include then into the virtual world that you are going to “trick” the AI self-driving car that’s on the closed track to believe exists there on the closed track?
You can have just about anything you might want. There could be virtual people, such as pedestrians and bicyclists. There could be virtual objects, such as a tree that falls in front of the AI self-driving car, even though there isn’t an actual tree and it is just a made-up one. There could be virtual infrastructure, such as added traffic signals that aren’t on the closed track and are only imaginary.
There could be other cars nearby the AI self-driving car, though they might be virtual cars. The AI doesn’t realize these cars aren’t there and assumes they are real cars. There could be trucks, buses, trains, and so on. You might even have animals such as a dog chasing a cat onto the street.
This is harder to pull off than it might seem at first glance. If you only had static virtual elements that stood in place, it would be somewhat easier to do. We would likely, though, want the virtual cars to be driving next to the actual AI self-driving car and moving at the same speed as the AI self-driving car. Or maybe driving behind the self-driving car, then pulling alongside, then passing it, and perhaps getting in front of the AI self-driving car and slamming on the brakes.
Can you imagine if we had a human driver do the same thing on the test track? We’d need a stunt driver that would be ready in case the AI self-driving car was unable to brake in time and rammed into the stunt driver’s car. Also, how many times could you get the stunt driver to do this same test? Each time would require a restart of the test and you’d be putting that same stunt driver into risk after risk.
As I say, it is certainly advantageous to use this AR approach, but it is also quite tricky to do. You need to intercept the images and video and feed them to the AR system, which must figure out what virtual elements are to be included and what movement they should have, and then feed the overlaid images and video into the AI self-driving car.
The AR needs to know the GPS positioning of the AI self-driving car and its movement so that the AR can properly render the faked virtual elements. This is a computationally intensive task to figure out the AR elements and especially if we add lots of virtual elements into the scene. There might be a dozen faked pedestrians, all at different parts of the scene. We might have a dozen faked cars that are driving nearby the AI self-driving car, alongside it, behind it, in front of it, and so on. Keeping track of the virtual world and making sure it moves with the moving of the AI self-driving car is a challenging computational task.
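To keep virtual elements registered to a moving car, the AR must re-express each element's fixed world position in the car's own frame every cycle, using the car's reported pose (position plus heading). Here is a hedged 2D sketch of that coordinate change; the flat-ground simplification and all names are my own illustrative assumptions, not the method of any particular AR system:

```python
# Hedged sketch: keeping a virtual element registered to the moving car.
# 2D simplification; positions in meters, heading in radians (illustrative).
import math

def world_to_car_frame(element_xy, car_xy, car_heading_rad):
    """Translate then rotate a fixed world point into the car's local
    frame (x points forward out of the windshield, y points left)."""
    dx = element_xy[0] - car_xy[0]
    dy = element_xy[1] - car_xy[1]
    cos_h, sin_h = math.cos(car_heading_rad), math.sin(car_heading_rad)
    # Rotate by -heading so the car's forward axis becomes local +x
    local_x = dx * cos_h + dy * sin_h
    local_y = -dx * sin_h + dy * cos_h
    return (local_x, local_y)

# A virtual pedestrian fixed at world (50, 0); car at origin heading east
print(world_to_car_frame((50.0, 0.0), (0.0, 0.0), 0.0))

# After the car advances to (30, 0), the same pedestrian is nearer ahead
print(world_to_car_frame((50.0, 0.0), (30.0, 0.0), 0.0))
```

Repeating this transform for every virtual pedestrian and virtual car, on every frame, is one reason the tracking workload grows quickly as elements are added.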
We would likely also want the responses of the AI that are being used to issue the car control commands to be fed into the AR. This would allow the AR to gauge what the AI is likely perceiving and thus adjust the virtual world appropriately.
All of this electronic communication and computational effort must be done in real-time and match to the real-world that the AI is supposed to be facing. Latency is a huge factor in ensuring this works as desired for testing purposes.
Here’s what I mean. Suppose the AI normally gets the images and video fed to it every millisecond (just a made-up example). The AR is intercepting the images and video before it reaches the AI. Let’s assume the AR is running on a computer off-board of the AI self-driving car and so we need to push the images and video via electronic communication to that off-board location. There’s time involved in that transmission.
The AR then needs to take time to computationally decide where to place the next round of virtual elements. Once it renders those elements, it needs to transmit them back over to the AI self-driving car. We’ve just used up time to electronically communicate back-and-forth with the AI self-driving car. We also used up time to figure out and render the virtual world elements into the images and video.
Suppose it took an extra millisecond or two to do so. The AI self-driving car is now getting sensor data delayed by that extra millisecond or more. It could be that the AI self-driving car, moving along at say 90 feet per second, might now have less time and less chance to do something that it otherwise could have done in a real-world setting absent the AR. We might have inadvertently pinched the AI by adding the AR into the sequence of actions, and now the AI is no longer able to react as it could if the AR were not there at all.
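The arithmetic behind this latency concern is simple, using the article's figure of 90 feet per second (roughly 61 mph); the delay values are made-up examples, as in the text:

```python
# Rough arithmetic behind the latency concern: distance the car covers
# while its frames sit in the AR round trip. The 90 ft/s speed comes from
# the text; the millisecond delays are made-up examples.

def distance_traveled_ft(speed_ft_per_s, delay_ms):
    """Feet traveled during the added AR delay."""
    return speed_ft_per_s * (delay_ms / 1000.0)

speed = 90.0  # feet per second (~61 mph)
for delay in (1, 2, 10, 50):  # milliseconds of added AR latency
    blind = distance_traveled_ft(speed, delay)
    print(f"{delay} ms of AR delay -> car moved {blind:.2f} ft on stale data")
```

A millisecond or two costs only a few inches, but if the round trip to an off-board AR computer stretches into tens of milliseconds, the car is acting on data that is several feet stale.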
Instead, we’ve got to get the AR virtual world aspects to be seamless and not at all disruptive to the normal operation by the AI of the self-driving car. I’ll add more to the complexity by pointing out that the AR is likely also going to want to be receiving other information from the test track infrastructure. We might for example have other real cars on the test track, perhaps being driven by humans, and so that needs to be taken into account too while the AR does its computations.
We’re talking about a sophisticated looping structure that must be buttoned down to be timely and not interfere with the AI of the self-driving car. If we have several AI self-driving cars being tested at the same time, each of them needs its own rendering of the virtual world elements, specific to where those self-driving cars are and what they are doing.
University of Michigan Mcity Test Track Fusing Real and Virtual Worlds
At the University of Michigan’s Mcity test track, they are making strides toward this kind of AR and real-world testing. In a recent paper entitled “Real World Meets Virtual World: Augmented Reality Makes Driverless Vehicle Testing Faster, Safer, and Cheaper,” researchers Henry Liu and Yiheng Feng describe two fascinating examples that they have undertaken with this approach.
The first example involves the use of a virtual train.
Suppose you wanted to determine whether an AI self-driving car will let a moving train pass by before the AI opts to continue the self-driving car on a path forward. At a test track, you could maybe be lucky enough to have train tracks. You might arrange to rent a train and a train conductor. Maybe you get the train to go back-and-forth on the test track and you run the AI through this drill several times. Let’s also hope that the AI self-driving car does not make a mistake and get smushed into a little ball by a train that rams it because the AI misjudged and put the self-driving car onto the tracks in front of the oncoming train. Ouch!
By using AR, the researchers were able to have a computer-generated freight train that appeared to the AI self-driving car as though it was an actual train. To make matters more interesting, they included three virtual cars that were ahead of the real-world AI self-driving car. This is the handy aspect of the AR approach. You can readily switch the scenario and add and subtract elements, doing so without the usual physical and logistical nightmares involved in doing so.
Their second example involved a classic “running a red light” test of whether the AI self-driving car could sufficiently detect that a wayward car was going to run a red light and take appropriate evasive action. This also provided a less costly and safer means of doing this kind of test. The fatality rate for collisions with a red light runner is relatively high in comparison to other kinds of collisions, and thus being able to test that the AI can handle a red light running situation is prudent.
How many millions of miles of public road testing might need to occur before an AI self-driving car might perchance encounter a red light runner that also happened to threaten the path of the AI self-driving car?
Well, come to think of it, where I live, it happens much too often, but anyway I assume you get my drift.
Using AR for closed track testing can be a significant boon to overcoming the usual concerns that a closed track does not provide a sufficient variety of scenarios and that it can be overly costly and logistically arduous to set up for a multitude of scenarios.
One aspect of the AR testing is whether to include only the visual aspects of the AR, which is what we as humans are accustomed to, or whether to also include the other sensory devices as part of the mix of what the AR is essentially spoofing.
An AI self-driving car typically has a multitude of sensors, including cameras, radar, sonar, ultrasonic, and LIDAR. The sensor fusion portion of the system combines these together to get a more robust indication of what surrounds the AI self-driving car. If one sensor is not functioning well, perhaps obscured by dirt on the camera lenses or maybe it is nighttime, the sensor fusion often has to consider the other sensory inputs with a greater weight.
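The weighting idea in sensor fusion can be illustrated with a simple confidence-weighted average of, say, distance-to-obstacle readings: when one sensor's confidence drops (dirty lens, nighttime camera), the fused estimate leans on the remaining sensors. This is my own minimal sketch, not the fusion method of any particular self-driving system, and all numbers are illustrative:

```python
# Hedged sketch of confidence-weighted sensor fusion. Each reading is a
# (measured_distance, confidence) pair; low-confidence sensors contribute
# less to the fused estimate. Numbers are illustrative only.

def fuse(readings):
    """readings: list of (measured_distance, confidence in [0, 1])."""
    total_weight = sum(conf for _, conf in readings)
    if total_weight == 0:
        raise ValueError("no usable sensor readings")
    return sum(dist * conf for dist, conf in readings) / total_weight

# Daytime: camera, radar, LIDAR all confident and in rough agreement
print(fuse([(30.0, 0.9), (31.0, 0.8), (30.5, 0.9)]))

# Nighttime: the camera reading is wild but its confidence collapses,
# so the fused estimate stays near the radar and LIDAR values
print(fuse([(45.0, 0.1), (31.0, 0.8), (30.5, 0.9)]))
```

The second call shows the point made above: the obscured camera's outlier barely moves the result because its weight is small.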
If the AR does only the visual sensory augmentation, it means that the other sensors aren’t going to be able to play a part in the testing. This is less than ideal since the real-world public roadways will presumably involve all of the sensors and a delicate balance of relying on one or the other, depending upon the situation at hand.
You also need to make sure that the AR virtual elements act and react as they would in the real-world. Pedestrians do wacky things. Bicyclists dare cars all the time. Other car drivers can be wild and swerve into your lane. It is crucial that the virtual elements be set up and programmed to act in a manner akin to the real-world.
There is still plenty of room to mature the AR capabilities for the testing of AI self-driving car in closed track settings. I guess if we want to attract younger engineers to also aid in making progress, perhaps we might need to include Pikachu, Charizard, Mewtwo, Misty, and Mew into the AR overlays for the test track. We certainly don’t want any AI self-driving cars running down a Pokémon. That’s an accident we for sure want to avoid.
For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website
The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.
More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru
To follow Lance Eliot on Twitter: @LanceEliot
Copyright 2018 Dr. Lance Eliot