How Ugly Zones Put Self-Driving Cars To The Most Ruthless Tests

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Is he a man or a machine?

That was asked about Francesco Molinari when he won the 2018 Open Golf Championship and earned himself nearly $2 million in prize money.

It was his first major golf victory, and the 147th edition of the annual tournament was the first ever won by an Italian (there was a lot of celebrating in Italy!).

How did he achieve the win?

You could say that it was years upon years of other golf competitions and smaller wins that led to this big win.

Would you be willing to say it was due to practice?

Francesco is known for being a slave to practice. He opted to radically change his practice routines, entering what some call the ugly zone.

Introducing The Ugly Zone

The idea is that when practicing any kind of skill, you should do so under a maximum amount of pressure, perhaps even more than you’ll experience during live competition play.

The goal is to make practices as rough and tough as a real match. Maybe even more so.

Two years prior to his incredible win, Francesco shifted his practices into near torture tests.

His new coach embodied the ugly zone philosophy and emphasized that the frustration level had to equal, or possibly exceed, that of a real game.

The more annoyed that Francesco became with his coach, the more the coach knew he was doing something right in terms of making practices hard. Every practice golf shot was considered vital. No more of the traditional approach of mindlessly hitting golf balls for hours on end. Instead, all sorts of complicated shots and series of shots were devised for practices.

Some psychologists suggest that adding challenges to practices tends to boost the long-term impacts of the practices.

It is often referred to as desirable difficulty.

The ugly zone proponents contend that you need to learn how to think and act under pressure.

They say that if you are the type of person that gets butterflies in your stomach during live competitions, you need to hone your skills so that instead of expunging the butterflies, you learn to shape them so they fly in formation. Use the pressure to overcome your fears. Use the pressure as a kind of high-octane fuel. That’s what the ugly zone is supposed to achieve.

AI Autonomous Cars And Ugly Zones

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. We make use of a wide variety of techniques, and one that we advocate is the use of the ugly zone.

Allow me to explain.

Many of the auto makers and tech firms that are making AI self-driving cars are doing testing in these ways:

  • Use of simulations
  • Use of proving grounds
  • Use on public roads

When an AI self-driving car is being “tested” on public roads, this means it is being done in a relatively uncontrolled environment and that presumably just about anything can happen.

On the one hand, this is good because some “unexpected” aspect might arise, and it is then handy to see how well the AI can respond to the matter. On the other hand, you might go hundreds, thousands, or millions of miles in the AI self-driving car and never encounter these rare but plausible occasions at all; in that sense, the AI self-driving car will not readily be tested on such facets.

There’s also the rather obvious but worth stating point that doing “testing” of AI self-driving cars while on public roads is something of a dicey proposition. If the AI is unable to appropriately respond to something that occurs, the public at large could be endangered. Suppose a man on a pogo stick suddenly appears in front of the AI self-driving car and the AI does not know what to do, and perhaps hits and injures the man — that’s not good.

As I’ve mentioned many times, there are some AI developers that have an “egocentric” perspective about AI self-driving cars and seem to think that if someone does something “stupid” like pogoing in front of a self-driving car, they get what they deserve (this attitude will doom the emergence of AI self-driving cars, I assure you).

There is also a sense of false security among many of the auto makers and tech firms that having a human back-up driver during public roadway testing is a sure way of avoiding any adverse incidents. This is quite a myth or misunderstanding; there is still a bona fide chance that even with a human back-up driver, things can go awry for an AI self-driving car.

Another aspect of doing testing on public roadways is that it might be difficult to reproduce the instance of what happened. I mention this because trying to do Machine Learning (ML) from only one example of something is quite difficult. It would be handy to be able to undertake the situation a multitude of times in order to try and arrive at a “best” or at least better way to respond. I’ve stated in my industry speeches that we’re suffering from a kind of irreproducibility in the AI self-driving car realm, which inhibits or stalls potential progress.
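To make that reproducibility point concrete, here’s a minimal sketch in Python, assuming a hypothetical simulator object (the `simulator.run` call, the file layout, and the parameter names are placeholders of my own, not any real vendor API). The idea is to log the seed and parameters of a roadway incident so the exact situation can be replayed in simulation as many times as the ML needs:

```python
import json
import random

def record_scenario(seed, params, path="scenario_0042.json"):
    """Persist the seed and parameters so the exact run can be re-created."""
    with open(path, "w") as f:
        json.dump({"seed": seed, "params": params}, f)

def replay_scenario(path, simulator, trials=100):
    """Re-run the identical logged situation many times, letting the
    ML iterate toward a better response to it."""
    with open(path) as f:
        record = json.load(f)
    outcomes = []
    for _ in range(trials):
        random.seed(record["seed"])                       # same world each trial
        outcomes.append(simulator.run(record["params"]))  # hypothetical API
    return outcomes
```

With a record like that in hand, the ML pipeline can try a multitude of candidate responses against the identical situation, rather than hoping the pogo-stick man obligingly shows up again.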

As perhaps is evident, doing testing on public roadways has some disadvantages.

That’s why it is vital to also do testing via the other means possible, including using simulations and using proving grounds.

For simulations, you can presumably run the AI through zillions of scenarios. There’s almost no limit to what you could try to test. The main constraint would be the computational cycles needed. Some auto makers and tech firms are even using supercomputers for their simulations, similar to how such high-powered computing is being used to gauge the impacts of climate change or other large-scale problems.
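As a rough illustration of how such a sweep scales, a simulation harness might fan seeded scenarios out across worker processes. This is an assumption-laden skeleton, with `run_one_scenario` standing in for whatever the actual simulator provides:

```python
from multiprocessing import Pool

def run_one_scenario(seed):
    """Placeholder: a real harness would configure the simulator
    deterministically from the seed, run it, and report the outcome."""
    return {"seed": seed, "collision": False}

if __name__ == "__main__":
    # Fan a large batch of randomized scenarios out across CPU cores;
    # the practical limit is computational cycles, as noted above.
    with Pool() as pool:
        outcomes = pool.map(run_one_scenario, range(100_000))
    failures = [o for o in outcomes if o["collision"]]
    print(f"{len(failures)} failing scenarios to triage")
```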

Not everyone, though, necessarily believes that the simulations are true to the real world, and thus the question is posed whether the AI’s reaction in a simulated environment is actually the same as its reaction will be on the roadways. If you are simulating climate change and your estimates are a bit off-base, that is likely okay. But if you are dealing with AI self-driving cars, which are multi-ton beasts that can produce instantaneous life-or-death consequences, a simulation that isn’t true to the real world does not give one a full sense of confidence in the results.

In essence, if I told you that I had an AI self-driving car that had successfully passed a simulation of over one hundred million miles of driving, albeit only in a computer-based simulation, and had never been on an actual road, would you be happy to see it now placed into public use, or unhappy, or disturbed, or what?

I think it’s fair to say that you’d be concerned.

There’s also the potential use of proving grounds.

This is usually private land or sometimes government land that is set aside for the purposes of testing AI self-driving cars.

You could say that in some ways it is better than simulations because it has a real-world aspect to it.

You could also say that this is safer than being on the public roadways since it is in an area that avoids potential harm to the general public.

I recently had a chance to closely explore a well-known proving ground, namely the American Center for Mobility (ACM) in Michigan, and spoke with the CEO and President, Michael Noblett, along with getting a specially guided tour of the facility by Angela Flood, Executive Director.

The ACM consists of over 500 acres, offering multiple test environments adjacent to the Willow Run Airport. It includes an extensive driving loop of about 2.5 miles, containing usable high-speed highway roads and two tri-level overpasses. It is an impressive facility, available for commercial and governmental purposes, and usable too by standards bodies and colleges.

For more info about the ACM, see: https://www.acmwillowrun.org/learn-about-the-facility/

Creating Ugly Zones In All Modes

Generally, it seems apparent that you’d want to use a combination of simulations, proving grounds, and public roadways for developing and testing your AI self-driving car.

Each approach has its own merits, and each approach has its own drawbacks.

In combination, you can aim to get more kinds of testing that will hopefully lead to sounder AI self-driving cars.

Let’s now revisit the ugly zone.

For real-world driving of an AI self-driving car, as mentioned earlier, the AI might go for many miles without ever encountering some really difficult driving situations. Any such instances would presumably occur by happenstance, if at all. With a proving ground, you can purposely set up the AI to have to cope with quite ugly situations. The same goes for the use of simulations.

Regrettably, there are some auto makers and tech firms that are not pushing their AI to its limits via proving grounds or simulations. They seem to believe that the focus should be the “normal” conditions of driving.

For example, at a proving ground, the AI self-driving car is driving on a road and all of a sudden a woman pushing a baby stroller starts to walk across the street (this might be a stuntwoman hired for the purpose, and the stroller contains nothing but a fake doll). The AI self-driving car detects the motions and objects involved, i.e., the adult female and the stroller, and deftly swerves to avoid them. AI saves the day! Case closed, the AI is prepared for such a scenario.

This seems convincing as a test.

You might mark this off on your checklist and claim that the AI can detect a person with a baby stroller and take the right kind of action to avoid a calamity.

There are though additional considerations.

How many other cars were on the road with the AI self-driving car?

In this case, none.

Was there a car directly next to the AI self-driving car that would have been potentially in the way of the swerving action?

Not in this case.

Were there other pedestrians also trying to cross the street at the same time as the woman and the stroller?

No, just the woman and the stroller.

Were there any road signs warning about an upcoming hazard or perhaps any orange cones in the road due to roadway repairs being made? No.

And so on.

I think we would all feel a bit more confident in the testing of avoiding the woman with the baby stroller if we believed it was done in a more high-pressure situation.

Imagine if the AI self-driving car had other cars all around it, boxing it in, and meanwhile there were lots of other pedestrians near to or approaching the self-driving car, and the road itself was a mess, and a lot of things were happening all at once. That’s more telling about what the AI can cope with.
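One way a test harness might capture this is to make the “ugliness” an explicit, tunable dimension of each scenario rather than a fixed backdrop. The sketch below uses hypothetical field names of my own devising, purely to illustrate the idea:

```python
from dataclasses import dataclass

@dataclass
class StrollerScenario:
    """The stroller-crossing test, parameterized by how ugly the scene is.
    All fields are illustrative assumptions, not a real test-suite schema."""
    surrounding_cars: int = 0    # vehicles boxing in the self-driving car
    extra_pedestrians: int = 0   # other people crossing at the same time
    road_cones: int = 0          # roadway-repair mess in the lane
    hazard_signs: bool = False   # advance warning present or not

# The stripped-down test: an otherwise barren road, just the woman and stroller.
baseline = StrollerScenario()

# An ugly-zone variant: boxed in, crowded, and a messy roadway, all at once.
ugly = StrollerScenario(surrounding_cars=4, extra_pedestrians=6,
                        road_cones=8, hazard_signs=False)
```

The same detection-and-swerve logic gets exercised in both cases, but only the ugly variant reveals whether the AI can cope once the easy escape routes are taken away.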

Having a simplified, stripped down situation with an otherwise barren road, and just the woman and the stroller, does not seem like much of a test per se.

It’s not anything close to being an ugly zone.

Don’t misunderstand my point. I’m fine with the stripped down test as one such test.

But, if that’s going to be the nature of the testing that’s taking place, it would seem like there’s no provision for the ugly zone.

Recall my earlier point that a practice regimen without any kind of ugly zone has a substantial omission, and we ought to question the validity of that practice overall.

For AI self-driving cars, we should definitely have ugly zone testing (or, if you prefer, we can say “practices” rather than “testing”).

Should you always and only use ugly zones?

Well, as I mentioned previously, I’m an advocate for a measured allocation of practice time, sometimes with ugly zones and sometimes without.

My Goldilocks viewpoint is to have a combination of times with and without the ugly zones. But, however you allocate the time, there must be some amount of ugly zone practice.
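Under that Goldilocks view, a practice curriculum might simply sample from ordinary and ugly-zone scenario pools at a chosen ratio. Here’s a minimal sketch; the 30% default mix is an illustrative assumption on my part, not a prescribed figure:

```python
import random

def build_curriculum(normal_pool, ugly_pool, total=1000, ugly_fraction=0.3):
    """Mix ordinary and ugly-zone scenarios; some of each is mandatory."""
    assert 0 < ugly_fraction < 1, "need both kinds of practice"
    n_ugly = int(total * ugly_fraction)
    mix = (random.choices(ugly_pool, k=n_ugly)
           + random.choices(normal_pool, k=total - n_ugly))
    random.shuffle(mix)   # interleave so the AI can't anticipate the ugly runs
    return mix
```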

Avoiding an ugly zone approach when undertaking practices for AI self-driving cars is a scary, watered-down form of practice and will pretty much “guarantee” the failure of AI self-driving cars in the real world.

Conclusion

We believe in the ugly zone approach for AI self-driving cars.

Let’s create as tough an environment as feasible so that once the AI self-driving car is on the public roadways, it’s a piece of cake.

True stress testing should be done by all feasible means, rather than waiting until the AI self-driving car is in a public place where public harm can occur.

Whether you want to put your own children into an ugly zone for their piano practices or for their art lessons, that’s up to you.

I think we can all agree that we’d have more faith in the potential of AI self-driving cars to be trustworthy on our streets if we knew that they had survived, learned from, and become adept at dealing with ugly zones.

Go, ugly zones, go.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI. He is a Stanford Fellow at Stanford University, was a professor at USC, headed an AI lab, and was a top executive at a major VC firm.
