Clever Use Of Canary Analysis For The Fielding Of Self-Driving Cars

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

Starting around 1911, John Scott Haldane, considered the father of oxygen therapy, proposed that canaries be used as an early detector of poisonous gases for miners.

Miners would often enjoy the sound of the canary whistling and were otherwise comforted knowing that the canary was there to help them stay alive. Around the 1980s and 1990s, canaries were gradually phased out, and automated methods of gas detection were incorporated into mining instead.

Today, we often use the analogy of having a canary in a cage to suggest that it is important to have an early warning whenever we might be in a potentially dangerous situation.

Use of Canary Analysis In The Computer Field

In the computer field, you might already know about the use of so-called canary analysis.

This is a technique of trying to reduce the risks associated with moving something from a test environment into a live production environment.

We’ve all had code that we updated and pushed into production, only to find out that, oops, there were bugs that hadn’t been caught during testing, or that the new code introduced other conflicts or difficulties into the production environment. In theory, testing should have caught those bugs beforehand and also determined whether the new code is compatible with the production environment. But the world is not a perfect place, and so in spite of even very exhaustive testing and preparation, it is still possible to have problems once an update has gone into live use.

The normal approach to canary analysis is to parcel out a small portion of your production users (say, 1% of them) and route them to the changed system, which becomes the canary, while a baseline instance with the same setup as current production is also stood up; meanwhile, the existing production instance remains as is. You then collect and compare various performance metrics between the baseline and the canary. If the canary seems to be OK, you can proceed with the full roll-out into production. It’s akin to classic A/B testing.
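As a rough illustration, here is a minimal Python sketch of that comparison step; the metric names, sample values, and tolerance are purely illustrative and are not tied to any particular canary tool:

```python
# Minimal sketch of a canary-vs-baseline metric comparison.
# Metric names, values, and the tolerance are illustrative only.

def compare_metrics(baseline: dict, canary: dict, tolerance: float = 0.05) -> list:
    """Return the metrics where the canary regressed beyond the tolerance."""
    regressions = []
    for name, base_value in baseline.items():
        canary_value = canary.get(name)
        if canary_value is None:
            regressions.append((name, "missing in canary"))
            continue
        # Treat a relative increase beyond the tolerance as a regression
        # (these are "lower is better" metrics such as latency or error rate).
        if base_value > 0 and (canary_value - base_value) / base_value > tolerance:
            regressions.append((name, f"{base_value} -> {canary_value}"))
    return regressions

baseline = {"error_rate": 0.010, "p95_latency_ms": 180.0}
canary   = {"error_rate": 0.012, "p95_latency_ms": 240.0}

problems = compare_metrics(baseline, canary)
if problems:
    print("Hold the rollout:", problems)
else:
    print("Canary looks healthy; proceed with the full rollout.")
```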

There are various automated canary analysis systems in the computer field.

Perhaps the most notable and popularized is Kayenta, an open-source automated canary analysis system developed by Netflix and Google. The concept is to be able to release software changes at what is considered “high velocity,” meaning you can push changes into production on a relatively continuous basis.

AI Autonomous Cars And Canary Analysis

What does this discussion about canaries have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are leveraging the canary notion, though with a slightly different angle on how it can be applied to technology and AI.

AI self-driving cars are becoming increasingly complex machinery.

There are a slew of processors and a large body of AI software that runs on-board the vehicle.

Every time that you get into a self-driving car, you’ll need to ask yourself one question — do you trust the AI of that self-driving car?

Right now, most polls show that people are dubious about the trustworthiness of AI self-driving cars.

Many of the existing AI systems being developed for self-driving cars tend to focus on catching issues at the time they arise.

That’s important, but by then the self-driving car can already be in a dire or untoward situation. You’d rather catch beforehand whether something is amiss or could become amiss.

For airplanes, it is standard practice to do a preflight check.

Our approach is to undertake what we consider an automated “canary” precheck of the AI self-driving car.

It is an added layer of a system that tries to analyze and exercise the AI self-driving car to ensure, as best as possible, that it is ready for use. Similar to an airplane preflight check, the canary can do a full-length and deep analysis, or it can do a lighter partial analysis; the fuller version takes longer to do. The human owner who wants to use the self-driving car can choose which magnitude of pre-check to undertake.

We call this added feature PFCC (Pre-Flight Canary Check).

This is essentially a self-diagnostic to try and validate and verify that the AI system and the self-driving car are seemingly ready for travel.
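To make the idea concrete, here is a hypothetical Python sketch of how a PFCC runner might select between a full and a partial battery of checks; the check names and pass/fail logic are placeholders of our own, not an actual product API:

```python
# Hypothetical sketch of a PFCC (Pre-Flight Canary Check) runner.
# Check names and pass/fail logic are placeholders for illustration.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class CheckResult:
    name: str
    passed: bool
    detail: str = ""

def run_pfcc(checks: List[Callable[[], CheckResult]], full: bool = True) -> List[CheckResult]:
    """Run either the full battery of checks or a lighter, quicker subset."""
    selected = checks if full else checks[:2]  # partial mode: only the quickest checks
    return [check() for check in selected]

# Placeholder checks standing in for sensor, sensor-fusion, and planning tests.
def check_sensors() -> CheckResult:
    return CheckResult("sensors", passed=True)

def check_sensor_fusion() -> CheckResult:
    return CheckResult("sensor_fusion", passed=True)

def check_action_planning() -> CheckResult:
    return CheckResult("action_planning", passed=True)

results = run_pfcc([check_sensors, check_sensor_fusion, check_action_planning], full=False)
print("pre-check passed:", all(r.passed for r in results))
```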

Applying Canary Analysis To Self-Driving Cars

First, the PFCC tests each of the sensory devices to detect whether they are in working order.
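As a simple, hypothetical illustration of that first step, the sketch below sweeps a registry of sensors and runs each one’s self-test; the sensor names and the self-test hooks are invented for the example and do not reflect any vendor’s actual diagnostic interface:

```python
# Illustrative-only sensor health sweep; the sensor registry and self-test
# hooks are hypothetical stand-ins, not a real diagnostic API.

def sweep_sensors(sensors: dict) -> dict:
    """Run each sensor's self-test and report which ones look unhealthy."""
    report = {}
    for name, self_test in sensors.items():
        try:
            report[name] = "ok" if self_test() else "failed self-test"
        except Exception as exc:          # a crashed self-test counts as a failure
            report[name] = f"error: {exc}"
    return report

sensors = {
    "front_camera": lambda: True,
    "lidar": lambda: True,
    "forward_radar": lambda: False,       # simulated fault for the example
}
print(sweep_sensors(sensors))
```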

Next, the PFCC tries to check the sensor fusion modules.

This involves feeding pre-canned data from the sensors as though the self-driving car were already underway. It is a kind of simulated data set used to see whether the sensor fusion is working properly. Known results are compared to what the sensor fusion currently has to say about the data.
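Here is a hedged sketch of that replay idea, using a toy fusion routine and made-up canned frames and tolerances; a real sensor fusion module would, of course, be far more involved:

```python
# Sketch of replaying pre-canned sensor frames through a fusion routine and
# comparing against known-good outputs. fuse() and the tolerance are made up.

def fuse(camera_dist: float, radar_dist: float) -> float:
    """Toy fusion: average the two distance estimates for the lead obstacle."""
    return (camera_dist + radar_dist) / 2.0

canned_frames = [
    {"camera": 20.0, "radar": 22.0, "expected": 21.0},
    {"camera": 35.0, "radar": 33.0, "expected": 34.0},
]

def check_fusion(frames, tolerance: float = 0.5) -> bool:
    for frame in frames:
        result = fuse(frame["camera"], frame["radar"])
        if abs(result - frame["expected"]) > tolerance:
            return False
    return True

print("sensor-fusion check passed:", check_fusion(canned_frames))
```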

For the virtual world model, the pre-check is similar to the sensor fusion in that a pre-canned virtual world model is momentarily established, and then updates are pumped into the modules.
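A similarly simplified sketch of the world-model portion might look like the following, where the seed model, the canned updates, and the expected result are all invented for illustration:

```python
# Hypothetical world-model consistency check: seed a canned model, pump in a
# few canned updates, and verify the result matches what was expected.

def apply_update(world: dict, update: dict) -> dict:
    world = dict(world)                       # keep the seed model untouched
    world[update["object_id"]] = update["position"]
    return world

seed_world = {"car_ahead": (0.0, 30.0)}
canned_updates = [
    {"object_id": "car_ahead", "position": (0.0, 28.5)},
    {"object_id": "pedestrian_1", "position": (3.0, 12.0)},
]
expected_world = {"car_ahead": (0.0, 28.5), "pedestrian_1": (3.0, 12.0)}

world = seed_world
for update in canned_updates:
    world = apply_update(world, update)

print("world-model check passed:", world == expected_world)
```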

The AI action plans modules are more challenging to test.

They have greater variability in terms of what the expected outputs will be for any given set of inputs. So, the PFCC provides a range of canned paths and goals in order to see whether the AI action plan updates seem to be reasonable. This is a reasonableness form of testing.
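The sketch below illustrates the reasonableness style of check, using a toy planner stub and sanity bounds that we made up for the example; the point is that the output only needs to fall within broad plausibility limits rather than match an exact answer:

```python
# Reasonableness-style check: rather than demanding an exact plan, verify the
# planner's output stays within broad sanity bounds for a canned scenario.
# The planner stub and the bounds are invented for this sketch.

def plan_speed(goal_speed_mps: float, obstacle_distance_m: float) -> float:
    """Toy planner: slow down when an obstacle is close."""
    return min(goal_speed_mps, obstacle_distance_m / 2.0)

def plan_is_reasonable(speed_cmd: float, speed_limit_mps: float) -> bool:
    # The plan need not be optimal, just plausible: non-negative and under the limit.
    return 0.0 <= speed_cmd <= speed_limit_mps

canned_scenarios = [
    {"goal": 25.0, "obstacle": 60.0, "limit": 27.0},
    {"goal": 25.0, "obstacle": 10.0, "limit": 27.0},
]

ok = all(plan_is_reasonable(plan_speed(s["goal"], s["obstacle"]), s["limit"])
         for s in canned_scenarios)
print("action-plan reasonableness check passed:", ok)
```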

In terms of the car controls commands, those are more straightforward to canary check. Based on the AI action plan directives, the car controls commands are relatively predictable. This does, though, require pre-seeding the modules with the status of the car so that the car controls commands fall within the expected allowed limits.
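As an illustrative sketch, the check might look something like this, with the limits, the pre-seeded vehicle status, and the canned command all being assumptions for the example:

```python
# Sketch of checking that control commands stay inside allowed limits for a
# pre-seeded vehicle state. Command values and limits are illustrative only.

LIMITS = {"steering_deg": 35.0, "throttle_pct": 100.0, "brake_pct": 100.0}

def command_within_limits(command: dict, status: dict) -> bool:
    if status["gear"] == "park" and command["throttle_pct"] > 0.0:
        return False  # no throttle while parked
    return (abs(command["steering_deg"]) <= LIMITS["steering_deg"]
            and 0.0 <= command["throttle_pct"] <= LIMITS["throttle_pct"]
            and 0.0 <= command["brake_pct"] <= LIMITS["brake_pct"])

# Pre-seeded status plus a canned directive ("gentle left turn at low speed"),
# for which the expected command is largely predictable.
seeded_status = {"speed_mps": 8.0, "gear": "drive"}
canned_command = {"steering_deg": 12.0, "throttle_pct": 15.0, "brake_pct": 0.0}
print("car-controls check passed:", command_within_limits(canned_command, seeded_status))
```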

When the PFCC has finished, the question arises as to what to do next. If all is well, as best as can be ascertained, this should be conveyed so that the human owner or occupants know that the self-driving car and the AI seem ready to proceed. If all is not well, this raises the question of not only notification but also whether the PFCC should indicate that the AI of the self-driving car is so out-of-whack that the self-driving car should not be permitted to proceed at all.

For some minor aspects, the AI of the self-driving car might already have been developed such that it can handle when minor anomalies exist. Thus, the PFCC can feed to the AI that there are now known issues and let the AI proceed accordingly. If the AI itself has issues, the PFCC might need to override the AI system and prevent it from trying to drive when it is not suitable to do so.
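Putting those outcomes together, a hypothetical decision step after the PFCC finishes might resemble the following; the severity labels and the three outcomes are assumptions made for this sketch:

```python
# Hypothetical decision step after the PFCC finishes: report "all clear",
# pass minor anomalies along to the AI driving system, or block the trip.

def decide_next_step(issues: list) -> str:
    if not issues:
        return "READY: notify occupants that the vehicle appears ready to proceed"
    if all(issue["severity"] == "minor" for issue in issues):
        return "PROCEED WITH CARE: feed known minor issues to the AI driving system"
    return "BLOCKED: override the AI system and prevent the journey"

print(decide_next_step([]))
print(decide_next_step([{"name": "rear_camera_dirty", "severity": "minor"}]))
print(decide_next_step([{"name": "lidar_offline", "severity": "major"}]))
```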

One consideration about a pre-flight canary check involves whether it might produce a false positive.

Suppose the PFCC reports that the LIDAR is not functioning, but it really is able to function properly. What then? The notion is that it is likely safer to err on the side of caution. The human owner or occupant will be notified and might end up taking the self-driving car to the repair shop, only to discover that the PFCC falsely reported an issue. This incident, though, can be reported and, via the OTA (Over-The-Air) updating capability, collected to determine whether a global change to the PFCC or other changes are needed.

The more worrisome aspect would be a false negative. Let’s suppose the canary could not detect any issues with the forward-facing radar, but there really are issues. This is bad. Of course, as stated earlier, the canary cannot guarantee that it will find all anomalies. In any case, during a journey, the AI system is intended to keep a log of anomalies discovered along the way, and this is later used by the PFCC to try to determine whether there were any issues that arose during the journey that might have been detectable earlier on.
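As a final illustrative sketch, a cross-check of the journey anomaly log against the pre-flight results might look like this, where the log format and component names are invented for the example:

```python
# Sketch of cross-checking the journey anomaly log against what the PFCC
# reported before departure, to flag possible false negatives (missed faults).

preflight_flags = {"lidar"}                       # components the PFCC flagged
journey_anomalies = [
    {"component": "forward_radar", "detail": "intermittent dropouts"},
    {"component": "lidar", "detail": "low point density"},
]

# Anomalies on components the PFCC did NOT flag are candidate misses worth
# reviewing, and possibly reporting back via OTA for a PFCC update.
missed = [a for a in journey_anomalies if a["component"] not in preflight_flags]
print("candidate PFCC misses:", missed)
```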

Conclusion

Coal miners loved having their canaries.

For AI self-driving cars, right now the notion of having a “canary” that can do a pre-flight check is considered an “edge” problem.

An edge problem is one that is not at the core of the overall problem. For the core, most automakers and tech firms are focused on getting an AI self-driving car to properly drive on the streets, navigate traffic, and so on. An extensive and devoted effort at doing a pre-flight check is prudent and will ultimately be valued. Right now, most of the AI self-driving cars on our streets are pampered by the automaker or tech firm, but we’ll eventually have AI self-driving cars being used day-to-day by everyday consumers.

Getting a professional-quality pre-flight check is bound to make them as happy as a cheery chirping bird.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, a Stanford Fellow at Stanford University, a former professor at USC, the head of an AI Lab, and a top exec at a major VC.
