Boeing 737 Showcases Crucial Lessons for AI and Driverless Cars: Nine Key Imperatives

Dr. Lance B. Eliot, AI Insider

Boeing 737 MAX 8 provides crucial lessons for AI self-driving car efforts

The Boeing 737 MAX 8 aircraft has been in the news recently, sadly as a result of fatal crashes.

At this stage of understanding about the crashes, I'd like to consider whether we can tentatively identify aspects of the matter that could be instructive toward the design, development, testing, and fielding of Artificial Intelligence (AI) systems.

I'm going to concentrate on the relevance to a particular type of real-time AI system, namely AI self-driving cars.

Please though do not assume that the insights or lessons mentioned herein are only applicable to AI self-driving cars. I would assert that the points made are equally important for other real-time AI systems, such as robots that are working in a factory or warehouse, and of course other AI autonomous vehicles such as drones and submersibles.

One overarching aspect that I'd like to put clearly onto the table is that this discussion is not about the actual legal underpinnings of the Boeing 737 MAX 8 and the crashes. I am not trying to solve herein the question of what happened in those crashes.

Background About the Boeing 737 MAX 8

As part of the Boeing 737 family, the MAX series is based on the prior 737 designs and was purposely re-engined by Boeing, along with changes to the aerodynamics and the airframe, in order to make key improvements including a lower fuel burn rate.

Per many news reports, there were discussions within Boeing about whether to start anew and craft a brand-new design for the Boeing 737 MAX series or whether to continue and retrofit the prior design. The decision was made to retrofit. Of the changes made, perhaps the most notable was mounting the engines further forward and higher than had been done on prior models. This design change tended to give the plane an upward pitching tendency, more so than prior versions, as a result of the more powerful engines being used (having greater thrust capacity) and their higher, more forward positioning on the aircraft.

To address the possibility of the Boeing 737 MAX entering a potential stall during flight due to this retrofitted approach, particularly in a situation where the flaps are retracted, the plane is at low speed, and the nose is up, the retrofit design added a new system called the MCAS (Maneuvering Characteristics Augmentation System).

The MCAS is essentially software that receives sensor data and, based on the readings, will attempt to trim the nose down in an effort to avoid having the plane get into a dangerous nose-up stall during flight. It is considered a stall prevention system.

The primary sensor used by the MCAS is an AOA (Angle of Attack) sensor, a hardware device mounted on the plane that transmits data to onboard systems, including a feed to the MCAS. In many respects, the AOA is a relatively simple kind of sensor, and variants of AOA's in terms of brands, models, and designs exist on most modern-day airplanes.

Algorithms used in the MCAS were intended to ascertain whether the plane might be in a dangerous condition, based on the AOA data being reported in conjunction with the airspeed and altitude. If the MCAS software calculated what it considered a dangerous condition, the MCAS would activate to fly the plane so that the nose would be brought downward, trying to obviate the dangerous nose-up, potential-stall condition.
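
To make the described logic concrete, here is a deliberately toy Python sketch of what a stall-prevention activation check of this general kind might look like. The thresholds, field names, and function are hypothetical illustrations for this article only, not Boeing's actual MCAS implementation.

```python
# Illustrative sketch only: a toy stall-prevention activation check, loosely
# modeled on public descriptions of MCAS. All thresholds and names are hypothetical.

from dataclasses import dataclass

@dataclass
class FlightState:
    angle_of_attack_deg: float   # reading from the AOA sensor
    airspeed_knots: float
    altitude_ft: float
    flaps_retracted: bool

def should_command_nose_down(state: FlightState,
                             aoa_limit_deg: float = 14.0,
                             low_speed_knots: float = 230.0) -> bool:
    """Return True if the automation would command nose-down trim.

    The real system weighs many more factors; this only captures the idea of
    'high AOA at low speed with flaps retracted implies stall risk'.
    """
    return (state.flaps_retracted
            and state.angle_of_attack_deg > aoa_limit_deg
            and state.airspeed_knots < low_speed_knots)

# Example: a nose-high, low-speed condition triggers the automated response.
print(should_command_nose_down(FlightState(16.0, 210.0, 5000.0, True)))  # True
```

Notice that the check is only as trustworthy as the angle-of-attack value it is handed, which is why the sensor questions discussed below matter so much.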

The MCAS was devised such that it would automatically activate to fly the plane based on the AOA readings and based on its own calculations about a potentially dangerous condition. This activation occurs without notifying the human pilot and is considered an automatic engagement.

Note that the human pilot does not overtly act to engage the MCAS per se; instead, the MCAS is essentially always on and detecting whether it should engage or not (unless the human pilot opts to entirely turn it off).

During an MCAS engagement, if a human pilot tries to trim the plane and uses a switch on the yoke to do so, the MCAS becomes temporarily disengaged. In a sense, the human pilot and the MCAS automated system are co-sharing the flight controls. This is an important point since the MCAS is still considered active and ready to re-engage on its own.

A human pilot can entirely disengage the MCAS and turn it off, if the human pilot believes that turning off the MCAS activation is warranted.

In the case of the Lion Air crash, one theory is that shortly after takeoff the MCAS might have attempted to push down the nose while the human pilots were simultaneously trying to pull the nose up, perhaps being unaware that the MCAS was pushing the nose down. This could account for the roller-coaster up-and-down motion the plane seemed to experience.

Speculation based on that theory is that the pilots did not realize they were in a sense fighting with the MCAS for control of the plane; had they realized what was actually happening, it would have been relatively easy to turn off the MCAS and take over control of the plane, no longer being in a co-sharing mode. There have been documented cases of other pilots turning off the MCAS when they believed it was fighting against their efforts to control the Boeing 737 MAX 8.

One aspect that according to news reports is somewhat murky involves the AOA sensors in the Lion Air incident. Some suggest that there was only one AOA sensor on the airplane and that it fed faulty data to the MCAS, leading the MCAS to push the nose down even though a nose-down effort apparently was not warranted. Other reports say that there were two AOA sensors, one on the Captain's side of the plane and one on the other side, that the AOA on the Captain's side generated faulty readings while the one on the other side generated proper readings, and that the MCAS apparently ignored the properly functioning AOA and instead accepted the faulty readings coming from the Captain's side.

There are documented cases of AOA sensors at times becoming faulty. Environmental conditions can also impact the AOA sensor; a build-up of water or ice on the sensor can affect its readings. Keep in mind that there are a variety of AOA sensors in terms of brands and models, thus not all AOA sensors will necessarily have the same capabilities and limitations.

There are a slew of other aspects about the Boeing 737 MAX 8 and the incidents, and if interested you can readily find such information online.

Shifting Hats to AI Self-Driving Cars Topic

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As such, we are quite interested in whatever lessons can be learned from other advanced automation development efforts.

Let’s consider some potential insights that can be gleaned from what the news has been reporting about the Boeing 737 MAX 8 incidents.

Here’s a list of the points I’m going to cover:

  • Retrofit versus start anew
  • Single sensor versus multiple sensors reliance
  • Sensor fusion calculations
  • Human Machine Interface (HMI) designs
  • Education/training of human operators
  • Cognitive dissonance and Theory of Mind
  • Testing of complex systems
  • Firms and their development teams
  • Safety considerations for advanced systems


Key Point #1: Retrofit versus start anew

Recall that the Boeing 737 MAX 8 is a retrofit of prior Boeing 737 designs. Some have suggested that the "problem" being solved by the MCAS is a problem that should never have existed at all: rather than creating an issue by adding the more powerful engines and mounting them further forward and higher, perhaps the plane ought to have been redesigned entirely anew. Those making this suggestion assume that the stall prevention capability of the MCAS would then not have been needed, would therefore not have been built into the planes, and would never have led to a human pilot essentially co-sharing and battling with it to fly the plane.

For purposes herein, think about AI systems and the question of whether to retrofit an existing AI system or start anew.

There are some AI self-driving car efforts that have built upon prior designs and are continually “retrofitting” a prior design, doing so by extending, enhancing, and otherwise leveraging the prior foundation. One consideration is whether the prior design might have issues that you are not aware of and are perhaps carrying those into the retrofitted version.

Another consideration is whether the effort to retrofit requires changes that introduce new problems that were not previously in the prior design. I routinely forewarn AI self-driving car auto makers and tech firms to be cautious as they continue to build upon prior designs. It is not necessarily pain free.

Key Point #2: Single sensor versus multiple sensors reliance

For the Boeing 737 MAX 8, I've mentioned that the AOA (Angle of Attack) sensors play a crucial role in the MCAS. It's not entirely clear whether just one AOA sensor or two were involved in the matter, but in any case, it seems that the AOA is the only type of sensor involved for that particular purpose, though presumably other sensors, such as those registering the altitude and speed of the plane, also feed data into the MCAS.

Let’s though assume for the moment that the AOA is the only sensor for what it does on the plane, namely ascertaining the angle of attack of the plane.

The reason I bring up this aspect is that if you have an advanced system that is dependent upon only one kind of sensor to provide a crucial indication of the physical aspects of the system, you might be painting yourself into an uncomfortable corner. In the case of AI self-driving cars, suppose that we used only cameras for detecting the surroundings of the self-driving car. It means that the rest of the AI self-driving car system is solely dependent upon whether the cameras are working properly and whether the vision processing system is working correctly.

If we add to the AI self-driving car another capability, such as radar sensors, we now have a means to double-check the cameras. I’d like to then use the AOA matter as a wake-up call about the kinds of sensors that the auto makers and tech firms are putting onto their AI self-driving cars.

This does bring up another handy point, specifically how to cope with a sensor that is being faulty. The AI system cannot assume that a sensor is always going to be working properly. The tricky part is when a sensor becomes faulty but has not entirely failed.

Suppose a camera is having problems and it is occasionally ghosting images, meaning that an image sent to the AI system shows cars or pedestrians that aren't really there. This could be disastrous.

The sensor and the AI system must have a means to try and ascertain whether the sensor is faulting or not.

AI self-driving car makers need to be thoughtfully and carefully considering how their sensors operate and what they can do to detect faulty conditions, along with either trying to correct for the faulty readings or at least inform and alert the rest of the AI system that faultiness is happening.
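
As a rough illustration of that kind of cross-checking, here is a minimal Python sketch that compares camera and radar detections over time and flags the camera as suspect when they keep disagreeing. The function names, agreement metric, and thresholds are assumptions made up for this example, not any production self-driving stack.

```python
# A minimal sketch of cross-checking two independent sensing modalities and
# flagging a likely fault. Names, metric, and thresholds are illustrative only.

from typing import List

def detections_agree(camera_count: int, radar_count: int, tolerance: int = 1) -> bool:
    """Rough agreement check on the number of obstacles detected ahead."""
    return abs(camera_count - radar_count) <= tolerance

def assess_camera_health(recent_agreement: List[bool], min_rate: float = 0.8) -> str:
    """Classify the camera as healthy or suspect based on how often it has
    recently agreed with the radar."""
    if not recent_agreement:
        return "unknown"
    rate = sum(recent_agreement) / len(recent_agreement)
    return "healthy" if rate >= min_rate else "suspect"

# A camera that keeps "ghosting" extra objects will disagree with the radar often
# enough to be flagged, alerting the rest of the AI stack rather than being trusted.
history = [detections_agree(c, r) for c, r in [(3, 3), (5, 2), (4, 4), (6, 1), (2, 2)]]
print(assess_camera_health(history))  # prints "suspect"
```

The point of the sketch is simply that a second, independent sensing modality gives you something to check against; with only one modality, a faulty sensor can go undetected.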

Key Point #3: Sensor fusion calculations

As mentioned earlier, one theory was that the Boeing 737 MAX 8 in the Lion Air incident had two AOA sensors and one of the sensors was faulting, while the other sensor was still good, and yet the MCAS supposedly opted to ignore the good sensor and instead rely upon the faulty one.

In the case of AI self-driving cars, an important aspect involves undertaking a kind of sensor fusion to figure out a larger overall notion of what is happening with the self-driving car. The sensor fusion subsystem needs to collect together the sensory data or perhaps the sensory interpretations from the myriad of sensors and try to reconcile them.

Would it be possible for an AI self-driving car to opt to rely upon a faulting sensor and simultaneously ignore or downplay a fully functioning sensor? Yes, absolutely, it could happen.

It all depends upon how the sensor fusion was designed and developed to work.

This point highlights the importance of designing the sensor fusion in a manner that best leverages the myriad of sensors, with extensive error checking and correction, and the ability to deal with both good and bad sensors.
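
Here is a minimal Python sketch of one way such a fusion step might discount a grossly disagreeing sensor, using a median-based consensus across redundant readings. The sensor names and the outlier threshold are illustrative assumptions, not an actual fusion algorithm used by any auto maker.

```python
# A minimal sensor-fusion sketch: combine redundant readings while discounting
# a sensor that disagrees sharply with its peers. Sensor names and the outlier
# threshold are illustrative assumptions, not a production fusion algorithm.

from statistics import median

def fuse_readings(readings, outlier_threshold=5.0):
    """Fuse redundant readings (e.g., two angle sensors plus an estimate derived
    from other instruments) by dropping gross outliers from the median consensus."""
    center = median(readings.values())
    trusted = {name: value for name, value in readings.items()
               if abs(value - center) <= outlier_threshold}
    for name in readings:
        if name not in trusted:
            print(f"warning: {name} deviates from the consensus and was excluded")
    return sum(trusted.values()) / len(trusted)

# One faulty sensor reporting a wildly high angle gets excluded from the fusion.
print(fuse_readings({"aoa_left": 4.2, "aoa_right": 21.0, "inertial_estimate": 4.5}))
```

With three independent sources, the faulty reading stands out against the consensus; with only two, deciding which one to trust is much harder, which is part of why redundancy matters.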

Key Point #4: Human Machine Interface (HMI) designs

According to the news reports, the MCAS is automatically always activated and trying to figure out whether it should engage into the act of co-sharing the flight controls.

It seems that some pilots of the aircraft might not realize this is the case. Perhaps some are unaware of the MCAS, or maybe some are aware of the MCAS but believe that it will only engage at their explicit directive.

Besides this always-on aspect, perhaps there are some human pilots that don't know how to turn off the feature, or they might have once known and have since forgotten how to do so. Or, maybe while in the midst of a crisis, they aren't considering whether the MCAS could be erroneously fighting them, and therefore it doesn't occur to them to disengage it entirely. There is a potentially large mental search space that the human pilot has to analyze.

What makes this seemingly even more subtle in the case of the MCAS is that it apparently will temporarily disengage when the pilot uses the yoke switch, but the MCAS will then re-engage when it calculates that there is a need to do so. A human pilot might at first believe that they've entirely disengaged the MCAS, when all that's happened is that it has temporarily disengaged. When the MCAS re-engages, the human pilot could be baffled as to why the controls are once again having troubles.

You’ve got a confluence of factors that can begin to overwhelm the human pilot.

Consider the Level 3 semi-autonomous cars now emerging, in which the AI and a human driver co-share the driving task. Will the human driver understand what the Level 3 capabilities are? Will the human driver know that the AI is trying to drive the car? Will the AI realize when the human opts to drive the car? Will the AI realize that a human driver is actually ready and able to drive the car?

Much of this centers on the Human Machine Interface (HMI) aspects. When you are co-sharing the driving, both parties have to be properly and timely informed about what the other party is doing, wants to do, or wants the other party to do. For a car, this might be done via indicators that light up on the dashboard, or maybe the AI system speaks to the driver, but this is not necessarily a robust solution due to the inherent difficulties and time-consuming aspects of such communication between human and machine.
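
One way to think about this HMI challenge is as an explicit engagement state machine that always tells the human which state the automation is in, and that treats "temporarily yielded" as distinct from "off." The Python sketch below is a hypothetical illustration of that idea; the states, transitions, and messages are my own assumptions, not any actual aircraft or vehicle interface.

```python
# A small sketch of an engagement state machine for co-shared control,
# emphasizing that "temporarily yielded" is not "off" and that every
# transition should be announced to the human. Illustrative assumptions only.

from enum import Enum, auto

class AutomationState(Enum):
    ENGAGED = auto()               # automation is actively controlling
    TEMPORARILY_YIELDED = auto()   # human input paused it; it may re-engage
    OFF = auto()                   # human explicitly disabled it

class CoShareController:
    def __init__(self, notify):
        self.state = AutomationState.ENGAGED
        self.notify = notify  # callback that alerts the human operator

    def human_input_detected(self):
        if self.state is AutomationState.ENGAGED:
            self.state = AutomationState.TEMPORARILY_YIELDED
            self.notify("Automation paused; it will resume unless you turn it off.")

    def condition_requires_action(self):
        if self.state is not AutomationState.OFF:
            self.state = AutomationState.ENGAGED
            self.notify("Automation re-engaged and is adjusting the controls.")

    def human_disables(self):
        self.state = AutomationState.OFF
        self.notify("Automation fully disengaged; you have sole control.")

ctrl = CoShareController(notify=print)
ctrl.human_input_detected()        # pauses the automation, but does not turn it off
ctrl.condition_requires_action()   # re-engaging silently would be the HMI failure
```

The design choice being illustrated is that the re-engagement step must be communicated; an automation that quietly resumes control is exactly the kind of baffling behavior described above.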

Key Point #5: Education/training of human operators

One question that is being asked about the Boeing 737 MAX 8 situation involves how much education or training should be provided to the human pilots, in particular related to the MCAS, and overall how the human pilots were or are to be made aware of the MCAS facets.

Commercial airline pilots are governed by all kinds of rules about education, training, number of hours flying, certification, re-certification, and the like. For today’s everyday licensed driver of a car, I think we can all agree that they get a somewhat minimal amount of education and training about driving a car.

Part of the reason that we have been able to keep the amount of education and training relatively low for driving a car is the relative simplicity of driving a conventional car.

I know many drivers, though, that have no idea how to engage their cruise control. They've never used it on their car. They don't care to use it. I know many drivers that aren't exactly sure how their Anti-lock Braking System (ABS) works, but most of the time it won't matter that they don't know, since it usually works automatically for them.

As the Level 3 self-driving cars begin to appear in the marketplace, one rather looming question will be to what extent should human drivers be educated or trained about what the Level 3 does.

Things are going to get dicey with the Level 3 systems and the human drivers. They are co-sharing the driving task. Should the human driver of a Level 3 car be required to take a certain amount of education or training on how to operate that Level 3 car?

Key Point #6: Cognitive dissonance and Theory of Mind

A human operator of a device or system needs to have in their mind a mental model of what the device or system can and cannot do. If the human operator does not mentally know what the other party can or cannot do, it will make for a rather poor effort of collaboration.

Having a mental picture of the other person’s capabilities is often referred to as the Theory of Mind. What is your understanding of the other person’s way of thinking?

If there is a mental gap between the understanding of the human operator and the device or system they are operating, it creates a situation of cognitive dissonance. The human operator is likely to fail to take the appropriate actions since they misunderstand what the automation is doing or has done.

Human drivers in even conventional cars can have the same lack of a Theory of Mind about the car and its operations. In the case of having ABS brakes, you are not supposed to pump those brakes when trying to come to a stop; pumping them actually tends to work against stopping the car quickly. Some human drivers are used to cars that don't have ABS, and in those cars you might indeed pump the brakes, but not with ABS.

The same kind of cognitive dissonance will be more pronounced with Level 3 cars.

Key Point #7: Testing of complex systems

There is an ongoing discussion in the media about how the MCAS was tested.

Let’s suppose an advanced automation system is tested to make sure that it seems to work as devised. Maybe you do simulations of it. Maybe you do tests in a wind tunnel in the case of avionics systems, or for an AI self-driving car you take it to a proving ground or closed track.

If the tests are solely about whether the system does what was expected, it might pass with flying colors. Did the tests though include what will happen when something goes awry?

Suppose a sensor becomes faulty; what happens then? I've actually had engineers tell me there was nothing in the specification about a sensor becoming faulty, so they didn't develop anything to handle that aspect; therefore, it made no sense to test for a faulty sensor, since they could already tell you the system wasn't designed or programmed to deal with it.
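
As a small illustration of fault-injection testing, here is a Python sketch of a unit test that feeds an obviously broken reading into a fusion routine and asserts that the system handles it. The fusion function is the same illustrative sketch from Key Point #3, repeated so the test stands alone; all of it is hypothetical rather than any real test suite.

```python
# Illustrative fault-injection tests: deliberately feed bad or missing sensor
# data and check the behavior, rather than only testing the happy path.

import unittest
from statistics import median

def fuse_readings(readings, outlier_threshold=5.0):
    """Same illustrative fusion sketch as in Key Point #3, repeated here so the
    test is self-contained."""
    center = median(readings.values())
    trusted = [v for v in readings.values() if abs(v - center) <= outlier_threshold]
    return sum(trusted) / len(trusted)

class FaultySensorTests(unittest.TestCase):
    def test_stuck_sensor_is_excluded_from_fusion(self):
        # A sensor stuck at an implausible value must not drag the fused result.
        fused = fuse_readings({"aoa_left": 3.9, "aoa_right": 45.0, "inertial_estimate": 4.1})
        self.assertLess(abs(fused - 4.0), 0.5)

    def test_total_sensor_dropout_is_reported(self):
        # The "nothing in the spec" attitude shows up here: the test forces the
        # team to decide what should happen when every reading is missing.
        with self.assertRaises(Exception):
            fuse_readings({})

if __name__ == "__main__":
    unittest.main()
```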

Another kind of test involves the HMI aspects and the human operator.

If the advanced automation is supposed to work hand-in-hand with a human operator, you ought to have tests to see if that really is working out as anticipated. One gaffe that I've often seen involves training the human operator and then immediately doing a test of the system with that operator. That's handy, but what about a week later when the human operator has forgotten some of the training?

Key Point #8: Firms and development teams

Usually, advanced automation systems are designed, developed, tested, and fielded as part of large teams and within overall organizations that shape how these work efforts will be undertaken.

Crucial decisions about the nature of the design are not usually made by one person alone. It is a group effort. There can be compromises along the way. There can be miscommunication about what the design is or will do.

My point is that it can be easy to fall into the mental trap of focusing only on the technology itself, whether it is a plane or a self-driving car. You need to also consider the wider context of how the artifact came to be. Was the effort a well-informed and thoughtful approach, or did the approach itself lend toward incorporating problems or issues into the resultant outcome?

Key Point #9: Safety considerations for advanced systems

The safety record of today's airplanes is really quite remarkable when you think about it. This has not happened by chance. There is a tremendous emphasis on flight safety. It gets baked into every step of the design, development, testing, and fielding of an airplane, along with its daily operation. In spite of that top-of-mind emphasis on safety, things can still at times go awry.

In the case of AI self-driving cars, I’d suggest that things are not as safety conscious as yet and we need to push further along on becoming more safety aware. There are numerous steps to be baked into AI self-driving cars that will increase their safety, without which, I’ve prophesied we’ll see things go south and the AI self-driving car dream might be delayed or dashed.

Conclusion

I've touched upon some of the issues that seem to be arising from the Boeing 737 MAX 8 incidents that have been in the news recently. My goal was not to figure out the deadly incidents.

Given how immature the field of AI self-driving cars is today in comparison to the maturity of the aircraft industry, there's a lot to be learned and reapplied.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

Copyright 2019 Dr. Lance Eliot

