Lawsuits Coming for AI Driverless Cars, Makers Scramble to Fend Them Off

Dr. Lance B. Eliot, AI Insider

Lawsuits against driverless car makers are going to be a booming business

Cars: can’t live without them, can’t live with them, particularly if there are onerous defects or if the co-sharing of the driving task between humans and the AI is half-baked.

As an expert witness in court cases involving computer systems, and formerly an Arbitrator for the American Arbitration Association on their Computer Disputes panel, I want to take you into the world of computer related lawsuits as they emerge in the AI and self-driving realm.

Get ready for quite a ride.

We’ll start by considering major class action lawsuits in the automotive realm, beginning with some whoppers that were not about computers but instead involved various kinds of automobile equipment and car design related defects. This lays a handy foundation for the newly emerging lawsuits that involve AI in self-driving or driverless cars.

Do you remember the famous case of the Ford Motor Company scandal over the Pinto cars that seemed to ignite on fire when struck at the back of the car where the gas tank was mounted?

That was in the 1970s and eventually involved a class action lawsuit, during which it was revealed that Ford knew about the problem but opted to do nothing since it had calculated that it was cheaper to pay out claims than to fix the problem. Ford was even criminally indicted for reckless homicide (and ultimately acquitted). It was a huge story for several years.

Today, mentioning “Pinto” serves as shorthand for a potentially severe defect and can be applied to any kind of product. Watch out for that washing machine, it’s a Pinto, as some reporters exclaimed last year when a brand of washing machine went off-kilter and its internal spinning parts flew apart during normal use.

If you weren’t around in the 1970s and haven’t ever heard about the Ford Pinto, I offer the case of the Ford Explorer SUV that was prone to rollovers in the year 2000.

At first, critics said that it was the overall design of the SUV that made it defective. Presumably, the vehicle was slightly lopsided by design and so, during particular driving maneuvers, it could easily topple over. Imagine trying to balance an object on its edge when there is too much weight toward one side or the other. The National Highway Traffic Safety Administration investigated and said it was the tires. Firestone made the tires, and all of a sudden the bright light of accusatory defects was on them. It was a mess and class action lawsuits were involved.

One more famous case that’s even more recent involves the Toyota Lexus scandal in 2009.

Some people died when the Lexus would occasionally, seemingly, go out of control. Initially, Toyota claimed that the root cause was the floor mats. Their theory was that the floor mat would creep up toward the pedals and jam the brake and accelerator pedals so that they could not work properly. Doubts were cast on this theory. Ultimately, during the class action lawsuit, Toyota admitted that there might also have been a defect in the accelerator pedal itself.

At times, the “sticky pedal” would remain stuck at a given depressed level and could not readily be budged by the human driver. In 2014, Toyota paid a $1.2 billion fine and admitted that it had misled consumers, concealed the problem, and made deceptive statements about what it knew and what the problem was.

Why this history lesson about cars, defects, and class action lawsuits?

Because we are now entering into the age of self-driving car defects, along with the equally ubiquitous class action lawsuits that go along with them. Indeed, one of the first salvos was a class action lawsuit against Tesla that was filed in 2016 and, after various back-and-forth, was settled in 2018.

Let the battles begin, as they say.

There are some very hungry class action lawyers that would love to get some dough out of the bonanza of self-driving cars by going after the self-driving car makers.

The bigger the car maker or tech firm involved, the juicier the target.

I am guessing that class action lawsuit attorneys have a dartboard set up in their offices and that the name of each self-driving car maker and tech firm is shown at various positions on the board. At the center of the target are the biggest ones.

It’s like the old joke about why the bank robber robs banks, and the answer is because that’s where the money is.

Going after startups that are making self-driving cars is neither very smart nor lucrative. These cagey lawyers eye the big auto makers and tech firms rolling out self-driving cars like a tiger ready to pounce on its prey. Tesla is currently the most often cited target because it has the most self-driving-related vehicles in the hands of consumers (though they aren’t actually self-driving cars, as I explain later herein), and the company is rich enough to make it worthwhile to go for the big bucks.

I would like to add that this is much more than just ambulance chasing.

We definitely have self-driving car makers that are not taking safety seriously enough (as I have emphasized in numerous of my books, articles, and speeches about AI and self-driving cars).

I have repeatedly exhorted the self-driving car makers to put due attention toward safety. The auto makers and tech firms are giving some attention to safety, but not enough, and the proof will be in the pudding, namely once their vaunted driverless cars are on our roadways and things go south. It is unfortunate that it will take a school of hard knocks to get the makers to truly take safety seriously throughout all elements of their driverless car efforts.

In spite of these exhortations, many are still not listening. Many are brazenly pushing ahead with the “fundamentals” of getting the AI to simply drive a car and aren’t as focused on safety issues. A lot of AI software developers are also the type that treats safety as an afterthought. For them, only once a self-driving car demonstrably exhibits safety issues and actually harms or kills someone will the light bulb come on that maybe they should devote substantive attention to safety.

Let’s take a close look in a moment at the famous class action lawsuit filed against Tesla regarding their self-driving car capabilities.

First, some important background for you.

Keep in mind that when I say self-driving car capabilities, there is a set of official levels of self-driving cars as promulgated by the Society of Automotive Engineers (SAE) and that we are still not anywhere near the topmost and true self-driving car of a Level 5.

Right now, so-called (mislabeled) self-driving cars are around Level 2, such as the Tesla and its Autopilot feature. Level 2 and Level 3 are decidedly not self-driving cars, though the public tends to think that they are. Instead, they are cars with merely Advanced Driving Assistance Systems (ADAS).

Level 2 and Level 3 require that a licensed human driver be present, that the human driver remain continuously attentive, and that the driver be able to undertake the driving task at any time, and all of the time as needed. Even though the human driver and the automation are considered to be “co-sharing” the driving task, it is nonetheless asserted that the human driver bears the ultimate responsibility for driving the car.

Level 3 cars are just now coming into the marketplace. There is an ever-present danger with these cars: the automation is more advanced than at Level 2, yet it is still not capable of taking over the driving of the car autonomously. Human drivers will tend to falsely assume that the Level 3 automation can drive the car. These human drivers will become complacent, and when the automation gets stymied and tries to hand back control to the human, the human driver is going to be flustered and unlikely to handle the sudden emergency situation.
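To make the division of responsibility concrete, here’s a minimal illustrative sketch in Python. The level descriptions are my own paraphrase of the SAE taxonomy, not the official J3016 wording, and the helper function is purely an illustration of the point that a human driver is still required below Level 4.

```python
# Illustrative sketch only: a simplified summary of the SAE driving-automation
# levels and who bears the driving responsibility at each level. The wording
# is a paraphrase for illustration, not the official SAE J3016 text.
SAE_LEVELS = {
    0: ("No Automation", "human drives at all times"),
    1: ("Driver Assistance", "human drives; system assists with steering or speed"),
    2: ("Partial Automation (ADAS)", "system steers and controls speed; human must monitor continuously"),
    3: ("Conditional Automation", "system drives in limited conditions; human must take over on request"),
    4: ("High Automation", "system drives within its design domain; no human takeover expected there"),
    5: ("Full Automation", "system drives everywhere a human could; a true self-driving car"),
}

def human_must_be_ready(level: int) -> bool:
    """At Levels 0 through 3, a licensed, attentive human driver is still required."""
    return level <= 3

for lvl, (name, who_drives) in SAE_LEVELS.items():
    print(f"Level {lvl} ({name}): {who_drives}; "
          f"human driver required: {human_must_be_ready(lvl)}")
```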

We’ll have regrettable and avoidable car accidents, including injuries and deaths. Lawsuits will arise.

One claim will be that the human driver did not comprehend that they were supposed to be actively ready to drive at all times, and it was the fault of the auto maker that the human driver was not well-informed. The claimant might point to the slick marketing materials of the auto maker that suggest the car could drive itself.

The claim will likely also say that the human driver did not get sufficient training or notification about the limits of the capabilities of the Level 3 car. Auto makers will try to say that they provided a car owner’s manual that describes the features, but this is bound to be a slim defense. Who looks at their owner’s manual? Traditionally, almost no one. Few juries will swallow the idea that merely providing an owner’s manual somehow absolves the auto maker and gives the maker a free ride.

Auto makers might also claim in their defense that the sales person at the dealership explained the limits, but this is a slim defense, since the odds are that a dealership sales person was more intent on selling the car than explaining what it could do. The sales person might either have failed to provide any such full-bodied explanation, or might have even falsely overstated what the car could do.

There will also likely be the claim by the human driver that the Level 3 car did not present the kinds of visual and audio indications that would have clued the human in as to the status of the automated driving, and that the automation provided insufficient warning when handing the controls back to the human driver.

This insufficient-warning claim might also be coupled with the claim that the automation provided insufficient lead-time in warning the human driver to take the controls.

And so on.

The auto makers and tech firms will need to showcase how they arrived at the automation, explaining in detail the nature of the design, development, and testing that was done. How did they decide what kinds of warnings and notifications to include? Where and when do those notifications appear? Are there enough of them? Are they well placed? What was the process by which the auto maker made these decisions and what kind of justification do they have for how they made these choices?
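As one illustration of the kind of evidence likely to be scrutinized, here’s a hypothetical sketch of a lead-time check for a takeover warning. The event fields and the 8-second threshold are assumptions of mine for illustration, not any automaker’s or regulator’s actual figures.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical sketch of a Level 3 takeover lead-time check. The 8-second
# minimum is an assumed value for illustration only.
MIN_TAKEOVER_LEAD_TIME_S = 8.0

@dataclass
class HandoverEvent:
    warning_issued_at: float                  # seconds on the vehicle clock
    control_handed_back_at: float             # when the automation relinquished control
    driver_took_control_at: Optional[float]   # None if the driver never responded

def lead_time_was_sufficient(event: HandoverEvent) -> bool:
    """True if the warning preceded the handover by at least the assumed minimum."""
    lead_time = event.control_handed_back_at - event.warning_issued_at
    return lead_time >= MIN_TAKEOVER_LEAD_TIME_S

# Example: a warning only 1.5 seconds before the handover would likely be
# argued as insufficient.
event = HandoverEvent(warning_issued_at=100.0,
                      control_handed_back_at=101.5,
                      driver_took_control_at=None)
print(lead_time_was_sufficient(event))  # False
```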

This is going to air some dirty laundry, mark my words. Essentially, safety-related decisions were either not considered, or were given short shrift, or might even have been overruled, sometimes in a haste to get the car out the door and into the hands of the buying public. This will spill out during the ugly lawsuits that are going to arise.

We also need to consider the possibility that there could be a bug or “defect” in the self-driving car. Perhaps the software entered into a section of code that caused the car to go awry or make a lousy decision. Or maybe the on-board hardware that was chosen had a glitch. This once again will call into question how the auto maker and tech firm developed the system, along with the testing that was undertaken. Were there enough tests? Were the tests comprehensive enough? And so on.
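To give a flavor of what “comprehensive enough” testing means in practice, here’s a hypothetical unit-test sketch. The function under test and its 2-second threshold are invented solely for illustration; a real automatic emergency braking system involves far more than a single threshold.

```python
import unittest

# Hypothetical function under test, invented for illustration: decide whether
# automatic emergency braking should engage given an estimated time-to-collision.
def should_emergency_brake(time_to_collision_s: float) -> bool:
    return time_to_collision_s < 2.0  # assumed threshold, for illustration only

class EmergencyBrakeTests(unittest.TestCase):
    def test_brakes_when_collision_imminent(self):
        self.assertTrue(should_emergency_brake(0.5))

    def test_does_not_brake_when_object_is_far(self):
        self.assertFalse(should_emergency_brake(10.0))

    def test_boundary_value(self):
        # Boundary cases are exactly where "comprehensive enough" gets judged.
        self.assertFalse(should_emergency_brake(2.0))

if __name__ == "__main__":
    unittest.main()
```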

Let’s now take a look at one of the first such lawsuits.

In this first salvo, the lawsuit claimed that Tesla provided a nonfunctional Enhanced Autopilot AP2.0 capability. For those of you that aren’t devotees of Tesla, you might not be aware that around October 2016 there was an effort by Tesla to provide new features for their Autopilot that they referred to as AP2.0. It cost around $10,000 and was said to include 8 surround cameras, 12 ultrasonic sensors, and software that would be greatly improved over AP1. AP2 was supposed to provide or enhance the active cruise control, lane holding, collision warning, automatic emergency braking, and other nifty features. These are often referred to as Enhanced Autopilot (EAP) and Full Self-Driving (FSD).

These were marketed by Tesla, as easily proven by looking at their ads plastered on billboards and web sites. What the class action lawsuit claimed is that:

  1. Many of these features were delivered later than promised,
  2. Some promised features have still not been delivered,
  3. Some of the delivered features do not work as promised, and
  4. Some of the delivered features are defective.

The first two claims, namely that Tesla was late in providing a feature or has not yet provided a promised feature, are essentially claims about misleading consumers. They involve showing that Tesla promised something and should be dinged because the consumer didn’t get it when promised or never received it at all.

The marketplace has often let innovators get away with this kind of thing; we’ve seen firms like Apple make promises for new technology and then not quite deliver on time. This is bad, certainly, but not as bad, in a sense, as the other two claims. Not getting something that you paid for is bad, yes, but as you’ll see in a moment, getting something that you paid for and having it not work right, or worse, work wrongly, is the real hot water.

Allow me to emphasize that I am not letting Tesla or any self-driving car maker off the hook if they make a promise to deliver features and do not do so. It’s a typical dirty trick to try to convince consumers to wait and buy your product, creating doubt about buying a competing product that does not have those features. I think such promises need to be kept, and anyone making such promises needs to pay the penalty when they turn out to be false.

I also don’t subscribe to the “they are innovators and so we can’t complain when they are unable to deliver on new innovations” mindset. Many devoted Tesla buyers say that they aren’t upset when Tesla makes a promise and delivers late, since they are so devoted that they are willing to overlook such a gaffe. Genius takes time, they will offer defensively. I say hogwash. Promise, and keep to your promise. If you can’t accurately predict when you are going to deliver, then you have no business making a promise; otherwise, you must be held accountable for what you pledged.

In terms of the other two claims, notably that the provided features don’t work as promised and that in some instances they are defective, these are quite serious, since they can make a life-or-death difference for those using the Tesla cars that have AP2.0.

Here are some of the accusations:

  • Essentially unusable
  • Demonstrably dangerous
  • Erratic in operation, likened at times to a drunk driver

These are obviously quite important accusations. The assertion that some of the features are erratic and demonstrably dangerous is clearly of great concern.

Engaging the Autopilot would presumably be done under the assumption that the capability is well-tested and works properly and safely. The lawsuit pointed out that Tesla had even promised that using the Autopilot AP2.0 would make driving “stress-free,” which is a bit of marketing hyperbole and we’ll have to see whether the court considers it as a true promise or something weaker and less binding.

I might tell you that the new tires on your car will make your driving stress-free, and the question arises as to whether that’s an outright promise or not, and what it means to be stress-free (can we ever be truly stress-free, a philosopher might ask?).

The suggestion that the AP2.0 drives like a drunk driver is especially interesting. In my books and presentations, I have tried to debunk the notion that by adopting self-driving cars we’ll reduce to zero any car-related deaths due to drunk driving. I have pointed out that though we might reduce the number of human drunk drivers, we are still faced with AI that might have flaws, errors, bugs, or omissions that cause it to sometimes be as lousy a driver as a drunk driver.

Like any class action lawsuit, the legal action is intended to cover those consumers that would have been potentially impacted by the claims. In this case, the lawsuit sought to encompass any Tesla buyer that, within the designated time frame, had either bought or leased certain models of the Tesla car brand. There were claimed economic losses to those impacted Tesla owners due to the alleged false promises.

What made this claim a bit less biting is that there weren’t any Tesla owners who had been directly injured or killed due to these alleged false promises. I mention this because it is much easier to get a win if you have some visceral and dramatic actual injury that has occurred, rather than just trying to show that a promise was broken and that somehow the car owners were less safe or more stressed.

Imagine if there were Pinto cars that had not at first exploded or caught on fire, and if someone had figured out beforehand that there was a potential for danger and death due to the design. Launching a class action lawsuit to say that there is the potential for a Pinto to be a killer is one thing, but doing so after there have been actual cases is another. Courts seeing pictures of burning Pintos and grieving family and relatives makes for a more compelling case.

Whenever you file these kinds of class action lawsuits, you have to have someone specific who was considered to be impacted by the claim, and they must be explicitly named in the lawsuit. You can’t just file in general and say that someone somewhere might be impacted. The named claimants are put forward as specific examples, and then you make the case that anyone else can logically be considered equally impacted. In this case, there were three named plaintiffs, each of whom allegedly bought an in-scope Tesla, paying anywhere from $81,000 to $113,000 for the cars.

Usually, the named plaintiffs aren’t going to get incredibly enriched by these cases, despite what we might assume from wild cases like having hot coffee spilled on you at your local fast food chain. Mainly what happens is that the plaintiffs get something, the members of the class get something, and the lawyers usually get a hefty payout.

I know it is easy to be cynical and say that these lawsuits are only about the lawyers getting rich. The other side of the coin is that the lawyers usually take these cases without any upfront promise of getting paid, and so they are at risk of making nothing on a case that could require tons and tons of work on their part.

Also, one could say that they are “crusaders” in that they force companies to re-look at what they do and can therefore have a positive impact on getting car makers to be more serious about safety.

What is especially alarming in this Tesla case are some anecdotes that were included in the filing. Such anecdotes need to be taken with a grain of salt, since they were apparently plucked from the Internet and might have been based on “fake news” about these cars.

Anyway, one anecdote said that a Tesla with AP2.0 was supposedly going 50 mph when the radar spotted a bridge ahead, which caused the Autopilot to slam on the brakes. Notably, on a separate matter, in March 2018 a Tesla Model X slammed into a barrier in Mountain View, California, while Autopilot was on.

This is perhaps a helpful moment to point out that the sensors of self-driving cars can produce false readings, and that the AI software might take even accurate sensory data and misinterpret it or take inappropriate actions based on it. This goes to the issue of making the AI software more robust and having safety controls throughout it.
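One common way to add such robustness, sketched below with class names and thresholds that are purely my own illustrative assumptions, is to require that a hazard be confirmed across several consecutive sensor frames before allowing a drastic reaction like slamming on the brakes.

```python
from collections import deque

# Illustrative sketch: debounce a hazard detection so that a single spurious
# radar return (say, an overhead bridge misread as an obstacle) cannot by
# itself trigger hard braking. The window size and vote count are assumptions.
class HazardConfirmation:
    def __init__(self, window: int = 5, required_hits: int = 4):
        self.readings = deque(maxlen=window)
        self.required_hits = required_hits

    def update(self, hazard_detected_this_frame: bool) -> bool:
        """Record the latest frame; return True only when enough recent frames agree."""
        self.readings.append(hazard_detected_this_frame)
        return sum(self.readings) >= self.required_hits

confirm = HazardConfirmation()
frames = [True, False, False, False, False]  # one spurious detection
print([confirm.update(f) for f in frames])   # all False: no brake slam
```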

The attorneys for the lawsuit were asserting that Tesla marketed proverbial smoke-and-mirrors to consumers. In the end, Tesla settled the case, and as a result the claims were never fully aired.

As I have previously noted about Tesla, the darling of the automakers, they had already dodged a bullet (so to speak) in the case involving the human driver who had Autopilot on and ran into a truck, killing him; the federal investigation put the blame on the human driver (since, as Tesla continues to say, it is the human driver that must be aware at all times and holds the final responsibility for driving the car).

I believe that the “blame the human driver” defense is going to eventually wear thin.

It is not the proverbial get-out-of-jail-free card that some auto makers and tech firms seem to assume it to be. Where is the balance between what the auto maker could or should have done to aid the human driver, and what they did to inform the human driver about their duties and the duties of the car, rather than simply tossing things onto the shoulders of the driver?

As more such cases arise, the “pin-the-tail on the human” defense will gradually have holes poked into it.

I want these early cases to be a wake-up call for the AI community and those AI developers participating in the grand experiment of creating self-driving cars. It is exciting to think that in our generation we will have self-driving cars, somehow achieving a Jetsons-like accomplishment akin to everyone having jet packs. AI developers must take the desire to be first with self-driving capabilities as also carrying a responsibility to make those capabilities safe.

I realize that those of you AI software engineers toiling away at a self-driving car maker might say that upper management doesn’t care about safety and that you are under pressure to churn out the code.

I’m not sure that’s going to be a valid excuse down the road, once the self-driving car you helped code injures or kills someone. You might be faced with civil legal action, criminal legal action, and your own sense of humanity and whether you did what you could and did the right thing.

AI and self-driving cars are exciting and new, but they also involve life and death. Putting your head in the sand isn’t going to be sufficient, nor will the banner of being on a noble cause or quest. Safety first.

For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: @LanceEliot

Copyright © 2019 Dr. Lance B. Eliot.


Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
