On Whether There Might Be A Stockholm Syndrome Associated With AI Self-Driving Cars

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]

You might be vaguely aware of the Stockholm Syndrome.

From time to time, the news media will refer to a situation as somehow invoking the famous case of what happened in the 1970s in Stockholm, Sweden.

In that case, bank robbers in Stockholm took several hostages and holed up in the bank vault for six days, refusing to come out and refusing to give up the hostages. Once the siege ended, the hostages surprisingly refused to testify against the kidnappers/robbers and were generally supportive of their captors.

This certainly seemed like a curious outcome.

We would have expected the kidnapped victims to be upset and likely quite angry toward their kidnappers, maybe even wanting some kind of extensive revenge or at least demonstrative punishment for the crime committed. The local police brought in an expert psychiatrist/criminologist who said it was an example of brainwashing.

The episode came to be called the Stockholm Syndrome, and the name seems to have stuck ever since.

Background About The Stockholm Syndrome

The syndrome is usually characterized as a bond developing between the hostages and their captors. The hostages might start out rightfully hostile toward the captors and then gradually shift toward having positive feelings about them. This shift often emerges slowly during the period of captivity and is not usually instantaneous.

After getting out of captivity, the hostages might continue to retain that sense of a positive bond. At first, the bonding is often quite strong, and then it dissipates over time. Ultimately, the hostages might someday change their minds and begin to have more pronounced negative feelings toward the captors. This all depends on a number of factors, such as how the hostages were treated during captivity, any interaction with their captors afterward, and so on.

If you carefully consider the phenomenon, it might not seem particularly strange that during captivity the hostages might bond with their captors.

One could say that this is a coping mechanism.

It might increase your odds of survival. It might also be a means to mentally escape the reality of the situation. It could also be a kind of personal acquiescence to the situation, especially if you believe that you might never escape. Various psychological explanations are possible.

What tends to really puzzle outsiders is that after captivity the hostages continue to retain that positive bond. It would seem that once you gained your freedom, and no longer believed the bond was needed for pure survival purposes, you would pretty quickly bounce back with rage or some other similar reaction. We’d all allow that for the first few minutes or hours after getting out of captivity you might still be mired in what had occurred, but after days, weeks, or months, we’d assume the hostages would recalibrate mentally and no longer have that false bonding muddled in their minds.

Some might say that the after-effect lasts because the hostages want to self-justify the earlier bonding.

Stockholm Syndrome And AI Autonomous Cars

Which brings us to the next point, namely, what does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. As part of that effort, we’re also keenly interested in the trial tests of AI self-driving cars.

Waymo, the self-driving car unit under Google’s parent Alphabet, has one of the best-publicized trial tests of AI self-driving cars. For example, it has been using a selected area of Phoenix, Arizona, in which everyday people make use of the Waymo self-driving cars. This is being done as a kind of experiment, or maybe you’d prefer to call it a Proof Of Concept (POC), or a pilot, or a test, or a trial run, or whatever. Cleverly, Waymo coined it the “Early Rider Program” and the participants are Early Riders.

Let’s clarify that those initial trial runs did not involve randomly picking people up off the street.

Even though these are genuinely public kinds of trial runs, the participants needed to first apply to the program.

Only those applicants chosen by Waymo are then allowed to participate. You can say that it is open to anyone in the sense that anyone can apply. The point is simply that whatever selection criteria are used, the program becomes semi-selective out of the pool of whoever actually applies.

This is in contrast to, say, having AI self-driving cars roaming around and picking up anyone that happens to flag one down (something gradually starting to occur, in other parts of the country too).

The stated purpose of the Early Rider Program was to provide an opportunity for residents in the geographical area to have access to these AI self-driving cars and provide feedback about them.

In that sense, you can imagine how exciting it might be to become a chosen participant.

You could help shape not only how Waymo is making AI self-driving cars, but maybe the entire future of AI self-driving cars.

What about risks, though? How much risk are these participants taking on?

According to reports, the trial runs have had a back-up human driver from Waymo in the cars; thus, presumably, there has been a licensed driver ready to take over if needed.

Presumably, this is not just any licensed driver, but one trained to keep their attention on the self-driving car and ready to step into the driving task when needed. This definitely is intended to reduce the risks of the AI self-driving car going awry. But this is also not necessarily a risk-free kind of ride, since there are numerous issues with having a so-called back-up driver try to co-share the driving task.

As recently indicated, the back-up driver will no longer be present on some rides and in some locales.

Scope Of Trial Runs

One criticism is that these are indeed vendor-selected participants, meaning that the automaker or tech firm has chosen the people that are participating.

Suppose there is some kind of purposeful selection criterion that is weeding out certain kinds of people, or maybe a subliminal selection bias; in that case, whatever is learned during these trial runs is lopsided. It presumably doesn’t cover the full gamut of people.

Will the result be an AI self-driving car that has certain kinds of biases, with those biases reflected in what AI self-driving cars do and how they behave?
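
To make the selection-bias concern concrete, here is a minimal sketch in Python, using entirely made-up numbers and a toy rating model (nothing here reflects any vendor’s actual data or selection process). It shows how feedback gathered only from self-selected enthusiasts can overstate satisfaction with an identical ride:

```python
import random

random.seed(42)

# Hypothetical population: each person has a prior enthusiasm for
# self-driving cars (0 to 1) that also colors their ride feedback.
population = [random.random() for _ in range(100_000)]

def ride_rating(enthusiasm: float) -> float:
    """Toy model: reported rating blends objective ride quality with prior enthusiasm."""
    ride_quality = 0.6  # the same objective ride for everyone
    return 0.5 * ride_quality + 0.5 * enthusiasm

# Unbiased estimate: riders sampled uniformly from the population.
uniform_sample = random.sample(population, 500)
uniform_avg = sum(ride_rating(e) for e in uniform_sample) / 500

# Biased estimate: only enthusiasts (enthusiasm > 0.8) apply and get picked.
applicants = [e for e in population if e > 0.8]
chosen = random.sample(applicants, 500)
chosen_avg = sum(ride_rating(e) for e in chosen) / 500

print(f"Average rating, uniform sample: {uniform_avg:.2f}")
print(f"Average rating, chosen riders:  {chosen_avg:.2f}")
```

The mechanics are simple: if the applicant pool skews enthusiastic, the feedback skews rosy, regardless of how the rides actually went.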

Another reported aspect is that the participants in such trial runs are required to sign NDAs (Non-Disclosure Agreements).

This presumably restricts the participants from freely commenting to the public at large about their experiences of riding in these AI self-driving cars.

You can certainly empathize with the automakers or tech firms wanting to keep the participants somewhat under wraps about their newly emerging AI self-driving cars. Imagine if a participant makes an off-hand remark that they hate the thing and no one should ever ride in one. This could be a completely unfair and baseless statement, which would appear to have credence simply because the person was a participant in the trial runs.

There could also be proprietary elements underlying the AI self-driving cars that a participant could blurt out, undermining the secrecy of the vendor’s Intellectual Property (IP).

There is already a lot of sneaking around to find out what other firms are doing. There’s a potential treasure trove that you might be able to get a participant to unwittingly divulge.

Some think the automakers and tech firms should not restrict the participants in any manner whatsoever.

They argue that it is important for the public to know what these participants feel about AI self-driving cars. Good or bad. Right or wrong. Blemishes or not. It is for the good of the public overall to know what the participants have to say.

Furthermore, they would likely claim that it will help the other automakers and tech firms too. In other words, if you believe that AI self-driving cars provide great benefits to society, the sooner we get there, the better for all of society. Thus, the more that the auto makers and tech firms share with each other, the sooner the benefits will emerge.

Feedback Taken With A Grain Of Salt

Let’s shift our attention, though, to something else that is related to this whole topic.

At some of my recent presentations at industry conferences, I’ve been asked about some of the comments that participants in these trial runs have been saying so far.

The comments are usually quite glowing.

Even if there is a mention of something that went awry, the participants seem to then explain it away and the whole thing seems just peachy.

For example, one participant reported that an AI self-driving car got somewhat lost in a mall parking lot while trying to reach the rider’s desired destination; later on, the AI developers adjusted the system to instead go to a designated drop-off point. This is a lighthearted tale. No one was hurt, and there was no apparent concern other than perhaps some excess time spent waiting for the AI self-driving car to find the proper spot. Plus, it was later fixed anyway.
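
As a purely illustrative aside, here is a hypothetical sketch of what a “designated drop-off point” fallback could look like; the zone bounds, drop-off coordinates, and function names are my own invented assumptions, not Waymo’s actual design:

```python
from dataclasses import dataclass
from math import hypot

@dataclass
class Point:
    x: float  # meters, in a local map frame
    y: float

# Hypothetical pre-surveyed drop-off points for a mall lot (made-up data).
MALL_DROPOFF_POINTS = [Point(120.0, 45.0), Point(310.0, 80.0)]

def in_complex_zone(dest: Point) -> bool:
    """Toy stand-in for a map flag marking hard-to-navigate areas."""
    return 0.0 <= dest.x <= 400.0 and 0.0 <= dest.y <= 200.0

def resolve_destination(requested: Point) -> Point:
    """Snap a requested destination to the nearest designated drop-off
    point whenever it falls inside a zone flagged as complex."""
    if not in_complex_zone(requested):
        return requested  # navigate to the raw coordinates as usual
    return min(
        MALL_DROPOFF_POINTS,
        key=lambda p: hypot(p.x - requested.x, p.y - requested.y),
    )

# A rider requests a spot deep inside the mall lot; the car heads to a
# designated drop-off point instead.
print(resolve_destination(Point(150.0, 60.0)))  # Point(x=120.0, y=45.0)
```

Whether Waymo’s actual fix works anything like this is unknown; the sketch just shows why such a patch is straightforward for developers to hand-code.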

Others with a more critical eye question these kinds of stories.

Shouldn’t we be concerned that the AI system wasn’t able to better navigate the mall parking lot?

Maybe there are other locations that it would have problems with too?

Shouldn’t we be concerned that the AI system itself wasn’t able to make a correction, and that instead it required human intervention by the developers?

If AI self-driving cars aren’t going to be self-corrective, that seems to undermine what we expect of Machine Learning and the abilities of the AI for self-driving cars. And so on.
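
For illustration, here is a hypothetical sketch contrasting the two paths; the telemetry structures and the automatic retraining batch are assumptions of mine, not any vendor’s actual architecture:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class NavAnomaly:
    location: str
    description: str

@dataclass
class AnomalyLog:
    events: List[NavAnomaly] = field(default_factory=list)

    def record(self, event: NavAnomaly) -> None:
        self.events.append(event)

# Manual path: developers read the log and hand-code a fix, as with the
# designated drop-off point adjustment described above.

# Self-corrective path: anomalies are batched into labeled training data
# so the navigation model can improve without a hand-coded patch.
def build_retraining_batch(log: AnomalyLog) -> List[dict]:
    return [
        {"location": e.location, "label": "navigation_failure"}
        for e in log.events
    ]

log = AnomalyLog()
log.record(NavAnomaly("mall parking lot", "circled without finding drop-off"))
print(build_retraining_batch(log))
```

The critics’ point maps onto this distinction: today’s fixes appear to follow the manual path, whereas the Machine Learning promise implies something closer to the second.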

In any case, here’s the question that I sometimes get asked — are these participants in these tryouts perhaps suffering from Stockholm Syndrome?

Some seem concerned that the apparently whitewashed commentary being provided by the trial run participants might be a form of Stockholm Syndrome.

Maybe the participants are being “brainwashed” into believing that the AI self-driving cars are fine and dandy. Perhaps this belief arises not from their own free will, but from having it droned into their heads.

I’ll admit that I was a bit taken aback the first time I was asked this question.

I believe my answer was: say what?

After some reflective thought, I pointed out that the “Stockholm Syndrome” is perhaps a misapplication in this case.

The commonly accepted notion of the Stockholm Syndrome is that you have some kind of hostages and some kind of captors.

I dare say, it doesn’t seem like these trial run participants are hostages.

They voluntarily agreed to participate.

They put themselves forth to become participants.

They weren’t grabbed up in the cover of darkness and thrown into AI self-driving cars.

So, I reject the notion that you can somehow compare these trial runs with a hostage-captor scenario.

The comparison might seem appealing, especially if you are someone averse to the trial runs, or at least to how you believe the trial runs are being conducted. It also has a clever stickiness to it, meaning that the label could stick to the trial runs because it sounds applicable on a surface basis.

Suppose I am going to create a new kind of ice cream. I ask for volunteers to taste it. Those that are volunteering are presumably already predisposed to liking ice cream. I select volunteers that are passionate about ice cream and really care for it. I then have them start tasting the ice cream. They like it, and it’s a flavor and type they’ve never before had a chance to try. They are excited to be one of the first. They also believe they are shaping the future of ice cream for us all.

Does this mean that they are suffering from the Stockholm Syndrome? Just because they bonded in a positive way, and kept that positive bonding later on? I think that strips out the essence of the Stockholm Syndrome, the hostage part of things. The mistreatment part of things.

The analogy or metaphor falls apart because a key linking element is missing.

Recap And Conclusion

Overall, here are my key thoughts on this matter:

  • Trial runs are a generally good thing for progress on AI self-driving cars, though some argue we are dangerously being turned into guinea pigs
  • Auto makers and tech firms need to remain vigilant to undertake these trial runs safely
  • Participants might be somewhat muted about things that go awry
  • Participants will likely be reporting publicly only upbeat aspects, which we should consider and also at times take with a grain of salt
  • Calamities during the trial runs are likely to get leaked out and so it is probably going to be difficult for vendors to keep a lid on issues
  • It is understandable why there are various controls related to the release of info about the trial runs
  • There does not seem to be any conspiratorial concern about this (I’ll add “as yet” for those who hold out for a conspiracy)
  • Trying to say this is a Stockholm Syndrome seems to be an overreach

We’ll need to keep our eye on the autonomous car tryouts, including the passengers and their reactions.

For a free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website

The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.

For more info about AI self-driving cars, see: www.ai-selfdriving-cars.guru

To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot

For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/

For his AI Trends blog, see: www.aitrends.com/ai-insider/

For his Medium blog, see: https://medium.com/@lance.eliot

For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot

Copyright © 2019 Dr. Lance B. Eliot

Written by

Dr. Lance B. Eliot is a renowned global expert on AI, a Stanford Fellow at Stanford University, a former USC professor, the head of an AI Lab, and a top exec at a major VC.
