AI-Human Interactions Include Social Reciprocity, Which Will Be Vital For AI Self-Driving Cars Success

Dr. Lance Eliot, AI Insider


[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column:]

Thank you for driving safely.

Or, suppose instead I said to you that you should “Drive Safely: It’s the Law” — how would you react?

Perhaps I might say “Drive Safely or Get a Ticket.”

I could be even more succinct and simply say: Drive Safely.

These are all ways to generally say the same thing.

Yet, how you react to them can differ quite a bit.

Why would you react differently to these messages that all seem to be saying the same thing?

Because how the message is phrased will create a different kind of social context that your underlying social norms will react to.

If I simply say “Drive Safely”, it’s a rather perfunctory way of wording the message.

Consider next the version that says “Thank You for Driving Safely.”

This message is somewhat longer, having now five words, and takes more effort to read. As you parse the words of the message, the opening element is that you are being thanked for something. We all like being thanked. What is it that you are being thanked for, you might wonder. You then get to the ending of the message and realize you are being thanked for driving safely.

What about the version that says “Drive Safely: It’s the Law” and your reaction to it?

In this version, you are being reminded to drive safely and then you are being forewarned that it is something you are supposed to do. You are told that the law requires you to drive safely.

The version that says “Drive Safely or Get a Ticket” is similar to the version warning you about the law, and steps things up a further notch.

The exact wording of the drive-safely message is actually quite significant in how the message will be received by others and whether they will be prompted to do anything because of it.

I realize that some of you might say that it doesn’t matter which of those wordings is used.

Aren’t we being rather tedious in parsing each such word?

Seems like a lot of focus on something that otherwise doesn’t need any attention. Well, you’d actually be somewhat mistaken in assuming that those variants of wording do not make a difference. Numerous psychology and cognition studies show that the wording of a message can make a dramatic difference in whether people notice the message and whether they take it to heart.

I’ll concentrate herein on one such element that makes those messages so different in terms of impact, namely due to the use of reciprocity.

Importance Of Reciprocity

Reciprocity is a social norm.

Cultural anthropologists suggest that it is a social norm that cuts across all cultures and all of time.

In essence, we seem to have always believed in and accepted reciprocity in our dealings with others, whether we explicitly knew it or not.

Is it in our DNA?

Is it something that we learn as children? Is it both?

There are arguments to be made about how it has come to be.

Regardless of how it came to be, it exists and actually is a rather strong characteristic of our behavior.

Let’s further unpack the nature of reciprocity.

Time is a factor in reciprocity too.

Difficulties Of Getting Reciprocity Right

Reciprocity can be dicey.

There are ample ways that the whole thing can get discombobulated.

I do something for you, you don’t do anything in return.

I do something for you of value N, and you provide in return something of perceived value Y that is substantively less than N. Or, I do something for you, and you pledge to do something for me a year from now; meanwhile, I may feel cheated because I didn’t get more immediate value, and if you forget to make up the trade a year from now, I might remain upset forever. And so on.

I am assuming that you’ve encountered many of these kinds of reciprocity circumstances in your lifetime. You might not have realized at the time they were reciprocity situations. We often fall into them and aren’t overtly aware of it.

One of the favorite examples of reciprocity in our daily lives involves the seemingly simple act of a waiter or waitress getting a tip after having served a meal. Studies show that if the server brings out the check and includes a mint on the tray holding the check, this has a tendency to increase the amount of the tip. The people who have eaten the meal and are getting ready to pay feel as though they owe some kind of reciprocity due to the mint being there on the tray. Research indicates that the tip tends to go up by a modest amount as a result of providing the mint.

A savvy waiter or waitress can further exploit this reciprocity effect.

As mentioned, reciprocity doesn’t work on everyone in the same way.

The mint trick might not work on you, supposing you hate mints or you like them but perceive it of little value.

Here’s a recap then about the reciprocity notion:

  • Reciprocity is a social norm of tremendous power that seems to universally exist
  • We often fall into reciprocity without realizing it
  • Usually a positive action needs to be traded for another in kind
  • Usually a negative action needs to be traded for another in kind
  • An imbalance in the perceived trades can mar the arrangement
  • Trades can be services or products or combinations thereof
  • Time can be a factor as to immediate, short-term, or long-term

AI Autonomous Cars And Social Reciprocity

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One crucial aspect of the AI will be the interaction with the human occupants of the self-driving car, and as such, the AI should be crafted to leverage reciprocity.

One of the areas of open research and discussion involves the nature of the interaction between the AI of a self-driving car and the human occupants that will be using the self-driving car. Some AI developers with a narrow view seem to think that all that the interaction consists of would be the human occupants saying to drive them to the store or to home, and that’s it.

This is a naive view.

The human occupants are going to want the AI to be far more able to carry on a conversation.

Let’s consider an example of social reciprocity involving a passenger and driver dialogue.

You get into the AI self-driving car and decide you want to go to work.

Almost as though you are giving an instruction to a GPS, you merely indicate the address of work, and the self-driving car then proceeds to drive you there. Suppose that during the journey, you want to get some coffee, doing so before you get to work. You would want to convey this interim destination to the AI.

Before the days of Alexa and Siri, we might have all accepted the idea that we might need to enter such instructions into an on-board computer tablet, but I think we are now expecting to be able to have a more natural audio interaction with the AI system. If you’ve gone to the trouble to buy an expensive and shiny AI self-driving car, you certainly would hope that some effort went into making the system interactive with the human occupants.

A barebones “conversational” AI might be this:

You: Take me to work.

AI self-driving car: Okay

What does the word “Okay” mean in this situation? Does it mean the AI is acknowledging that you said you want to go to work, without necessarily committing to do so immediately? Or maybe it means the AI is going to start the self-driving car right away and drive you to work. Or perhaps the AI misunderstood you, thought you said “I hate work,” and is merely saying “Okay” as a placeholder word to seem responsive.

Let’s try this again:

You: Take me to work.

AI self-driving car: Starting to your work now.

This is a little better since the AI has echoed back to you that you want to go to work, and furthermore it is stating when this action will occur, specifically that it will occur right away or now. This seems somewhat flat though. If you were conversing with a human chauffeur, they probably would be more embellishing in what they might say.

Here’s another try:

You: Take me to work.

AI self-driving car: Good morning, I will head to your work at 1234 Lauren Street. Hope you will have a pleasant day at work!

This provides more specific feedback, including the work address, which helps to confirm that you really intended to go to work. It might be that you said the word “work” out of habit and meant to say “home” — by hearing the address it might cause you to realize that you had said the wrong word. The part of the message that says “good morning” and the other part that says “have a pleasant day” are added embellishments that presumably give a softer feeling to the messaging and makes things seem less robotic.
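To make the progression concrete, here is a minimal sketch of how a dialogue manager might produce these three levels of response. The saved-address table, function names, and greeting logic are illustrative assumptions, not any real vehicle’s API; the point is only the escalation from bare acknowledgment to address-confirming embellishment.

```python
# Hypothetical sketch: three verbosity levels for a destination reply.
SAVED_PLACES = {"work": "1234 Lauren Street", "home": "987 Oak Avenue"}

def respond(command: str, style: str = "embellished") -> str:
    """Generate a reply to a destination request like 'Take me to work'."""
    place = command.lower().replace("take me to", "").strip().rstrip(".")
    address = SAVED_PLACES.get(place)
    if address is None:
        return f"Sorry, I don't have an address saved for '{place}'."
    if style == "bare":
        return "Okay"
    if style == "echo":
        return f"Starting to your {place} now."
    # Embellished: echo the address back so the rider can catch a
    # slip of the tongue (saying "work" when they meant "home").
    return (f"Good morning, I will head to your {place} at {address}. "
            f"Hope you have a pleasant day!")

print(respond("Take me to work", style="bare"))  # Okay
print(respond("Take me to work"))  # embellished reply with the address
```

Echoing the stored address back is the design choice that matters here: it turns the reply into a confirmation step rather than a mere acknowledgment.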

One criticism of having the AI utter “good morning” and “have a pleasant day” is that it implies perhaps that the AI actually means those things.

When I speak those words to you, you assume that I as a human have measured out those words and that I presumably know what it means to have a good morning, and so with my knowledge about the nature of mornings, I am genuinely hoping that you have a good one. If you see the words “good morning” written on a poster, you don’t consider that the poster knows anything about the meaning of those words. When the AI system speaks those words, you are likely to be “fooled” into thinking that the AI system “understands” the nature of mornings and is basing those words on a sense of the world.

But, the AI of today is more akin to the poster, it is merely showcasing those words and does not yet (at least) comprehend the true meaning of those words.

Do we want the AI to seem to be more aware than it really is?

That’s an important question. If the human occupants believe that the AI has some form of human awareness and knowledge, the human occupant might get themselves into a pickle by trying to converse with the AI system. Suppose the human starts to suffer a heart attack, and believes that the AI has human-like understanding, and so the human says “help, I’m having a heart attack” — if you said this to another adult, the human adult would likely realize that you are in trouble, they might call 911, they might try to aid you, etc.

The AI of today would not know what you’ve said per se. You might have been misled into thinking that it would. If you believed that the AI was not so capable as a human, you might instead have uttered “take me to the nearest hospital” which then hopefully is similar to “take me to work” in that the system can parse the words and realize that it is a driving instruction. The AI would presumably then alter the driving path and instead drive the self-driving car to a nearby hospital.
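That comprehension gap can be illustrated with a narrow, pattern-based parser; the regex and function name below are hypothetical, but the behavior mirrors the scenario above: utterances that fit the expected command template get through, while a free-form plea never reaches the driving planner.

```python
# Hypothetical sketch of a narrow command parser with no real
# language understanding -- it only recognizes one template.
import re

DRIVE_PATTERN = re.compile(r"take me to (?:the )?(.+)", re.IGNORECASE)

def parse_command(utterance: str):
    """Return ('drive', destination) if the utterance matches the
    driving-instruction template, else ('unrecognized', None)."""
    match = DRIVE_PATTERN.search(utterance)
    if match:
        return ("drive", match.group(1).rstrip("."))
    return ("unrecognized", None)

print(parse_command("Take me to the nearest hospital"))
# → ('drive', 'nearest hospital')
print(parse_command("Help, I'm having a heart attack"))
# → ('unrecognized', None) -- the plea never reaches the planner
```

A rider who overestimates the system would utter the second phrasing and get nothing; one who correctly gauges its limits would phrase the emergency as a driving instruction.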

This argument also comes up when AI developers put other human-like speaking mannerisms into their systems. Humans, for example, will often say “you know,” pause while speaking, or say “uh.” To date, most AI natural language processing systems have generated spoken words that are streamlined and omit those human mannerisms. Lately, some AI developers have been including them. This gives the AI’s spoken words a closer-to-human feel.

Here’s an example:

You: Take me to work.

AI self-driving car: Uh, Okay, sounds good… will do so.

The added elements to the response are going to suggest to the human listener that the AI is composing speech in a manner similar to a human. This anthropomorphizes the AI system, which as I’ve mentioned before can be good if you believe that’s a desirable way to bond with the human occupants, but it can also be bad in that it can mislead as to what the AI system can and cannot really do and understand.

The joint interaction in a self-driving car between the human occupants and the AI is potentially going to involve life-and-death matters.

This is an important point to keep in mind. If you are simply developing an online search engine that verbally interacts with humans or a system that can aid them in ordering toothbrushes and towels, whether the human believes that the AI is actually comprehending them is not particularly life threatening. In the case of a self-driving car, the human might need to offer insight to the AI about the driving task, or the AI might need to get input from the human about the driving task, any of which if improperly undertaken could have dire consequences.

Digging Into The AI-Human Social Reciprocity

This then takes us to the reciprocity aspects.

Should the AI system of the self-driving car carry on conversations that also invoke the use of reciprocity?

If you are of a mind that the AI should not over-inflate what it can actually comprehend, you’d probably say that it should not employ reciprocity either.

If you are seeking to have the AI appear as sentient as it can, the use of reciprocity would be a core element to achieve that sense, since it is so common among humans and part-and-parcel of our daily lives.

Here’s an example:

You: Take me to work.

AI self-driving car: Happy to take you to work. Please make sure to keep your seat belts on.

You likely can detect the subtle reciprocity involved in the dialogue. The AI implies that it is doing something for the human occupant, taking them to work, and the added touch is that the AI is “happy” to do so. This creates a human-like aspect, as though the AI were doing this as a service for you. The trade is that, in return, you are being asked to keep your seat belts on.

If we strip away the reciprocity, we’d get something like this instead:

You: Take me to work.

AI self-driving car: Okay. Keep your seat belts on.

This version is similar to the earlier example about the drive safely message. We’ve now got a plainer and straightforward kind of instruction or maybe even an edict, which was the same with the “drive safely” message. The “happy to take you to work” was more akin to the “thank you for driving safely” message that created a kind of quid-pro-quo element to the dialogue.

If we make the messaging more along the negative side, it might be something like this:

You: Take me to work.

AI self-driving car: Okay. Keep your seat belts on or I’ll stop the car and you won’t get to work on time.

Whoa! This sounds like some kind of fierce AI that is threatening you.

There are AI developers who would argue that this message is actually better than the others because it makes abundantly clear the adverse consequence if the human does not wear their seat belts.

Yes, it’s true that it does spell out the consequences, but it also sets up a “relationship” with the human occupant that is likely to be an angry one. It sets a tone that might shape how the human chooses to respond to the AI (angrily!).
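The three framings discussed above can be sketched as a simple tone-selection policy. The tone labels and function name are illustrative assumptions about how a dialogue policy might be configured; the messages themselves are the ones from the examples.

```python
# Hypothetical sketch: the same seat-belt prompt framed with positive
# reciprocity, as a neutral edict, or as a threat.
def seatbelt_prompt(tone: str) -> str:
    if tone == "reciprocal":
        # Offers something ("happy to take you") and asks for
        # something in return -- the reciprocity framing.
        return "Happy to take you to work. Please make sure to keep your seat belts on."
    if tone == "neutral":
        return "Okay. Keep your seat belts on."
    if tone == "threatening":
        return ("Okay. Keep your seat belts on or I'll stop the car "
                "and you won't get to work on time.")
    raise ValueError(f"unknown tone: {tone}")
```

A cautious design might default to the reciprocal framing and reserve the threatening variant for repeated non-compliance, given the tone-setting concern noted above.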


If the AI system is intended to interact with the human occupants in a near-natural way, the role of reciprocity needs to be considered.

It is a common means of human to human interaction. Likewise, the AI self-driving car will be undertaking the driving task and some kind of give-and-take with the human occupants is likely to occur.

We believe that as AI Natural Language Processing (NLP) capabilities get better, incorporating reciprocity will further enhance the seeming natural part of natural language processing.

It is prudent, though, to be cautious about overstepping what can actually be achieved, and the life-and-death consequences of human-AI interaction in a self-driving car context need to be kept in mind.


Copyright © 2019 Dr. Lance B. Eliot


Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
