Dr. Lance Eliot, AI Insider
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
The duck won’t quack.
Allow me to elaborate by first providing some helpful background.
Today’s children are growing up conversing with a machine.
The advent of Alexa and Siri has not only made life easier for adults, it has also enabled children to get into the game of talking with an automated system. These automated systems contain relatively advanced NLP (Natural Language Processing) capabilities, which many consider part of the AI umbrella of technologies. Improvements in NLP over the last decade or so have made these systems much less stilted and much more conversational.
Some are concerned that young children might not have the cognitive wherewithal to separate cleverly devised automation from real human interaction.
Children could potentially be fooled into believing that a voice processing system is indeed a human.
Suppose, moreover, that the child misunderstands the voice processing system. In other words, maybe the voice processing system said “People like to jump up and down,” but what the child thought they heard was “Jump up and down” (as though it were an edict).
The child doesn’t have the same contextual experience that an adult has.
Forms Of Interaction
It is useful to consider these forms of interaction:
- Child with child
- Child with adult
- Child with AI
- Adult with AI
There’s human-to-human communication, consisting of child with child and child with adult, and there’s human-to-machine communication, namely child with AI and adult with AI. I’ll use “AI” herein to refer to any reasonably modern NLP-based AI system that does voice processing, akin to Alexa, Siri, or an equivalent.
A recent research study caught my eye about young children talking to technology and it dovetailed into some work that I’m involved in. The study was done at the University of Washington and involved having children speech-interact with a tablet device while playing a “Cookie Monster’s Challenge” game. When an animated duck appears on the screen, the child is supposed to tell the duck to make a quacking sound. The duck is then supposed to quack in response to the child telling it to make the quacking sound.
This seems straightforward.
The twist in the study is that the researchers purposely at times did not have the animated duck respond with a quack (it made no response). A child would then need to cognitively realize that the duck had not quacked, and that it had not quacked when it presumably was supposed to do so.
How would a child react to this scenario?
If this were a child-to-adult interaction, and suppose the adult was supposed to say “quack” whenever the child indicated to do so, the child would presumably engage you in a dialogue about wanting you to say the word (the children in the study were ages 3 to 5, so realize that the dialogue would be at that age level). Likewise, if it were a child-to-child interaction, one child would presumably use their natural language capabilities at that age level to try to find out why the other child isn’t responding “correctly” per the rules of the game.
With the tablet, the children tended either to use repetition, repeating the instruction to quack, perhaps under the notion that the tablet had not heard the instruction the first time, or to increase the volume of the instruction, again presumably believing that the tablet had not heard them, or to use some other such variation.
AI Autonomous Cars And Interaction
What does this have to do with AI self-driving cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. As part of that effort, we’re keenly interested in the interaction of children with an AI self-driving car. Allow me to explain why.
Some pundits of AI self-driving cars are solely focused on adults interacting with the AI of self-driving cars.
They seem to believe that only an adult will interact with the AI system. On the one hand, this seems to make sense, because the thought of children interacting with the AI might be rather frightening. Suppose a child tells the AI to drive the self-driving car from Los Angeles to New York City because they want to visit their favorite uncle. Would we want the AI self-driving car to blindly obey such a command, suddenly heading out on a rather lengthy journey?
I think we can all agree that there’s a danger that a child might utter something untoward to the AI of a self-driving car.
Let’s not assume that only a child can make seemingly oddball or untoward commands. Adults can readily do the same. If you are under the belief that an adult will always utter only the sanest and most sincere of commands, well, I’d like to introduce you to the real world. In the real world, there are going to be all kinds of wild utterances by adults to their AI self-driving cars.
So, I’d like to emphasize that regardless of the age of the human that might be directing the AI, the AI needs to have some form of calibration and filtering so as to not blindly obey instructions that are potentially injurious, hazardous, infeasible, or unreasonable. This is not so easy to figure out. It takes some hefty NLP and AI skills to try and do this, and especially do so with aplomb.
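As a rough illustration of the calibration-and-filtering idea, here is a minimal sketch of how an uttered trip request might be classified before the driving system acts on it. The function name, thresholds, and rules are all hypothetical assumptions for this example, not an actual implementation:

```python
# Minimal sketch of a command-filtering layer for a voice-directed car.
# The thresholds and rule names below are purely illustrative assumptions.

def classify_command(requested_miles: float, is_known_rider: bool,
                     rider_age: int) -> str:
    """Return 'obey', 'confirm', or 'reject' for a spoken trip request."""
    MAX_UNCONFIRMED_MILES = 50   # beyond this, escalate to a guardian
    ADULT_AGE = 18

    if requested_miles <= 0:
        return "reject"          # infeasible request
    if not is_known_rider:
        return "confirm"         # unrecognized voice: do not blindly obey
    if rider_age < ADULT_AGE and requested_miles > MAX_UNCONFIRMED_MILES:
        return "confirm"         # a child asking for a lengthy journey
    return "obey"

# A child asking for Los Angeles to New York (roughly 2,800 miles)
# would be escalated for confirmation rather than blindly obeyed.
print(classify_command(2800, True, 5))   # confirm
print(classify_command(3, True, 5))      # obey
```

The point of the sketch is only that some layer must sit between the NLP and the driving controls; a production system would need far richer context than mileage and age.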
Let’s then reject the idea that children won’t be interacting with the AI of a self-driving car.
Indeed, I’ll give you another good reason why children are pretty much going to be interacting with the AI of the self-driving car.
Suppose you decide that it’s perfectly fine to send your kids to school via your shiny AI self-driving car sitting out on your driveway. The kids pile into the self-driving car and away it goes, heading to school.
I realize you are thinking that there’s no need for any child interaction because you, the parent, told the AI beforehand to take your kids to school. They are now presumably having a good time inside the self-driving car and have no role or say in what the AI self-driving car does next. One of the kids, it turns out, ate something rotten last night and begins to toss his cookies. He yells at the AI to take him home, quickly.
What do you want to have happen?
You might say that no matter what the child utters, the AI ignores it.
In this case, the AI has been instructed by you to drive those kids to school, and come heck or high water that’s what it is to do. No variation, no deviation. Meanwhile, suppose the self-driving car is just a block from home and twenty minutes from school. Do you really want the AI to ignore the child entirely?
This gets even more complicated because presumably the age of the child also comes to bear. If the child is a teenager, you might allow more latitude of what kinds of instructions that they might provide to the AI. If the child is a 3-year-old, obviously you’d likely be more cautious.
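To make the trade-off concrete, here is a hedged sketch of how a child’s in-ride request to divert might be weighed against age and context. The age cutoff, the one-quarter rule, and the notify-the-parent responses are invented for illustration only:

```python
# Sketch of weighing a child's in-ride request against age and context.
# The age cutoff, ratio, and response labels are illustrative assumptions.

def handle_reroute_request(rider_age: int, minutes_to_home: float,
                           minutes_to_destination: float) -> str:
    """Decide whether to honor a child's request to divert home."""
    TEEN_AGE = 13
    if rider_age >= TEEN_AGE:
        return "divert_home"                 # teens get more latitude
    # For younger children, divert only when home is much closer,
    # and notify a parent either way.
    if minutes_to_home < minutes_to_destination / 4:
        return "divert_home_and_notify_parent"
    return "continue_and_notify_parent"

# The sick child a block (say, 1 minute) from home and 20 minutes
# from school would be taken home, with a parent notified.
print(handle_reroute_request(8, 1, 20))   # divert_home_and_notify_parent
```

Even this toy version shows why blanket “ignore the child” rules are too crude: the right answer depends jointly on who is asking and where the car happens to be.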
Some are wondering whether people are going to put their babies in an AI self-driving car and send the self-driving car on its way. This seems fraught with issues, since the baby could have some form of difficulty and not be able to convey it. I’m sure people will do this and will think at the time that it makes perfectly good sense, but from a societal perspective we’ll need to ascertain whether this is a viable way to make use of an AI self-driving car.
Rather than all of this fighting about preventing children from interacting with the AI, I’d suggest that we do a better job on the AI so that it is more capable of interacting with a child. If we had a human chauffeur driving the car, we would certainly expect that human to interact with a child, in the sense of figuring out what makes sense to do and not do regarding where the car is going and how it is heading there. We ought to be aiming at the chauffeur level of NLP.
As earlier mentioned, we need to be cautious though in having the NLP seem so good that it fools the child into believing that it is truly as capable as a human chauffeur. I’d say that we are many years away from an NLP that can exhibit that kind of true interaction and “comprehension,” including that it would likely require a sizable breakthrough in the AI field of common sense reasoning.
We are doing research on how children might likely interact with an AI self-driving car.
Somewhat similar to the study about the quacking duck, we are examining scenarios in which children interact with a self-driving car.
What might they say to the AI?
In what way should the AI respond?
These are important questions for the design of the NLP of the AI for self-driving cars.
It seems useful to consider two groups of children: one that is literate in using an Alexa or Siri, and another that is unfamiliar with, and has never used, such voice processing systems. We presuppose that those who have used an Alexa or Siri are more likely to be comfortable using such a system and have likely already formed a contextual notion of the potential limits of this kind of technology. Furthermore, such children appear to have already adapted their vocabulary to such voice processing systems.
Studies of children that regularly use an Alexa or Siri have already shown some intriguing results.
Indeed, talk to the parent of such a child and you might get an earful about what is happening to their children. For example, children tend to treat Alexa or Siri in a somewhat condescending way after getting used to those systems. They will give a command, make a statement, or ask a question, and do so in a curt manner that they would be unlikely to use with an adult. “Do this” and “do that” become strict orders to the system. There’s no please, there’s no thank you.
I realize you might argue that if the children did say please or thank you, it would imply they were anthropomorphizing the system.
Some worry, though, that this lack of politeness and courtesy is going to spill over into the child’s behavior with other humans too. A child might begin to speak curtly and without courtesy to all humans, or maybe to certain humans that the child perceives as being in the same kind of role or class as the Alexa or Siri. I saw a child the other day giving orders to a waiter in a restaurant, as though addressing the human waiter were no different than telling Alexa or Siri what is to be done.
Many of the automakers and tech firms are not yet doing any substantive, focused work on the role of children in AI self-driving cars.
This niche is considered an “edge” problem, meaning that the firms are working on other core aspects, such as getting an AI self-driving car to properly drive, so the matter of children interacting with the AI self-driving car sits far down the list of things to do. We consider it a vital aspect that will be integral to the success of AI self-driving cars. We realize this is hard to see as a valid concern right now, but once AI self-driving cars become more prevalent, it’s a pretty good bet that people are going to wise up to the importance of children interacting with their AI self-driving cars.
My duck won’t quack.
That’s something to keep in mind.
You might recast the no quacking idea and say that the (inadequately designed) AI self-driving car won’t talk (with children).
Restated, we need to have AI self-driving cars that can interact with children since children are going to be riding in AI self-driving cars, often without any adult in the self-driving car and without the possibility that an adult is readily otherwise reachable. I urge more of you out there to join us in doing research on how AI systems should best be established to interact with children in the context of a self-driving car.
The AI needs to be able to jointly figure out what’s best for the humans and the AI, perhaps helping to save the day, doing so in situations where children are the only ones around to communicate with.
For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website
The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.
More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru
To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot
For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/
For his AI Trends blog, see: www.aitrends.com/ai-insider/
For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot
Copyright © 2019 Dr. Lance B. Eliot