What Scrabble Reveals About AI Understanding, Including For Self-Driving Cars
Dr. Lance Eliot, AI Insider
[Ed. Note: For readers interested in Dr. Eliot’s ongoing business analyses about the advent of self-driving cars, see his online Forbes column: https://forbes.com/sites/lanceeliot/]
If you are a Scrabble fan, you might remember the headlines in 2015 that blared that the winner of the French Scrabble World Championship was someone who did not understand a word of French.
Sacrebleu! Note that I spelled that stereotypical French exclamation as it is spelled in the French language, as one word, rather than the Americanized version of two words (sacre bleu), a distinction that would matter if I were playing Scrabble right now.
Essentially, the word or phrase is an outdated and hackneyed curse that was never much used by the French, but crept into the English language and became a stock expression in formulaic portrayals in movies and TV shows.
In any case, let’s focus on the fact that the winner of the 2015 Francophone Classic Scrabble World Championship was a non-French-speaking contestant.
This feat seemed to be nearly impossible.
How could anyone manage to win in Scrabble, a board game dependent upon words, and yet not understand the words being used in this famous and popular sport?
Bizarre, some said.
A miracle, others stated.
I’d say it is nothing more than a magician pulling a rabbit out of a hat or finding your chosen card out of a deck of cards.
The Inner Game Of Scrabble
In Scrabble, there is a board consisting of squares arranged in a 15 by 15 grid.
Players have various tiles of letters and are supposed to lay down the tiles in a manner that spells out a word.
During your turn, you can play some or all of your tiles if there is a word you can make; you can pass, though this means you give up that turn and score no points; or you can exchange a subset of your letters for replacements drawn randomly from the bag.
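The exchange option can be sketched in a few lines of code. This is a hypothetical illustration, not any official implementation, and the function name and tile handling are my own simplifications:

```python
import random

def exchange_tiles(rack, to_swap, bag, rng=random):
    """Swap a subset of the rack for random tiles from the bag.
    A sketch of the exchange rule, not tournament-grade rules handling."""
    # Remove the chosen tiles from the player's rack.
    new_rack = list(rack)
    for tile in to_swap:
        new_rack.remove(tile)
    # Draw the same number of random replacements from the bag.
    drawn = rng.sample(bag, len(to_swap))
    for tile in drawn:
        bag.remove(tile)
    # Return the swapped-out tiles to the bag afterward.
    bag.extend(to_swap)
    return new_rack + drawn
```

Note that the swapped-out tiles go back into the bag only after the replacements are drawn, so you cannot draw back your own discards on the same turn.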
In Scrabble, there is no requirement that you actually understand the word that you are spelling out on the board.
You don’t have to state what the word means.
The word merely has to be a valid word.
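That rule reduces word checking to a simple lookup. Here is a minimal sketch, with a tiny hypothetical lexicon standing in for the official dictionary:

```python
# A "dictionary" in Scrabble terms is just a set of legal spellings;
# the meanings of the words are irrelevant to the lookup.
# This tiny word list is illustrative, not an official lexicon.
LEXICON = {"CHAT", "CHIEN", "OUI", "EAU"}

def is_valid_word(tiles):
    """A play is legal if the spelled-out string appears in the lexicon."""
    return "".join(tiles).upper() in LEXICON
```

Nothing in that check involves knowing what "CHAT" means; the word is treated purely as a sequence of letters.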
The non-French-speaking contestant had done something impressive: he had memorized all the words in the official French Scrabble dictionary, doing so solely by how the words were spelled.
He happened to have a photographic memory and managed to memorize the words in nine weeks.
He did not know what the French words meant.
I’ve now revealed how the magician pulled off the magic act.
Similar to describing how the rabbit got into the magician’s hat, or how your card was marked or planted in the deck, the secret in this case of Scrabble is that you don’t need to understand the words; you merely need to know how to spell them.
To him, the words were essentially icons or images.
More Twists To Scrabble
Being smart about the game play is essential in Scrabble, and especially at any vaunted tournament.
The strategies and tactics that you use in Scrabble are crucial to winning.
It turns out that the winner of the Francophone Scrabble Championship was a five-time winner of the North American Scrabble Championships and a three-time winner of the World Scrabble Championships.
All of those competitions were in English.
Regardless of the language used in those competitions, the fact that he had won them demonstrated that he knew how to play the Scrabble game and must have finely tuned his strategies and tactics for it.
There is also the role of chance involved in the game, since you don’t know beforehand what letters you will get.
AI Playing Scrabble
There are some quite famous AI programs that play Scrabble well.
The most historically notable ones are likely Maven and Quackle.
The structure of Maven’s approach consists of dividing a Scrabble match into three phases: a mid-game, a pre-endgame, and an endgame (the mid-game is somewhat of a misnomer since it also covers the start of the game).
There is a simulation or “simming” done to look ahead at various moves and countermoves, though in the initial incarnations it was only a two-move look-ahead (2 plies deep). This is considered a truncated form of Monte Carlo simulation, not a full-bodied MCTS (Monte Carlo Tree Search) implementation.
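The 2-ply simming idea can be sketched roughly as follows. Maven’s actual evaluation is far more sophisticated; this toy version only illustrates the shape of the idea, and the function names and scoring are hypothetical:

```python
import random

def sim_two_ply(candidate_moves, sample_opponent_reply, n_samples=20, rng=random):
    """Pick the candidate whose own score, minus the average score of
    sampled opponent replies, is highest -- a truncated 2-ply look-ahead,
    not a full Monte Carlo Tree Search."""
    best_move, best_value = None, float("-inf")
    for move, my_score in candidate_moves:
        # Sample plausible opponent replies to this move and average them.
        reply_scores = [sample_opponent_reply(move, rng) for _ in range(n_samples)]
        value = my_score - sum(reply_scores) / n_samples
        if value > best_value:
            best_move, best_value = move, value
    return best_move
```

The point of the second ply is that a high-scoring play can still be a bad play if it opens the board for an even higher-scoring reply.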
Other variants of Maven included the use of a DAWG (Directed Acyclic Word Graph), which tends to run fast and doesn’t require an elaborate algorithm per se, and later used the GADDAG (the name was intended to be playful: it is DAG, for Directed Acyclic Graph, spelled backwards and then forwards).
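To give a feel for these structures, here is a plain letter trie, which is a simpler cousin of the DAWG; a real DAWG additionally merges common suffixes so that shared word endings are stored only once. This sketch is illustrative, not Maven’s code:

```python
def build_trie(words):
    """Build a letter trie from a word list. A DAWG is essentially this
    structure with identical sub-trees (shared suffixes) merged to save space."""
    root = {}
    for word in words:
        node = root
        for ch in word:
            node = node.setdefault(ch, {})
        node["$"] = True  # end-of-word marker
    return root

def contains(trie, word):
    """Walk the trie one letter at a time; the word is legal only if the
    walk succeeds and ends on an end-of-word marker."""
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return "$" in node
```

The GADDAG extends this idea by storing each word pivoted around every letter, so a move generator can build outward in both directions from a tile already on the board.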
The endgame is a different kind of challenge and kicks in once the bag of letters is empty.
Quackle came along after Maven and employs many similar game-playing approaches, along with a few other nuances. If you are interested in Scrabble AI game play, Quackle is readily available as open source and can be found in places such as GitHub.
Though these programs have had some impressive wins, that does not mean they have “solved” the playing of Scrabble by AI.
Somewhat similar to the non-French speaking human winner of the Francophone Scrabble Championship, there is an added edge in this particular kind of game if you can have at the ready an entire dictionary of words.
Any human player who cannot commit an entire dictionary of words to memory is obviously at a disadvantage.
There is also the time factor involved.
A player who can assess more possibilities in the time allowed per move presumably has a greater chance of making a better move than a player who cannot examine as many options. This limit applies to the human player and their mental processing, and likewise to the AI and its use of computer cycles.
Of course, depth of processing is not necessarily the winning approach, since there may be lots of possibilities that aren’t worth the mental effort, nor the time, when figuring out your next move.
In short, just because the computer can have an entire dictionary of words at the ready does not mean it is going to win. Likewise, even if the AI has an algorithm that uses all kinds of shortcuts and statistics to try to ascertain the seemingly most prudent choice, there is still room for improvement in those algorithms.
This is not a done deal and should not be construed as such.
And the AI decidedly does not “understand” things in the way that we assume humans do.
Meaning Of Understanding Is A Key Matter
In playing Scrabble, any player, whether human or AI, does not need to “understand” the words since those are only being used as objects.
Now that we’ve carved out any need for “understanding” in terms of the dictionary of words used in Scrabble, we need to acknowledge the perhaps hidden form of “understanding” needed during the playing of the game.
The strategies and tactics used would be applicable to what we commonly refer to as having an “understanding” of something.
We don’t know for sure what goes on in the heads of a Scrabble player and can only guess at what they might be thinking during the playing of a game.
The AI algorithms and techniques employed in the Scrabble playing of Maven and Quackle are maybe similar to what happens in the human mind or maybe not. I’d dare say, most likely probably not. We have come up with some fascinating mathematical and computational approaches that appear to be useful and can compete against humans in a game such as Scrabble.
Does this mean that those AI systems “understand” the game of Scrabble?
You’d be hard pressed to say yes.
Revisiting The Chinese Room Argument
This is reminiscent of the famous Chinese Room argument.
For anyone involved in AI, you ought to be familiar with the thought experiment known as the Chinese Room.
We develop something we regard as AI, place it into a room, and have it take in Chinese characters as input and emit Chinese characters as output, doing so in a manner that leads the human who is feeding in the input and reading the output to believe that the AI is a human being. In that sense, this AI passes the famous Turing Test.
The Turing Test is the notion that if you have a computer and a human, and another human asks questions of the two, then when the inquiring human cannot differentiate the computer from the human, the computer is considered to have passed the Turing Test. It therefore would seem that the computer is able to express intelligence as a human can.
Is the AI that’s inside that Chinese room able to “understand” in the same manner that we ascribe the notion of being able to “understand” things as people do?
You could ask that same question of the Turing Test, but the twist with the Chinese Room is the added element that I will describe next.
Suppose we put an actual human into this Chinese Room.
They do not understand a word of Chinese. We also give the human the same computer program that embodies the AI system. The human endeavors to do exactly what the program does, following each instruction explicitly, perhaps using paper and pencil. Notice that the AI is no longer doing the processing per se; instead, the human inside the Chinese Room is doing so, carefully following step-by-step whatever the AI would have done.
Presumably, the human inside the Chinese Room will once again be able to take in Chinese characters as input and emit Chinese characters as output, which we assume will occur by abiding strictly by the steps of the already-successful AI, and will convince the human outside the room that the room contains intelligence. Yet the human in the Chinese Room does not understand a word of Chinese, and has responded to a Chinese inquirer as though they did, a “trick” accomplished by “mindlessly” following the steps of the AI program.
It is claimed that this showcases that there was no real sense of “understanding” involved, neither by the AI nor by the human inside the Chinese Room.
The philosopher John Searle proposed the Chinese Room thought experiment in 1980, and ever since then there has been quite a response to it, including many arguments about alleged loopholes and fallacies in the thought experiment.
But what about the Scrabble game playing?
Do the AI programs Maven and Quackle embody a sense of “understanding” about the playing of Scrabble, akin to the “understanding” a human has as they play the game?
Most would agree that those AI programs do not have any “understanding” in them.
They are the same as the Chinese Room.
Role Of Machine Learning And Deep Learning
You might be wondering whether Machine Learning or Deep Learning could maybe rescue us in this situation.
Typically, a Machine Learning or Deep Learning approach involves the use of a large-scale artificial neural network.
In any case, the assumption and future hope is that if we can keep making computer-based artificial neural networks more and more akin to the human brain, possibly we will have human intelligence emerge in these artificial neural networks. Maybe it won’t happen all at once and instead appear in dribs and drabs. Maybe it won’t ever appear. Maybe there is a secret sauce of the operation of the brain that we’ll never be able to crack open. Who knows?
There haven’t been many attempts to play Scrabble via the use of an artificial neural network.
The more straight-ahead methods of using various AI search-space techniques and algorithms have been the predominant approach. It seems to make sense that you would use these more overt or symbolic approaches, doing a direct kind of programming to solve the problem, rather than using a neural network, which is more of a bottom-up approach than a top-down one.
If you ponder the difference between a game like chess and a game like Scrabble, you’d readily notice some key attributes that make them very different. In chess, all the playing pieces are known and placed on the board at the start of the game. In Scrabble, the letters are hidden in a bag and you are dealt a subset at a time; you therefore have imperfect information and are also dependent upon random chance during the game.
Collecting a massive number of chess games and feeding them as data into an artificial neural network is a relatively easy task. Doing the same for Scrabble games is not. Even if you do, pattern matching based on those games is going to be quite unlike the pattern matching of a chess game.
Here’s the rub.
If you believe that the use of Machine Learning or Deep Learning is our best shot at achieving human intelligence via AI, presumably we should be using Machine Learning or Deep Learning on trying to craft better and better Scrabble playing automation.
Here’s another thought to consider.
Are the Machine Learning and Deep Learning systems of today able to “understand” in the same manner that we assume that humans can “understand” things?
You’d be hard pressed to have any reasonable AI developer say yes.
AI Self-Driving Cars And Scrabble
What does this have to do with AI self-driving driverless autonomous cars?
At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One aspect that is not widely realized involves the lack of “understanding” that the AI of self-driving cars of today embody and whether that poses safety and risks that aren’t being well-discussed.
Returning to the topic at-hand, I’ve been discussing the nature of Scrabble and how humans and how AI systems embody or do not embody a sense of “understanding” in the meaning of what we believe humans can think about things.
When a human drives a car, do you believe that the human is employing “understanding” in some manner, such as understanding how a car operates, understanding how traffic flows and cars maneuver in traffic, and how humans drive cars, and how humans as pedestrians act when near cars, etc.?
If you say yes, this next question is then prompted by the Scrabble discussion and the Chinese Room discussion, namely, will the AI of self-driving cars need to also embody a similar sense of “understanding” in order to properly, safely, and appropriately be driving cars on our public roadways?
Yes or no?
I say that I caught you: if you say yes, and you are of the belief that the AI of self-driving cars needs to have a sense of “understanding” about driving as humans do, then note that right now the automakers and tech firms are not anywhere close to achieving “understanding” in these AI systems. Simply stated, the AI of today’s and even near-future self-driving cars does not embody “understanding” at all.
The AI of today’s and the near-future self-driving cars is akin to the Scrabble game AI.
By-and-large, most of the AI being used in an AI self-driving car is the programmatic type that uses various AI techniques and algorithms, but it is not what we would reasonably agree is any kind of “understanding” that is going on.
You might right away be claiming that since the AI of self-driving cars is often making use of Machine Learning and Deep Learning, it suggests that perhaps the AI is getting closer to having “understanding” in the manner that deep artificial neural networks might someday invoke.
Problematically, the neural networks of today are not far advanced toward what we all hope might someday happen with extremely large-scale neural networks more closely modeled on the human brain. Furthermore, the neural network aspects are currently just a small part of the AI stack for self-driving cars.
Deep Learning and Machine Learning are used primarily in the sensor portion of the AI systems for self-driving cars. This makes sense when you consider the duties of the AI subsystems involved in that portion of the driving task. The sensors collect a ton of data: images from the cameras, radar data, LIDAR data, ultrasonic data, and so on.
It is a ready-made situation to use Machine Learning or Deep Learning.
We can for example beforehand collect lots of images of street signs. Those can be used to train an artificial neural network. We can then put into the on-board self-driving car system the runnable neural network that will examine an image of a street scene and hopefully be able to detect where a street sign is, along with classifying what kind of street sign it found, such as a Stop sign or a Caution sign.
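Production systems use deep convolutional networks trained on large labeled image sets; the following toy nearest-neighbor sketch is merely a stand-in to illustrate the train-then-classify pattern. The feature vectors and labels here are entirely hypothetical (say, crude color and shape summaries of a sign image):

```python
import math

# Hypothetical "training data": feature vectors paired with sign labels.
# A real self-driving stack would instead train a deep neural network
# on many thousands of labeled camera images.
TRAINING_DATA = [
    ((0.9, 0.1, 0.8), "Stop"),         # reddish, octagonal
    ((0.8, 0.8, 0.2), "Caution"),      # yellowish, diamond
    ((0.1, 0.2, 0.9), "Speed Limit"),  # whitish, rectangular
]

def classify_sign(features):
    """Return the label of the nearest training example (1-nearest-neighbor)."""
    return min(TRAINING_DATA,
               key=lambda example: math.dist(example[0], features))[1]
```

Whatever the classifier, the deployed system is doing the same kind of thing: mapping pixels to a label it was trained to emit, which sets up the point that follows.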
Once Again Understanding Rears Its Head
The AI of the self-driving car does not “understand” the street signs, at least not in the manner that we might believe a human has such an understanding.
The street sign is merely an object, akin to the letter tiles on the Scrabble board, whose markings are just lines and curves.
As I’ve repeatedly stated in my writings and presentations, the AI of self-driving cars does not have any common-sense reasoning capability.
In essence, we are for now going to be foregoing having AI that has any semblance of human “understanding” and furthermore this applies to the AI of self-driving cars.
When I earlier stated that I caught you, my question had been purposely posed to ask whether you thought that AI self-driving cars must have some semblance of human “understanding” to be able to properly and appropriately drive a car on our roadways.
The catch was that if you say yes, well, there then shouldn’t be any AI self-driving cars on our roadways as yet. If you say no to that question, you are then expressing a willingness to have AI that is less-than whatever human “understanding” consists of, and you are suggesting that you are comfortable with that kind of AI being able to drive on our roadways.
This brings me back to an earlier point. Some AI developers seem to falsely believe that Scrabble has been “solved” as an AI problem. I presume you now know that though progress has been made, there is still a long way to go before we could declare that AI has conquered Scrabble. That some AI programs can best a human some of the time is not a suitable basis for planting a flag and saying the best has been done.
It would hopefully be apparent that I am aiming to say the same thing about the AI for self-driving cars.
We are inevitably going to end up with this version 1.0 of AI self-driving cars. Let’s assume and hope that they are able to drive on our roadways and do so safely (that’s a loaded word and one that can mean different things to different people!).
Will that mean that we’ve conquered the task of driving a car?
Some might want to say yes, but I beg to differ.
I’m betting that we are going to be able to greatly improve on that version 1.0, and reach a version 2.0, perhaps 3.0, and so on, each getting better and better at driving a car. This will include doing some of the things that human drivers do, while also avoiding some of the things that human drivers do but ought not to do when driving a car.
Congratulations to the non-French speaking winner of the French-based Scrabble tournament.
Just to say, I would offer the same congratulations if a non-English-speaking French player were able to win the English-language North American tournament.
Winning a Scrabble competition at the topmost level is a feat of incredible strategy and thinking.
I have used the Scrabble aspects as a means to draw your attention to the nature of “understanding” in the matter of human thinking. Per the Chinese Room, today’s AI appears to remain a great distance from reaching any kind of “understanding” that we might agree exists in humans. Whether you like the Chinese Room exemplar or not, it provides another means to bring up the importance of thinking about thinking and trying to figure out what “understanding” actually entails.
AI self-driving cars are coming along, regardless of AI not yet having cracked the secrets of achieving the “understanding” that humans have. We are presumably going to accept the notion that AI systems, minus “understanding,” will be driving cars around on our public roadways.
Can those presumed non-understanding AI systems be proficient enough to warrant driving multi-ton cars that will be making human-related life-and-death decisions at every moment as they zip along our streets and highways?
Time will tell.
Meanwhile, if we do get there, don’t fall into the mental trap of thinking that the matter has been solved and that there is no further AI left to be attained. I assure you, there will be plenty of AI roadway left to be driven and plenty of opportunity for AI developers and researchers. Hey, “opportunity” is an 11-letter word; I wonder if it will fit during my next Scrabble game.
For free podcast of this story, visit: http://ai-selfdriving-cars.libsyn.com/website
The podcasts are also available on Spotify, iTunes, iHeartRadio, etc.
More info about AI self-driving cars, see: www.ai-selfdriving-cars.guru
To follow Lance Eliot on Twitter: https://twitter.com/@LanceEliot
For his Forbes.com blog, see: https://forbes.com/sites/lanceeliot/
For his AI Trends blog, see: www.aitrends.com/ai-insider/
For his Medium blog, see: https://email@example.com
For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot
Copyright © 2020 Dr. Lance B. Eliot