AI & Law: The Turing Test

AI and the law could make use of the famous (infamous) Turing Test

by Dr. Lance B. Eliot

For a free podcast of this article, visit this link or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, plus on other audio services. For the latest trends about AI & Law, visit our website

Key briefing points about this article:

  • There isn’t any AI today that is sentient, nor any that reaches the level of full human intelligence
  • Nonetheless, there might someday be such vaunted AI and we’ll need to know that it exists
  • Within the AI field, there is a type of famous test for this known as the Turing Test
  • It makes sense to recast the Turing Test into the field of law as it relates to AI and the law
  • Once there is AI-enabled autonomous legal reasoning we’ll want to validate its capabilities


Did a human compose this sentence or did a computer do so?

Nowadays, it can be hard to readily discern whether something that you see or hear might be generated via a computer versus by the direct hand of a human. That being said, computers that embody the latest in Artificial Intelligence (AI) are still not sentient, not even close, and do not be fooled or misled otherwise. Someday, presumably, AI will indeed be sentient or will at least achieve a semblance of human intelligence, though one of the biggest questions will be the simplest of them all, namely, how will we know when such AI has been produced?

Your first thought might be that it seems blatantly obvious that we’ll undoubtedly and instantly recognize when AI has achieved human levels of intelligence. Along those lines, perhaps you are contemplating the notion that we’ll know it when we witness it.

And with that helpful preamble, let’s unpack the matter and take a closer look to try and figure out this conundrum.

When Is The There There

They say that beauty is in the eye of the beholder.

That might be true, but it is confoundingly problematic if you were to try and use that definition in a legal matter. In a sense, beauty is relatively amorphous, semantically indeterminate, and imbued with individualistic arbitrariness.

What else has those same qualms?

You might recall the famous utterance in 1964 by Supreme Court Justice Potter Stewart in the Jacobellis v. Ohio case on obscenity, in which he indicated this about how to discern that which is hard-core pornography: “I shall not today attempt further to define the kinds of material I understand to be embraced within that shorthand description, and perhaps I could never succeed in intelligibly doing so. But I know it when I see it, and the motion picture involved in this case is not that.”

This became known as the “I know it when I see it” test.

One could reasonably assert that the now-legendary phrase of “I know it when I see it” is in the same camp as the “eye of the beholder” refrain, once again being susceptible to individual arbitrariness and incurring other challenging woes as a means to definitively ascertain such matters.

Let’s toss in an additional popular phrase into this word-salad mix.

You might be unfamiliar with the poetry of James Whitcomb Riley from the late 1800s, but you surely know the variants of his especially memorable poetic line: “When I see a bird that walks like a duck and swims like a duck and quacks like a duck, I call that bird a duck.” Today’s equivalent catchphrase is typically along the lines of if it walks like a duck and quacks like a duck, then it most probably is a duck.

Okay, so what does all of this add up to and why does any of this make a difference?

Here’s why this is a weighty and quite crucial matter: It has to do with Artificial Intelligence (AI) and the law.

First, let’s start with the AI part of that equation and then bring back into view the law part of it.

A longstanding and vexing issue in the realm of AI has been the question of when we will know that we have successfully arrived at the sought-for destination. In this case, the destination presumably consists of being able to craft a machine or computer-based system that can fully and truly embody or exhibit human intelligence.

Ponder this puzzling aspect for a moment and try to identify what techniques or methods you would use to ascertain that a computer with AI running on it has become equal to human intelligence (or, possibly even surpassed human intelligence, progressing beyond the mortal sphere and entering into the uncharted waters of superhuman intelligence).

Would you give the AI a copy of the SAT or ACT college entrance exam and use that as your indicator of it having reached the instantiation of human intelligence, presumably asking the AI to try and answer all of the posed test questions and then see what it says?

This might be an interesting way to get things started on the path toward assessing human intelligence embodiment, but I believe we would all likely agree that it is not especially satisfying and certainly not a complete form of testing.

Indeed, suppose that I cleverly decided to craft an AI system that was exclusively focused on passing the SAT or ACT tests, using various narrow-AI technologies, and pretty much programmed the computer to specifically be adept at those particular tests. If you then opted to have the AI take the SAT or ACT tests, and it passed with flying colors, would it be fair and reasonable to declare that AI has fully been achieved? I dare say, no, it would not be, and in a manner of speaking, the AI-existence test that was being used was faulty because it could too easily be fooled or otherwise be passed without striking at the heart of what AI is supposed to be.

Time to tie this all together into a tidy bow, using the opening remarks about commonplace word-salads.

You might argue that you’ll know it when you see it, in terms of whether a computer has fully become AI-powered. Furthermore, if the computer can talk like a human, act like a human, and seem to think like a human, by gosh it most probably is true AI (this is a variant of the duck test, one might so suggest). Unfortunately, these clichés merely get us mired in the same muck of individual arbitrariness and do little to overcome those misfortunes.

There is a way to shift those sentiments into something that does have sharper teeth.

It turns out that insiders within the AI field have already come up with a type of test, or more akin to a structure or template of a test, known as the Turing Test.

Unfolding The Turing Test

Created by and named for the esteemed mathematician Alan Turing, the test was originally proposed in 1950 as a means of testing AI that Turing described as an imitation game, and it has stood the test of time, as it were, remaining highly regarded and cited in today’s modern times.

The Turing Test or imitation game that he devised consists of a person who takes on the role of conducting an interrogation, asking questions of two subjects or participants, one being a human and the other being an AI system (neither of the two is directly seen by the interrogator). Imagine that the two subjects are hidden behind a curtain on a stage and that the interrogator can only interact indirectly via speaking or writing a message to them but cannot see them directly. This hiding of the subjects forestalls what otherwise would be a rather perfunctory exercise of merely looking at the participants and visually ascertaining which is the human and which is the machine (assuming that the machine is not a robot fashioned to look identically like a human).

The interrogator does not know beforehand which of the two is the human nor which of the two is the AI. For sake of convenience, label one of them as X and the other as Y. The interrogator asks questions or makes queries of the X and Y, and at some point, ascertains that the effort should be concluded. Upon so ending the effort, the interrogator is then to state whether X is the human or whether Y is the human, which alternatively could be stated by indicating whether X is the AI or whether Y is the AI.

The aim of the Turing Test is that if the interrogator is unable to differentiate between the two subjects, presumably the AI is thusly indistinguishable from the human, in terms of thinking, and thus we can conclude that the AI has achieved the equivalence of human intelligence. This greatly simplifies the seemingly intractable problem of trying to define what human intelligence consists of. If the AI can demonstrate intelligence to the same degree as a human, it can be said to be a thinking machine and have reached the aspirations of AI.
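The protocol just described can be illustrated with a small sketch. To be clear, this is merely a hypothetical simulation for illustration purposes; the `human_respond`, `ai_respond`, and `naive_interrogator` functions are stand-ins of my own devising, not part of any established implementation. The key structural points are that the subjects are hidden behind the labels X and Y, the interrogator sees only their textual answers, and the AI "passes" a round whenever the interrogator misidentifies the human:

```python
import random

def run_imitation_game(human_respond, ai_respond, interrogate):
    """Run one round of the imitation game; return True if the
    interrogator correctly identified the human."""
    # Randomly hide the human and the AI behind the labels X and Y.
    subjects = {"X": human_respond, "Y": ai_respond}
    if random.random() < 0.5:
        subjects = {"X": ai_respond, "Y": human_respond}

    # The interrogator may only pose questions and read textual answers,
    # then must guess which label ("X" or "Y") is the human.
    guess = interrogate(lambda q: subjects["X"](q),
                        lambda q: subjects["Y"](q))

    actual_human = "X" if subjects["X"] is human_respond else "Y"
    return guess == actual_human

# Hypothetical stand-ins purely for illustration.
def human_respond(question):
    return "I had coffee this morning and it was too bitter."

def ai_respond(question):
    # An AI indistinguishable from the human gives equally human-like answers.
    return "I had coffee this morning and it was too bitter."

def naive_interrogator(ask_x, ask_y):
    ask_x("What did you have for breakfast?")
    ask_y("What did you have for breakfast?")
    # The answers are identical, so the interrogator can only guess.
    return random.choice(["X", "Y"])

if __name__ == "__main__":
    # Over many rounds, an indistinguishable AI drives the interrogator's
    # accuracy toward 50%, i.e., no better than chance.
    rounds = 1000
    correct = sum(run_imitation_game(human_respond, ai_respond,
                                     naive_interrogator)
                  for _ in range(rounds))
    print(f"Interrogator identified the human in {correct}/{rounds} rounds")
```

The point of the sketch is the success criterion: when the identification rate hovers around chance, the AI is operationally indistinguishable from the human, which is precisely the bar the Turing Test sets.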

At a glance, this likely seems a handy way to solve this problem of how to ascertain the achievement of AI. Please be aware though that there are various criticisms about the Turing Test, encompassing numerous limitations or weaknesses about it, and there is a slew of proffered suggestions about how to bolster or strengthen it.

In any case, consider how this then applies to AI and the law.

There are ongoing efforts to craft AI that can perform legal reasoning. It is believed that gradually, inexorably, these AI legal-beagle systems will be able to fully conduct legal reasoning and proffer legal advice, on par with that of human attorneys.

How will we know that the AI legal reasoning capabilities are sufficiently capable to practice law?

Aha, in many ways, this is the “I’ll know it when I see it” potential malady, and, as such, perhaps the Turing Test could be reapplied to the context of the legal realm, entailing the use of the Turing Test to ascertain the legal proficiency and legal acumen of an AI system.

For more details on this provocative and controversial topic, see my research paper entitled “Turing Test and the Practice of Law: The Role of Autonomous Levels of AI Legal Reasoning.”


As additional food for thought, and one that might make some a bit crazed, if we are presumably going to eventually have AI that embodies human intelligence, we might readily assume that this AI could also become an attorney or lawyer, doing so in the same manner that a human with human intelligence can. The AI would study the law, perhaps be subject to taking a bar exam, and then upon doing so be considered the equivalent of a licensed attorney.

Not everyone is keen on the idea; some insist it won’t ever be allowed to happen, whilst others sleep soundly at night by assuming that no AI will ever be able to cross that lofty threshold.

Additional writings by Dr. Lance Eliot:

And to follow Dr. Lance Eliot (@LanceEliot) on Twitter use:

Copyright © 2020 Dr. Lance Eliot. All Rights Reserved.

Dr. Lance B. Eliot is a renowned global expert on AI, Stanford Fellow at Stanford University, was a professor at USC, headed an AI Lab, top exec at a major VC.
