AI & Law: Robo-Lawyer Hyperbole Unmasked

Robo-lawyer phrasing is misleading and needs to be replaced

by Dr. Lance B. Eliot

For a free podcast of this article, visit this link or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends in AI & Law, visit our website

Key briefing points about this article:

  • The “robo-lawyer” phrase is pervasive and a stand-in for robot-like AI Legal Reasoning
  • Unfortunately, the phraseology tends to be mainly hyperbole and overinflates expectations
  • It will be hard to mitigate the catchphrase since it has innate stickiness and a flavorful allure
  • One proposed alternative would be to instead refer to levels of autonomy of AI in the law
  • Akin to levels associated with self-driving cars, the new phrasing is catchy too and more apt


A particular phrase can become pervasive and acquire an abundance of stickiness despite having little practical merit in everyday terms. If the phraseology happens to be catchy, it will tend to catch on.

Furthermore, when a vacuum exists such that there is nothing else as alluring to fill the void, you can bet your bottom dollar that the otherwise questionable catchphrase will prevail. The odds are that it will take on a life of its own, being bandied about and used time and time again, gaining added traction as it merrily rolls along.

Want a tangible example of this somewhat abstract precept?

A prime example is the nebulous and now disconcertingly infamous reference to the “robo-lawyer,” also stated less poetically as the robot lawyer.

A glance online turns up voluminous articles and postings that relish using the robo-lawyer moniker as a headline-grabbing stunt. Admittedly, the usage tends to work and gets the attention of readers. In that sense, it is an effective trigger or communicative signal.

Plus, one might argue that there is nothing else that so succinctly conveys the same notion, namely the idea that Artificial Intelligence (AI) is gradually entering into the legal domain and increasingly becoming part-and-parcel of the latest LegalTech offerings.

Unfortunately, the robo-lawyer notion carries a lot of baggage with it.

The innate implication of the vaunted phrase is that there exist instances of AI that are sentient or that have reached a level of capability equal to human intelligence.

Lawyers and other legal professionals become instinctively worried that they are on the verge of being replaced by such AI.

The public-at-large is led to believe that a legal chatbot can provide legal advice comparable to that of a human lawyer.

And so on.

Let’s be clear about this: no AI today can analyze the law and proffer legal advice in anything akin to the manner of a flesh-and-blood attorney. Sure, AI can demonstrably aid a lawyer by piecing together contract passages when drafting a new contract, or by conducting a quick search across a corpus of court cases to find those salient to a new matter being pursued, but none of this is the same as exhibiting the robust cognitive prowess embodied in human lawyering expertise.

Over time, AI is getting better, improved by advances in Natural Language Processing (NLP), Machine Learning (ML), Knowledge-Based Systems (KBS), and so on, though sentience remains a farfetched dream, as does an AI-based robot lawyer that autonomously dispenses bona fide legal advice.

For now, we need to keep a balanced perspective: AI does have merit and will additively boost those who practice law, yet AI has a long way to go before it gains substantive autonomy.

In the meantime, what shall we do about the handy robo-lawyer naming that regrettably overstates and misleads in both tone and substance?

A sounder approach is to refer to levels of autonomy, specifically geared toward the field of AI and law.

Levels of Autonomy for AI Legal Reasoning

Based on my research, there are seven levels of autonomy that can be used to appropriately designate the application of automation and AI as it applies to legal discourse and reasoning:

Level 0: No legal automation

Level 1: Simple Automation for legal assistance

Level 2: Advanced Automation for legal assistance

Level 3: Semi-Autonomous Automation for legal assistance

Level 4: Domain Autonomous legal advisor

Level 5: Fully Autonomous legal advisor

Level 6: Superhuman Autonomous legal advisor
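To make the taxonomy concrete, the seven levels can be sketched as a simple enumeration. This is an illustrative encoding only; the class and member names below are my own hypothetical labels, and the assistance-versus-advisor split merely mirrors the distinction drawn in the level descriptions above.

```python
from enum import IntEnum

class LegalAutonomyLevel(IntEnum):
    """Hypothetical encoding of the seven levels of autonomy for AI Legal Reasoning."""
    NO_AUTOMATION = 0          # Level 0: No legal automation
    SIMPLE_AUTOMATION = 1      # Level 1: Simple Automation for legal assistance
    ADVANCED_AUTOMATION = 2    # Level 2: Advanced Automation for legal assistance
    SEMI_AUTONOMOUS = 3        # Level 3: Semi-Autonomous Automation for legal assistance
    DOMAIN_AUTONOMOUS = 4      # Level 4: Domain Autonomous legal advisor
    FULLY_AUTONOMOUS = 5       # Level 5: Fully Autonomous legal advisor
    SUPERHUMAN_AUTONOMOUS = 6  # Level 6: Superhuman Autonomous legal advisor

    def is_assistance(self) -> bool:
        # Levels 1-3 assist a human lawyer; Levels 4-6 act as an autonomous advisor.
        return (LegalAutonomyLevel.SIMPLE_AUTOMATION
                <= self
                <= LegalAutonomyLevel.SEMI_AUTONOMOUS)

# An NLP query feature in an e-discovery package, for example, would plausibly
# be classified as Level 2 — assistance, not an autonomous legal advisor.
nlp_feature = LegalAutonomyLevel.ADVANCED_AUTOMATION
```

Having the levels as ordered integers means a claimed capability can be compared directly against a threshold, which is exactly the kind of quick, unambiguous check the level-based phrasing is meant to enable.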

Those of you familiar with self-driving cars are likely aware that a worldwide standard exists for the levels of autonomy of cars, trucks, and other ground-based vehicles; the convention proposed here uses a similar format to depict the various levels of autonomy.

Having such a systematic means of referring to self-driving is handy since it readily denotes whether a given AI-based driving system is merely aiding a human driver or intended to replace one. It also allows for catching those who seem to exaggerate or stretch what they offer as an AI driver, simply by asking which level of autonomy their vehicle has actually achieved.

We can do the same in the field of law.

For example, a new feature added into an e-discovery software package might make use of NLP, allowing for queries to be made by using everyday conversational language rather than having to use an arcane and highly tech-oriented set of commands.

Can this added NLP capability be unabashedly touted as a robo-lawyer?

A vendor might certainly hope so, and since there is no particular restriction or global definition regarding the meaning of robo-lawyer, this kind of grandstanding can easily be undertaken.

If the NLP were compared against the levels of AI and law autonomy, presumably it would fall into the Level 2 category, in which case we would all immediately know that the NLP has not yet leaped into the autonomous realm.

After a while, once the levels become familiar and commonly referenced, the succinctness of referring to level numbers would be quickly recognized, just as is the case in the self-driving arena. Existing Teslas using Autopilot are known to be at Level 2, while Waymo and its self-driving public tryouts consist of Level 4 vehicles. Autopilot is still at the driver-assistance stage, while Waymo is piloting at the autonomous, no-driver-needed levels.

At first, it will be hard to promulgate the autonomous levels of AI and law and relinquish the flashier robo-lawyer convention. Gradually and inexorably, it will become apparent that all parties should have an alternative means to realistically establish where AI stands in terms of performing legal reasoning and do so with the quickness of stating what level of autonomy a LegalTech system or piece of software has attained.


It is time to drop the robo-lawyer hyperbolic references and embrace a prudent and reasonably crisp set of levels for fairly stating what AI is legitimately contributing to the field of law.

Admittedly, doing so has quite an uphill battle since there is nothing seemingly more evocative of AI & Law than making a toss-away reference to a robo-lawyer. Well, robo-judge comes pretty close too.



Copyright © 2020 Dr. Lance Eliot. All Rights Reserved.

Dr. Lance B. Eliot is a renowned global expert on AI, a Stanford Fellow at Stanford University, a former professor at USC, the former head of an AI lab, and a former top executive at a major VC firm.
