AI & Law: Autonomous Levels of AI Legal Reasoning

Introducing the seven autonomous levels of AI legal reasoning

by Dr. Lance B. Eliot

For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends about AI & Law, visit our website www.ai-law.legal

Key briefing points about this article:

  • A misleading media portrayal about AI is that all AI is presumptively the same (it is not)
  • There are various Levels of Autonomy (LoA) associated with AI, reflecting increasing capabilities
  • A new approach to LoA for AI-based Legal Reasoning systems has been gaining attention
  • AI Legal Reasoning (AILR) is ranked into seven distinct categories of autonomous capability
  • This provides a handy means to compare AI & Law products and LegalTech offerings

Introduction

AI is increasingly being applied to the field of law and the legal industry, with many LegalTech providers augmenting their wares via advances in Machine Learning (ML) and Natural Language Processing (NLP).

Unfortunately, it is difficult for legal professionals to readily discern what value these AI improvements add. Vendors are apt either to overstate the capabilities of their AI features or to proffer outsized marketing claims suggesting the technology has miraculously become a vaunted “robo-lawyer” (implying a semblance of sentience or superhuman ability).

One principal reason for this confounding state of affairs is the lack of a convenient, readily usable way to measure or gauge the degree of AI being infused into these emerging state-of-the-art LegalTech offerings.

As the famous management guru Dr. Peter Drucker astutely opined, you cannot manage what you cannot or do not measure.

In short, there is no existing standard that depicts the range of levels at which AI can be applied to the law, leaving a gap in how computer-based systems within the legal industry are sized up. It would be helpful and prudent for stakeholders in the legal realm to establish a reasoned set of measures for conveying the extent to which AI is being employed.

Analogous Case Of Self-Driving Cars

Shifting gears, we can consider how other domains have dealt with a similar conundrum.

Consider the case of AI-enabled Autonomous Vehicles (AV).

There is a widely known worldwide standard for self-driving cars that has been successfully promulgated by the globally recognized SAE standards group. You might be vaguely aware of this standard and its various stated levels of autonomy.

For example, existing Teslas using the Autopilot software are considered semi-autonomous and rated as Level 2 on the official SAE scale, which ranges from zero to five. At Level 2, a human driver must be present and attentive to the driving task at all times. The AI is considered an Advanced Driver-Assistance System (ADAS), merely assisting the human, and is decidedly not operating at a truly autonomous level of functionality.

Waymo is known for its pursuit of Level 4 self-driving cars. Per the SAE standard, Level 4 is rated as an autonomous vehicle, meaning there is no need for a human driver and no expectation that a human is at the wheel. Level 4 is somewhat restrictive in that the AI driving system is permitted to drive only in particular settings, such as exclusively in sunny weather or within a defined, geofenced area.

The topmost level of autonomy is stipulated as Level 5, which exceeds the capabilities of Level 4 by requiring that the AI be able to drive in essentially any circumstance in which a human driver could drive a car, thus removing the restrictions associated with Level 4.

The advantage of such a taxonomy is that it is relatively easy to compare various brands of self-driving cars, merely by indicating the level that each has attained. Furthermore, it provides a succinct way to communicate about autonomy: simply state the level number achieved.

That’s what is likewise needed in the arena of applying AI to the law.

Another famous idiom pertains here: avoid reinventing the wheel and instead reuse what already works well elsewhere. Let’s marry the facets of AI autonomy used in self-driving cars to the need for a set of autonomy levels in the field of AI as applied to law, mindfully adjusting and transforming those autonomy principles so that they directly fit the AI and law domain.

For my in-depth research paper on this topic, see the paper entitled “An Ontological AI-And-Law Framework For The Autonomous Levels Of AI Legal Reasoning” at this link: https://orcid.org/0000-0003-3081-1819

The Seven Levels Of AI Legal Reasoning

Based on the latest research, there are seven levels of autonomy that can be used to appropriately designate the application of automation and AI as it applies to legal discourse and reasoning:

  • Level 0: No legal automation
  • Level 1: Simple automation for legal assistance
  • Level 2: Advanced automation for legal assistance
  • Level 3: Semi-autonomous automation for legal assistance
  • Level 4: Domain autonomous legal advisor
  • Level 5: Fully autonomous legal advisor
  • Level 6: Superhuman autonomous legal advisor

Briefly, the levels range from 0 to 6, with the least amount of legal AI automation designated as zero and the topmost as six. This numbering is easy to remember and provides a comprehensive range encompassing the full span of AI capability.

Most of today’s LegalTech would rate as Level 1, and those that have extended their software with some amount of bona fide AI add-ons might rate as Level 2.

Level 3 consists of semi-autonomous automation that provides legal assistance, today found mainly in pilot or experimental efforts; it sits on the cusp of genuinely autonomous territory, which begins at Level 4. At Level 4, the AI’s autonomy is restricted to a particular subdomain of law, while at Level 5 the AI is considered fully capable across all avenues of the law.

There is also a futuristic Level 6, covering the possibility, presumably remote, that the AI could someday eclipse human intelligence and become so-called superhuman. In contrast, the SAE self-driving car standard tops out at human driving capability and does not contemplate that cars might one day be driven by AI in a manner beyond that of human drivers (a matter that some critics say should be added to the SAE standard).
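
To make the comparison idea concrete, here is a minimal sketch in Python that encodes the seven levels as an enumeration and ranks a couple of LegalTech products by their rated level. This is purely illustrative and not part of the framework or any standard; the product names and the compare_products helper are assumptions invented for this example.

from enum import IntEnum

class AILRLevel(IntEnum):
    # Autonomous Levels of AI Legal Reasoning (AILR), per the 0-to-6 scale above.
    NO_AUTOMATION = 0          # Level 0: No legal automation
    SIMPLE_ASSISTANCE = 1      # Level 1: Simple automation for legal assistance
    ADVANCED_ASSISTANCE = 2    # Level 2: Advanced automation for legal assistance
    SEMI_AUTONOMOUS = 3        # Level 3: Semi-autonomous automation for legal assistance
    DOMAIN_AUTONOMOUS = 4      # Level 4: Domain autonomous legal advisor
    FULLY_AUTONOMOUS = 5       # Level 5: Fully autonomous legal advisor
    SUPERHUMAN = 6             # Level 6: Superhuman autonomous legal advisor

# Hypothetical ratings for two made-up LegalTech products (illustrative only).
ratings = {
    "ContractSearchTool": AILRLevel.SIMPLE_ASSISTANCE,
    "MLClauseAnalyzer": AILRLevel.ADVANCED_ASSISTANCE,
}

def compare_products(ratings):
    # Print products from highest to lowest rated level of autonomy.
    for name, level in sorted(ratings.items(), key=lambda item: item[1], reverse=True):
        print(f"{name}: Level {int(level)} ({level.name})")

compare_products(ratings)
# MLClauseAnalyzer: Level 2 (ADVANCED_ASSISTANCE)
# ContractSearchTool: Level 1 (SIMPLE_ASSISTANCE)

Using an integer-valued enumeration makes the levels directly comparable and sortable, mirroring how the SAE scale lets one say at a glance that a Level 4 system exceeds a Level 2 system.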

Conclusion

Overall, the notion presented here is that by thoughtfully applying an already accepted convention of AI autonomy, yet judiciously transforming it to the needs of AI as applied to law, we can reduce the exaggerated claims and other unruly chatter about AI in the legal industry.

For the latest trends about AI & Law, visit our website www.ai-law.legal

And to follow Dr. Lance Eliot (@LanceEliot) on Twitter use: https://twitter.com/LanceEliot

Copyright © 2020 Dr. Lance Eliot. All Rights Reserved.