AI & Law: Explainable AI (XAI)

Explaining AI explainability amidst a legal context

by Dr. Lance B. Eliot

For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, plus on other audio services. For the latest trends about AI & Law, visit our website www.ai-law.legal

Key briefing points about this article:

  • Explanations are an integral part of our daily lives
  • We are expected to provide explanations, and we expect to receive explanations
  • Proffering explanations is a lot harder than it seems
  • AI systems are increasingly being tasked to provide explanations, known as XAI
  • There are several crucial dimensions of the law pertaining to the XAI matter

Introduction

Time to dive into the pool and see what is going on with the nature of explanations, including explaining explanations and ensuring that explanations are genuinely explanatory. That might seem like a somewhat flippant remark, but a serious uproar is emerging about the importance of explanations.

Notably, the legal field is smackdab in the center of it all.

Let’s back up for a moment and start the tale at its proper beginning.

In daily life, we take for granted that explanations are all around us. You take your car to a local repair shop and the automotive mechanic might explain why your car has gone kaput. Later that day, you visit with your physician and get an explanation for why your shoulder has been hurting lately. Upon getting home for the evening, your child explains why they did poorly on their latest math test at school.

Explanations galore!

In the legal field, explanations are abundantly infused into just about everything encompassing the practice of law. Your client explains why they did what they did. Score a point for hearing an explanation. You have to craft a legal case that proffers explanations of your legal position on the matters at hand. Another point scored, this time for writing an explanation. A judge hands down a court decision and includes an explanation articulating the legal reasoning used. That’s a scored point on your part for reading a likely in-depth and legally complex explanation.

The everyday use of explanations is generally taken for granted. Sometimes an explanation is freely provided by someone, while in other instances you have to tease an explanation out of a person. For example, a physician might simply tell you to go home and rest, without explaining why resting matters for your hurting shoulder. You might then have to ask the doctor to explain that particular piece of advice.

Overall, we can easily open up Pandora’s box on this topic by taking a moment to contemplate the myriad nuances associated with explanations. Suppose the physician goes into tremendous detail covering the technical facets of human anatomy when explaining the issues associated with your shoulder. If you don’t have any prior medical training, the explanation is bound to go over your head. On the other hand, imagine that the doctor simply says “trust me” and won’t provide any semblance of an explanation. That’s another sour note, illustrating how explanations are oftentimes badly framed or inadequately conveyed.

XAI And The Law

The legal profession has its own difficulties associated with explanations.

Laypeople who are unfamiliar with the law and its arcane legal matters are apt to be confused by densely crafted explanations laden with legal jargon. That can be problematic.

When standing in the courtroom and asked to provide an explanation to a judge, an attorney has to tune that explanation to the circumstances of the case and hone it to the style and preferences of the judge. This can make or break a case.

The overarching point is that pinning down the particulars of what constitutes an explanation, how best to compose one, the right ways to express one, and so on, is an exceedingly slippery and amorphous endeavor.

The reason this has recently come to the forefront is the advent of AI.

Suppose that you visited your physician, and it was an AI system rather than a human being. Would you expect the AI to be able to explain why, after assessing your shoulder, it recommended that you get some rest? It seems you would decidedly want an explanation.

Explainable AI now has its own catchy acronym, XAI. Some assert that all AI will inexorably have to incorporate XAI and be programmed to explain itself, in all circumstances and in varied modes that fit the situation and the needs of the human seeking an explanation.

This brings us to the legal side of things.

There is the potential for legal mandates requiring that AI provide explanations, such as via consumer protection laws. Liability can also arise for those fielding AI systems that fail to provide explanations. That is one dimension of the legal XAI consideration.

Another dimension is found within the field of law itself. AI that is crafted specifically for legal reasoning purposes is likely to be expected to provide suitable explanatory expositions. The use of AI capabilities including Machine Learning (ML) and Deep Learning will profoundly impact the law and become a vital tool for attorneys, judges, and the entire judicial process. For details on this and other AI and law topics, see my book entitled “AI and Legal Reasoning Essentials” at this link here: https://www.amazon.com/Legal-Reasoning-Essentials-Artificial-Intelligence/dp/1734601655

As emphasized in a research paper about XAI and the law, researchers Katie Atkinson, Trevor Bench-Capon, and Danushka Bollegala assert that legal cases imbue a right-to-explanation: “In a legal dispute there will be two parties and one will win and one will lose. If justice is to be served, the losers have a right to an explanation of why their case was unsuccessful. Given such an explanation, the losers may be satisfied and accept the decision or may consider if there are grounds to appeal. Justice must not only be done, but must be seen to be done, and, without an explanation, the required transparency is missing. Therefore, an explanation is essential for any legal application that is to be used in a practical setting” (as expounded in their paper entitled “Explanation in AI and Law: Past, Present, and Future” published in the journal Artificial Intelligence).

In the early days of combining AI and the law, AI-augmented LegalTech systems used rule-based or expert systems technologies, which provided an intrinsic means of automatically generating an explanation. Likewise, pioneering proof-of-concept systems in the field of AI and law, such as TAXMAN, HYPO, and CATO, made use of case-based reasoning and readily enabled the generation of explanations.
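
To make that intrinsic quality concrete, here is a minimal sketch of a rule-based inference loop that records which rules fired and why, yielding an explanation essentially for free. The rule names and legal facts are purely hypothetical and are not drawn from TAXMAN, HYPO, CATO, or any actual system.

    # Minimal sketch of how a rule-based system yields an explanation "for free":
    # the explanation is simply the trace of which rules fired and on what grounds.
    # Hypothetical rules and facts for illustration only.

    rules = [
        ("R1", {"signed_contract", "consideration_given"}, "contract_formed"),
        ("R2", {"contract_formed", "obligation_unmet"}, "breach_of_contract"),
    ]

    def infer_with_explanation(initial_facts):
        facts = set(initial_facts)
        trace = []
        changed = True
        while changed:
            changed = False
            for name, conditions, conclusion in rules:
                # Fire a rule when all of its conditions are established facts.
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append(f"{name}: {conclusion} because {', '.join(sorted(conditions))}")
                    changed = True
        return facts, trace

    facts, trace = infer_with_explanation(
        {"signed_contract", "consideration_given", "obligation_unmet"}
    )
    print("\n".join(trace))
    # R1: contract_formed because consideration_given, signed_contract
    # R2: breach_of_contract because contract_formed, obligation_unmet

The explanation falls directly out of the reasoning process itself, which is precisely why such systems were so amenable to explanatory output.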

Today’s Machine Learning systems are less amenable to producing explanations, partly because of the inherently complicated computational pattern-matching models and the arcane mathematics involved. Should those ML systems be let off the hook when it comes to providing explanations?
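
As a hedged illustration of why this is harder, here is a small sketch, using scikit-learn and entirely made-up data and feature names, of the kind of post-hoc clue often offered in lieu of a genuine explanation for an opaque Machine Learning classifier: global feature importances that hint at what the model relies upon but do not articulate the reasoning behind any individual prediction.

    # Sketch of a post-hoc "explanation" for an opaque ML classifier (illustrative only).
    # The feature names and data are invented; no real legal dataset or model is implied.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)
    feature_names = ["precedent_similarity", "claim_amount", "filing_delay_days"]
    X = rng.random((200, 3))
    y = (X[:, 0] > 0.5).astype(int)  # synthetic outcome driven mostly by the first feature

    model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

    # Global feature importances: a rough clue about what the model relies on,
    # not a reasoned justification of any single decision.
    for name, importance in zip(feature_names, model.feature_importances_):
        print(f"{name}: {importance:.2f}")

Such importance scores are a far cry from the kind of articulated legal reasoning that a judge, attorney, or litigant would regard as an explanation.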

Researchers Bernhard Waltl of the Technical University of Munich and Roland Vogl, Executive Director of the Stanford Center for Legal Informatics, make a compelling case that omitting XAI, or trying to cobble a supplemental XAI onto an Algorithmic Decision Making (ADM) system after the fact, is not going to cut the mustard; in their view, XAI must be intrinsically included: “In our framework, explainability is not treated as an optional characteristic or additional external property that can be “plugged-in” to machine learning classifiers like adding a new component or application to a set of given functions and capabilities. Instead, it is an intrinsic property of every ADM that can be improved and enhanced just as other (performance) attributes can be improved” (as identified in their research paper entitled “Explainable Artificial Intelligence: The New Frontier in Legal Informatics”).
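
One hedged way to picture that intrinsic stance, under the assumption that a simpler model can suffice for the task, is to choose a classifier whose structure is itself readable, such as a shallow decision tree whose learned rules can be printed directly rather than bolting on an explainer afterward. The data and feature names below are again invented purely for illustration.

    # Sketch of an intrinsically explainable model: a shallow decision tree whose
    # learned decision logic is human-readable by construction.
    # The features and data are invented for illustration; no real legal dataset is implied.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    feature_names = ["precedent_similarity", "claim_amount", "filing_delay_days"]
    X = rng.random((200, 3))
    y = (X[:, 0] > 0.5).astype(int)  # synthetic outcome driven mostly by the first feature

    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

    # Print the learned rules directly; the model itself is the explanation.
    print(export_text(tree, feature_names=feature_names))

Whether such inherently interpretable models can match the predictive prowess of their opaque counterparts is a separate debate, but the sketch conveys what treating explainability as an intrinsic property looks like in practice.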

Conclusion

I could go on and on, nearly all day long, discussing and debating the topic of explanations and AI, but then again, just as you might seek a shorter, abridged explanation when your shoulder has only mild aches and pains, perhaps we’ll end this explanation about explanations at this juncture.

Just know that there is a lot more explaining to do about explanations, and you can readily dive into the topic to your heart’s content.

For the latest trends about AI & Law, visit our website www.ai-law.legal

And to follow Dr. Lance Eliot (@LanceEliot) on Twitter use: https://twitter.com/LanceEliot

Copyright © 2021 Dr. Lance Eliot. All Rights Reserved.
