AI & Law: Explainable AI Needs To Have Legal Meaningfulness

Aug 7, 2022
XAI (Explainable AI) is said to need legal meaningfulness

by Dr. Lance B. Eliot

For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends in AI & Law, visit our website www.ai-law.legal

Abstract: There is a pell-mell rush to build and release AI systems in all areas of commerce. Many such AI systems embed “algorithmic decision making” (ADM) capabilities, at times rendering quite consequential “decisions” that impact those using the AI. A rising call for XAI, or explainable AI, has taken place. One legally important stipulation will be that the explanations are meaningful (a potentially murky notion needing legally robust clarification).

Key briefing points about this article:

  • AI continues to be rolled out across an ever-increasing number of arenas
  • Often, the AI contains algorithmic decision making (ADM) that renders key decisions
  • Regulators are rapidly seeking to ensure that such AI also provides ADM explanations
  • This is known as XAI or explainable AI
  • For XAI, the notion of legal meaningfulness is becoming a core characteristic

--

Lance Eliot

Dr. Lance B. Eliot is a renowned global expert on AI, a successful startup founder, a global CIO/CTO, and was a top exec at a major Venture Capital (VC) firm.