by Dr. Lance B. Eliot
For a free podcast of this article, visit https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends in AI & Law, visit our website www.ai-law.legal
Abstract: There is a pell-mell rush to build and release AI systems across all areas of commerce. Many of these AI systems embed “algorithmic decision making” (ADM) capabilities, at times rendering quite important “decisions” that impact those using the AI. A rising call for XAI, or explainable AI, has emerged. One legally important stipulation will be that the explanations are meaningful (a potentially murky notion needing legally robust clarification).
Key briefing points about this article:
- AI continues to be rolled out in an ever-increasing number of avenues
- Often, the AI contains algorithmic decision making (ADM) that renders key decisions
- Regulators are rapidly seeking to ensure that the AI also provides ADM explanations
- This is known as XAI or explainable AI
- For XAI, the notion of legal meaningfulness is becoming a core characteristic
Not all explanations are necessarily handy or insightful.
An explanation might have no particular meaning to you. Sure, the explanation is laid out for you, but trying to make heads or tails of it is confounding. In that sense, we can reasonably expect that a proper explanation should at least be meaningful.
A meaningful explanation would seemingly be a solid step forward in ensuring that the explanation can be useful. But the general vibe or notion of meaningfulness can be difficult to pin down.
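To make the contrast concrete, consider a minimal, hypothetical sketch of an ADM system that renders a single decision accompanied by two styles of explanation: one that merely exposes model internals (technically complete, yet hardly meaningful to the affected person) and one phrased in plain language. All names, weights, and thresholds here are invented purely for illustration.

```python
# Hypothetical ADM sketch: the same decision, two styles of explanation.
# Every coefficient and threshold below is invented for illustration only.

def adm_decide(income: float, debt: float) -> dict:
    """Toy algorithmic decision: approve a loan if a weighted score clears 0.5."""
    weights = {"income": 0.00001, "debt": -0.00002}  # invented coefficients
    score = weights["income"] * income + weights["debt"] * debt
    approved = score >= 0.5

    # Explanation 1: raw model internals -- complete, but opaque to a layperson
    opaque = (
        f"score={score:.4f}; w_income={weights['income']}; w_debt={weights['debt']}"
    )

    # Explanation 2: a plain-language account of the decisive factors
    meaningful = (
        "Approved: income outweighed debt under the scoring rule."
        if approved
        else "Denied: debt pulled the score below the approval threshold."
    )
    return {"approved": approved, "opaque": opaque, "meaningful": meaningful}

result = adm_decide(income=80000, debt=20000)
```

Both explanations describe the same decision, yet only the second is likely to count as meaningful to the person affected, which is precisely the distinction regulators are wrestling with.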
In other words, if someone or something provides you an explanation, could you ascertain whether that explanation qualifies as meaningful?
Maybe yes, maybe no. The crux of the problem hinges on exactly how we ought to define meaningfulness. In addition, once an explanation has been rendered, the next hurdle entails trying to assess the essence of the…