by Dr. Lance B. Eliot
For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends about AI & Law, visit our website www.ai-law.legal
Abstract: A crucial aspect of today's AI entails making sure that an explanatory function is included in an AI system whenever some form of decision-making is taking place. This is referred to as explainable AI (XAI). To guide lawmakers struggling with crafting legislation and regulations about AI, the GDPR provisions on explainability can serve as a handy source of legal language and inspiration.
Key briefing points about this article:
- AI advancement is occurring rapidly and has generated intense interest
- We increasingly find ourselves at the whim of AI-based decision-making algorithms
- It is reasonable to expect that the AI would explain its decisions (known as XAI)
- Efforts to craft AI-related regulations and legislation should include such provisions
- The GDPR provides legal language related to XAI and can be handily leveraged
Section 1: Introduction
A significant concern about the advent of AI throughout society is the potential lack of explainability underlying how the AI works and makes various algorithmic decisions. In short, many of today’s AI systems are opaque and contain seemingly inexplicable mathematical formulations, lacking a capacity to explain why a given decision was rendered.
We usually expect that whoever, or whatever, makes a decision affecting us can explain why that decision was made.
Suppose you chat with a mortgage broker about seeking a new home loan. The broker asks you a series of questions about your financial situation, your job, and other facets of your personal circumstances. After collecting this decidedly private information, the broker informs you that you do not qualify for a loan.
A decision has been made.
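To make the contrast with opaque AI concrete, consider a toy sketch of what an XAI-style decision function might look like: a loan decision that returns its reasons alongside the verdict. The thresholds and criteria here are purely hypothetical illustrations, not actual underwriting standards or any specific lender's method.

```python
def decide_loan(income: float, debt: float, credit_score: int) -> dict:
    """Return a loan decision together with human-readable reasons.

    Hypothetical criteria, for illustration only: a minimum credit
    score of 620 and a maximum debt-to-income ratio of 43%.
    """
    reasons = []
    if credit_score < 620:
        reasons.append(f"credit score {credit_score} is below the 620 minimum")
    dti = debt / income
    if dti > 0.43:
        reasons.append(f"debt-to-income ratio {dti:.0%} exceeds the 43% cap")

    approved = not reasons  # approved only if no criterion was violated
    if approved:
        reasons.append("all underwriting criteria were met")
    return {"approved": approved, "reasons": reasons}

# An applicant can be told not just the outcome but the why:
result = decide_loan(income=100_000, debt=50_000, credit_score=700)
print(result["approved"])   # False
print(result["reasons"])    # ["debt-to-income ratio 50% exceeds the 43% cap"]
```

The point is not the particular rules but the shape of the output: the decision carries its rationale with it, which is precisely what many opaque machine-learned models cannot readily supply.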