AI & Law: Dualism Of Law And Morality In AI Agency

Dualism duels will arise between AI-enabled moral agents and AI-enabled legal agents

by Dr. Lance B. Eliot

For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, plus on other audio services. For the latest trends about AI & Law, visit our website www.ai-law.legal

Key briefing points about this article:

  • The law has a close cousin, morality; the two are at times in agreement and at other times not

Introduction

Two peas in a pod.

Except that sometimes there is only one pea in the pod and the other one is left out altogether.

What am I referring to?

The focus herein entails the crucial roles that both the law and the tenets of morality undertake in our overall socio-economic and legal world. As will be discussed, advances in technology via the use of Artificial Intelligence (AI) are quickly gaining ground in terms of embodying morality oriented facets into everyday computer applications, sometimes referred to as MoralTech, and meanwhile, there is an increasingly widespread use of LegalTech that encompasses embedding law-stipulating AI-agents into complex software systems.

Let’s begin this discussion with the fundamentals of how the law and morality are at times fully aligned and at other times are utterly opposed to each other, and then ease into the auspicious notion of having AI-powered law-oriented agency and moral-oriented agency.

The Fundamentals Of Law And Morality

Some perceive that the law and morality are inextricably intertwined, being at times fully compatible with each other while at other times regrettably acting as bickering foes.

The tension between the law and the doctrines of morality can be traced to (at least) the days of Plato. Of course, you don’t need to go back that far in time to get mired in the morass of where the laws begin and end, and where morality begins and ends. Legal scholars have repeatedly noted the conundrum of laws that permit acts that would otherwise be considered immoral, while also legally banning certain acts that are morally permissible or possibly even morally obligatory.

Morality might be hidden within the laws and not be immediately apparent to the eye. Some assert that morality is found more so within the shadow of the law, consisting of those facets not directly stated in the law that we nonetheless associate with it. It is said that the law casts a large shadow, such that many of the acts we are bound to either undertake or avoid are not explicitly stated in the laws and instead are presumed to be implied via semantic overlay and open-ended moral interpretation.

Things are indubitably rough on the public at large when there is a conflict between what the law states and what morality seems to dictate. A salient remark in the famous legal treatise entitled The Law by Frederic Bastiat in 1850 illuminates vividly this grueling point: “When law and morality contradict each other, the citizen has the cruel alternative of either losing his [her] moral sense or losing his [her] respect for the law.”

You can find two diametrically opposed perspectives on what to do about any contradictions between the law and morality. Some fervently insist that the laws are the laws, and nobody is above the law, even those claiming to embody some morality or moral code that rightfully takes them outside the law. On the other side of that coin are the arguments that whenever legality and morality butt heads, legality shall be the loser, and that, adamantly and unquestionably, it is morality that must be upheld.

As they say, be wary of the web that we weave when trying to straighten out the moments when law and morality are not well aligned.

AI Enters Into The Picture

This is all quite interesting and at times perhaps vague and unspecified, but it becomes an even greater challenge as we head toward a future consisting of Artificial Intelligence (AI) and the law. In a sense, the rubber is about to meet the road.

How so?

There is an increasing hue and cry that AI systems being fielded into the world are at times violating vital ethical principles. For example, consider the use case of an AI-enabled application that decides who will be granted a home loan or a car loan. Via the use of today’s advances in Machine Learning (ML) and Deep Learning (DL) techniques and technologies, an AI system is crafted that can quickly assess the applications submitted for a loan. By eschewing the normal laborious effort incurred via human loan assessors, the AI can perform the evaluations faster, more consistently, and without the vagaries or foibles that the human evaluators possess.

But then again, maybe not all is so readily tried and true.

It turns out that sometimes these AI systems using ML/DL are landing in rather forbidden territory. Keep in mind that Machine Learning and Deep Learning are nothing more than computational pattern matching algorithms that mathematically analyze data to find identifiable patterns. They are not sentient AI, nor do we as yet have any semblance of such sentience. Indeed, there isn’t any common-sense reasoning embodied within the current AI.

This makes a difference in that the AI-powered loan processing system can readily land upon a computationally “satisfying” pattern that is based on race or perhaps gender (satisfying means, in this context, that it offers a statistically substantive approach, regardless of the societal implications). Those that have developed the AI might not readily be able to ferret out how the AI is deciding upon the loan applications. As such, the AI might proceed along, doing so for thousands upon thousands of loan applications, perhaps in the millions of applicants, with nobody realizing how the loan selection is being rendered.

One of the growing concerns about AI ML/DL is that it tends to lack any kind of transparency and only provides mathematically arcane indications that defy logically sensible explanations.

An effort to cope with these matters involves having companies adopt a set of AI Ethics principles to govern how they build AI systems and how they select AI systems for licensing or acquisition. Assuming that the adopters of the AI Ethics precepts abide by the stated conditions, presumably, there will be a lower likelihood of AI systems that violate ethical or moral stipulations.

What some seem to be neglecting or overlooking is the need to equally ensure that the AI is legally abiding by the laws.

In short, the expectation must be that the AI is both lawful and morally above board.

A recent trend involves implanting into AI systems a component for the ethics or morals side of the matter. Imagine a loan granting AI system that keeps track of loan approvals and loan rejections, and then detects by itself that perhaps there is an internal bias shaping the choices, ascertaining that the use of race or gender is occurring (for example). The AI would then attempt to raise an alert about such a malady or might seek to self-heal, adjusting how it makes its decisions and aiming to eliminate the reliance upon those biased factors.
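To make the idea concrete, here is a minimal sketch of how such a self-monitoring component might flag a suspect pattern in approvals. All names are hypothetical, and the 0.8 threshold (the classic "four-fifths rule" heuristic from employment-discrimination practice) is merely one illustrative choice, not a claim about how any actual system works:

```python
from collections import defaultdict

def check_disparate_impact(decisions, threshold=0.8):
    """Flag groups whose approval rate falls below `threshold`
    times the highest group's approval rate.

    `decisions` is a list of (group_label, approved_bool) pairs.
    Returns a dict mapping each flagged group to its impact ratio.
    """
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        if ok:
            approved[group] += 1
    # Approval rate per group, then each rate relative to the best rate.
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if r / best < threshold}

# Illustrative run: group "B" is approved half the time, group "A"
# 80% of the time, so "B" gets flagged with ratio 0.5 / 0.8.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 50 + [("B", False)] * 50)
print(check_disparate_impact(decisions))
```

Note that this only detects a statistical disparity; deciding whether the disparity is unlawful, immoral, both, or neither is exactly the harder question this article is about.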

The Disagreement Dilemma Rears Its Head

This brings us to the moment of keen intrigue herein.

Suppose the AI has such an embedded morality component or so-called agent (the accepted parlance in the AI field is to refer to AI entities oftentimes as “agents”) and also has another component that focuses on the legal or lawful agency aspects. Two such agents are working in real-time, one that is assessing the moral or ethical ramifications of the AI, and the other evaluating and attempting to regulate the legal facets.

Here’s the million-dollar question, as it were: What happens when the embedded moral agent and the embedded legal agent emit differing assessments that are entirely opposed in terms of what the AI ought to be doing (or ought to not be doing)?

For sure, this is a hefty quandary and mirrors the points made earlier about the arduous and insidious issues that arise when the law and morality clash. To date, this ugly problem has not been especially apparent within AI since so few AI systems have these embedded components. Realize too that if the AI has only one such component, perhaps the morality agent, it wins by default since it is the only agency within the AI; likewise, if the legal agent is the only agency, it wins due to the lack of any counterbalancing morality agent.
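One way to picture the dilemma in code is a tiny reconciliation routine sitting between the two agents. This is purely an illustrative sketch under assumed names; the policy shown (take the most restrictive action and escalate to a human when the agents disagree, rather than letting either agent win outright) is just one plausible predetermined boundary of the kind discussed herein:

```python
from enum import Enum

class Verdict(Enum):
    PERMIT = "permit"
    FORBID = "forbid"

def reconcile(moral_verdict, legal_verdict):
    """Reconcile the verdicts of an embedded moral agent and an
    embedded legal agent for a proposed action.

    Returns (action, escalate_to_human). If the agents agree, act on
    the shared verdict with no escalation; if they disagree, choose
    the restrictive action and flag the case for human review.
    """
    if moral_verdict == legal_verdict:
        return moral_verdict.value, False   # agreement: no escalation
    return Verdict.FORBID.value, True       # conflict: restrict and escalate
```

A usage example: `reconcile(Verdict.PERMIT, Verdict.FORBID)` yields `("forbid", True)`, i.e., the system holds off and asks a human to adjudicate, which reflects the article's point that resolving the underlying law-versus-morality conflict is beyond the reach of current AI.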

My prediction is that we are going to gradually and inexorably become more aware of this problematic issue and will need to put in place practical and usable ways to deal with it.

For my recent research paper offering details on how to contend with this dilemma, and that was accepted into the prestigious Harvard AI annual conference sponsored by the Harvard Center for Research on Computation and Society, see “The Neglected Dualism Of Artificial Moral Agency And Artificial Legal Reasoning In AI For Social Good” available at this link here: https://orcid.org/0000-0003-3081-1819

Conclusion

All told, trying to get automation to resolve conflicts between the law and morality is beyond the pay grade of current AI. Nonetheless, we can and should expect the AI to be able to detect when such disputes arise and can partake in limited and focused conflict resolution efforts, doing so within predetermined boundaries.

Meanwhile, society as a whole is undeniably going to wrestle with seemingly intractable disagreements between the law and morality, presumably for a long time, possibly forever, and these are disagreements that even Plato, were he alive today, could not readily and fully resolve.



To follow Dr. Lance Eliot (@LanceEliot) on Twitter use: https://twitter.com/LanceEliot

Copyright © 2020 Dr. Lance Eliot. All Rights Reserved.

Dr. Lance B. Eliot is a renowned global expert on AI. He has been a Stanford Fellow at Stanford University, a professor at USC, the head of an AI lab, and a top executive at a major venture capital firm.
