by Dr. Lance B. Eliot
For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends about AI & Law, visit our website www.ai-law.legal
Abstract: The use of Machine Learning and Deep Learning proffers significant strides forward in devising substantive AI-based legal reasoning systems. That being said, crucial drawbacks, limitations, and gotchas underpin the use of these techniques and technologies. One such qualm is the insidious nature of computational overinterpretation, which essentially elevates legal nonsense above sounder legal logic.
Key briefing points about this article:
- AI-based legal reasoning systems are getting a big boost from the latest AI advances
- The trending AI approaches consist of Machine Learning and Deep Learning
- These are computational pattern matching techniques and technologies
- At times, the output displayed indicates a high certainty or level of confidence
- Unfortunately, legal nonsense rather than legal logic might be at play
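The last two briefing points can be made concrete with a small sketch. The classifier below is purely hypothetical: the weights and the "valid vs. invalid legal argument" framing are invented for illustration, not drawn from any actual AI legal reasoning system. The point is only that a pattern-matching model's softmax output can report near-total confidence even when fed input far outside anything it was fit to, which is the computational analog of confidently asserting legal nonsense.

```python
import math

def softmax(logits):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical weights for a toy two-class classifier
# (say, "sound legal argument" vs. "unsound legal argument").
# These numbers are invented for illustration only.
WEIGHTS = [
    [0.9, -1.2, 0.4],   # scores toward class "sound"
    [-0.7, 1.1, -0.3],  # scores toward class "unsound"
]

def classify(features):
    """Linear scoring followed by softmax -- classic pattern matching."""
    logits = [sum(w * f for w, f in zip(row, features)) for row in WEIGHTS]
    return softmax(logits)

# A feature vector far outside anything resembling sensible input
# still produces a confident-looking probability:
nonsense_input = [12.0, -15.0, 9.0]
probs = classify(nonsense_input)
print(max(probs))  # essentially 1.0 -- high confidence, dubious basis
```

Note that nothing in the arithmetic distinguishes a well-grounded answer from a spurious one; the reported "confidence" measures only how lopsided the internal scores are, not whether the reasoning behind them is legally sensible.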
Perhaps you’ve had occasion to discuss a matter of the law with someone and, at one point, either or both of you exclaimed that the underlying logic makes absolutely no legal sense.
We usually expect that any serious legal matter ought to ultimately be rooted in something legally sensible. You might not necessarily agree with the legal underpinnings per se, but at least you tip your hat to the fact that a semblance of legal logic was employed. Not all legal arguments are convincing, though we would hope that a bona fide legal argument is intelligible and solidly argued.
I bring up this hallowed point about being legally sensible to highlight a concern that is appearing amid the avid use of Machine Learning (ML) and Deep Learning (DL) for crafting AI-based legal reasoning systems. The overarching idea behind the use of ML and DL in the law consists of using…