by Dr. Lance B. Eliot
For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends about AI & Law, visit our website www.ai-law.legal
Key briefing points about this article:
- There is much discussion these days about AI for Good
- This has startled people into realizing that there is also the potential of AI for Bad
- Regulators are considering passing new laws to stem the possibility of Bad AI
- The EU supposedly is considering a severe penalty for firms that promulgate Bad AI
- Attorneys versed in AI and the law are going to find themselves facing a goldmine of work
With the rush toward devising AI systems, there was an initial gleeful gushing about how great AI will be and the moniker of AI for Good magically came into prominence.
As will be pointed out in a moment, all good things must come to an end, as they say, and the reality is beginning to settle in about whether AI is intrinsically and axiomatically going to be used solely for good and worthy endeavors. That seems like a rather dubious and unlikely premise.
Meanwhile, please do keep in mind that there isn’t any kind of AI as of yet that is sentient; therefore, you need to realize that AI for Good is ostensibly about those that develop and field AI systems. If the people responsible for putting together an AI system are aiming to provide a form of automation that is beneficial, and assuming that it indeed produces a beneficial or good outcome, they are presumably doing the right thing and attaining AI for Good.
As they say, there is always the other side of the coin. By that account, there is also the AI for Bad that can be foisted onto others too.
Once again, do not fall for the mental trap that this is AI that has on its own “decided” to be an evildoer. Humans are working as the Wizard of Oz, doing their wizardry behind the scenes of devising and promulgating their AI for Bad, and they rightfully need to be held accountable for their actions. To clarify, they don’t necessarily have in mind some malevolent aim to do bad things. They might unleash an AI system with the best of intentions, and yet the AI is faulty or lacks sufficient guardrails, such that it subsequently shifts into the AI for Bad cauldron.
Some simply denote this as Bad AI.
How might Bad AI be related to lawyers and the practice of law?
Oddly enough, the emergence of Bad AI is going to be altogether a goldmine for lawyers. The basis for such a claim is the legal wranglings that are going to enormously and fervently arise due to Bad AI. Bad AI will stridently push the advent of AI into the legal sphere, and therefore those attorneys that are well-prepared have a chance to make a bundle from the burgeoning trend of those that are willy-nilly tossing AI systems into the public arena.
For details on this and other AI and law topics, see my book entitled “AI and Legal Reasoning Essentials” at this link here: https://www.amazon.com/Legal-Reasoning-Essentials-Artificial-Intelligence/dp/1734601655
New Regulations About Bad AI
News reports have been stating for a while that the European Union (EU) has been drafting a “Bad AI” piece of legislation that purportedly calls for a financial penalty for companies that unleash non-compliant AI systems. Rumors have been floating endlessly about the potential severity or magnitude of the proposed penalties.
According to recent news reports about a leaked early draft of the “Regulation on a European Approach for Artificial Intelligence,” there was supposedly a proposed provision that businesses could be fined up to 4% of their global revenue for having promulgated Bad AI (this was apparently just a proposed provision, but it is seemingly likely there will be some form of eventually mandated sharp-toothed penalties).
The odds are that whatever the EU does, the US is likely to eventually do something of a similar nature. Other countries are bound to follow that same path.
In any case, 4% of annual revenue, or any similarly sizable penalty, is potentially going to be quite a big chunk of dough. Ergo, the specter of big bucks surrounding those that are (alleged or actual) purveyors of Bad AI means a big opening for attorneys.
You can bet your bottom dollar that when a company is accused of having violated such regulations, it will clamor for versed attorneys that can defend it from the Bad AI accusations and prosecutions. Furthermore, allegations of fostering Bad AI will be spurred in a herd-mentality way, causing a tsunami of such claims to veritably come out of the woodwork. Even firms that have AI for Good will undoubtedly get accused of actually generating Bad AI. In some instances, the claim will be utterly false, while in other instances the contention will be potentially valid.
Throughout those matters, astute attorneys that know about the ins and outs of AI and the law will be brought on board to aid in dealing with these substantive considerations.
Eventually, there will be so much Bad AI flying around, along with a plethora of accusations about Bad AI being in existence, that firms will realize they need to get ahead of the Bad AI game. Businesses will seek out attorneys that can proactively participate in trying to ensure that the AI being produced by a firm will not get summarily blanketed as Bad AI.
Some are saying that if society goes the route of trying to regulate AI as to Bad or Good, this will essentially kill the golden goose. In other words, the all-told societal benefits of advances in automation enabled via AI will be perceived by businesses as not worth the costs. The adverse attention of possibly having let loose Bad AI, along with costly criminal cases or civil lawsuits, will stoke firms to stop making any kind of AI systems.
This seems like a relatively rational response. By avoiding anything that might be construed as AI, perhaps the costly gauntlet of getting caught up in Bad AI can be avoided. On the surface, this certainly seems indubitably sensible and a prudent notion for all businesses to observe.
Wait a second, though; there is yet another twist, and it once again ties into attorneys.
As I’ve mentioned in my prior columns, there is still a great deal of looseness as to the definition of AI and what legally constitutes an AI system. There is a solid chance that whatever regulation is devised about so-called Bad AI will include definitions of AI that you could drive a Mack truck through.
The point is that attorneys will be needed to help ascertain whether a firm demonstrably produced an AI system or not. More work for suitably versed lawyers.
I’ve saved the most frightening twist for the tail end of this legal tale.
Are you ready?
Suppose a law firm opts to adopt a Bad AI system (let’s assume unknowingly so), or perhaps opts to partner or tightly collaborate with a LegalTech firm to produce AI for the legal profession. To what degree will the law firm be considered culpable under any regulation related to Bad AI? It could be that lawyers will need to advise law firms and LegalTech companies about how to steer clear of the woes of Bad AI.
Overall, despite that last bit of scary news, the primary takeaway is that as AI becomes an increasingly prevalent tech-induced part of our world, there is going to be an increasing need for lawyers with the right acumen about AI and the law to proffer their expertise in this relatively new specialty.
Bad AI might just be good for some attorneys, though I think we would all agree that Bad AI is inherently bad. We aren’t wishing for Bad AI; we merely want to prevent Bad AI from seeing the light of day.
Additional writings by Dr. Lance Eliot:
- For Dr. Eliot’s books, see: https://www.amazon.com/author/lanceeliot
- For his Forbes column, see: https://forbes.com/sites/lanceeliot/
- For his AI Trends column, see: www.aitrends.com/ai-insider/
- For his Medium column, see: https://lance-eliot.medium.com/
And to follow Dr. Lance Eliot (@LanceEliot) on Twitter use: https://twitter.com/LanceEliot
Copyright © 2021 Dr. Lance Eliot. All Rights Reserved.