by Dr. Lance B. Eliot
For a free podcast of this article, visit this link https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends in AI & Law, visit our website www.ai-law.legal
Abstract: A well-known and commonly accepted best practice in building and fielding today’s computer systems consists of attacking those systems on a pretend basis to see what exploits can be uncovered (and then mitigating those vulnerabilities). You might do this with a trusted outside team under contract, known generically as a red team. We need to start using red teams on AI systems, including AI in the law.
Key briefing points about this article:
- Computer systems are under relentless cyber-attack these days
- Preparing a computer system for deployment ought to include figuring out its vulnerabilities
- A means to discover vulnerabilities consists of using an outside trusted team of attackers
- The use of such teams, known generically as red teams, has become a best practice
- Red teams need to be used on AI systems too, including and especially AI in the law
Section 1: Introduction
Sometimes you have to think like the enemy.
When preparing for a legal case, the usual advice entails putting yourself into the shoes of the opposing counsel. What arguments are they going to make? Furthermore, what kind of counterarguments will they proffer in response to the legal arguments you are stridently going to produce?
I realize that referring to the opposing side as the enemy is a bit overstated, since the notion of an enemy seems overly harsh. In some overall sense, though, you could suggest they are actively opposed or even directly hostile to your legal position and accordingly carry an enemy-like flavor. Maybe we should soften our wording and refer to them as your adversary.
This notion of thinking like an enemy or an adversary is well-known in the cybersecurity realm.