AI & Law: Using Adversarial Red Teams For AI In The Law Is Judiciously Prudent

Lance Eliot
Sep 27, 2022
Adversarial red teams are vital for vetting AI-based LegalTech applications

by Dr. Lance B. Eliot

For a free podcast of this article, visit https://ai-law.libsyn.com/website or find our AI & Law podcast series on Spotify, iTunes, iHeartRadio, and other audio services. For the latest trends in AI & Law, visit our website www.ai-law.legal

Abstract: A well-known and widely accepted best practice in building and fielding today’s computer systems is to stage simulated attacks against them to uncover exploitable weaknesses, and then to mitigate those vulnerabilities. This is often done by a contracted, trusted outside team, known generically as a red team. We need to start applying red teams to AI systems, including AI in the law.

Key briefing points about this article:

  • Computer systems are increasingly under cyber-attack these days
  • Preparation of a computer system ought to include figuring out its vulnerabilities
  • A means to discover vulnerabilities consists of using an outside trusted team of attackers
  • The use of such teams, known generically as red teams, has become a best practice
  • Red teams need to be used on AI systems too, including and especially AI in the law
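To make the idea concrete, here is a minimal sketch of what a red-team style probe of an AI-based LegalTech component might look like. Everything here is a hypothetical stand-in: the toy keyword classifier, the perturbation tricks, and the function names are illustrative assumptions, not any real system's API.

```python
# Hypothetical red-team probe: try small adversarial perturbations of an
# input and report any that flip the model's verdict. The "classifier"
# below is a deliberately simplistic stand-in for an AI clause analyzer.

def classify_clause(text: str) -> str:
    """Toy classifier: flags a contract clause as risky via keywords."""
    risky_terms = {"indemnify", "waive", "penalty"}
    words = {w.strip(".,;").lower() for w in text.split()}
    return "risky" if words & risky_terms else "benign"

def perturb(text: str) -> list:
    """Generate simple adversarial variants (word splits, masked letters)."""
    variants = []
    for term in ("indemnify", "waive", "penalty"):
        if term in text.lower():
            # split the keyword with a space
            variants.append(text.lower().replace(term, term[0] + " " + term[1:]))
            # mask one letter of the keyword
            variants.append(text.lower().replace(term, term.replace(term[2], "*", 1)))
    return variants

def red_team_probe(clause: str) -> list:
    """Return the perturbations that change the classifier's output."""
    baseline = classify_clause(clause)
    return [v for v in perturb(clause) if classify_clause(v) != baseline]

flips = red_team_probe("The client shall indemnify the firm.")
```

Even this crude probe shows the red-team mindset: the attacker does not need to break the system outright, only to find inputs where its behavior silently diverges from what its operators expect, so that those gaps can be fixed before a real adversary finds them.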


Dr. Lance B. Eliot is a renowned global expert on AI, a successful startup founder, a global CIO/CTO, and a former top executive at a major Venture Capital (VC) firm.