A Limited Series

AI Privacy Litigation: What the Courts Are Telling SaaS Founders

The AI legal content you see everywhere covers copyright disputes about training data and regulatory debates about AI safety. Those cases matter if you are OpenAI or Anthropic. They do not matter if you are a seed-stage SaaS company that shipped an AI feature last quarter.

Every case in this series involves a B2B SaaS vendor. These are active lawsuits and enforcement actions against companies that built AI features, processed customer data through them, and had legal stacks that did not account for what the product actually did.

Each post takes one case, extracts the principle, and maps it to the specific provision in your contracts that needs to change. The goal is not to survey the landscape. It is to answer a concrete question: if you are a B2B SaaS company shipping AI features, what do these cases mean for your legal stack?

The Series
Privacy and Wiretapping
01
The Capability Test: How Courts Decide Whether Your SaaS Product Is a Wiretap
Two federal courts have allowed wiretapping claims against AI-powered SaaS vendors to proceed. Both adopted the capability test: if your infrastructure gives you the ability to use customer data for your own purposes, you may be a third-party interceptor, regardless of whether you exercise that ability.
Mar 10, 2026
02
AI Scribes, Fabricated Consent, and Regulated Data: What the Sharp HealthCare Case Means for SaaS Vendors
A patient's medical visit was recorded by an AI scribe without consent. His chart said he had been advised and consented. He says that never happened. The AI appears to have generated the consent documentation itself. When regulated data enters the picture, the wiretapping pattern compounds with sector-specific privacy statutes and a record integrity problem no contract disclaimer can fix.
Mar 12, 2026
Regulated Data and BAA Breach
03
The Verily HIPAA Whistleblower Case: What Happens When a SaaS Vendor Breaches Its BAA
A health-tech SaaS vendor allegedly used patient data from 25,000 people for marketing and research without authorization, delayed breach notifications while negotiating contract renewals, and fired the executives who raised concerns. The case is a wrongful termination suit, not a HIPAA enforcement action. But the underlying facts are a roadmap for how BAA obligations can unravel a SaaS vendor's customer relationships.
Mar 14, 2026
Algorithmic Discrimination
04
Algorithmic Discrimination and the SaaS Vendor: Mobley v. Workday at Class Certification
A nationwide class action against Workday alleges its AI hiring tools had a disparate impact on applicants over 40. Workday argued it merely provides the platform. The court disagreed. If your product scores, ranks, or filters people in ways that touch a protected class, this case applies to you.
Mar 16, 2026
Marketing and Enforcement
05
FTC AI-Washing Enforcement: What SaaS Founders Get Wrong About Marketing AI Features
The FTC has brought over a dozen enforcement actions against companies that overstate what their AI does. If your marketing says 'AI-powered insights you can trust' and your terms say 'as-is, may be inaccurate,' you have a contradiction that creates exposure on two fronts. Here's how the enforcement pattern works, why it survived a change in administration, and what it means for your contracts.
Mar 16, 2026
Agentic AI
06
Amazon v. Perplexity: When the Platform Sues Your AI Agent
Your user gave your AI agent permission to act on their behalf. The platform where your agent operates did not. A federal court ruled that user permission is not platform authorization and issued a preliminary injunction barring Perplexity's AI shopping agent from accessing Amazon's website. This is a platform owner using computer fraud statutes to shut down an AI product.
Mar 16, 2026
Biometric Data
07
Clearview AI and BIPA: Why Biometric Data Is the Highest-Risk Category for SaaS Vendors
Clearview AI's facial recognition class action settled for $51.75 million — paid partly in equity because the company couldn't afford cash. The statute behind it, Illinois BIPA, applies to any company that collects biometric identifiers, including the SaaS startup whose product processes a single fingerprint scan or voiceprint. Here's what the exposure looks like and what your contracts need to say.
Mar 16, 2026
08
Does Your AI Know Who's Talking? The Microsoft Teams Voiceprint Case and What It Means for Every SaaS Product with Speaker Attribution
Five Illinois residents sued Microsoft alleging that Teams' live transcription feature collects voiceprints without the notice, consent, or retention policies BIPA requires. If your SaaS product identifies, attributes, or stores who said what in a meeting or call, this case applies to you.
Mar 18, 2026
Compliance Map
09
AI Disclosure Laws by State: A Practical Compliance Map for B2B SaaS
Utah, Texas, Colorado, California, Illinois. Each state has enacted AI-specific disclosure or transparency requirements, and more are coming. Here's which laws apply to B2B SaaS, which document in your legal stack each one affects, and what you need to add before your next enterprise deal.
Coming soon
Stay current

Cases are decided, regulations take effect, and new enforcement actions are filed. New posts in this series publish as the landscape shifts.

The Foundation

This series covers what the courts are doing. It assumes your product has a legal stack and that you have read the companion series on AI contract provisions.

For the specific contractual framework covering data training commitments, output ownership, LLM provider terms, subprocessor disclosure, and AI-specific acceptable use, see the AI-Enabled SaaS series. For the foundational legal stack every B2B SaaS company needs, start with Series I.

AI-Enabled SaaS: Legal Foundations →
Series I: The B2B SaaS Legal Stack →

No Boiler generates customized legal documents for B2B SaaS companies, including AI-specific provisions for Terms of Service, DPAs, Privacy Policies, and Acceptable Use Policies.