
AI in the Courtroom: What Recent Litigation Means for B2B SaaS Providers

Six cases, six principles, six specific provisions in your legal stack that need to change. Mobley v. Workday, Taylor v. ConverseNow, Saucedo v. Sharp HealthCare, NYT v. OpenAI, FTC v. Air AI, and California AB 316 — what each one means if you're a B2B SaaS company shipping AI features.

No Boiler

Every law firm with an AI practice group publishes quarterly litigation roundups. This is not one of those.

This post takes six cases and regulatory actions, extracts the principle each one establishes, and maps it to the specific provision in your legal stack that needs to address it. The goal is not to survey the landscape. It is to answer a concrete question: if you are a B2B SaaS company shipping AI features, what do these cases mean for your contracts?

The cases are selected because they directly implicate the SaaS provider, not just the LLM developer or the end user. They cover the six risk categories that this series has addressed throughout: algorithmic discrimination, privacy and data interception, regulated data and consent, copyright and training data, marketing claims, and output liability. Each one connects to a specific earlier post in this series.

A note on structure: cases settle, appeals are decided, new cases are filed. This post is built around the principles these cases establish, not their procedural status. The principles will persist regardless of how individual cases resolve.

Mobley v. Workday: Vendor Liability for Algorithmic Discrimination

In May 2025, a federal court in California preliminarily certified a nationwide collective action against Workday over allegations that the company’s AI-powered hiring tools had a disparate impact on applicants over the age of 40. The lead plaintiff, Derek Mobley, alleged that after applying to over 100 jobs through Workday’s platform, he was rejected each time within minutes, suggesting automated screening.

The case is significant for B2B SaaS providers because the lawsuit was brought against Workday as the software vendor, not against the employers who used its tools. Workday argued that it merely provides the platform and does not make hiring decisions. The court disagreed. It found that Workday was sufficiently involved in the decision-making process to be potentially liable as an agent of the employers. The court noted that the software does not simply implement employer criteria in a mechanical way. It participates in the process by recommending some candidates and rejecting others.

The court certified a collective action covering all applicants over 40 who were processed through Workday’s platform since 2020. Workday disclosed that over one billion applications were rejected using its tools during the relevant period. The court ordered Workday to produce its customer list so that affected applicants could be notified. In July 2025, the court expanded the scope to include applicants processed through HiredScore, an AI product Workday acquired separately.

The principle: B2B SaaS vendors can be liable for how their AI features affect their customers’ end users, even when the vendor is not the decision-maker. The “we just provide the tool” defense does not insulate you from discrimination claims if your tool participates meaningfully in the decision.

What this means for your contracts: Your acceptable use policy needs to restrict fully automated decision-making that produces legal or similarly significant effects on individuals without appropriate human review, as covered in this series’ post on AI-specific acceptable use. Your indemnification structure needs to account for downstream discrimination exposure. And your limitation of liability may need to treat algorithmic bias claims as a carve-out from the general cap, the same way confidentiality breaches and IP infringement are typically carved out. If your AI feature scores, ranks, filters, or recommends in any context that touches a protected class (hiring, lending, insurance, housing), this case applies to you.
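
To make “appropriate human review” concrete at the product level, here is a minimal sketch in TypeScript. Everything in it (the types, the threshold, the routing logic) is hypothetical, not drawn from Workday’s product or the court’s order; it illustrates one way to ensure that adverse outcomes are never fully automated.

```typescript
// Hypothetical human-in-the-loop gate for an AI scoring feature.
// Under the Mobley theory, a tool that recommends some candidates and
// rejects others "participates" in the decision. Routing adverse
// outcomes to a human is the product-side counterpart of an AUP
// restriction on fully automated, legally significant decisions.

interface CandidateScore {
  candidateId: string;
  score: number; // model output, 0-100 (hypothetical scale)
}

type Decision =
  | { kind: "auto_advance"; candidateId: string }
  | { kind: "pending_human_review"; candidateId: string; score: number };

// Never auto-reject: rejection is the legally significant adverse
// effect, so it always goes to a human. Only advancement is automated.
function route(c: CandidateScore, advanceThreshold: number): Decision {
  if (c.score >= advanceThreshold) {
    return { kind: "auto_advance", candidateId: c.candidateId };
  }
  return {
    kind: "pending_human_review",
    candidateId: c.candidateId,
    score: c.score,
  };
}
```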

Taylor v. ConverseNow: Privacy and Data Interception

In 2025, a federal court in California allowed a putative class action to proceed against ConverseNow Technologies, a SaaS company that processes restaurant customer phone calls using an AI assistant. The claim was brought under the California Invasion of Privacy Act (CIPA), alleging that the AI assistant constituted unlawful interception of communications.

The court’s analysis turned on a specific distinction: whether the data was used exclusively to benefit the consumer (processing the call to take the order) or was also used for the vendor’s own commercial purposes (system improvement, model training). Where the data served both purposes, the court found plausible grounds for wiretapping liability. The SaaS vendor, not the restaurant, was the target of the claim.

The principle: SaaS vendors deploying AI that interacts with their customers’ end users can face interception and privacy claims under state electronic communications laws. The line between processing data to serve the end user and processing data to improve the vendor’s product is where liability attaches.

What this means for your contracts: If your AI feature processes voice, chat, or other real-time end-user communications, your data flow map needs to distinguish between inference-only processing (data used to generate a response and then discarded) and data retained for any other purpose (including system improvement). Your privacy policy and DPA need to disclose AI processing of end-user communications specifically. Your terms need to allocate responsibility for end-user consent. And your data training provisions, covered in this series’ second post, need to be precise about whether end-user interaction data is used for anything beyond the immediate request-response cycle.
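
To make the inference-only distinction concrete, here is a minimal sketch of what a data flow map entry might look like, in TypeScript. The types and field names are hypothetical illustrations, not a standard or any vendor’s actual schema; the point is that each flow records its purpose, its retention, and whether it is disclosed.

```typescript
// Hypothetical data flow map entries for an AI feature that processes
// end-user communications. The ConverseNow distinction turns on purpose:
// processing that serves the end user vs. processing that serves the vendor.

type ProcessingPurpose =
  | "inference" // used to generate the response, then discarded
  | "quality_monitoring" // vendor benefit: raises disclosure and consent questions
  | "model_training"; // vendor benefit: governed by your data training provisions

interface DataFlowEntry {
  dataCategory: string; // e.g. "end-user voice transcript"
  purpose: ProcessingPurpose;
  retentionDays: number; // 0 = discarded once the response is generated
  disclosedInPrivacyPolicy: boolean;
  coveredByDpa: boolean;
}

const voiceOrderFlows: DataFlowEntry[] = [
  {
    dataCategory: "end-user voice transcript",
    purpose: "inference",
    retentionDays: 0, // inference-only: the lowest-risk posture
    disclosedInPrivacyPolicy: true,
    coveredByDpa: true,
  },
  {
    dataCategory: "end-user voice transcript",
    purpose: "model_training",
    retentionDays: 90, // retention for vendor benefit: where liability attaches
    disclosedInPrivacyPolicy: true,
    coveredByDpa: true,
  },
];

// A simple audit: flag any flow that serves the vendor but is not disclosed.
const gaps = voiceOrderFlows.filter(
  (f) => f.purpose !== "inference" && !f.disclosedInPrivacyPolicy
);
console.log(`Undisclosed vendor-benefit flows: ${gaps.length}`);
```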

Saucedo v. Sharp HealthCare: Regulated Data and Consent

The ConverseNow case involves restaurant phone orders. The Sharp HealthCare case involves doctor-patient conversations. The same legal theory applies, but the consequences compound when the data is regulated.

In November 2025, a patient named Jose Saucedo filed a proposed class action against Sharp HealthCare in San Diego, alleging that Sharp used Abridge’s ambient AI clinical documentation tool to record his medical visit without his knowledge or consent. During a routine physical exam, the conversation was captured by a microphone-enabled device, transmitted to Abridge’s cloud, and used to generate a draft clinical note.

The core allegation is a consent failure: California is an all-party consent state, and Saucedo says he was never told the visit was being recorded. But the case goes further. Saucedo discovered documentation in his patient portal indicating that he had been “advised” about the recording and had “consented.” The complaint alleges this language was false, that the AI platform automatically inserted consent documentation into medical charts even when patients were never actually informed. When Saucedo asked Sharp to delete the recording, he was told the vendor retains data for 30 days and the recording could not be promptly deleted.

The lawsuit was brought against Sharp (the healthcare provider), not against Abridge (the AI vendor). But Abridge is deeply implicated: the data was transmitted to its cloud, its platform allegedly generated the false consent documentation, and its retention policy prevented immediate deletion. Attorneys estimate 100,000 patient encounters have been recorded since the Abridge rollout in April 2025, and statutory damages under California’s penal code run $5,000 per violation. If each encounter counts as a violation, that is potential statutory exposure on the order of $500 million.

The claims include violations of CIPA (wiretapping) and the California Confidentiality of Medical Information Act (sharing identifiable medical information with an outside company without written authorization), and the complaint also invokes HIPAA’s restrictions on transmitting protected health information to a third-party vendor without proper patient consent. The ambient AI medical scribe market is expanding fast, with spending up 2.4x in 2025 and the market projected to reach $3 billion by 2033, which means this case has implications well beyond one health system.

The principle: When AI processes regulated data (healthcare, financial, educational), the privacy and consent exposure from the ConverseNow theory compounds with sector-specific regulations. And auto-generated compliance documentation that does not reflect what actually happened is not a technical glitch. It is a record integrity problem.

What this means for your contracts: If your AI feature processes regulated data on behalf of your customers, three things need to be airtight. First, your terms and implementation documentation need to clearly allocate responsibility for end-user consent. If the customer (the healthcare provider, the financial institution) is responsible for obtaining consent before data enters your AI feature, your terms need to say so explicitly, and your product needs to support the consent workflow rather than bypass it. Second, if your product generates any compliance documentation (consent records, audit logs, processing records), that documentation must accurately reflect what occurred. Auto-inserting language claiming consent was obtained when it was not is a liability that no contractual disclaimer can fix. Third, your data retention and deletion practices need to support your customers’ obligations to their end users. If a patient or consumer asks for their data to be deleted and your retention policy prevents that, your customer is the one facing the regulatory complaint, and your vendor agreement is where they will look for recourse.
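
As an illustration of the second point, here is a minimal sketch of a consent workflow in which documentation can only be derived from a real consent event. The names and types are hypothetical; the design point is that nothing in the system can write a consent record that does not correspond to something that actually happened.

```typescript
// Hypothetical consent gate for an ambient recording feature. The Saucedo
// allegation is that consent documentation was inserted into charts
// regardless of whether consent occurred. The structural fix: a consent
// record can only be created from an actual, attributable consent event,
// and recording cannot start without one.

interface ConsentEvent {
  patientId: string;
  capturedBy: string; // the staff member who actually obtained consent
  method: "verbal" | "written" | "portal";
  timestamp: Date;
}

interface ConsentRecord extends ConsentEvent {
  recordId: string;
}

// The only path to a ConsentRecord is through a real ConsentEvent;
// nothing auto-inserts documentation claiming consent was obtained.
function documentConsent(event: ConsentEvent): ConsentRecord {
  return { ...event, recordId: crypto.randomUUID() }; // Node 19+ / modern browsers
}

function startRecording(
  patientId: string,
  consent: ConsentRecord | null
): void {
  if (consent === null || consent.patientId !== patientId) {
    // No consent event: do not record, and write nothing to the chart.
    throw new Error("Recording blocked: no consent on file for this encounter.");
  }
  // ...begin capture, tagging the session with consent.recordId so a later
  // deletion request can locate everything tied to this encounter.
}
```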

NYT v. OpenAI: Copyright and Training Data

Over 50 copyright lawsuits are currently pending against AI model developers. The highest-profile is The New York Times v. OpenAI, which alleges that OpenAI used millions of copyrighted articles to train its models without consent. Summary judgment in that case is expected in mid-2026. In a separate action filed in January 2026, Universal Music, Concord Music Group, and ABKCO Music sued Anthropic for over three billion dollars, alleging mass infringement of music lyric copyrights in Claude’s training data.

Three judges have ruled on fair use in AI training so far. Two ruled in favor of AI developers (including a summary judgment in Meta’s favor); one ruled against, in Thomson Reuters v. ROSS Intelligence, now on appeal. The most instructive ruling came in Bartz v. Anthropic, where the court held that using lawfully acquired books for AI training was fair use, but that downloading pirated copies of those same books for training was not. The distinction was not about the training activity but about how the data was acquired. This is the clearest signal yet on where the fair use line sits, and it suggests that data provenance (where the training data came from and whether it was lawfully obtained) may matter as much as the transformative nature of the use. No appellate court has addressed the question. Meanwhile, the Bartz class action settled for $1.5 billion, the largest publicly reported copyright settlement in US history, covering approximately 500,000 pirated books used in training (roughly $3,000 per work), and Warner Music pivoted into a licensing partnership with Suno, the AI music generator it had previously sued.

For B2B SaaS companies, the direct exposure is limited: these cases target the model developers, not the companies that build products on top of their APIs. But the indirect exposure is real. If a court rules that training on copyrighted data is not fair use, the models you build your product on have a legal defect. That defect flows downstream. If your customer uses an AI-generated output from your product and a third party brings an infringement claim, the question is whether you indemnified the customer and whether you have upstream indemnification from your LLM provider to backstop that commitment.

The principle: The copyright status of AI training data is unresolved, the financial stakes are enormous, and if the models you build on are found to infringe, the risk flows downstream through the three-actor chain.

What this means for your contracts: Your LLM provider agreement’s IP indemnification terms directly constrain what you can promise your customers. If your provider does not indemnify you for output infringement, you cannot credibly extend that protection downstream. Your customer-facing IP indemnification clause needs to either carve out AI-generated outputs explicitly or be backed by upstream coverage that supports the commitment, as covered in this series’ posts on AI outputs and LLM provider contracts.

FTC v. Air AI and the AI-Washing Crackdown: Marketing Claims and Enforcement Risk

The Federal Trade Commission has brought over a dozen enforcement actions against companies that overstate what their AI does. In August 2025, the FTC sued Air AI, alleging deceptive claims that its agentic AI could fully replace human sales representatives and deliver unrealistic business results. Separately, the Utah Artificial Intelligence Policy Act now makes companies liable for deceptive or unlawful practices carried out through AI tools as if they were the company’s own acts.

The principle: Marketing claims about AI capabilities are subject to the same truth-in-advertising standards as any other product claim. There is no AI exemption from consumer protection law.

What this means for your contracts: This is not strictly a contractual issue, but it has direct contractual consequences. If your marketing page says “AI-powered insights you can trust” and your terms say “AI outputs are provided as-is, may be inaccurate, and should not be relied upon,” you have a contradiction that creates exposure on two fronts: FTC enforcement and customer claims. The fix is alignment across marketing copy, sales materials, contractual warranties, and product documentation. Put your marketing page and your warranty disclaimer side by side. If a reasonable customer would see a contradiction, you have a problem that no contract language will solve, as covered in this series’ post on AI outputs and the marketing claims problem.

California AB 316: The Death of the Algorithm Defense

California enacted legislation, effective January 1, 2026, that prohibits AI software developers from asserting that the AI system, not the developer, is legally responsible for AI-caused harms. This statutorily codifies a principle courts were already moving toward, and it removes any ambiguity in the most commercially significant US jurisdiction.

The practical implication is straightforward: you own your AI’s behavior. If your product generates an output that causes harm, you cannot point to the model and say the algorithm is responsible. You developed the product. You chose the model. You designed the feature. You are the defendant.

The principle: The “the algorithm did it” defense is legislatively dead in California, and the prohibition is likely to spread to other jurisdictions.

What this means for your contracts: Your warranty disclaimers need to be specific to AI features, framing the limitation as a shared responsibility model rather than a blanket abdication. The provider provides the tool with stated limitations. The customer is responsible for validating outputs for their use case. This framing, covered in the post on AI outputs, is both more honest and more defensible than a broad “use at your own risk” disclaimer that a court or regulator may view as an attempt to disclaim responsibility that the law now says you cannot disclaim. Your AUP restrictions on high-risk use cases also serve a liability-limiting function: by defining the intended use boundaries and restricting use in high-risk automated decision-making without human oversight, you limit your exposure to claims arising from uses you explicitly told customers not to pursue.

The Through-Line

The legal system is not creating new theories to regulate AI. It is applying existing frameworks to AI-enabled products. Anti-discrimination law reaches algorithmic bias. Wiretapping statutes reach AI processing of communications. HIPAA and state medical privacy laws reach AI scribes that record without consent. Copyright law reaches training data. Consumer protection law reaches marketing claims. Product liability principles reach AI outputs.

Every one of these frameworks existed before AI features were added to your product. The difference is that your contracts, drafted for deterministic software, do not account for the ways these frameworks now apply to what your product does.

This series has covered the specific contractual provisions that need to change: data training commitments, output ownership and disclaimers, upstream provider terms, subprocessor disclosure, acceptable use restrictions, consent workflows for regulated data, billing terms, and insurance alignment. The six cases in this post are the consequence of not making those changes. They are not hypothetical risks. They are active litigation, enforcement actions, and statutes that are already reshaping how B2B SaaS companies contract for AI-enabled products.

The question is not whether these issues will reach your company. The question is whether your contracts will be ready when they do.


This is the final post in the AI-Enabled SaaS series. Previous: AI and Insurance: What Changes in Your Cyber and Tech E&O Coverage. For the full series, see the AI-Enabled SaaS series page.

For the foundational B2B SaaS legal stack (Terms of Service, Privacy Policy, DPA, SLA), see our core series on B2B SaaS legal frameworks.

No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them.
