
The Capability Test: How Courts Decide Whether Your SaaS Product Is a Wiretap

Two federal courts have allowed wiretapping claims against AI-powered SaaS vendors to proceed. Both adopted the capability test: if your infrastructure gives you the ability to use customer data for your own purposes, you may be a third-party interceptor, regardless of whether you exercise that ability.

No Boiler

Courts are applying a 1967 wiretapping statute to AI-powered SaaS products. If your product processes customer communications and has the technical ability to use that data for product improvement, you may already be on the wrong side of the line. Here’s how the legal test works, where the case law is splitting, and what it means for your contracts.


In January 2025, a woman named Eliza Taylor called Domino’s Pizza to place a delivery order. She gave her name, her address, and her credit card number. She thought she was talking to Domino’s.

She was talking to ConverseNow Technologies, an AI voice assistant that processes restaurant phone calls. Taylor had no idea. Neither did the millions of other customers whose calls ConverseNow had been handling across its restaurant clients.

Taylor sued. Not Domino’s. ConverseNow. The claim was straightforward: ConverseNow intercepted her phone call without her knowledge or consent, in violation of the California Invasion of Privacy Act. In August 2025, a federal court denied ConverseNow’s motion to dismiss and allowed the class action to proceed.

The ruling turned on a single legal question that every B2B SaaS founder shipping AI features needs to understand: when a software vendor processes communications between its customer and its customer’s end users, is the vendor a party to the conversation or a third-party eavesdropper?

The answer depends on which legal test the court applies. And right now, California federal courts are split.

The Statute: CIPA Sections 631 and 632

The California Invasion of Privacy Act was enacted in 1967. It predates the internet, mobile phones, and every technology currently subject to litigation under it. But its core prohibitions are broad enough that courts have applied them to website tracking pixels, AI chatbots, session replay tools, and now AI voice assistants.

Section 631 prohibits three things: unauthorized wiretapping, intercepting the contents of any wire communication, and using or attempting to use information obtained through interception. Section 632 separately prohibits eavesdropping on or recording a confidential communication without consent from all parties.

Section 637.2 creates a private right of action with statutory damages of at least $5,000 per violation, no proof of actual harm required. That number matters. ConverseNow processes millions of calls per month. The potential class size makes this an existential claim for any SaaS company on the receiving end.
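To make that exposure concrete, here is the back-of-envelope math. The $5,000 floor comes from Section 637.2; the call volume below is an illustrative round number, not a figure from the complaint:

```python
# Illustrative exposure math under CIPA Section 637.2.
# The $5,000 statutory minimum is from the statute; the class size
# is a hypothetical round number, not a figure from the case.
STATUTORY_MINIMUM = 5_000  # dollars per violation, Cal. Penal Code 637.2

def cipa_exposure(class_size: int) -> int:
    """Minimum statutory damages, assuming one violation per class member."""
    return class_size * STATUTORY_MINIMUM

# A single month of calls at a hypothetical 1 million calls/month:
print(f"${cipa_exposure(1_000_000):,}")  # → $5,000,000,000
```

At that scale the statutory minimum alone exceeds most vendors' enterprise value, which is why the motion-to-dismiss stage is where these cases are effectively won or lost.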

Both sections exempt parties to the conversation from liability. Domino’s can record its own customer calls. The customer can record the call. The question is whether ConverseNow, as the technology provider sitting between them, is also a party to the conversation, or whether it is an unauthorized third party intercepting communications that don’t belong to it.

The Two Tests: Extension vs. Capability

California federal courts have developed two competing frameworks for answering that question when a software vendor is involved.

The Extension Test

Under the extension test, a software vendor is treated as an extension of its client (and therefore not a third party) if it functions like a tape recorder: it captures data, hosts it, and makes it available for the client to use. The vendor does not independently benefit from the data. It does not use the data for its own product development, marketing, or analytics. It is a tool, not a participant.

The leading case for this approach is Graham v. Noom (N.D. Cal. 2021), where the court held that FullStory, a session replay tool embedded on Noom’s website, was not a third-party interceptor because it merely captured user interaction data and made it available for Noom to analyze. FullStory did not use Noom’s data for its own purposes. It was, in the court’s framing, a digital tape recorder.

Under this test, what matters is actual use. If the vendor does not independently exploit the communications it processes, it is not a third party, regardless of its technical capabilities.

The Capability Test

Under the capability test, actual use is irrelevant. What matters is whether the vendor has the technical capability to use the intercepted communications for its own purposes. If the vendor’s infrastructure allows it to access, analyze, or benefit from the data, even if it has not done so, even if its contracts prohibit it, the vendor can be treated as a third-party interceptor.

The capability test emerged from Javier v. Assurance IQ (N.D. Cal. 2023) and has since been adopted by a growing number of district courts. Its logic rests on the structure of CIPA itself. Section 631(a) contains multiple clauses, one of which already includes a “use” requirement. Courts adopting the capability test reason that importing a use requirement into the other clauses would be redundant and inconsistent with the statute’s purpose of broadly protecting privacy.

Ambriz v. Google: The Capability Test Applied to AI

In February 2025, the Northern District of California denied Google’s motion to dismiss in Ambriz v. Google, a class action challenging Google Cloud Contact Center AI. GCCCAI is a product that businesses like Verizon, Hulu, GoDaddy, and Home Depot use to power their customer service call centers. It transcribes calls, analyzes them using natural language processing, and provides human agents with real-time suggestions and “smart replies.”

The plaintiffs alleged that when they called these companies’ customer service lines, Google intercepted their communications through GCCCAI without their knowledge or consent. Google raised several defenses. It argued that it merely provided a tool to its business clients for them to lawfully record and analyze their own calls. It argued that it was contractually prohibited from using the call data to train its AI models without customer permission. It argued that the software, not Google, was the entity conducting any interception, and software is not a “person” under CIPA.

The court rejected every argument. Adopting the capability test, it held that Google did not dispute it was technologically capable of using call data for an independent purpose, and that was what the capability test measures. Contractual restrictions were irrelevant to the analysis. Google’s technical architecture gave it the ability to access and benefit from the data. That was enough.

The court also rejected Google’s argument that it was merely an extension of its business clients. Under the capability test, a vendor with independent capacity to exploit communications is not equivalent to a tape recorder, even if it promises not to press play.

Taylor v. ConverseNow: The Capability Test Meets AI Voice Assistants

Six months after Ambriz, the same analysis reached ConverseNow. But the ConverseNow case is worse for the defendant, because ConverseNow did not just have the capability to use customer data for its own purposes. It was doing it.

ConverseNow’s own website and privacy policy stated that the company processes millions of live conversations each month and that its self-learning system evolves to improve the guest experience. The company disclosed that caller data is used to improve its ordering platform, advertisements, products, and services. These were not allegations inferred from technical architecture. They were admissions from the company’s own marketing materials.

The court found that Taylor met every element of a CIPA Section 631 claim. On interception: Taylor alleged she believed she was speaking directly to Domino’s, but her call was redirected to ConverseNow’s AI assistant without notice. On intent: ConverseNow’s business model depended on recording and analyzing calls. On third-party status: applying the capability test, the court held that ConverseNow’s capability and actual use of data to improve its own product made it a third-party interceptor rather than an extension of Domino’s.

The court separately found that Taylor stated a claim under Section 632. ConverseNow had argued, in what the court called a “flippant” submission, that pizza orders did not warrant a reasonable expectation of privacy because there is no privacy interest in pepperoni, sausage, or mushrooms. The court disagreed. Taylor provided her name, address, and credit card details during the call. That was sufficient to allege a confidential communication.

Gutierrez v. Converse Inc.: The Other Direction

The capability test is not the only game in town. In July 2025, the Ninth Circuit affirmed summary judgment for Converse (the shoe company, not the AI company) in a CIPA case involving a Salesforce-powered chat widget on the Converse website.

The plaintiff alleged that Salesforce intercepted her web chat communications as a third party. The Ninth Circuit found insufficient evidence that Salesforce actually intercepted or used the data. Judge Bybee wrote separately to note a potentially more significant point: Section 631(a)'s first clause, which prohibits wiretapping using any instrument connected to a telephone wire, line, or cable, may not apply to internet communications at all. The statute was written for telephone wires. An online chat message sent on a smartphone is not obviously covered.

This is an unpublished decision without precedential force, but it signals a different judicial disposition toward CIPA claims involving internet-based (as opposed to telephone-based) communications. For SaaS founders, the distinction matters: AI voice assistants processing phone calls face a clearer path to liability than web-based chatbots, at least until this question is resolved at the circuit level.

The Popa Wrinkle: Standing After the Ninth Circuit

The capability test faces a separate challenge from the Ninth Circuit’s August 2025 decision in Popa v. Microsoft. In Popa, the court held that routine website tracking did not constitute a concrete injury sufficient for Article III standing. The plaintiff could not demonstrate that session replay technology captured embarrassing, invasive, or otherwise private information.

Popa did not address AI voice assistants or the capability test directly. But it gives defendants in CIPA cases a procedural weapon. If the mere capability to misuse data does not cause a concrete injury to the plaintiff, then courts may dismiss these claims for lack of standing before ever reaching the merits.

The tension between Ambriz (capability to use data is enough to state a claim) and Popa (routine data collection without concrete harm is not enough for standing) is unresolved. Defendants in pending AI-related CIPA cases are already citing Popa in their motions. Whether this argument succeeds will likely depend on the specific facts: a plaintiff who provided a credit card number to an AI voice assistant has a stronger concrete injury argument than a plaintiff whose website browsing was tracked by a cookie.

Why This Matters for SaaS Founders

If your product processes communications between your customer and your customer's end users, you need to understand where you sit on this map. The specific technology matters considerably. Phone calls routed through telephone networks fall squarely within CIPA's original scope, and the ConverseNow and Ambriz cases both involved telephone calls. Web-based chat, email, and other internet communications are on less certain ground, particularly after Judge Bybee's concurrence in Converse Inc. questioning whether Section 631(a)'s first clause applies to internet communications at all. That question is unresolved at the circuit level, and founders should not assume web-based products are safe. But the strongest current exposure is for products that process voice calls, because those fit cleanly within a statute written for telephone wires.

Regardless of the communication channel, the core data flow question is the same: end-user communications enter your system, your system processes them, and you have the technical ability to use that data for purposes beyond the immediate request-response cycle.

CIPA prohibits interception without consent. If valid all-party consent is obtained before the communication happens, the wiretapping theory largely collapses.

Taylor’s claim worked because she had no idea she was talking to ConverseNow instead of Domino’s. If the call had opened with a disclosure (“This call is processed by an AI assistant provided by ConverseNow Technologies and may be recorded and used to improve our service”), the core Section 631 interception claim becomes much harder to sustain. Taylor would have known a third party was involved and proceeded anyway.

This sounds simple. In practice, it breaks down for a specific reason: most SaaS vendors leave consent to their customer, and the customer doesn’t do it, or does it poorly. Domino’s presumably had some form of call recording disclosure, but it did not disclose that a third-party AI vendor was processing the call. ConverseNow’s product was designed to be invisible to the caller. The entire value proposition was a seamless experience where the customer thinks they’re talking to the restaurant.

That product design choice is what created the lawsuit.

For SaaS founders building voice AI products, this means consent is not just a contractual allocation problem. It is a product design problem. Your terms can say the customer is responsible for obtaining end-user consent. But if your product is architecturally designed to hide your involvement from the end user, contractual allocation will not protect you when the end user sues you directly. The product needs to surface a consent mechanism, not just your terms.

If end users are properly informed that a third-party AI system will process their communication, and they proceed, the foundational element of a CIPA claim disappears. The end user consented. There is no unauthorized interception.
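As a sketch of what surfacing consent in the product, rather than only in the contract, might look like, here is a minimal hypothetical call-routing gate. The names, the routing targets, and the mechanism that sets `consent_given` from the caller's response are all illustrative assumptions, not a description of any real system:

```python
from dataclasses import dataclass

@dataclass
class Call:
    caller_id: str
    consent_disclosed: bool = False
    consent_given: bool = False  # set from the caller's response (elided here)

def play_disclosure(call: Call) -> None:
    # Hypothetical IVR prompt; actual wording should come from counsel, e.g.
    # "This call is processed by a third-party AI assistant and may be
    # recorded and used to improve the service."
    call.consent_disclosed = True

def route(call: Call) -> str:
    """Route to AI processing only after disclosure, and only with consent.
    Without consent, the third-party AI vendor never touches the call."""
    if not call.consent_disclosed:
        play_disclosure(call)
    return "ai_assistant" if call.consent_given else "human_agent"
```

The design point is that the gate runs before the communication reaches the vendor's system, so there is no pre-consent interception to litigate over.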

The Training Question: A Spectrum, Not a Binary

A natural reaction to these cases is to decide never to train or fine-tune on voice data. That is the safest legal position, but it is not the only defensible one, and it may not be the right business decision for every company.

The cases create a spectrum of exposure. Where you fall depends on the interaction between consent, data use, and technical architecture.

Worst case: no consent, active training, public marketing. This is ConverseNow. End users had no idea a third party was involved. The company was actively using call data to improve its product. Its website advertised that its self-learning system evolves from processing conversations. The court cited this marketing language as evidence. Every fact cut against the defendant.

Middle case: consent obtained, no training, but capability exists. You disclose third-party AI processing. End users proceed with knowledge. You do not train on any voice data. But your infrastructure technically allows you to access recordings. Under the extension test, you are likely safe: you are functioning as a tool, not exploiting data for your own purposes. Under the capability test, you still face some exposure, because the test asks whether you could use the data, not whether you did. But the factual narrative is significantly weaker for a plaintiff. No actual misuse. Informed consent. The case is harder to bring and harder to win.

Best case: consent obtained, no training, technical controls in place. You disclose third-party AI processing and obtain consent. You do not train on voice data. And you implement technical controls that constrain your own access: end-to-end encryption, automated deletion after processing, architecture that prevents human access to recordings, contractual and technical prohibitions on using data for model improvement. This position weakens even the capability argument, because the capability itself is technically constrained. A plaintiff would need to argue that you could theoretically circumvent your own security architecture, which is a harder claim to sustain at the pleading stage.

The point is not that training on voice data is always wrong. The point is that each decision along this spectrum (consent, data use, technical architecture, marketing language) independently affects your legal exposure. A company that gets consent and trains on data is in a materially different position from one that gets no consent and trains on data. A company that does not train but has no technical controls is in a different position from one that does not train and has implemented access restrictions.

The ConverseNow facts were bad on every dimension simultaneously. Most companies do not need to be in that position.

The AI series on this blog covers the specific contractual provisions in detail. Posts on customer data and AI training, AI subprocessors, and AI-specific acceptable use walk through the drafting. What follows is the checklist of provisions that the ConverseNow line of cases makes urgent.

End-user consent: allocation and mechanism. Your terms need to clearly allocate responsibility for obtaining end-user consent before communications enter your AI system. If your customer (the restaurant, the call center, the healthcare provider) is responsible for disclosing that a third-party AI system will process the communication, your terms need to say so explicitly. But allocation alone is not enough. Your product should support the consent workflow rather than bypass it. If your product is designed so that the end user never learns a third party is involved, contractual allocation will not insulate you from a direct claim by the end user. Build the disclosure into the product experience. A pre-call message, a chat banner, a clear notification before AI processing begins. Taylor’s entire case rested on the fact that she had no idea she was talking to anyone other than Domino’s.

Customer indemnification for consent failures. Allocating consent responsibility to the customer is necessary but not sufficient. You also need the customer to indemnify you for third-party claims arising from their failure to obtain that consent. ConverseNow is the defendant in this case, not Domino’s, even though it was Domino’s responsibility to disclose the AI assistant to callers. Without an indemnification provision, the provider bears the full cost of defending a class action caused by the customer’s failure to follow through on a contractual obligation. Your terms should require the customer to indemnify, defend, and hold harmless the provider against any third-party claims arising from the customer’s failure to provide required disclosures or obtain required consents from end users. This does not prevent end users from suing you directly, but it gives you contractual recourse against the customer whose non-compliance created the exposure.

Data training provisions. Your terms of service and DPA need to explicitly address whether end-user communication data is used for any purpose beyond the immediate service. If your AI improves from processing customer data, say so and ensure the consent mechanism covers that use. If it does not, say that clearly. Ambiguity is what creates class actions. The distinction that matters is between inference-only processing (data enters the model, a response is generated, the data is discarded) and any form of retention or use for model improvement. If you choose to train, the consent disclosure needs to cover training specifically, not just processing.
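The inference-only versus training distinction can be made mechanical in code. This toy pipeline is a sketch under stated assumptions: `transcribe` and `respond` are stand-ins for real speech and LLM calls, and the consent flag would in practice be driven by the disclosure workflow:

```python
class VoicePipeline:
    """Toy illustration of inference-only processing vs. consent-gated
    retention for training. Nothing here is a real ASR or LLM API."""

    def __init__(self, training_consented: bool = False):
        self.training_consented = training_consented
        self.training_data: list[str] = []  # populated only with consent

    def transcribe(self, audio: bytes) -> str:
        return audio.decode("utf-8")  # stand-in for a real speech model

    def respond(self, transcript: str) -> str:
        return f"ack: {transcript}"  # stand-in for a real LLM call

    def handle(self, audio: bytes) -> str:
        transcript = self.transcribe(audio)
        reply = self.respond(transcript)
        if self.training_consented:
            # Retention for model improvement: a separate, disclosed use
            self.training_data.append(transcript)
        # The inference-only path retains nothing past request-response
        return reply
```

If your terms say "inference-only," the retention branch should not exist at all; the flag version is for products whose consent disclosure specifically covers training.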

Technical controls that constrain capability. Under the capability test, contractual promises not to use data are irrelevant. What matters is whether your infrastructure gives you the ability to access and benefit from the data. If you are not training on voice data, implement technical controls that reflect that decision: end-to-end encryption, automated deletion after processing, access restrictions that prevent human review of recordings, audit logs that demonstrate compliance. These controls do not eliminate capability-test exposure entirely, but they materially weaken the factual basis for a claim and make the “you could have used it” argument harder to sustain.
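One of those controls, automated deletion after processing, can be sketched as a retention window enforced in code. The class and parameter names are hypothetical; the point is that a purge job plus its counter gives you both the constraint and an audit trail:

```python
import time

class RecordingStore:
    """Sketch of a TTL-based deletion control. Recordings older than the
    retention window are purged, so the vendor's access to the data is
    technically time-bounded, not just contractually promised."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store: dict[str, tuple[float, bytes]] = {}

    def put(self, call_id: str, audio: bytes) -> None:
        self._store[call_id] = (time.monotonic(), audio)

    def purge_expired(self) -> int:
        """Delete everything past its retention window. The return value
        can feed an audit log demonstrating the control actually runs."""
        now = time.monotonic()
        expired = [cid for cid, (t, _) in self._store.items()
                   if now - t > self.ttl]
        for cid in expired:
            del self._store[cid]
        return len(expired)
```

In production this would be a scheduled job against object storage rather than an in-memory dict, but the shape is the same: deletion is a property of the architecture, not a promise in the DPA.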

Privacy policy disclosures. Your privacy policy needs to disclose AI processing of communications specifically. Not buried in a general “we use service providers” clause. If your system records, transcribes, analyzes, or learns from end-user communications, the privacy policy should describe that processing, identify the categories of data involved, and state the purpose.

Subprocessor list updates. If your AI system routes communications through a third-party LLM provider, that provider is a subprocessor. If you have not updated your subprocessor list and notified DPA customers, you are in breach of your own DPA before you even get to CIPA.

Marketing and technical documentation alignment. Review every public statement about how your AI improves from customer interactions. Your website, pitch deck, blog posts, and product documentation all become evidence if someone files a CIPA claim. If your marketing says the system learns from every conversation but your terms say data is not used for training, you have a contradiction that creates exposure on multiple fronts. ConverseNow’s own marketing language was quoted in the court’s opinion. If you have made a business decision not to train on voice data, make sure your marketing reflects that decision. If you do train, make sure your consent disclosures and contractual provisions cover it.

Where This Goes Next

Neither ConverseNow nor Ambriz has gone to trial. No class has been certified in either case. The capability test has not been reviewed by the Ninth Circuit in the context of AI voice assistants.

But the trajectory is now clear. Two separate federal courts have allowed CIPA claims against AI-powered SaaS vendors to survive motions to dismiss. In both cases, the court adopted the capability test. In both cases, the vendor’s “we just provide a tool” defense failed. That is enough to establish a litigation template. Plaintiffs’ firms have already filed over a dozen similar cases targeting companies that use AI to process customer communications. Expect more. The playbook is proven: identify a SaaS vendor processing voice communications through AI, allege the vendor has the capability to use that data for its own purposes, and seek $5,000 per violation in statutory damages across a class of every end user whose call was processed.

The California legislature attempted to reform CIPA through SB 690, which passed the Senate unanimously but stalled in the Assembly. If reintroduced and enacted, the earliest effective date would be January 2027. That leaves at least two more years of litigation under the current statute, and plaintiffs’ firms know it.

The practical question for SaaS founders is not whether this law will change. It is whether your legal stack and your product are ready for the current state of play. The good news is that the primary mitigation, proper end-user consent, is within your control. The cases that have survived motions to dismiss share a common fact pattern: the end user had no idea a third-party AI vendor was involved. Fix that, and you have addressed the core element of the claim. Layer in clear data training commitments and technical controls that constrain your own capability, and you are in a materially different position from the defendants in these cases. The time to make those changes is before someone files.


This is the first post in the AI Privacy Litigation series. Next: The CIPA Wave: AI Chatbots, Session Replay, and the Privacy Lawsuits Heading Your Way.

For the contractual framework, see the AI-Enabled SaaS series, particularly Customer Data and AI Training and AI Subprocessors, the EU AI Act, and the Regulatory Disclosure Gap.

No Boiler provides self-service legal document generation and educational content. This material and our service is not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them. No Boiler is not a law firm, and communications with us do not create an attorney-client relationship or carry any expectation of confidentiality. Use of our platform and content is governed by our Terms of Service and Privacy Policy.
