
AI Scribes, Fabricated Consent, and Regulated Data: What the Sharp HealthCare Case Means for SaaS Vendors

A patient's medical visit was recorded by an AI scribe without consent. His chart said he was advised and consented. He says that never happened. The AI appears to have generated the consent documentation itself. When regulated data enters the picture, the wiretapping pattern compounds with sector-specific privacy statutes and a record integrity problem no contract disclaimer can fix.

No Boiler

A patient discovers his doctor’s visit was recorded by an AI scribe he never consented to. His medical chart says he was “advised” and “consented.” He says that never happened. The AI appears to have generated the consent documentation itself. The vendor retains the recording for 30 days and cannot delete it on demand. This is not a hypothetical. It is an active class action with an estimated 100,000 affected patients.


In July 2025, Jose Saucedo went to a Sharp Rees-Stealy clinic in San Diego for a routine physical exam. During the visit, a microphone-enabled device on the clinician’s phone captured the entire conversation. The audio was transmitted to Abridge, a Pittsburgh-based AI company that converts patient-clinician conversations into structured clinical notes. Saucedo was not told any of this was happening.

He found out later, when he read his medical notes in the patient portal. The notes appeared to be AI-generated. More concerning, the documentation stated that Saucedo had been “advised” that his visit was being recorded and that he had “consented.” Saucedo says neither of those things happened.

When he contacted Sharp and asked them to delete the recording, he was told that Abridge retains data for 30 days after a visit and that the recording could not be promptly deleted. Sharp offered to remove or modify the AI-generated clinical note. But the recording itself, sitting on a third-party vendor’s cloud servers, was beyond immediate reach.

On November 26, 2025, Saucedo filed a proposed class action in San Diego Superior Court. The lawsuit names Sharp HealthCare and its affiliated medical groups. Abridge is not named as a defendant, but its technology, data practices, and retention policies are central to every claim.

The case matters for SaaS founders because it takes the ConverseNow pattern (third-party AI vendor processes communications without end-user consent) and compounds it with regulated data, fabricated compliance documentation, and a vendor retention policy that prevented the customer from meeting its deletion obligations. Each of those layers independently creates a lesson for anyone building or selling AI-enabled SaaS into regulated industries.

The Claims

The complaint alleges violations of three California statutes, each targeting a different dimension of the same conduct.

The California Invasion of Privacy Act (CIPA), Sections 631 and 632. CIPA is a 1967 anti-wiretapping statute that prohibits the interception or recording of communications without consent. It carries criminal penalties and a private right of action with statutory damages of at least $5,000 per violation, no proof of actual harm required. It is the same statute at issue in the ConverseNow line of cases. Saucedo alleges that Abridge’s technology intercepted and recorded his conversation with his doctor without his knowledge or consent. California is an all-party consent state. Recording a confidential conversation without consent from everyone involved is a criminal act.

The Confidentiality of Medical Information Act (CMIA). California’s medical privacy statute requires written authorization before a healthcare provider can share identifiable medical information with an outside company. The complaint alleges that Sharp transmitted Saucedo’s medical conversation, including symptoms, diagnoses, medications, treatment plans, and personal identifiers, to Abridge without obtaining written authorization. CMIA is separate from HIPAA and provides its own private right of action under California law.

HIPAA (as background, not a direct claim). HIPAA does not provide a private right of action. Patients cannot sue providers directly for HIPAA violations. But HIPAA obligations form the backdrop: transmitting protected health information to a third-party vendor without proper patient consent implicates the provider’s obligations under the HIPAA Privacy Rule, and a HIPAA violation can trigger enforcement by the HHS Office for Civil Rights and state attorneys general. The complaint references HIPAA to establish the regulatory framework, even though the direct claims run through CIPA and CMIA.

Plaintiffs’ attorneys estimate the proposed class includes approximately 100,000 patient encounters recorded since Sharp’s Abridge rollout began in April 2025. At $5,000 per CIPA violation, that is roughly $500 million in potential statutory damages before you reach CMIA claims, punitive damages, or injunctive relief.

Why This Case Is Worse Than ConverseNow

The ConverseNow case involved pizza orders. The court rejected the argument that there was no privacy interest in pepperoni and sausage, because the caller also provided her name, address, and credit card number. Fair enough. But the data at issue was commercial transaction data.

The Sharp case involves the most sensitive category of information the law recognizes. Every word spoken in a medical exam room is potentially protected: symptoms, diagnoses, medications, mental health disclosures, sexual health discussions, substance use, family medical history. The privacy interest is not a close question. California law treats medical conversations as presumptively confidential.

That difference matters in three ways.

First, the consent requirements are more demanding. CIPA’s general wiretapping prohibition requires all-party consent in California. But CMIA separately requires written authorization before medical information is shared with third parties. Verbal consent is not sufficient. A pre-recorded message is not sufficient. The provider needs the patient’s written authorization before transmitting identifiable medical information to an outside company. Sharp allegedly had no compliant consent process in place.

Second, the regulatory overlay is thicker. A SaaS vendor processing restaurant phone calls needs to worry about CIPA. A SaaS vendor processing medical conversations needs to worry about CIPA, CMIA, HIPAA, and potentially Washington’s My Health My Data Act, state consumer protection statutes, and emerging state AI transparency laws. California’s AB 3030, effective January 1, 2025, requires healthcare providers using generative AI to include disclaimers in patient communications and provide instructions for contacting a human provider. The compliance surface area expands with every regulatory layer.

Third, the damages potential is higher. Medical privacy claims carry reputational weight that commercial privacy claims do not. Jurors immediately grasp what it means to learn that a doctor’s appointment was recorded without the patient’s knowledge. The emotional response is visceral in a way an intercepted pizza order is not.

The Fabricated Consent Problem

The most significant allegation in the Sharp complaint is not the recording itself. It is that the AI system generated documentation falsely stating the patient had been advised and had consented.

Saucedo’s medical chart contained language indicating he was informed about the recording and agreed to it. He says that conversation never took place. The complaint alleges that Abridge’s system automatically inserted this consent language into clinical notes as part of its documentation workflow.

If true, this is not a disclosure failure. It is a record integrity failure. The system did not merely fail to obtain consent. It fabricated evidence that consent was obtained.

For SaaS vendors, this is the most dangerous fact pattern in the entire case. Every other issue (missing consent, third-party data transmission, retention policies) is a process failure that can be fixed with better workflows. Auto-generated compliance documentation that does not reflect reality is a product design defect that undermines the entire trust model.

Consider what this means in context. A healthcare provider deploys an AI scribe. The AI scribe generates clinical notes. Those clinical notes are part of the patient’s medical record, a legal document. If the AI inserts language stating the patient consented when the patient did not, the medical record is inaccurate. The provider has a legal obligation to maintain accurate medical records. The patient has a right to request corrections. And if the fabricated consent language was relied upon to justify the recording (by Sharp, by Abridge, by an insurer, by a regulator), every downstream action based on that false predicate is compromised.

This is not a problem unique to healthcare. Any SaaS product that generates compliance-relevant documentation (audit logs, consent records, processing records, data deletion confirmations) carries the same risk. If the documentation your product generates does not accurately reflect what happened, you have created a liability that no contractual disclaimer can address.

The Vendor’s Shadow Exposure

Sharp is the defendant. Abridge is not. But Abridge’s exposure is real, even without being named.

Abridge’s technology captured the audio. Abridge’s cloud stored the recording. Abridge’s system allegedly generated the fabricated consent language. Abridge’s retention policy prevented Sharp from deleting the recording when the patient requested it. If Sharp faces liability, its first call will be to its vendor agreement with Abridge.

The dynamics here mirror ConverseNow, but with an additional complication. In the ConverseNow case, the end user sued the vendor directly. In the Sharp case, the end user sued the healthcare provider (the customer), and the customer’s recourse against the vendor depends entirely on what the vendor agreement says.

This creates three questions every SaaS vendor selling into regulated industries needs to answer about their own agreements.

Who bears the cost when the customer gets sued? If Sharp’s vendor agreement with Abridge does not include an indemnification provision covering third-party claims arising from the vendor’s technology, Sharp bears the full cost of defending this class action alone. From the vendor’s perspective, if your product processes regulated data and your customer gets sued because of how your product handles that data, your agreement needs to address indemnification, or your customer relationship will not survive the first lawsuit.

Whose retention policy controls? Saucedo asked Sharp to delete his recording. Sharp told him Abridge retains data for 30 days. The patient’s right runs against the provider. The provider’s ability to honor that right depends on the vendor’s data retention practices. If the vendor’s retention policy prevents the customer from meeting its legal obligations to the end user, the vendor agreement has a structural defect. Your DPA needs to give your customer the ability to direct deletion of end-user data, and your infrastructure needs to support that directive within the timeframes your customer’s regulatory obligations require.

Who is responsible for the accuracy of generated documentation? If Abridge’s system inserted consent language into clinical notes that did not reflect what actually occurred, the question of liability depends on whether the vendor agreement allocates responsibility for the accuracy of AI-generated content. Most AI vendor agreements include broad disclaimers about output accuracy. But when the output is a medical record, the stakes of inaccuracy are fundamentally different from a hallucinated summary or an incorrect data point in a business report. If your product generates documentation that your customer will treat as a record of what happened, your agreement needs to address what happens when that documentation is wrong.

The Ambient AI Scribe Market

This case is not an isolated incident. It sits at the center of the fastest-growing segment of healthcare AI.

Ambient AI scribe spending grew 2.4x in 2025, generating an estimated $600 million in revenue. Abridge, valued at $5.3 billion after a $300 million Series E in June 2025, is deployed in over 150 health systems, including Kaiser Permanente, Mayo Clinic, Johns Hopkins, and Duke Health. The company reports supporting 50 million medical conversations per year. Competitors Nuance (owned by Microsoft), Ambience Healthcare, and others are scaling at similar rates.

The market is growing faster than the compliance infrastructure supporting it. Vendors are deploying into health systems that have not built adequate consent workflows. State laws vary: 13 states, including California, Florida, and Massachusetts, require all-party consent to record conversations. The remaining states allow one-party consent. But even in one-party consent states, HIPAA and state medical privacy statutes impose separate authorization requirements for sharing medical information with third-party vendors.
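
In product terms, the jurisdictional split means the required consent mode is a deployment-time input, not a constant. A partial, illustrative sketch (the state set below names only the three states the complaint coverage mentions, not all 13; verify against current law before relying on anything like this):

```python
# Partial, illustrative mapping; NOT a complete or authoritative list of
# all-party consent states. CA, FL, and MA are three of the 13.
ALL_PARTY_STATES = {"CA", "FL", "MA"}

def consent_mode(state: str, medical_data: bool) -> str:
    """Return the recording-consent requirement for a deployment.

    Medical data layers written-authorization obligations (e.g. CMIA in
    California, HIPAA authorizations generally) on top of wiretap consent,
    even in one-party consent states.
    """
    base = "all-party" if state in ALL_PARTY_STATES else "one-party"
    if medical_data:
        return base + " + written authorization for third-party sharing"
    return base
```

The point of encoding it this way is that a vendor shipping into multiple states cannot hard-code a single consent workflow; the strictest applicable standard has to drive the product behavior.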

Every one of these deployments faces the same structural question the Sharp case exposes: who is responsible for obtaining patient consent, and what happens when that consent is not obtained?

What SaaS Vendors Selling Into Regulated Industries Need to Address

The Sharp case teaches lessons that extend well beyond healthcare. Any SaaS vendor whose AI product processes regulated data on behalf of its customers faces the same pattern. The specific regulations differ (HIPAA for healthcare, GLBA for financial services, FERPA for education), but the structural exposure is identical: your customer has regulatory obligations to its end users, your product touches the data those obligations protect, and your vendor agreement determines who bears the cost when something goes wrong.

Consent workflows must be built into the product, not delegated to the customer. The Sharp complaint makes clear that contractual allocation of consent responsibility is not enough when the product is designed to operate without surfacing the vendor’s involvement. If your AI product records, transcribes, or processes end-user interactions, the product itself needs to support a consent mechanism that meets the applicable legal standard. For healthcare in California, that means written authorization under CMIA. For general communications in California, that means all-party consent under CIPA. Delegating consent to the customer and hoping they figure it out is how you end up as the technology behind a class action.
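
What "built into the product" means concretely is a hard gate: recording cannot start until a consent record meeting the applicable standard exists for the encounter. A minimal sketch, with all names hypothetical (this is not Abridge's or any vendor's actual API):

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class ConsentStandard(Enum):
    ALL_PARTY_VERBAL = auto()       # CIPA-style all-party consent
    WRITTEN_AUTHORIZATION = auto()  # CMIA-style written authorization

@dataclass
class ConsentRecord:
    encounter_id: str
    standard: ConsentStandard
    parties_consenting: int
    parties_present: int
    written: bool

def required_standard(state: str, medical: bool) -> ConsentStandard:
    # Medical conversations in California require written authorization
    # under CMIA; the general wiretap rule is all-party consent.
    if medical and state == "CA":
        return ConsentStandard.WRITTEN_AUTHORIZATION
    return ConsentStandard.ALL_PARTY_VERBAL

def may_start_recording(record: Optional[ConsentRecord],
                        state: str, medical: bool) -> bool:
    """Hard gate: no consent record meeting the standard, no recording."""
    if record is None:
        return False
    needed = required_standard(state, medical)
    if needed is ConsentStandard.WRITTEN_AUTHORIZATION and not record.written:
        return False
    return record.parties_consenting >= record.parties_present
```

The design choice that matters is the default: absence of a consent record blocks capture, rather than capture proceeding while consent is assumed.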

Generated documentation must be accurate. If your product creates any document that your customer will rely on as a record of what occurred (consent records, audit logs, clinical notes, compliance certifications, processing records), that documentation must accurately reflect reality. Auto-generating language claiming that consent was obtained, or that a process was followed, when it was not is not a technical bug. It is a product defect with legal consequences that cannot be disclaimed away.
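
One way to avoid the fabrication pattern the complaint alleges is to make consent language in generated output derivable only from a verified consent record, never from a template. A hypothetical sketch (field names are illustrative):

```python
from typing import Optional

def consent_sentence(consent_record: Optional[dict]) -> str:
    """Emit consent language in a generated note only when a verified
    consent record exists.

    The generator never asserts consent by default. The absence of a
    record produces an explicit review flag, not a plausible-sounding
    sentence claiming consent was obtained.
    """
    if consent_record and consent_record.get("verified"):
        return (f"Patient consented to recording on "
                f"{consent_record['timestamp']} "
                f"(method: {consent_record['method']}).")
    return "[CONSENT NOT ON FILE - recording should not have occurred; flag for review]"
```

The inversion is the whole point: the documentation describes a fact the system can prove, instead of a fact the workflow assumes.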

Deletion must be operationally possible on the customer’s timeline. If your customer’s end users have a legal right to request deletion of their data (and under HIPAA, CCPA, GDPR, and most state privacy laws, they do), your retention practices need to support that right. A 30-day minimum retention period may serve your operational needs, but if it prevents your customer from honoring a deletion request, you have created a compliance gap in your customer’s regulatory posture. Your DPA should give customers the ability to direct deletion, and your infrastructure should be able to execute that directive within the timeframes the customer’s obligations require.
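
Operationally, that means a customer deletion directive overrides the default retention window rather than waiting it out. A sketch under hypothetical names:

```python
import datetime
from typing import Optional

DEFAULT_RETENTION_DAYS = 30  # vendor's operational default (the Sharp fact pattern)

def effective_deletion_date(created: datetime.date,
                            directive_date: Optional[datetime.date],
                            customer_sla_days: int = 0) -> datetime.date:
    """A customer deletion directive wins over the retention default.

    Without a directive, data ages out on the vendor's schedule. With one,
    deletion happens by the directive date plus whatever SLA the customer's
    regulatory obligations allow, if that is earlier.
    """
    default = created + datetime.timedelta(days=DEFAULT_RETENTION_DAYS)
    if directive_date is None:
        return default
    directed = directive_date + datetime.timedelta(days=customer_sla_days)
    return min(default, directed)
```

Under the alleged facts, the architecture had no second branch: the 30-day default was the only path, which is exactly the structural defect described above.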

Indemnification needs to reflect regulated data risk. Standard SaaS indemnification provisions are typically limited to IP infringement claims. When your product processes regulated data, the indemnification structure needs to be broader. Your customer needs contractual recourse if your product’s data handling, retention practices, or generated documentation results in regulatory enforcement or third-party claims. Conversely, you need your customer to indemnify you for their failure to obtain required consents or authorizations before data enters your system. The indemnification should flow both directions, reflecting the shared responsibility model that regulated data requires.

Your BAA, DPA, or data processing terms need to account for AI-specific data flows. If your AI product processes data through a pipeline that involves cloud transmission, temporary storage, model inference, and output generation, your data processing terms need to describe that pipeline accurately. Broad language about “processing data on behalf of the customer” does not capture the specific steps involved in ambient AI documentation: audio capture, cloud transmission, transcription, note generation, EHR integration, and retention. Each step involves a different data handling decision, and each decision needs to be reflected in the agreement.
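
The pipeline that paragraph describes can be enumerated explicitly, which is also what a well-drafted DPA annex has to do. An illustrative representation (steps, storage locations, and retention values are examples, not any vendor's actual architecture):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class PipelineStep:
    name: str        # processing step
    data_form: str   # what the data looks like at this step
    storage: str     # where it lives
    retention: str   # how long it persists

# Illustrative only; actual steps and retention periods vary by vendor.
AMBIENT_SCRIBE_PIPELINE: List[PipelineStep] = [
    PipelineStep("audio capture", "raw audio", "clinician device", "transient"),
    PipelineStep("cloud transmission", "encrypted audio stream", "in transit", "transient"),
    PipelineStep("transcription", "text transcript", "vendor cloud", "per retention policy"),
    PipelineStep("note generation", "structured clinical note", "vendor cloud", "per retention policy"),
    PipelineStep("EHR integration", "final note", "customer EHR", "medical record retention rules"),
    PipelineStep("retention", "audio + transcript", "vendor cloud", "e.g. 30 days"),
]
```

If you cannot fill in every row of a table like this for your own product, neither can your data processing terms, and that gap is where "processing data on behalf of the customer" stops being an accurate description.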

The Through-Line

The Sharp HealthCare case sits at the intersection of two trends this series has been tracking. The first is the ConverseNow pattern: AI-powered SaaS vendors processing end-user communications face wiretapping claims when consent is absent. The second is the Verily pattern (covered in the next post in this series): SaaS vendors handling regulated data face compounding exposure when their data practices do not match their contractual commitments.

Sharp adds a third dimension that neither ConverseNow nor Verily presents: the product itself generated false compliance documentation. That is not a contractual gap or a consent workflow failure. It is a product design choice that created a record integrity problem, and it is the allegation most likely to define the trajectory of this case.

For SaaS founders building AI products that touch regulated data, the lesson is that getting the contract right is necessary but not sufficient. The product itself needs to support the compliance workflows your customers depend on. If it does not, or worse, if it generates documentation suggesting compliance occurred when it did not, no amount of contractual engineering will protect you.


This is the second post in the AI Privacy Litigation series. Previous: The Capability Test: How Courts Decide Whether Your SaaS Product Is a Wiretap. Next: The Verily HIPAA Whistleblower Case: What Happens When a SaaS Vendor Breaches Its BAA.

For the contractual framework, see the AI-Enabled SaaS series, particularly Customer Data and AI Training and AI Subprocessors, the EU AI Act, and the Regulatory Disclosure Gap.

No Boiler provides self-service legal document generation and educational content. This material and our service is not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them. No Boiler is not a law firm, and communications with us do not create an attorney-client relationship or carry any expectation of confidentiality. Use of our platform and content is governed by our Terms of Service and Privacy Policy.
