
FTC AI-Washing Enforcement: What SaaS Founders Get Wrong About Marketing AI Features

The FTC has brought over a dozen enforcement actions against companies that overstate what their AI does. If your marketing says 'AI-powered insights you can trust' and your terms say 'as-is, may be inaccurate,' you have a contradiction that creates exposure on two fronts. Here's how the enforcement pattern works, why it survived a change in administration, and what it means for your contracts.

No Boiler

Your marketing page says “AI-powered insights you can trust.” Your terms of service say “AI outputs are provided as-is, may be inaccurate, and should not be relied upon.” The Federal Trade Commission (FTC) sees both documents. So does the plaintiff’s attorney. Here’s how the enforcement pattern works, why it survived a change in administration, and what it means for your contracts and your marketing.


In September 2024, the FTC launched Operation AI Comply, a coordinated enforcement sweep targeting companies that overstated what their AI products could do. The initial wave included five cases: three business opportunity schemes (Ascend Ecom, Empire Holdings/Ecommerce Empire Builders, and FBA Machine/Passive Scaling) that promised consumers AI-powered passive income, a company called DoNotPay that marketed itself as “the world’s first robot lawyer,” and Rytr, an AI writing assistant whose tools enabled users to generate fake reviews at scale.

The common thread was not sophisticated AI fraud. It was ordinary deception dressed up in AI language. Ascend Ecom charged consumers tens of thousands of dollars for AI-powered ecommerce storefronts that rarely produced any income. DoNotPay promised consumers they could “sue for assault without a lawyer” and “generate perfectly valid legal documents in no time,” but the company had never tested whether its chatbot’s output was comparable to a human lawyer’s work and did not employ or retain any attorneys. The settlement required DoNotPay to pay $193,000 and imposed advertising restrictions.

A SaaS founder reading those cases might conclude that AI-washing enforcement targets obvious scams and does not apply to legitimate software companies. That conclusion is wrong.

The Enforcement Pattern Survived a Change in Administration

Operation AI Comply launched under FTC Chair Lina Khan. When the Trump administration took over and Andrew Ferguson became Chair, there was speculation that AI enforcement would soften in favor of a more innovation-friendly posture. That did not happen.

Throughout 2025, the FTC brought additional cases that followed the same blueprint. In March 2025, the FTC filed against Click Profit, an online business opportunity that allegedly cost consumers at least $14 million through false AI-powered earnings claims. In August 2025, the FTC finalized an order against Workado (formerly Content at Scale AI) for advertising its AI content detection tool as “98 percent accurate” when FTC testing showed approximately 53 percent accuracy in general settings. The tool had been trained largely on academic text and performed significantly worse outside that domain.

The most significant 2025 action was against Air AI Technologies, filed in August 2025. Air AI marketed a conversational AI product to entrepreneurs and small businesses, claiming it could fully replace human sales representatives. The FTC alleged the product was either unavailable or failed at basic tasks like making calls, scheduling, recording emails, and answering questions accurately. Estimated losses for businesses that purchased the product reached $250,000.

The Air AI case is the one most relevant to SaaS founders because the defendant was not a get-rich-quick scheme. It was a company selling agentic AI to business customers, exactly the category of product that seed-stage SaaS companies are building today. The FTC’s complaint targeted the gap between what the product was marketed to do and what it actually did.

In January 2026, the FTC resolved its case against Growth Cave, another AI-related marketing enforcement action. FTC Chairman Ferguson stated that the agency has found that companies’ representations about their AI products are “not infrequently” wildly inaccurate. The Bureau of Consumer Protection’s director emphasized that AI cannot be broadly adopted without trust, and that the FTC will target companies that make false or misleading claims about what their AI can do.

The Three Patterns That Create Exposure

Across all the enforcement actions, three patterns emerge. Each one applies to legitimate SaaS companies, not just fraudulent schemes.

Pattern 1: Claiming capabilities your product does not have

This is the most straightforward violation. If your marketing says your product does something and it does not, that is a deceptive trade practice regardless of whether AI is involved. AI does not create a new legal theory here. It amplifies an existing one.

The Workado case illustrates this precisely. The company claimed its AI content detection tool was 98 percent accurate. The FTC tested it and found 53 percent accuracy outside the training domain. That is not a close call. It is an unsubstantiated claim.

For SaaS founders, the risk is more mundane than outright fabrication. It looks like this: your product uses a GPT wrapper for one feature, but your website describes the entire product as “AI-powered.” Your AI feature works well in controlled demos but underperforms in production. Your accuracy benchmarks were measured on your training set, not on real-world inputs. Your marketing references “machine learning” when the underlying logic is a rules engine with static thresholds.
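The training-set-benchmark problem is easy to see in miniature. The sketch below is a hypothetical illustration, not code from any FTC case: a toy “AI detector” keyed to a quirk of its academic training data scores perfectly on in-domain text and roughly coin-flip on casual prose, the same shape as the Workado gap. All names, samples, and numbers are invented for the example.

```python
# Hypothetical illustration: why an accuracy claim measured only on
# in-domain data overstates real-world performance. Everything here
# is invented for the example.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

def toy_detector(text):
    """A toy 'AI-text detector' that keys on a marker common in its
    academic training data. Returns True if it flags the text as AI."""
    return "therefore" in text.lower()

# In-domain samples resemble the academic training distribution...
in_domain = [
    ("Therefore, the results confirm the hypothesis.", True),
    ("The trial was repeated; therefore we conclude it holds.", True),
    ("We collected samples from three field sites.", False),
    ("Participants were recruited via campus flyers.", False),
]
# ...out-of-domain samples (casual, real-world prose) do not.
out_of_domain = [
    ("lol that movie was wild", False),
    ("new phone who dis", False),
    ("This essay was drafted by a chatbot.", True),
    ("Honestly the AI wrote most of my cover letter.", True),
]

for name, data in [("in-domain", in_domain), ("out-of-domain", out_of_domain)]:
    preds = [toy_detector(text) for text, _ in data]
    labels = [label for _, label in data]
    print(f"{name} accuracy: {accuracy(preds, labels):.0%}")
# → in-domain accuracy: 100%
# → out-of-domain accuracy: 50%
```

The detector never changed; only the test distribution did. A marketing claim built on the first number, without the second, is exactly the kind of unsubstantiated general performance claim the FTC challenged.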

The FTC’s standard is not whether you intended to mislead. It is whether a reasonable consumer would take away a misleading impression from your marketing. If your headline says “AI-powered compliance in 48 hours” and the product cannot deliver compliance to legal standards, the headline is the violation.

Pattern 2: Claiming AI-driven results without substantiation

The business opportunity cases (Ascend Ecom, FBA Machine, Click Profit, Growth Cave) all involved earnings claims tied to AI capabilities. The FTC’s position is that if you claim your AI produces specific outcomes, you need substantiation for both the AI capability and the outcome.

For SaaS companies, this pattern shows up in case studies, testimonials, and ROI calculators. “Our AI reduced churn by 40%” requires evidence that the AI (not some other factor) produced the reduction and that the number is representative. “AI-powered insights that save 10 hours per week” requires substantiation that the time savings are real and attributable to the AI feature. The FTC expects companies to have a reasonable basis for claims before they make them, not after someone challenges them.

Pattern 3: The gap between marketing and contractual disclaimers

This is the pattern most relevant to SaaS founders and the one your legal stack needs to address directly.

Your marketing page says your AI product delivers accurate, reliable results. Your terms of service say AI outputs are provided as-is, without warranties of accuracy, and should not be relied upon for any specific purpose. Both documents are public. Both are attributable to your company. Together, they tell a story: you are marketing capabilities you are simultaneously disclaiming.

The FTC has not brought a case specifically on this theory yet. But the logic is inevitable. If your marketing creates expectations that your contractual disclaimers explicitly contradict, a regulator or a plaintiff can argue that the marketing is deceptive because the company itself does not stand behind the claims it makes. The disclaimer in your terms is evidence that you know the product does not perform as marketed.

This is also the gap that creates contractual exposure with your customers. If your marketing says “insights you can trust” and your customer relies on an AI output that turns out to be wrong, your warranty disclaimer may protect you contractually. But the customer’s attorney will put your marketing page next to your disclaimer and argue that the marketing created a reasonable expectation that the disclaimer cannot override.

The SEC Parallel: AI-Washing for Companies Raising Capital

If you are a SaaS founder raising venture capital, there is a parallel enforcement track you need to know about. In March 2024, the Securities and Exchange Commission (SEC) settled charges against two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., for making false and misleading statements about their use of AI. Delphia paid $225,000 and Global Predictions paid $175,000.

The SEC’s enforcement theory is the same as the FTC’s, applied to securities disclosures rather than consumer marketing. If you tell investors your product is “AI-powered” or that you use AI to drive specific outcomes, those statements need to be accurate. SEC Chair Gary Gensler stated that “AI washing hurts investors” and that investment advisers should not “mislead the public by saying they are using an AI model when they are not.”

For SaaS founders, the SEC risk surfaces in pitch decks, investor updates, and public statements about your technology. If your deck tells investors your product uses “proprietary AI” and the product is built on a third-party API with a prompt template, you have a disclosure problem. If your investor materials claim AI-driven growth metrics that you cannot substantiate, you face the same exposure Delphia and Global Predictions faced. The penalties were modest in those cases, but the precedent is set.

What to Do About It

Align your marketing with your warranty disclaimers. Put your marketing page and your terms of service side by side. If a reasonable person would see a contradiction between what your marketing promises and what your terms disclaim, you have a problem that needs to be resolved in one direction or the other. Either strengthen your warranty to match your marketing (if your product actually delivers what you claim), or moderate your marketing to match your disclaimer (if your product has limitations the disclaimer reflects). The current state of affairs at most SaaS companies, aggressive marketing paired with blanket disclaimers, is precisely the gap regulators and plaintiffs exploit.

Substantiate AI claims before you publish them. Every claim about what your AI does, including accuracy percentages, time savings, cost reductions, and performance benchmarks, needs a reasonable basis before it goes on your website, in a case study, or in a pitch deck. Document the testing methodology, the data set, the conditions under which the claim was measured, and any limitations. If the claim was measured on a training set and you have not validated it on real-world inputs, do not present it as a general performance claim.

Do not describe your product as “AI-powered” unless AI is a material component. The FTC’s scrutiny is highest when “AI” is used as a marketing hook that does not reflect the product’s actual architecture. If your product uses a rules engine, a statistical model, or a third-party API call for a single feature, describing the entire product as “AI-powered” invites the question of whether that characterization is substantiated. Be specific about what the AI does and where it operates in your product.

Disclose AI limitations in your terms and your product documentation. Your warranty disclaimer should not be a blanket “as-is” statement buried in your terms. It should describe the specific limitations of your AI features: the types of inputs where accuracy has been validated, the use cases where the AI has not been tested, the categories of decisions that should not rely solely on AI output. This specificity protects you more effectively than a broad disclaimer, because it demonstrates that you have thought about the limitations and communicated them honestly.

Review investor materials with the same rigor as marketing materials. If the SEC is enforcing AI-washing against investment advisers, the same standards apply to the claims you make to your own investors. Your pitch deck, investor updates, and board materials should accurately represent your AI capabilities, the stage of development, and the basis for any performance claims.

The Through-Line

The first four posts in this series covered cases where SaaS vendors were sued by end users, patients, or job applicants. This post covers a different actor: the regulator. The FTC does not need a plaintiff. It does not need a class action. It investigates, files a complaint, and imposes orders, penalties, and bans.

The principle connecting all five posts is the same. In every case, the exposure arose from a gap between what the company said and what the company did. ConverseNow’s marketing said its system learns from every conversation. Its terms did not disclose third-party interception. Sharp’s AI generated consent documentation that did not reflect reality. Verily represented HIPAA compliance while knowing breaches had occurred. Workday claimed its tools were neutral while the algorithms produced disparate impact. And the FTC’s targets marketed AI capabilities their products could not deliver.

For SaaS founders, the lesson is alignment. Your marketing, your terms, your privacy policy, your DPA, your investor materials, and your product documentation all need to tell the same story. When they don’t, someone will notice. Whether it is an end user, a customer, a regulator, or a plaintiff’s attorney, the gap is the exposure.


This is the fifth post in the AI Privacy Litigation series. Previous: Algorithmic Discrimination and the SaaS Vendor: Mobley v. Workday at Class Certification. Next: AI Disclosure Laws by State: A Practical Compliance Map for B2B SaaS.

For the contractual framework on AI output disclaimers and marketing alignment, see AI Outputs: IP Ownership, Accuracy Warranties, and the Marketing Claims Problem in the AI-Enabled SaaS series.

No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them. No Boiler is not a law firm, and communications with us do not create an attorney-client relationship or carry any expectation of confidentiality. Use of our platform and content is governed by our Terms of Service and Privacy Policy.
