Your AI feature generates a report for a customer. The customer publishes it. A third party claims the report incorporates their copyrighted content. Your customer turns to you and asks: whose problem is this?
Your AI feature recommends a course of action. The customer follows it. The recommendation is wrong. The customer suffers a loss and points to your marketing page, which says “AI-powered insights you can trust.” Your terms say outputs are provided as-is. Your customer’s lawyer asks: which one controls?
Your AI feature produces an analysis that looks identical to what it produced for another customer last week. Both customers believe they own what your product generated for them. Neither is wrong, exactly, and neither is right.
These are not hypothetical scenarios. They are the practical consequences of shipping AI features under contracts that were drafted for deterministic software. Traditional SaaS produces predictable outputs from defined inputs. AI produces probabilistic outputs from variable inputs, and the legal framework for who owns those outputs, who is responsible when they are wrong, and who is liable when they infringe is still being written.
This post covers the three questions your contracts need to answer about AI outputs: ownership, accuracy, and infringement. It also covers a fourth problem that sits outside your contracts but directly affects your legal exposure: the gap between what your marketing says about your AI and what your terms actually promise.
Who Owns the Output?
Under current US copyright law, works that are purely machine-generated, with no meaningful human creative input, are not eligible for copyright protection. The US Copyright Office has taken the position that copyright requires human authorship. A work produced entirely by an AI system, regardless of how sophisticated the prompt or how specific the instructions, does not qualify.
This creates an ownership gap. Your customer assumes they own what your product generates for them. Your terms may assign output ownership to the customer. But if the output is not copyrightable, what exactly are you assigning? You cannot transfer a property right that does not exist.
The practical answer is that the contractual allocation still matters, even if copyright does not attach. As between the parties, you can define who has the right to use, distribute, and commercialize the output. The assignment may not create a copyright, but it establishes a contractual right enforceable between the parties. This is analogous to how parties allocate rights in factual compilations or databases, where the underlying facts may not qualify for copyright protection but the asset is still commercially valuable.
Here is how to structure it. The customer should own the outputs generated from their inputs, to the extent those outputs are capable of ownership under applicable law. You retain all rights to the underlying models, algorithms, and training data. The assignment should be qualified: “to the extent assignable” acknowledges the unsettled legal landscape without overpromising.
Three provisions need to accompany the ownership assignment; illustrative language combining all of them follows below.
First, a non-uniqueness acknowledgment. Your AI feature may generate substantially similar or identical outputs for different customers who provide similar inputs. Two customers who ask the same question may get the same answer. The customer needs to understand that owning the output does not mean owning it exclusively. Without this acknowledgment, a customer could claim exclusive rights to a common output and attempt to prevent you from serving other customers.
Second, an originality disclaimer. You are not warranting that the output is an original creative work. The availability and scope of IP protection for AI-generated content varies by jurisdiction and is not guaranteed. This manages expectations and prevents the customer from assuming the output has protections it may not have.
Third, an upstream limitation. If your AI features are powered by a third-party LLM provider, that provider’s terms may impose conditions or limitations on output usage. Some providers retain certain rights over outputs, restrict specific commercial uses, or impose conditions on how outputs can be represented to end users. You cannot grant your customer rights that conflict with your upstream obligations. Review your LLM provider agreements before making output ownership commitments.
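Here is a minimal sketch of how the assignment and the three accompanying provisions fit together. This is illustrative language only, not a drafting template, and the defined terms are placeholders:

“As between the parties, and to the extent assignable under applicable law, Provider assigns to Customer all right, title, and interest in Outputs generated from Customer’s Inputs. Provider retains all right, title, and interest in the Service, including the underlying models, algorithms, and training data. Customer acknowledges that (a) the Service may generate the same or similar Outputs for other customers, and nothing in this Section grants Customer exclusive rights in any Output; (b) Provider makes no representation that any Output is original or protectable under intellectual property law; and (c) Customer’s use of Outputs is subject to any applicable restrictions in the terms of Provider’s third-party model providers.”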
When the Output Is Wrong
AI outputs are wrong more often than most marketing pages suggest. The failure mode is not a crash or an error message. It is a confident, well-formatted answer that happens to be inaccurate. A hallucinated citation. A fabricated statistic. A recommendation based on a pattern in the training data that does not apply to the customer’s context.
Your existing warranty disclaimer probably says something like: “The Service is provided as-is, without warranties of any kind, express or implied.” This is standard SaaS language. It was drafted for software bugs, not for a feature that generates novel content with an inherent accuracy limitation.
An AI-specific warranty disclaimer needs to do more than the standard as-is language. It needs to affirmatively state that outputs are generated through machine learning processes and are not tested, verified, or guaranteed to be accurate, complete, or current. It needs to state that the customer is responsible for independently reviewing and verifying all outputs before relying on them for any business purpose. And it needs to frame this as a shared responsibility model: the provider provides the tool with stated limitations, the customer is responsible for validating outputs for their use case.
This last point matters. If your disclaimer reads as a blanket abdication of responsibility (“we disclaim everything, use at your own risk”), it is more likely to be challenged successfully than a disclaimer that frames the limitation honestly (“AI outputs are probabilistic, not deterministic, and should be independently verified before use in decision-making”). One sounds like you are hiding. The other sounds like you are informing.
The “informational purposes only” trap is a specific variant of this problem. Some SaaS companies add language stating that AI outputs are “for informational purposes only and do not constitute professional advice.” This is fine as far as it goes, but if the product is marketed as a tool for making business decisions, the informational-purposes-only disclaimer rings hollow. If customers are using your AI feature to draft contracts, analyze financial data, assess compliance risk, or make hiring recommendations, calling the output “informational” does not match how the product is actually used.
The better approach is to be specific about what the AI feature does and does not do, what the customer should and should not rely on it for, and what verification steps the customer should take. This is harder to draft than a blanket disclaimer. It is also harder to challenge.
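A sketch of what that more specific disclaimer might look like, again as illustrative language rather than a template:

“Outputs are generated by machine learning processes and are probabilistic in nature. Provider does not review, test, or verify individual Outputs, and does not warrant that any Output is accurate, complete, or current. Outputs are intended to assist, not replace, Customer’s own analysis. Customer is responsible for independently reviewing and verifying any Output before relying on it for any business decision, including [the specific use cases the product supports].”

The bracketed placeholder is where the specificity work happens: name the actual use cases and the verification steps the customer should perform.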
When the Output Infringes
AI models generate outputs based on patterns learned from training data. There is an inherent risk that outputs may resemble or incorporate elements of pre-existing works. If a customer uses an AI-generated output that turns out to infringe a third party’s copyright, trademark, or other IP rights, the question is: who bears the risk?
The default position for most B2B SaaS companies is a standard IP indemnification clause: the provider indemnifies the customer against claims that the service infringes third-party IP rights. This works well for traditional software, where the provider controls the code and can warrant that it does not infringe.
AI outputs are different. The provider does not control the specific content of each output. The output is generated dynamically based on the customer’s inputs and the model’s learned patterns. The provider cannot review every output before it reaches the customer, and the provider cannot warrant that no output will ever resemble third-party content. This is not a hypothetical concern: there are over 50 copyright lawsuits currently pending against AI model developers, and while those cases target the developers (not the deployers), an adverse ruling on training data fair use could expose the entire downstream chain.
There are two defensible positions, at opposite ends of the spectrum.
The provider disclaims infringement liability for outputs. The IP indemnity applies to the service itself (the software, the platform, the interface) but explicitly carves out AI-generated outputs. This is the more protective position for the provider and is the approach most standard AI addendum templates take. The justification is straightforward: the provider cannot control or predict the content of dynamically generated outputs, so it cannot indemnify against infringement claims arising from them.
The provider offers limited indemnification for outputs. Some providers, particularly those whose LLM providers offer upstream IP indemnification, extend a form of infringement protection to their customers. This is typically narrower than the standard IP indemnity: it may apply only to outputs generated through normal use of the service (excluding outputs generated from infringing inputs or outputs modified by the customer), it is subject to the general liability cap, and it requires the provider to have upstream indemnification from its LLM provider to backstop the commitment. If your LLM provider does not indemnify you for output infringement, you cannot credibly indemnify your customers.
Your position should be informed by what your LLM provider’s terms actually say. Some providers offer IP indemnification for outputs generated through their API. Others do not. This directly constrains what you can promise downstream.
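If you take the first position, the carve-out might read something like this (an illustrative sketch, not a template): “Provider’s indemnification obligations under Section [X] apply to the Service, including the software, platform, and interfaces, but do not apply to Outputs. Customer acknowledges that Outputs are generated dynamically from Customer’s Inputs and that Provider does not control and cannot review the content of any Output.” If you take the second position, mirror your upstream coverage: the exclusions, caps, and conditions in your customer-facing indemnity should be no broader than what your LLM provider has committed to you.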
The Marketing Claims Problem
Everything above lives in your contracts. This section is about what lives on your website, your sales decks, and your product marketing.
The FTC has brought over a dozen enforcement actions against companies that overstate what their AI does. These actions target claims like “fully autonomous,” “guaranteed results,” and “AI-powered” descriptions applied to products that use minimal actual AI. This is not theoretical enforcement risk. It is active and it is accelerating.
For B2B SaaS companies, the specific exposure is the gap between marketing claims and contractual disclaimers. Your marketing page says “AI-powered insights you can trust.” Your terms say “AI outputs are provided as-is, may be inaccurate, and should not be relied upon without independent verification.” These two statements cannot coexist without creating risk.
If a customer relies on your AI output and suffers a loss, their lawyer will put your marketing page next to your disclaimer and ask the jury which one the customer reasonably relied on. If the FTC reviews your marketing, they will look at the same gap and ask whether your claims are substantiated by your actual product capabilities.
The fix is alignment. Your marketing claims, your contractual warranties, and your actual product capabilities need to tell the same story. This does not mean your marketing has to be as cautious as your legal disclaimers. It means your marketing should not make promises your contracts explicitly disclaim. If your terms say outputs should be independently verified, your marketing should not imply they do not need to be.
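A concrete illustration: “AI-powered insights you can trust” pulls directly against a disclaimer that requires independent verification. Something like “AI-assisted analysis, designed for expert review” sells the same feature while telling the same story your terms do. The second version is not weaker marketing; it is marketing that survives being read next to your contract.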
The practical exercise: put your product marketing page and your AI warranty disclaimer side by side. Read them together. If a reasonable customer would see a contradiction, you have a problem that no amount of legal language will fix. The marketing needs to change, the disclaimer needs to change, or both.
The Acquisition Lens
If your company is ever acquired, the buyer’s legal team will evaluate your AI output exposure as part of diligence. They will look at three things.
First, your warranty disclaimers. Are they specific to AI outputs, or are they generic as-is language that may not hold up? Is there a gap between what you disclaim and what your product actually promises?
Second, your IP indemnification structure. Did you indemnify customers for AI output infringement without upstream coverage from your LLM provider? If so, you have an uncapped or poorly scoped exposure that will affect the purchase price.
Third, the marketing-to-terms consistency. If your marketing makes claims your terms disclaim, the acquirer will see a litigation risk that may not have materialized yet but is sitting in the portfolio waiting to surface.
Every contract gap becomes a purchase price reduction lever. This is true for traditional SaaS agreements and it is amplified for AI-enabled products where the legal framework is still being established and the exposure is harder to quantify.
This is the third post in the AI-Enabled SaaS series. Previous: Customer Data and AI Training: The Clause That Will Make or Break Enterprise Deals. Next: Contracting With Your LLM Provider: What Most Companies Miss in the API Agreement.
No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them.