You clicked “I agree” on an API terms of service. You integrated the API into your product. You started routing customer data through it.
Those API terms now sit underneath every promise you make to your customers. Your no-training commitment is only as strong as your provider’s no-training commitment. Your uptime SLA is constrained by a provider that may offer no uptime guarantee at all. Your IP indemnification is limited by whether your provider indemnifies you for output infringement. Your data retention disclosures are only accurate if you know what your provider actually retains.
This is the third layer of the three-actor model introduced at the beginning of this series. Your customer contracts govern the relationship between you and your customer. Your LLM provider’s terms govern the relationship between you and the model powering your AI features. The gap between what you promise downstream and what your provider promises upstream is where the risk lives.
Most B2B SaaS companies treat the LLM provider integration as a technical decision. It is also a legal one. This post covers the six areas of your provider agreement that directly constrain or conflict with the commitments in your customer-facing legal stack, and what to do about each one.
Data Retention and Processing
The first question to resolve: what happens to customer data after your provider processes it?
The major API providers have converged on a general position: API data is not used for model training by default. But “not used for training” is not the same as “not retained.” Most providers retain API inputs and outputs for some period, typically up to 30 days, for abuse monitoring and safety purposes. Some offer zero data retention as an option, but it is not always available by default and may require approval, an enterprise agreement, or a separate contractual amendment.
This matters because your DPA and Privacy Policy likely make commitments about how customer data is handled by your subprocessors. If you have told your customer that data is processed in real time and not stored, but your LLM provider retains inputs for 30 days for abuse monitoring, your disclosure is inaccurate. If your DPA says subprocessors will process data only for the purposes of providing the service, a 30-day retention window for the provider’s own safety monitoring is a different purpose.
What to do: review your provider’s data retention terms. Determine whether the default retention period is acceptable under your DPA commitments. If it is not, explore whether zero data retention or modified abuse monitoring is available for your tier. Update your DPA and Privacy Policy to accurately reflect the actual retention practices across your entire processing chain, including the provider layer.
Training Opt-Out: Contractual vs. Policy
Most major providers state that API data is not used for model training by default. But the form of this commitment varies, and the difference matters.
A contractual commitment in the API terms or your enterprise agreement is a binding obligation. If the provider changes its training practices, it needs to amend the agreement or notify you under the modification clause. You have a contractual remedy if the commitment is breached.
A policy statement on a help page, documentation site, or blog post is not a contractual commitment. It is a description of current practice that the provider can change at any time by updating the page. If your entire no-training position with your customers rests on a provider’s documentation page rather than a contractual term, your commitment is built on a foundation you do not control.
Some providers offer a formal opt-in mechanism: your organization is opted out by default, and you can affirmatively choose to share data for model improvement. Others frame it as an opt-out: training is the default for certain tiers, and you must change a setting to disable it. The distinction between opt-in and opt-out matters less than whether the commitment is contractual. A contractual opt-out you have exercised is strong. A policy-based opt-out that can be reversed by a terms update is not.
What to do: confirm whether your provider’s no-training commitment is in the API terms, in your enterprise agreement, or only in documentation. If it is only in documentation, treat it as informational and look for a contractual mechanism to formalize the commitment. If you are on standard API terms without this protection, consider whether an enterprise agreement is necessary to support the commitments you are making to your customers.
Uptime and Availability
Your SLA promises your customers a specific uptime percentage. Your LLM provider may not promise you anything.
Standard API terms from major providers typically do not include an SLA at all. Enterprise agreements may include one, but the commitment is often weaker than what you offer your customers: lower uptime percentage, broader exclusions, or remedies limited to service credits that do not come close to covering your downstream exposure if the AI feature goes down.
Rate limits add another dimension. Your provider may impose per-minute or per-day rate limits on your API calls. If your customer’s usage spikes and you hit a rate limit, the AI feature degrades or fails. Your SLA does not distinguish between downtime caused by your infrastructure and downtime caused by your provider throttling your requests.
What to do: map your customer-facing SLA commitments against your provider’s actual availability terms (or lack thereof). If there is a gap, you have three options. First, carve AI features out of your standard SLA and define a separate performance standard for AI-dependent functionality, as discussed in the earlier post on SLA implications. Second, negotiate an upstream SLA with your provider through an enterprise agreement. Third, build redundancy by integrating multiple LLM providers so you can fail over when one degrades. The worst option is making uptime promises to your customers that depend on a provider that makes no uptime promises to you.
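The redundancy option can be sketched in a few lines. This is a minimal illustration, not any vendor’s SDK: the provider names, error type, and call signatures are all hypothetical stand-ins for real API clients.

```python
# Minimal failover sketch. Provider names, the error type, and the
# call signatures are illustrative, not any vendor's actual SDK.

class ProviderError(Exception):
    """Raised when a provider is unavailable or rate-limited."""

def call_with_failover(prompt, providers):
    """Try each configured provider in order; fail over on errors."""
    last_error = None
    for name, call in providers:
        try:
            return name, call(prompt)
        except ProviderError as err:
            last_error = err  # in production: log, then try the next provider
    raise RuntimeError("all providers unavailable") from last_error

# Illustrative stand-ins for real provider SDK calls:
def primary(prompt):
    raise ProviderError("rate limit exceeded")

def secondary(prompt):
    return f"completion for: {prompt}"

name, result = call_with_failover(
    "summarize this ticket",
    [("primary", primary), ("secondary", secondary)],
)
print(name)  # secondary
```

Note that failover only reduces contractual exposure if the backup provider’s terms (retention, training, indemnification) are acceptable too; a fallback route through a provider with weaker terms trades an uptime gap for a data-handling gap.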
IP Indemnification
This connects directly to the previous post on AI outputs. The question here is: does your LLM provider indemnify you for IP infringement claims arising from the model’s outputs?
Some providers offer IP indemnification for outputs generated through their API. The scope and conditions vary. The indemnification may apply only to outputs generated through normal use of the service, excluding outputs generated from infringing inputs or outputs that the customer modifies. It may be subject to a cap. It may require you to use the latest version of the model.
Other providers offer no output indemnification at all. Their terms disclaim all warranties regarding output infringement and exclude outputs from their standard IP indemnity, which covers only the service platform itself.
This directly constrains what you can promise your customers. If your LLM provider indemnifies you for output infringement with reasonable scope, you can extend a form of that protection to your customers (subject to your own terms and caps). If your provider does not indemnify you, offering output IP indemnification to your customers creates an unfunded liability. You would be on the hook for infringement claims with no upstream recovery.
What to do: review your provider’s IP indemnification terms. Determine whether output indemnification exists, what the scope and conditions are, and whether the coverage is sufficient to support any downstream indemnification you offer your customers. If your provider does not indemnify for outputs, your customer-facing terms should explicitly carve AI outputs out of your IP indemnity and explain why (as discussed in the previous post on output ownership and infringement).
Model Deprecation and Versioning
LLM providers deprecate models. They release new versions. They change capabilities, context windows, pricing, and behavior. A model you integrated six months ago may be scheduled for end-of-life, and the replacement may produce different outputs for the same inputs.
From a product perspective, this is manageable. From a contractual perspective, it creates a problem. If your customer contracted for a service that uses a specific AI capability, and the underlying model changes in a way that materially alters the outputs, your customer may have a claim that the service no longer matches what they agreed to. If the model deprecation happens on a timeline that does not align with your customer renewal cycle, you may need to migrate mid-contract.
Most provider API terms reserve the right to deprecate models with notice (often 6 to 12 months, though the timelines vary and are not always guaranteed). Some provide versioning that allows you to pin to a specific model version for a period. Others move you to the latest version automatically.
What to do: understand your provider’s deprecation policy and versioning options. If you can pin to a specific model version, document which version your AI features use and align your deprecation timeline with your customer renewal cycles. If your provider reserves the right to change models at any time, build model abstraction into your architecture so you can switch providers or versions without customer-facing disruption. Your customer-facing terms should reserve the right to make reasonable changes to AI features (including underlying model updates) with notice, while committing to maintaining the material functionality of the service.
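The version-pinning discipline above amounts to keeping model identifiers in configuration rather than scattered through code, so a deprecation becomes a config change plus regression testing. A minimal sketch, with hypothetical model names, providers, and dates:

```python
# Sketch of pinning model versions in configuration. All identifiers,
# providers, and dates below are hypothetical examples.

MODEL_CONFIG = {
    "summarization": {
        "provider": "provider-a",
        "model": "model-x-2024-06-01",    # pinned version, never "latest"
        "deprecation_date": "2025-06-01", # tracked from the provider's policy
    },
    "classification": {
        "provider": "provider-b",
        "model": "model-y-v2",
        "deprecation_date": None,
    },
}

def resolve_model(feature):
    """Look up the pinned provider/model for a feature; fail loudly if unmapped."""
    cfg = MODEL_CONFIG.get(feature)
    if cfg is None:
        raise KeyError(f"no model configured for feature: {feature}")
    return cfg["provider"], cfg["model"]

print(resolve_model("summarization"))  # ('provider-a', 'model-x-2024-06-01')
```

Recording the deprecation date alongside the pin is the point: it lets you compare provider end-of-life timelines against customer renewal cycles before a migration becomes urgent.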
Standard API Terms vs. Enterprise Agreements
Everything above varies depending on whether you are on standard click-through API terms or a negotiated enterprise agreement.
Standard API terms are non-negotiable. You accept whatever the provider offers: the default data retention, the default (non-existent) SLA, the default IP indemnification terms (or lack thereof), and the default deprecation policy. For a seed-stage startup doing initial integration, standard terms are fine as a starting point. For a company routing production customer data through the API and making contractual commitments to enterprise customers based on that integration, standard terms may not be sufficient.
Enterprise agreements give you negotiation leverage. You can negotiate data retention commitments, formalize the no-training commitment as a contractual obligation, obtain an SLA with defined uptime targets, potentially secure enhanced IP indemnification, and lock in pricing and deprecation notice periods. The threshold for accessing enterprise terms varies by provider and typically depends on your spend volume.
The practical question: does your current provider relationship support the commitments you are making to your customers? If you are on standard API terms and your customer-facing legal stack makes commitments about data retention, training, uptime, or IP protection that exceed what those standard terms provide, you have a gap. The gap may not matter today. It will matter when a customer asks to see your provider agreement during a procurement review, or when an acquirer evaluates your upstream dependencies during due diligence.
The Governing Principle
The principle that runs through all six areas is simple: you cannot promise your customers more than your provider promises you.
Your no-training commitment is limited by your provider’s no-training commitment. Your uptime SLA is limited by your provider’s availability. Your IP indemnification is limited by your provider’s IP indemnification. Your data retention disclosures are limited by what your provider actually retains.
This does not mean your customer-facing terms should mirror your provider’s terms. It means your customer-facing terms should be designed with full knowledge of what your provider’s terms actually say, so that the commitments you make downstream are supportable upstream. The data flow map from the first post in this series is the tool for this: trace customer data through all three layers, and at each layer, compare what you have promised your customer with what the party at that layer has promised you.
Where there is a gap, close it. Either negotiate better upstream terms, adjust your downstream commitments, or build technical redundancy that reduces your dependence on any single provider’s terms.
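The gap check described above can even be run mechanically: list each downstream commitment next to what the upstream terms actually provide, and flag every commitment without upstream support. The values below are illustrative, not any provider’s actual terms.

```python
# Sketch of the three-layer gap check. All values are illustrative.

downstream = {  # what your customer-facing stack promises
    "no_training": True,
    "uptime_sla": 99.9,
    "retention_days": 0,
    "output_ip_indemnity": True,
}

upstream = {  # what your provider's terms actually provide
    "no_training": True,        # contractual, not just a documentation page
    "uptime_sla": None,         # standard API terms: no SLA at all
    "retention_days": 30,       # abuse-monitoring retention window
    "output_ip_indemnity": False,
}

def find_gaps(downstream, upstream):
    """Return the commitments that lack upstream support."""
    gaps = []
    for key, promised in downstream.items():
        backing = upstream.get(key)
        if key == "uptime_sla":         # gap if upstream SLA is absent or lower
            if backing is None or backing < promised:
                gaps.append(key)
        elif key == "retention_days":   # gap if upstream retains longer
            if backing is None or backing > promised:
                gaps.append(key)
        elif promised and not backing:  # gap if a promised protection is unbacked
            gaps.append(key)
    return gaps

print(find_gaps(downstream, upstream))
# ['uptime_sla', 'retention_days', 'output_ip_indemnity']
```

The output is the to-do list: each flagged item gets one of the three treatments (negotiate upstream, adjust downstream, or build redundancy).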
This is the fourth post in the AI-Enabled SaaS series. Previous: AI Outputs: IP Ownership, Accuracy Warranties, and the Marketing Claims Problem. Next: AI Subprocessors, the EU AI Act, and the Regulatory Disclosure Gap.
No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them.