ai terms-of-service dpa privacy saas enterprise

AI Addendum or Full Redraft? A Decision Framework for B2B SaaS Companies Adding AI

Your product shipped an AI feature. Your legal stack hasn't moved. Here's how to map your AI data flows, decide whether an addendum is enough or you need a full redraft, and handle the existing customers already on contracts that say nothing about AI.

No Boiler

Your product just shipped an AI feature. Maybe it is a summarization tool. Maybe it is a recommendation engine, a chatbot, or an AI-powered search. The feature works. Customers like it. Your product team is already planning the next iteration.

Your legal stack has not moved.

The Terms of Service your customers signed say nothing about data training. Your Privacy Policy does not disclose that customer data is now being sent to an LLM provider for processing. Your DPA lists your cloud infrastructure provider and your payment processor as subprocessors, but not OpenAI or Anthropic. Your SLA promises uptime for a deterministic system, and you just introduced a feature whose outputs vary every time it runs.

This is the gap most B2B SaaS companies are sitting in right now. The product moves fast. The contracts do not. And the longer the gap persists, the harder it gets to close, because every new customer who signs your existing terms is another customer whose contract does not contemplate what your product actually does.

This post is the starting point for a nine-part series on the legal implications of adding AI to a B2B SaaS product. It covers the foundational question: what changes in your legal stack when AI enters the picture, and how do you decide whether an addendum is enough or whether you need to redraft?

Start With the Data Flow Map

Before you can decide what your contracts need to say about AI, you need to map where customer data actually goes when it hits your AI feature.

This sounds obvious. It is not. Most SaaS companies think about two parties: themselves and their customer. When you add AI, there are three.

Your customer sends data to your product. Your product sends some of that data to your LLM provider (OpenAI, Anthropic, Google, Cohere, or whoever you have integrated). That LLM provider processes the data through its own infrastructure, which may involve its own subprocessors, its own data retention policies, and its own terms governing what happens to the inputs and outputs.

This is the three-actor model: the customer (data controller), you (the deployer), and the LLM provider (the developer). Your contractual commitments to your customer are constrained by the commitments your LLM provider makes to you. If your provider retains input data for 30 days and you have told your customer that data is processed in real time and not stored, you have a problem.

The data flow map is the exercise of tracing customer data through all three layers. For each AI feature in your product, document what data enters the feature, what is sent to the LLM provider, what the provider retains, what comes back, and what you store. This map drives every downstream decision in this series: what your DPA needs to disclose, what your Privacy Policy needs to say, what your subprocessor list needs to include, and what your training provisions need to commit to.

If you skip this step, everything else is guesswork.
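For teams that want to make the exercise concrete, the map can be as simple as one structured record per AI feature. The sketch below is illustrative only: the feature name, provider, and retention values are hypothetical placeholders, and the real values must come from your actual architecture and your provider's actual written terms.

```python
# One entry per AI feature, tracing customer data through all three layers.
# All values below are hypothetical placeholders.
data_flow_map = [
    {
        "feature": "ticket_summarization",
        "data_entering_feature": ["ticket body", "customer name"],
        "sent_to_llm_provider": ["ticket body"],   # what actually leaves your boundary
        "llm_provider": "ExampleAI API",
        "provider_retention": "30 days",           # from the provider's terms, not your guess
        "returned_to_us": ["summary text"],
        "stored_by_us": ["summary text"],
    },
]

def providers_needing_subprocessor_disclosure(flow_map):
    """Any provider that receives customer data belongs on your subprocessor list."""
    return sorted({f["llm_provider"] for f in flow_map if f["sent_to_llm_provider"]})
```

Even a spreadsheet version of this structure works. The point is that every downstream document (DPA schedules, subprocessor list, Privacy Policy disclosures) can be checked mechanically against one source of truth instead of against memory.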

What Changes in Each Document

AI does not create a new legal stack. It modifies the one you already have. But the modifications touch every document.

Terms of Service

Your ToS needs new provisions covering data usage and training rights for AI features. The most important question your enterprise customers will ask is whether your AI trains on their data. Your terms need a clear answer, whether that answer is “never,” “only with your consent,” or “only in anonymized and aggregated form.” Silence on this point is not neutrality. It is a gap that procurement will flag and that your customers’ lawyers will interpret against you.

Your ToS also needs to address AI output accuracy. Traditional SaaS warranty disclaimers cover software bugs. AI outputs are a different category: they are non-deterministic, they can be wrong in ways that look right, and the failure mode is not a crash but a hallucination. Your warranty language needs to account for this specifically, not rely on a general “as-is” disclaimer to do work it was not drafted to do.

Privacy Policy

Your Privacy Policy needs to disclose the new processing purposes introduced by AI features. If customer data is being sent to a third-party LLM provider, that is a data sharing arrangement that needs to be disclosed. If you are using any customer data (even aggregated or anonymized) to improve your AI, that is a processing purpose that needs to be stated.

Most B2B SaaS Privacy Policies were drafted before AI features existed. They describe data collection for “providing the service” and “improving our product.” Those categories may or may not cover sending customer inputs to an external AI model. The safer approach is to be explicit.

Data Processing Addendum

Your DPA needs updates in at least three areas. First, your processing purposes schedule needs to include AI-related processing. If your DPA says you process customer personal data “to provide the Service as described in the Agreement” and your Agreement says nothing about AI, the DPA authorization may not cover what your AI feature actually does.

Second, your subprocessor list needs to include your LLM provider. Adding OpenAI or Anthropic to your subprocessor list is not optional once you start routing customer data through their APIs. If your DPA includes a subprocessor change notification process (and it should), adding an AI provider triggers that process. For established vendors with existing customers, this means sending the notification and managing any objection rights your DPA provides.

Third, your security schedule may need to address AI-specific security measures: how data is transmitted to the LLM provider, whether it is encrypted in transit, what access controls exist, and what the provider’s own security posture looks like.

Service Level Agreement

AI features introduce non-deterministic behavior into a product that your SLA was drafted to cover as deterministic software. Uptime for your core application is one thing. “Uptime” for an AI feature that depends on a third-party API with its own rate limits, latency, and availability is another.

Your SLA may need to either carve out AI features from your standard uptime commitment or define a separate performance standard for AI-dependent functionality. The alternative is making uptime promises you cannot keep because your LLM provider does not make the same promises to you.

Acceptable Use Policy

Your standard AUP covers illegal activity, security violations, abuse of resources, and competitive use. AI features introduce a new category of misuse that your current AUP probably does not address: prompt injection, using your product to generate prohibited content, feeding regulated data (health records, financial information, children’s data) into AI features without authorization, and using AI outputs for automated decisions that have legal consequences without human review.

If your AUP does not address these, you have no contractual basis to restrict them.

The Addendum vs. Redraft Decision

Now the practical question: do you add an AI addendum to your existing agreements, or do you redraft your legal stack?

Several open-source and standard-form AI addendum templates have appeared in the market over the past year. This tells you the industry recognizes the AI addendum as a distinct document type. But a generic AI addendum has the same problem as a generic DPA: it does not account for your actual data practices, your specific AI architecture, or your real risk profile. It is a checkbox exercise that creates the appearance of coverage without the substance.

The typical template covers the expected topics in a pick-one format: AI feature definitions, training rights, IP ownership for inputs and outputs, similar outputs acknowledgment, output infringement liability, a disclaimer, third-party provider disclosure, and use restrictions. These templates are well-structured and useful as reference points. But they treat the AI addendum as an island. They say nothing about how the training provision interacts with your DPA processing purposes, how the output disclaimer aligns with your marketing claims, or how the third-party provider disclosure connects to your subprocessor notification obligations. Those connections are where the real risk lives.

Here is the decision framework.

An AI addendum is sufficient when your AI features are clearly separable from the core service, when they do not change existing data processing purposes (the AI feature processes data the same way and for the same purposes your existing terms already cover), and when you can govern AI-specific terms through a supplemental document without contradicting your existing agreements.

A full redraft is necessary when AI is deeply integrated into the product (not a bolt-on feature but a core part of how the product works), when it changes how customer data is processed (new subprocessors, new processing purposes, new data flows), or when it creates new liability exposure that your existing limitation of liability and indemnification provisions do not contemplate.

Most B2B SaaS companies adding meaningful AI features need the redraft. The AI features that matter are rarely separable from the core service. They process customer data in new ways. They introduce new subprocessors. They create output accuracy and IP questions that existing provisions do not address. An addendum bolted onto terms drafted for deterministic software is a patch on a foundation that no longer matches the product.
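The framework above can be written out as explicit logic. This is a sketch, not a mechanical test: each boolean is a judgment call your counsel should make, and the function exists only to make the structure of the decision visible.

```python
# A sketch of the addendum-vs-redraft framework. The three inputs are
# judgment calls, not things you can compute; the logic just mirrors
# the framework in the text: any one redraft trigger wins.
def addendum_or_redraft(ai_is_separable: bool,
                        changes_data_processing: bool,   # new purposes, subprocessors, or flows
                        new_liability_exposure: bool) -> str:
    if changes_data_processing or new_liability_exposure or not ai_is_separable:
        return "full redraft"
    return "AI addendum"
```

Notice how hard it is to get "AI addendum" out of this function: all three answers have to break the right way. That asymmetry is the point of the framework.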

The Existing Customer Problem

For startups writing terms for the first time, the path is straightforward: draft your legal stack to account for AI from the start.

For established vendors with an existing customer base, the problem is harder. You have customers on contracts that do not contemplate AI. The AI features you just shipped may not be covered by the terms those customers signed. And the path forward depends entirely on what kind of contracts those customers are on.

Self-Serve and Click-Through Customers

If your customers accepted standard online terms (a click-through Terms of Service), your modification clause may allow you to update terms with notice. This is the scenario where a terms update or an AI-specific addendum published to your website can work. You update the terms, provide the notice period your existing agreement requires, and the updated terms take effect on renewal or after the notice window.

Even here, be careful. A modification clause that allows updates to standard terms is not a blank check. If the AI features represent a material change in how you process customer data (and routing data to an LLM provider probably qualifies), a customer who objects has a reasonable argument that the change exceeds what the modification clause contemplated. The more significant the change, the more you should consider an affirmative opt-in rather than deemed acceptance through continued use.

Feature-gated opt-in is often the cleanest approach for self-serve customers. You keep your existing terms in place for the core product and require customers to accept supplemental AI terms before they can access AI features. This creates a clear contractual boundary between the pre-AI product and the AI-enabled product. It is more work to implement technically but avoids the modification and deemed-acceptance questions entirely.

Enterprise and Negotiated Contracts

This is the harder case, and it is where most established B2B SaaS vendors will spend the bulk of their effort.

Enterprise customers rarely accept click-through terms. They negotiate. Their contracts are bilateral agreements, often with custom language on data processing, liability caps, indemnification, and security commitments that were negotiated specifically for a deterministic software product. Those contracts almost never include a unilateral modification clause that would allow you to change the terms by posting an update and sending a notice.

For these customers, adding AI features that change how customer data is processed, introduce new subprocessors, or create new categories of output liability is a material change to the service. You cannot paper this over with a notification. It requires a negotiated amendment.

The practical reality is that enterprise customers will want to understand what the AI features do, what data they process, where that data goes, what your LLM provider’s terms say, and how the new functionality affects the liability and indemnification provisions they negotiated. Some will want to restrict AI processing entirely. Some will accept it with guardrails. Some will require carve-outs for certain data categories. This is a contract-by-contract conversation, not a mass notification.

The upside is that this conversation is also a commercial opportunity. Enterprise customers who are asking about your AI features are engaged customers evaluating whether your product remains the right fit for their needs. Proactively reaching out with a well-structured AI addendum, one that addresses data training, subprocessors, output accuracy, and IP ownership, signals legal maturity and operational sophistication. It is the same dynamic covered in our post on legal documents as a sales asset: the companies that get ahead of the question close faster than the ones who wait to be asked.

The downside is that it takes time. If you have 50 enterprise customers on negotiated contracts, you may have 50 separate amendment conversations. Prioritize by data sensitivity and customer size. Start with the customers most likely to ask (those in regulated industries, those with active procurement teams, those approaching renewal) and work outward.

The Status Quo Is Not an Option

Whichever category your customers fall into, the status quo of shipping AI features on contracts that do not mention AI is not a viable position. Every day you operate AI features under terms that do not contemplate them, you are accumulating contractual risk. Your DPA may not authorize the processing your AI feature performs. Your subprocessor list may be inaccurate. Your liability cap may not account for AI-specific exposure. These are not theoretical risks. They are the kinds of gaps that surface during procurement reviews, SOC 2 audits, and acquisition due diligence.

Where This Series Goes From Here

This post establishes the framework. The data flow map and the three-actor model are the analytical foundation for everything that follows.

The next eight posts in this series go deep on each of the issues surfaced here: data training provisions that enterprise procurement actually evaluates, AI output ownership and the warranty problem, what to look for in your LLM provider’s API agreement, regulatory disclosure obligations, AI-specific acceptable use restrictions, pricing and billing when your costs are per-token, insurance implications, and what recent litigation means for B2B SaaS providers offering AI features.

Each post is written from the provider’s perspective. You are the company building with AI. These are the questions your contracts need to answer.


This is the first post in the AI-Enabled SaaS series. Next: Customer Data and AI Training: The Clause That Will Make or Break Enterprise Deals.

For the foundational B2B SaaS legal stack (Terms of Service, Privacy Policy, DPA, SLA), see our core series on B2B SaaS legal frameworks.

No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them.
