Your acceptable use policy was written for a deterministic product. It covers the standard categories: no illegal activity, no security violations, no reverse engineering, no competitive use, no abuse of shared resources in a multi-tenant environment.
None of that addresses what happens when a customer uses your AI feature to generate content that infringes third-party rights. Or feeds protected health information into an AI-powered search tool without authorization. Or uses prompt injection to bypass your safety filters and extract the system prompt. Or relies on your AI feature to make automated hiring decisions without any human review.
These are not edge cases. They are the predictable misuse patterns for any B2B SaaS product with AI features, and your standard AUP gives you no contractual basis to restrict them.
This post covers the AI-specific acceptable use restrictions your AUP needs, how to structure the boundary between user responsibility and provider responsibility for AI-generated outputs, and where these restrictions should live in your document stack.
What Your Standard AUP Misses
A standard SaaS AUP is built around a simple model: the customer uses the software, the software does what it is designed to do, and the AUP restricts how the customer uses it. The restrictions are about the customer’s behavior, not the software’s output, because deterministic software does not produce unpredictable results.
AI breaks this model. The customer provides an input. The AI generates an output. The output may be inaccurate, infringing, harmful, or inappropriate, and neither the customer nor the provider reviewed it before it appeared. The question is no longer just what the customer did with the software. It is also what the software did with the customer’s input. Your AUP needs to address both sides.
There are six categories of AI-specific misuse that a standard AUP does not cover.
The Six Restrictions
Prompt injection and safety filter circumvention. Prompt injection is the AI equivalent of SQL injection: the user crafts inputs designed to manipulate the model into behaving outside its intended parameters. Jailbreaking is a subset of this where the user attempts to bypass safety filters, content restrictions, or role boundaries that you have implemented. Your AUP should prohibit attempts to circumvent, disable, or manipulate any safety filters, content moderation mechanisms, or usage restrictions implemented in the AI features. This is not just about protecting your customers from bad outputs. It is about protecting your platform. If a user successfully jailbreaks your AI feature and uses it to generate prohibited content, you are the entity hosting and serving that content.
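On the engineering side, the contractual prohibition pairs with a technical guard. Here is a minimal sketch of a pre-flight input screen; the patterns are purely illustrative, and a production guard would use a trained classifier or your LLM provider's injection detector rather than regexes:

```python
import re
import logging

logger = logging.getLogger("aup")

# Illustrative patterns only. A production guard would use a trained
# classifier or your LLM provider's injection detector, not regexes.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"reveal .{0,30}system prompt", re.I),
    re.compile(r"developer mode", re.I),
]

def looks_like_injection(text: str) -> bool:
    """Heuristic pre-flight check run before any input reaches the model."""
    return any(p.search(text) for p in INJECTION_PATTERNS)

def screen_input(user_id: str, text: str) -> str | None:
    """Return a refusal message if the input is blocked, else None."""
    if looks_like_injection(text):
        # Log the event: repeated attempts are evidence for AUP enforcement.
        logger.warning("possible prompt injection from %s", user_id)
        return "This request was blocked under our acceptable use policy."
    return None
```

The point of logging rather than silently refusing is that repeated attempts give you the record you need to enforce the AUP clause.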
Prohibited content generation. Your standard AUP prohibits using the service for illegal activity. But AI features can generate content that is harmful without being illegal in every jurisdiction: deepfakes, synthetic impersonations, misleading content designed to deceive, or content that facilitates harassment. Your AI-specific restrictions should explicitly prohibit generating content intended to deceive, defraud, or mislead (including deepfakes and synthetic impersonations), and content intended for harassment, intimidation, or incitement. Being explicit matters because the general “no illegal activity” clause may not cover content that is harmful but technically legal in the customer’s jurisdiction.
Unauthorized input of regulated data. Your customer may feed data into your AI features that your product is not designed or certified to handle. Health records that implicate HIPAA. Financial data subject to regulatory requirements. Children’s data subject to COPPA. Personal data of EU residents that triggers GDPR obligations your AI subprocessor chain is not set up to handle. Your AUP should require that customers have lawful authority (including any required consents or legal bases) before inputting personal data of third parties into AI features. This is not the same as your general data protection provision. It is specific to AI features because AI processing introduces subprocessors and data flows that your customer may not have contemplated when they collected the data from their end users.
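The AUP clause puts the obligation on the customer, but a pre-submission screen reduces the odds of regulated data reaching your subprocessor chain by accident. A rough sketch with illustrative patterns only; a real deployment would use a DLP or PII-detection service, and no screen substitutes for the customer's lawful authority to process the data:

```python
import re

# Illustrative identifier patterns. A real deployment would use a DLP or
# PII-detection service; this screen reduces accidental leakage but does
# not establish lawful authority to process the data.
REGULATED_PATTERNS = {
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]*?){13,16}\b"),
    "mrn_hint": re.compile(r"\bMRN[:#]?\s*\d{6,}\b", re.I),
}

def flag_regulated_data(text: str) -> list[str]:
    """Return the categories of regulated identifiers detected in an input."""
    return [name for name, pattern in REGULATED_PATTERNS.items()
            if pattern.search(text)]

hits = flag_regulated_data("Patient MRN: 0048291, SSN 123-45-6789")
if hits:
    # Warn the user (or block, per your policy) before the input reaches
    # the AI feature and its subprocessor chain.
    print(f"Input flagged for possible regulated data: {hits}")
```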
Competitive model training. Your customer may attempt to use your AI features to build a competing product. The specific risk: systematically extracting AI outputs to build, augment, or validate a dataset used to train a competing machine learning model. Your standard non-compete or competitive use restriction may cover using the service itself to compete with you, but it may not specifically address using the outputs of your AI features to train a competing model. A dedicated restriction makes this explicit.
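Enforcement here is mostly contractual, but systematic extraction has an operational signature: sustained generation volume far above normal product use. A minimal sketch of a volume flag, assuming you log per-customer generations and with a threshold that is purely illustrative:

```python
from collections import defaultdict
from datetime import datetime, timedelta, timezone

# Threshold is illustrative; calibrate it against your own usage baselines.
EXTRACTION_THRESHOLD = 5_000  # generations per customer per 24 hours

_events: dict[str, list[datetime]] = defaultdict(list)

def record_generation(customer_id: str) -> bool:
    """Log one AI generation; return True if volume suggests harvesting."""
    now = datetime.now(timezone.utc)
    _events[customer_id].append(now)
    # Keep only the trailing 24 hours of events for this customer.
    cutoff = now - timedelta(hours=24)
    _events[customer_id] = [t for t in _events[customer_id] if t >= cutoff]
    return len(_events[customer_id]) > EXTRACTION_THRESHOLD
```

A flag like this supports investigation under the monitoring right discussed below; it is not proof of training-data harvesting on its own.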
False attribution of AI outputs. In some contexts, customers are legally required to disclose that content was AI-generated. Under the EU AI Act’s Article 50 transparency obligations (applicable from August 2026), synthetic content must be machine-readable as AI-generated. Several US states have disclosure requirements in specific contexts. Your AUP should prohibit representing AI outputs as human-generated work where disclosure of AI involvement is required by applicable law. It should also prohibit removing or altering any AI-generated content labels or metadata your product applies. This protects both your customer (who may have regulatory obligations to disclose AI use) and you (who may face downstream liability if AI outputs are misrepresented).
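What machine-readable marking looks like depends on the content type: media files use embedded standards such as C2PA, while structured API responses can carry a simple provenance field. A minimal sketch of the latter, using field names that are our own convention rather than anything the Act mandates:

```python
import json
from datetime import datetime, timezone

def label_ai_output(content: str, model_id: str) -> str:
    """Wrap generated content in a provenance envelope.

    Field names here are our own illustrative convention, not a schema
    mandated by the EU AI Act; media files would instead carry an
    embedded standard such as C2PA.
    """
    return json.dumps({
        "content": content,
        "provenance": {
            "ai_generated": True,
            "model": model_id,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    })
```

The AUP clause against removing labels only does work if your product applies the labels consistently in the first place.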
Automated decision-making without human oversight. This is the restriction with the most direct litigation relevance. As discussed in the series post on AI litigation, the Workday case established that SaaS vendors can be liable when their AI features are used to make consequential decisions that discriminate against protected classes. Your AUP should prohibit using AI features for fully automated decision-making that produces legal effects or similarly significant effects on individuals, without appropriate human review. This does not mean prohibiting use in high-stakes domains entirely. It means requiring that the customer implement human oversight proportionate to the consequences of the decision. The restriction serves a dual purpose: it protects the individuals affected by the decision, and it limits your exposure as the provider by establishing that automated consequential decisions are outside the intended use of the product.
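In product terms, the restriction maps onto a human-in-the-loop gate: consequential recommendations are held for approval instead of taking effect automatically. A minimal sketch, with the queue and the consequence categories as our own illustration:

```python
from dataclasses import dataclass
from enum import Enum

class Consequence(Enum):
    LOW = "low"                  # e.g., draft text suggestions
    SIGNIFICANT = "significant"  # e.g., hiring, credit, or housing decisions

@dataclass
class Recommendation:
    subject_id: str
    action: str
    consequence: Consequence
    approved_by: str | None = None  # set when a human reviewer signs off

review_queue: list[Recommendation] = []

def apply_recommendation(rec: Recommendation) -> bool:
    """Apply low-stakes recommendations; hold consequential ones for review."""
    if rec.consequence is Consequence.SIGNIFICANT and rec.approved_by is None:
        review_queue.append(rec)  # no legal or similar effect until approved
        return False
    print(f"applying {rec.action} for subject {rec.subject_id}")
    return True
```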
User Responsibility vs. Provider Responsibility
The six restrictions above define what the customer cannot do. The next question is what you are responsible for when the customer follows the rules and the AI still produces a problematic output.
This is a line that needs to be drawn clearly in your terms, not your AUP. The AUP restricts customer behavior. Your warranty disclaimers and limitation of liability provisions address what happens when the product produces imperfect results despite compliant use.
The clean structure is a shared responsibility model. The provider is responsible for implementing reasonable safety measures: content filtering, safety guardrails, model selection appropriate to the use case, and compliance with applicable regulations for the AI features themselves. The customer is responsible for reviewing and verifying AI outputs before relying on them, ensuring that their use of AI features complies with applicable law (including any human oversight requirements), and not inputting data that they are not authorized to process through the AI features.
Your AUP enforces the customer’s side of this model. Your warranty disclaimers (as covered in the earlier post on AI outputs) enforce the provider’s limitations. Together, they establish a framework where both parties have defined obligations and neither party has disclaimed everything.
Content Moderation: What to Promise and What Not To
A related question that procurement teams sometimes ask: do you moderate or filter AI outputs?
The honest answer for most B2B SaaS companies is that you implement safety guardrails (typically provided by your LLM provider’s built-in content filtering) and may apply additional product-level filtering, but you do not review individual outputs before they reach the customer.
Your AUP and terms should reflect this reality. State that you implement reasonable content moderation and safety measures. State that you reserve the right (but are not obligated) to monitor use for compliance with the AUP. Do not promise that all outputs will be filtered, reviewed, or approved before delivery, because that is a commitment you cannot keep at scale, and promising it creates an expectation (and potential liability) that you will catch every problematic output.
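For concreteness, here is what the "reasonable safety measures" layer typically is in practice: a call to the provider's content filter, not human review of individual outputs. A sketch assuming the OpenAI Python SDK; other LLM providers expose comparable moderation endpoints:

```python
from openai import OpenAI  # pip install openai; other providers are similar

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def output_passes_moderation(text: str) -> bool:
    """Check a generated output against the provider's content filter.

    This is the guardrail layer described above: a provider-supplied
    classifier applied automatically, not human review of each output.
    """
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    )
    return not result.results[0].flagged
```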
The “right but not obligation to monitor” formulation is standard in SaaS AUPs and applies equally to AI features. It preserves your ability to act on violations without creating a duty to affirmatively police every interaction.
Where AI Restrictions Live in Your Document Stack
You have two structural options for AI-specific acceptable use restrictions.
The first is inline: add AI-specific subsections to your existing AUP. This works if your AUP already lives as a separate document incorporated by reference into your Terms of Service. You add an “AI Features” subsection with the six categories above. The customer reviews one document and sees all restrictions in one place.
The second is to put them in an AI addendum or supplemental AI terms. If you are using the feature-gated opt-in approach discussed in the first post of this series (customers accept supplemental AI terms before accessing AI features), the AI-specific restrictions naturally live in those supplemental terms. This creates a cleaner separation between the core product restrictions and the AI-specific restrictions.
For most B2B SaaS companies, the inline approach is simpler and creates fewer documents. The supplemental terms approach makes sense if your AI features are genuinely optional and you want a distinct contractual boundary around them.
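If you take the feature-gated route, the gate itself is a small piece of middleware: AI endpoints serve a request only after the account has accepted the current version of the supplemental terms. A minimal sketch, with names and storage that are our own illustration:

```python
from datetime import datetime, timezone

# Hypothetical persistence: in practice, a table keyed by
# (account_id, terms_version) recording who accepted and when.
_acceptances: dict[tuple[str, str], datetime] = {}

CURRENT_AI_TERMS_VERSION = "ai-terms-2025-01"  # illustrative version tag

def record_acceptance(account_id: str, version: str) -> None:
    _acceptances[(account_id, version)] = datetime.now(timezone.utc)

def ai_features_enabled(account_id: str) -> bool:
    """AI endpoints serve requests only after the account has accepted
    the current version of the supplemental AI terms."""
    return (account_id, CURRENT_AI_TERMS_VERSION) in _acceptances

record_acceptance("acct_123", CURRENT_AI_TERMS_VERSION)
assert ai_features_enabled("acct_123")
```

Versioning the acceptance record matters: when the supplemental terms change, the gate closes until the new version is accepted.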
Regardless of where the restrictions live, they should be cross-referenced in your limitation of liability section. If a customer violates the AI-specific AUP restrictions and the violation causes harm (to the customer, to third parties, or to you), your limitation of liability should not cap your customer’s exposure for that violation the same way it caps ordinary claims. AUP violations, like confidentiality breaches and IP infringement, are standard candidates for carve-outs from the general liability cap.
This is the sixth post in the AI-Enabled SaaS series. Previous: AI Subprocessors, the EU AI Act, and the Regulatory Disclosure Gap. Next: Pricing AI Features: Billing Terms When Your Costs Are Per-Token.
No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them.