When you integrated an LLM API into your product, you added a subprocessor.
This is not a subtle point, but it is one most B2B SaaS companies miss. If your product sends customer data to an external AI provider for processing, that provider is a subprocessor under your DPA. It belongs on your subprocessor list. If your DPA includes a subprocessor change notification process, adding the AI provider triggers that process. If your customer has objection rights when you add subprocessors, those rights apply.
For startups building from scratch, this is straightforward: include your AI provider on the subprocessor list from day one. For established vendors who integrated AI features into an existing product, this is where it gets uncomfortable. Your customers signed DPAs based on a known subprocessor list. If you added an AI provider to your processing chain without updating the list and notifying customers, you are in breach of your own DPA. Not potentially in breach. In breach.
This post covers two related problems. The first is the subprocessor disclosure gap: what your DPA requires when you add AI providers, and how to fix it if you have fallen behind. The second is the regulatory disclosure gap: what the EU AI Act, US state AI laws, and transparency obligations require from B2B SaaS companies offering AI features, and how those requirements are already affecting procurement and contracting.
AI Providers as Subprocessors
Your DPA defines subprocessors as third parties that process personal data on your behalf in connection with providing the service. Your LLM provider fits this definition whenever customer data (including data that contains or could contain personal information) is sent to the provider’s API.
The obligations flow from your existing DPA, not from any new AI-specific regulation. Most B2B SaaS DPAs include three subprocessor-related commitments: maintaining a current list of subprocessors, notifying customers before adding new subprocessors (with a defined notice period, typically 10 to 30 days), and providing customers with an opportunity to object to new subprocessors (with objection rights that may include a right to terminate).
When you add an AI provider, all three commitments activate. Your subprocessor list needs to be updated to include the provider’s name, the processing purpose (e.g., “AI-assisted content generation,” “natural language processing for search functionality”), and the processing location. Your notification process needs to run. And your customers need the opportunity to exercise whatever objection rights your DPA provides.
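The three fields above (name, purpose, location) are the minimum a subprocessor list entry needs. A minimal sketch of how a vendor might keep that register in structured form so the customer-facing list and the notification process draw from the same source; the entry shown (ExampleAI, Inc.) and all field names are hypothetical, not a mandated format:

```python
from dataclasses import dataclass

@dataclass
class SubprocessorEntry:
    """One row in a customer-facing subprocessor list."""
    name: str      # legal entity name of the provider
    purpose: str   # processing purpose, in plain language
    location: str  # where the processing occurs
    added_on: str  # ISO date the entry was added to the list

def render_list(entries: list[SubprocessorEntry]) -> str:
    """Render the register as a simple customer-facing table."""
    header = f"{'Subprocessor':<20} {'Purpose':<45} {'Location':<15} Added"
    rows = [
        f"{e.name:<20} {e.purpose:<45} {e.location:<15} {e.added_on}"
        for e in entries
    ]
    return "\n".join([header] + rows)

entries = [
    SubprocessorEntry(
        name="ExampleAI, Inc.",  # hypothetical AI provider
        purpose="AI-assisted content generation",
        location="United States",
        added_on="2026-01-15",
    )
]
print(render_list(entries))
```

Keeping the register as data rather than a hand-edited page makes it harder for an engineering-side integration to ship without the list changing, which is exactly the failure mode described below.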
The practical challenge for established vendors is that the AI integration may have happened before anyone flagged the subprocessor implications. The engineering team added the API. The product shipped. The subprocessor list was not updated because nobody connected the technical integration to the DPA obligation. By the time legal catches up, the feature has been live for months and thousands of customers have been using it with an undisclosed subprocessor processing their data.
The fix is straightforward but not painless. Update the subprocessor list. Send the notification. Manage any objection conversations. For most B2B SaaS products, the objection rate will be low. Most customers accept new subprocessors, especially when the subprocessor is a well-known AI provider and the processing purpose is clearly connected to a feature the customer is already using. The risk is not the objection rate. The risk is that you have been operating in breach of your own DPA for however long the gap has existed, and that this breach is discoverable in a procurement review, an audit, or a due diligence process.
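The notification mechanics reduce to simple date arithmetic. A minimal sketch, assuming a DPA with a fixed advance-notice period and objection window; the function names and the 30-day periods are illustrative, your DPA controls the actual numbers:

```python
from datetime import date, timedelta

def subprocessor_go_live(notice_sent: date, notice_days: int) -> date:
    """Earliest date the new subprocessor may begin processing,
    assuming the DPA requires notice_days of advance notice."""
    return notice_sent + timedelta(days=notice_days)

def objection_open(today: date, notice_sent: date, objection_days: int) -> bool:
    """True while customers may still exercise objection rights."""
    return today <= notice_sent + timedelta(days=objection_days)

# Example: notice sent January 5 under a 30-day DPA notice period
sent = date(2026, 1, 5)
print(subprocessor_go_live(sent, 30))               # 2026-02-04
print(objection_open(date(2026, 1, 20), sent, 30))  # True
```

The point of the sketch is the ordering: notice goes out, the window runs, and only then does the subprocessor go live. The breach scenario in this post is that ordering run in reverse.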
The deeper question is whether your DPA’s processing purposes schedule covers the AI processing at all. As discussed in the earlier post on data training, if your DPA authorizes processing “to provide the Service as described in the Agreement” and your Agreement was drafted before AI features existed, the authorization may not cover the new processing activities. Updating the subprocessor list is necessary. It may not be sufficient. The processing purposes schedule may need to be updated as well, which for enterprise customers means a negotiated DPA amendment.
The EU AI Act: When It Reaches US SaaS Companies
The EU AI Act is the first comprehensive AI-specific regulation in the world, and its reach extends beyond EU borders. If your AI-enabled SaaS product is used by EU customers, used by customers whose end users are in the EU, or produces outputs that are used within the EU, the Act may apply to you regardless of where your company is headquartered.
The Act uses a risk-based classification system. Most B2B SaaS AI features (search, summarization, content generation, recommendation engines) fall into the limited-risk or minimal-risk categories, which carry transparency obligations but not the full compliance burden of high-risk systems. The high-risk category is where the obligations become substantial: AI systems used in employment decisions, creditworthiness assessments, insurance pricing, educational admissions, and similar areas that affect individuals’ rights and opportunities.
Here is what matters for B2B SaaS companies right now.
The general-purpose AI model obligations have been in effect since August 2025. These apply primarily to the LLM providers (the model developers), not to the deployers (you). But they affect you indirectly because your provider’s compliance (or non-compliance) with these obligations may affect the legality of using their models in the EU market. Your provider should be able to confirm its GPAI compliance status.
The transparency obligations under Article 50 become fully applicable in August 2026. If your AI feature interacts directly with individuals (a chatbot, an AI assistant, a conversational interface), those individuals must be informed they are interacting with an AI system. If your product generates synthetic content (AI-generated text, images, or audio), that content must be marked as AI-generated in a machine-readable format. These are disclosure obligations that affect your product design and your customer-facing communications, not just your legal documents.
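Article 50 does not prescribe a single marking format, and emerging standards such as C2PA content credentials are one path to compliance. Purely to make the requirement concrete, here is a hypothetical sketch of attaching both a machine-readable marker and an embedded disclosure to a generated HTML fragment; the header name and meta tag are invented for illustration, not a mandated or standardized format:

```python
def wrap_ai_output(generated_html: str) -> tuple[str, dict[str, str]]:
    """Attach an AI-generation disclosure to a generated HTML fragment:
    a machine-readable response header plus an embedded meta tag.
    Both markers are illustrative placeholders, not a compliance standard."""
    headers = {"X-AI-Generated": "true"}  # hypothetical header name
    marked = (
        '<meta name="generator-disclosure" content="ai-generated">\n'
        + generated_html
    )
    return marked, headers

body, headers = wrap_ai_output("<p>Summary produced by the assistant.</p>")
```

The design point survives whatever format the standards settle on: the marker has to travel with the content, and it has to be applied at generation time rather than retrofitted.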
The high-risk system obligations also apply from August 2026 (with a possible extension to December 2027 for certain Annex III systems, though this is not confirmed). If your AI feature is used in a high-risk domain, the obligations include risk management systems, data governance, technical documentation, human oversight, and conformity assessment. Most seed-stage B2B SaaS companies will not hit the high-risk threshold for their own features. But if your customers use your product in high-risk applications (e.g., a customer uses your AI-powered analytics tool for employment screening), you may face downstream compliance questions from customers who need to demonstrate that their AI tools meet the deployer obligations.
The provider vs. deployer distinction matters. If you build and brand the AI feature, integrate it into your product, and make it available to EU customers under your name, you are likely a provider with the full set of obligations for your risk tier. If you license a third-party AI model and resell it without substantial modification, you may qualify as a deployer with lighter obligations. Most B2B SaaS companies fall somewhere in between: they use a third-party model but build a significant application layer around it. The classification depends on the specifics and may require legal analysis.
US State AI Laws: What Is in Effect Now
In the absence of comprehensive federal AI legislation, US states have moved ahead. Texas's law took effect January 1, 2026, Colorado's follows later in the year, and others are in various stages of implementation.
The Colorado AI Act establishes a risk-based framework for AI developers and deployers. AI systems used in “consequential decisions” (employment, lending, insurance, housing, education, and similar domains) are classified as high-risk, triggering obligations around algorithmic impact assessments, bias testing, consumer notification, and disclosure. The law was originally scheduled to take effect February 1, 2026, but has been delayed, with the effective date pushed to June 30, 2026. For B2B SaaS companies, the implications depend on how your customers use your product. If your AI feature is used by customers to make or substantially influence consequential decisions, both you (as developer) and your customer (as deployer) may have obligations under the Act.
The Texas Responsible AI Governance Act prohibits certain harmful AI uses (systems designed to discriminate unlawfully, produce illegal deepfakes, or incite self-harm) and requires transparency disclosures when AI systems interact with consumers in regulated contexts. Its scope is broad: it applies to developers and deployers who conduct business in Texas, provide products or services used by Texas residents, or deploy AI systems within Texas.
Meanwhile, a December 2025 executive order from the Trump administration signals intent to establish a federal AI policy framework that could preempt state laws deemed inconsistent with federal policy. The Department of Commerce was directed to evaluate burdensome state AI laws by March 2026. Whether and how this preemption plays out is uncertain, but the signal is clear: the federal government is interested in consolidating AI oversight.
The practical takeaway for B2B SaaS companies: do not wait for the federal picture to clarify before acting on state obligations that are already in effect. Build your compliance framework around the strictest applicable requirements. If you are subject to both the EU AI Act and Colorado’s law, the EU obligations will generally be the stricter standard and will satisfy most US state requirements as well.
What This Means for Your Contracts
The regulatory landscape creates four specific contractual implications.
First, transparency disclosures need to be in your product and your agreements. If the EU AI Act requires you to inform users they are interacting with an AI system, that disclosure should appear both in the product interface and in your Terms of Service. If state laws require disclosure when AI is used in consequential decisions, your customer-facing documentation should make this clear.
Second, your DPA needs to account for AI-specific processing. This means updated processing purposes, an accurate subprocessor list, and (for high-risk applications) provisions addressing risk management, human oversight, and cooperation with regulatory inquiries.
Third, your customer agreements should allocate regulatory compliance responsibilities. Your customer may have deployer obligations under the EU AI Act or state AI laws. Your terms should be clear about what you are responsible for (provider-level compliance, transparency features, technical documentation) and what the customer is responsible for (deployer-level compliance, human oversight in their use case, impact assessments for their specific application).
Fourth, your terms should include a regulatory change mechanism. AI regulation is moving fast. A provision that commits you to adjust AI features or compliance practices as laws develop (without requiring a full contract renegotiation each time) gives both parties flexibility. This is analogous to how many DPAs handle changes in data protection law: the provider commits to compliance with applicable data protection laws as amended from time to time, rather than locking in compliance with a specific version of a specific law.
The Disclosure Gap Is the Risk
The common thread across everything in this post is disclosure. The subprocessor gap is a disclosure problem. The EU AI Act transparency obligations are disclosure requirements. The US state laws center on notification and transparency. Enterprise procurement teams are asking about AI governance, subprocessor chains, and regulatory compliance because their own compliance obligations depend on yours.
The companies that get ahead of these disclosures (proactively updating subprocessor lists, implementing transparency features before the deadline, and building regulatory compliance language into their agreements) will close enterprise deals faster than the companies that wait to be asked.
This is the fifth post in the AI-Enabled SaaS series. Previous: Contracting With Your LLM Provider: What Most Companies Miss in the API Agreement. Next: AI-Specific Acceptable Use: Drawing the Line on What Users Can Do With Your AI Features.
No Boiler provides self-service legal document generation and educational content. This material and our service are not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them.