Your user gave your AI agent permission to act on their behalf. The platform where your agent operates did not. A federal court just ruled that user permission is not platform authorization, and issued a preliminary injunction barring Perplexity’s AI shopping agent from accessing Amazon’s website. The order requires Perplexity to destroy all Amazon customer data its agent collected. This is not a regulatory action or a class action. It is a platform owner using computer fraud statutes to shut down an AI product.
Every other case in this series involves a regulator or a class of plaintiffs going after a SaaS vendor. This one is different. Amazon v. Perplexity is a platform owner using the Computer Fraud and Abuse Act (CFAA), the federal statute prohibiting unauthorized computer access, and California Penal Code § 502, the state’s parallel computer fraud law, to enjoin an AI company from accessing its infrastructure.
The ruling, issued March 9, 2026, by U.S. District Judge Maxine Chesney in the Northern District of California, is the first major court order addressing agentic AI commerce. It establishes a principle that every founder building AI agents needs to understand: your user’s consent does not equal the platform’s authorization.
What Happened
Perplexity AI launched Comet, an AI-powered browser that can autonomously browse websites, log into user accounts, find products, compare prices, and complete purchases on behalf of the user. It is not a passive tool. It is an agent that navigates third-party platforms and takes actions using the user’s credentials.
One of the platforms Comet targeted was Amazon. Users could ask Comet to shop Amazon on their behalf, and Comet would log into their Amazon account, browse products, add items to a cart, and complete checkout. Perplexity launched a “Buy with Pro” feature for U.S. users in November 2025, in partnership with PayPal, that enabled one-click AI-assisted purchases.
Amazon did not authorize any of this.
The timeline shows a company that was warned repeatedly and kept going:
In November 2024, Amazon first contacted Perplexity, warning that Comet was violating Amazon’s conditions of use by accessing the platform without identifying itself as an automated tool. Perplexity initially agreed to stop deploying AI agents on Amazon until the companies reached mutually agreed terms.
Throughout 2025, Amazon sent at least five separate warnings. According to Amazon’s filings, Perplexity reversed course on its earlier commitment and resumed operations on the platform.
In August 2025, Amazon deployed a technical barrier designed to detect and block Comet’s automated access. Within 24 hours, Perplexity pushed a software update that circumvented the barrier and restored Comet’s access. According to Amazon’s complaint, Perplexity deliberately disguised Comet as a Google Chrome browser session to evade detection.
On October 31, 2025, Amazon sent a formal cease-and-desist letter directly addressed to Perplexity CEO Aravind Srinivas, demanding that Perplexity stop disguising Comet as Chrome, transparently identify its AI agents when operating on Amazon, and respect Amazon’s decisions about agentic activities on its platform.
On November 4, 2025, Amazon filed a federal lawsuit alleging computer fraud and unauthorized access. Perplexity published a blog post titled “Bullying is not innovation,” calling the lawsuit a bully tactic and arguing that users should be free to choose their own AI tools.
The Ruling
Judge Chesney granted Amazon’s motion for a preliminary injunction, finding Amazon likely to succeed on the merits of its claims under both the CFAA and California Penal Code § 502.
The court’s analysis turned on a single distinction that carries implications well beyond this case. The order states that Perplexity, through its Comet browser, accesses Amazon accounts “with the Amazon user’s permission but without authorization by Amazon.” The user gave Perplexity their login credentials. The user asked Comet to shop on their behalf. But Amazon, the entity that owns and operates the platform, did not authorize Perplexity to access its systems.
The court cited Facebook v. Power Ventures, a Ninth Circuit decision holding that user consent is “not sufficient to grant continuing authorization to access [plaintiff’s] computers after [plaintiff’s] express revocation of permission.” Once Amazon revoked authorization through its cease-and-desist letters, any continued access by Comet was unauthorized regardless of what the user permitted.
Amazon submitted evidence that it spent significantly more than $5,000 responding to Perplexity’s conduct, including employee time spent building tools to block Comet and detect future unauthorized access. The court found this evidence “essentially undisputed” and sufficient to meet the CFAA’s loss threshold.
The injunction is broad:
Perplexity and its agents are enjoined from accessing or attempting to access Amazon’s protected computer systems using AI agents, defined as any software deployed through the Comet browser that can autonomously or semi-autonomously perform actions and interact with third-party websites on behalf of a user. Perplexity is also barred from using, creating, or taking over Amazon accounts for the purpose of AI agent access.
Perplexity must destroy all copies of Amazon’s data, including customer data, obtained through AI agent access, whether held by Perplexity or its third-party service providers.
Perplexity must certify compliance by declaration within 30 days of the Ninth Circuit’s ruling on the administrative stay.
No bond was required. The court rejected Perplexity’s request for a $1 billion bond based on its market valuation and investment in Comet, finding that the injunction does not threaten the entirety of Perplexity’s business since Comet can still operate on every other website.
No stay pending appeal was granted. The court found Amazon, not Perplexity, is likely to succeed on the merits. The court granted only a seven-day administrative stay to allow Perplexity to seek a stay from the Ninth Circuit. Perplexity filed its appeal the next day.
Why This Case Is Different from the Rest of the Series
The previous five posts in this series cover cases where SaaS vendors face liability for how their products handle data: wiretapping claims for processing end-user communications without consent, BAA breaches for using patient data outside authorized purposes, discrimination claims for biased algorithmic outputs, and FTC enforcement for overstated marketing claims.
Amazon v. Perplexity is about something more fundamental: whether your AI agent is allowed to be on the platform at all.
This is not a data privacy claim. Amazon is not alleging that Perplexity mishandled customer data (though it raised security concerns). The claim is unauthorized access. Perplexity’s AI agent accessed Amazon’s computers without Amazon’s permission. That is a federal crime under the CFAA and a state law violation under California Penal Code § 502.
The adversary is also different. In every other case in this series, the plaintiff is either an end user, a class of affected individuals, or a regulator. Here, the plaintiff is the platform owner, a company with effectively unlimited litigation resources and a strategic interest in controlling how third parties interact with its infrastructure. The power asymmetry runs in the opposite direction from a typical class action.
What This Means for SaaS Founders Building AI Agents
If your product sends AI agents to interact with third-party platforms, whether to browse, purchase, extract data, fill forms, monitor prices, or automate workflows, the Amazon v. Perplexity ruling applies to you. The specific context (shopping) matters less than the legal principle: accessing a platform’s protected systems with an AI agent requires the platform’s authorization, not just the user’s.
A cease-and-desist letter creates the legal bright line. Before Amazon sent its cease-and-desist, there was arguably ambiguity about whether Comet’s access was authorized. After the letter, there was none. The CFAA prohibits accessing a computer “without authorization” or in a way that “exceeds authorized access.” A cease-and-desist letter from the platform explicitly revokes whatever authorization may have existed. If you receive one, continued access is unauthorized access. This is not a gray area after Power Ventures.
User consent does not override platform authorization. Perplexity’s strongest argument was that users freely chose to let Comet act on their behalf. The court rejected this. The user’s consent runs to the agent (Perplexity). The platform’s authorization runs to the platform (Amazon). They are separate requirements. Your user can give your agent their credentials. The platform can still say no. And if the platform says no, the CFAA says you are committing computer fraud if you continue.
Disguising your agent as a human browser is an aggravating factor. Amazon’s complaint alleged that Perplexity deliberately identified Comet as Google Chrome to evade detection. This strengthened Amazon’s case on multiple fronts: it demonstrated intent, it showed Perplexity was aware Amazon would block its agent, and it undermined any argument that the access was inadvertent. If your AI agent identifies itself honestly and respects robots.txt, terms of service, and platform API rules, you are in a fundamentally different position than a company that actively disguises its agent to circumvent detection.
The remedy is destructive, not just prohibitive. The court did not just say “stop accessing Amazon.” It ordered Perplexity to destroy all data collected through AI agent access and certify compliance under penalty of perjury. If your AI agent collects data from a platform that later revokes authorization, you may not be able to keep that data. Your data retention policies and your product architecture need to account for the possibility that a platform requires you to delete everything your agent collected.
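One way to make that kind of destruction order survivable is to tag every record your agent collects with the platform it came from, so a per-platform purge is a single operation rather than a forensic project. The sketch below is illustrative only; the class names, fields, and in-memory storage are assumptions, not anyone’s actual architecture:

```python
# Sketch: provenance-tagged storage for agent-collected data, so a
# platform-specific destruction order can be honored cleanly.
# All names here (CollectedRecord, AgentDataStore, platform labels)
# are hypothetical, for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class CollectedRecord:
    source_platform: str   # which platform the agent collected this from
    payload: dict          # the collected data itself
    collected_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class AgentDataStore:
    def __init__(self) -> None:
        self._records: list[CollectedRecord] = []

    def add(self, record: CollectedRecord) -> None:
        self._records.append(record)

    def purge_platform(self, platform: str) -> int:
        """Delete every record collected from `platform`; return the count removed."""
        before = len(self._records)
        self._records = [r for r in self._records if r.source_platform != platform]
        return before - len(self._records)

    def count(self) -> int:
        return len(self._records)
```

In a real system the same provenance tag would need to propagate to backups, caches, and third-party processors, since the order in this case reached data "whether held by Perplexity or its third-party service providers."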
Platform terms of service are enforceable barriers. Amazon’s conditions of use prohibit the use of data mining, robots, or similar data gathering and extraction tools. Perplexity’s Comet violated those terms. Most major platforms have similar restrictions. If your AI agent operates on a platform whose terms prohibit automated access, you are on notice that your access is unauthorized. You do not need to wait for a cease-and-desist letter if the terms already prohibit what your agent does.
What Founders Building AI Agents Should Take from This
The lesson from Amazon v. Perplexity is not “structure your contracts to manage the risk of unauthorized platform access.” It is “do not access platforms without authorization.”
Get platform authorization before you ship. If your AI agent interacts with a third-party platform, whether to browse, transact, extract data, or automate workflows, you need that platform’s permission. Not your user’s permission. The platform’s. Some platforms offer APIs with clear terms of use. Some have partner programs. Some will say no. If the platform says no, that is the end of the analysis. Building a product that depends on unauthorized access to a third-party platform is building on a foundation that can be removed by injunction.
Identify your agent honestly. Perplexity disguised Comet as a Google Chrome browser session to evade Amazon’s detection. This was central to the court’s analysis and eliminated any argument that the access was inadvertent or that Perplexity was unaware of Amazon’s objections. If your AI agent identifies itself honestly, respects robots.txt, and operates within a platform’s published API terms, you are in a fundamentally different legal position than a company that actively conceals its agent’s identity.
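In practice, honest identification is mechanical: send a truthful User-Agent string and check the platform’s robots.txt before fetching. A minimal sketch using Python’s standard library follows; the agent name is a hypothetical placeholder, and the fail-closed choice is a design assumption, not a legal requirement:

```python
# Sketch: an agent that checks robots.txt before fetching, using Python's
# built-in robots.txt parser. The user-agent string is hypothetical.
import urllib.robotparser

AGENT_UA = "ExampleShopAgent/1.0 (+https://example.com/agent-info)"  # illustrative


def agent_allowed(robots_txt: str, url: str, user_agent: str = AGENT_UA) -> bool:
    """Return True if the given robots.txt text permits this agent to fetch url."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return rp.can_fetch(user_agent, url)
```

A live agent would fetch `/robots.txt` from the target host, pass its text to `agent_allowed`, and refuse to proceed (fail closed) when the answer is no or the file cannot be retrieved. Note that robots.txt compliance is necessary but not sufficient: a platform’s terms of service or a cease-and-desist letter can revoke authorization even where robots.txt is silent.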
Stop when the platform tells you to stop. Amazon warned Perplexity at least five times over the course of a year. Perplexity circumvented Amazon’s technical block within 24 hours and continued operating. The court cited Facebook v. Power Ventures for the principle that a cease-and-desist letter definitively revokes any authorization that may have existed. After that letter, continued access is unauthorized access under the CFAA. There is no gray area here. If you receive a cease-and-desist from a platform, stop accessing it and consult counsel before taking any further action.
Do not build your core value proposition on a platform you do not control and have not contracted with. This is the business lesson underneath the legal one. Perplexity built an agentic shopping browser and targeted the largest e-commerce platform in the United States without that platform’s agreement. The injunction did not shut down Perplexity or Comet entirely. But it removed the single most commercially significant platform from Comet’s reach. If your product’s primary use case depends on access to a platform that has not authorized you, your product has a single point of failure that no contract provision can fix.
The Through-Line
This series has tracked a consistent pattern: SaaS vendors face liability when their AI products interact with the world in ways their legal stack did not anticipate. ConverseNow processed phone calls without end-user consent. Sharp’s AI scribe recorded medical visits without patient knowledge. Verily used patient data outside its BAA. Workday’s hiring tools produced discriminatory outcomes. The FTC targeted companies that overstated their AI capabilities.
Amazon v. Perplexity adds a new dimension. The previous cases all involve claims about how data was handled or how outputs affected people. This case is about whether the AI agent was allowed to be there in the first place. It is the most basic question an agentic AI product must answer: does the platform you are operating on know you are there, and has it said yes?
For SaaS founders building AI agents, the answer needs to be in your legal stack, in your product architecture, and in your relationship with every platform your agent touches, before you ship.
This is the sixth post in the AI Privacy Litigation series. Previous: FTC AI-Washing Enforcement: What SaaS Founders Get Wrong About Marketing AI Features. Next: AI Disclosure Laws by State: A Practical Compliance Map for B2B SaaS.
For the contractual framework on AI-specific acceptable use and third-party platform interactions, see AI-Specific Acceptable Use and Contracting With Your LLM Provider in the AI-Enabled SaaS series.
No Boiler provides self-service legal document generation and educational content. This material and our service is not a substitute for legal advice. Please have a qualified attorney review any documents before relying on them. No Boiler is not a law firm, and communications with us do not create an attorney-client relationship or carry any expectation of confidentiality. Use of our platform and content is governed by our Terms of Service and Privacy Policy.