Your AI Agent Just Committed a Federal Crime — Inside the Ruling That Could Kill Agentic Commerce
A federal judge in San Francisco just ruled that your AI shopping assistant may be breaking the same law used to prosecute hackers — even when you explicitly told it to act on your behalf. The March 2026 Amazon v. Perplexity decision draws a line that could reshape the entire AI agent industry: user permission is not platform permission.
The implications extend far beyond one company's browser. As 17,000+ autonomous agents execute millions of daily transactions across Web2 and Web3, this ruling forces a fundamental question: who actually authorizes an AI agent to act — the person who deployed it, or the platform it touches?
The Case: Amazon vs. Perplexity's Comet Browser
In late 2025, Perplexity AI launched Comet, an AI-powered browser designed to browse, compare prices, and complete purchases autonomously on behalf of users. The agent would log into a user's Amazon account, navigate product listings, and execute transactions — all without the user lifting a finger.
Amazon was not impressed.
The e-commerce giant warned Perplexity at least five times starting in November 2024 to stop its agents from accessing the platform. When warnings failed, Amazon implemented technical barriers in August 2025 to block Comet's access. Perplexity responded with a software update within 24 hours to circumvent the block.
Amazon also alleged that Perplexity deliberately disguised Comet as a regular Google Chrome browser session, evading bot-detection systems rather than transparently identifying itself as an AI agent. The company filed suit in November 2025.
On March 9, 2026, U.S. District Judge Maxine M. Chesney granted a preliminary injunction. The order required Perplexity to immediately stop accessing Amazon and destroy all data collected through Comet sessions.
The Legal Bombshell: User Authorization vs. Platform Authorization
The ruling's most significant finding centers on a distinction that no court had previously drawn so clearly in the AI context. Judge Chesney found that Comet accessed Amazon accounts "with the Amazon user's permission, but without authorization by Amazon," and ruled that Amazon was likely to prevail on claims under both the federal Computer Fraud and Abuse Act (CFAA) and California's Comprehensive Computer Data Access and Fraud Act (CDAFA).
This matters because the CFAA — a 1986 anti-hacking statute — imposes both civil and criminal liability for accessing a "protected computer" without authorization. Until this ruling, the legal community debated whether a user granting an AI agent their credentials constituted sufficient authorization. Judge Chesney's answer was unambiguous: it does not.
The precedent establishes three red lines for AI agent developers:
- Credential-gated access: Using customer login credentials to access third-party platforms without the platform's consent likely violates the CFAA, regardless of user authorization.
- Password-protected areas: Accessing non-public, account-specific pages (order history, payment methods, Prime-only content) amplifies CFAA exposure.
- Continued access after warnings: Operating an agent on a platform that has expressly told you to stop creates the strongest possible case for "without authorization."
The 9th Circuit Lifeline — and Why It's Temporary
One week after the injunction, the 9th U.S. Circuit Court of Appeals issued an administrative stay on March 16, temporarily lifting the ban. Circuit Judges Eric Miller and Patrick Bumatay allowed Perplexity's shopping agents to continue accessing Amazon while the appeals court conducts a more thorough review.
But this reprieve is explicitly temporary. The judges emphasized that the administrative stay exists only to preserve the status quo while they examine the merits — not because they disagree with Judge Chesney's analysis. The full appellate decision, expected later in 2026, will determine whether the "user authorization does not equal platform authorization" principle becomes binding precedent across nine western states.
Legal analysts note that even if the 9th Circuit modifies the lower court's reasoning, the core tension remains unresolved: platforms claim absolute authority over who or what accesses their systems, while AI companies argue that users have the right to delegate their own access to agents of their choosing.
The Protocol Solution: Google, OpenAI, and the Race to Legitimize Agent Commerce
The industry isn't waiting for courts to settle this. Two competing protocols have emerged to create legitimate, platform-sanctioned pathways for AI agent commerce.
Google's Universal Commerce Protocol (UCP), announced in January 2026 at the National Retail Federation conference, is an open-source standard developed with Shopify, Etsy, Wayfair, Target, and Walmart. UCP defines functional primitives for product discovery, cart management, checkout, and post-purchase workflows — creating a structured, permission-based channel through which AI agents can interact with merchants.
UCP integrates with Google's Agent Payments Protocol (AP2) and is compatible with both Agent2Agent (A2A) and the Model Context Protocol (MCP).
OpenAI's Agentic Commerce Protocol (ACP), developed with Stripe, takes a narrower approach focused on the checkout layer. ACP currently powers Instant Checkout in ChatGPT, enabling users to purchase from participating merchants without leaving the conversation.
The contrast between these protocols and Perplexity's approach is instructive. Where Comet accessed Amazon by impersonating a human browser session, UCP and ACP require explicit merchant opt-in. Merchants register their catalogs, define what agents can access, and maintain full control over pricing, inventory, and fulfillment data. The agent operates within a sandboxed commerce environment rather than crawling the open web.
This protocol-based model directly addresses the CFAA concern: if a platform explicitly publishes an API or joins a commerce protocol, agents accessing through those channels have unambiguous authorization.
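The difference can be made concrete. Neither UCP nor ACP publishes a stable wire format at the level of this sketch, so the endpoint URL and the `X-Agent-Protocol` header below are hypothetical; the point is only that a protocol-era agent declares what it is, instead of masquerading as a Chrome session:

```python
from urllib.request import Request

# Hypothetical merchant endpoint; real UCP/ACP integrations would use
# the merchant's published agent API, not this placeholder URL.
MERCHANT_API = "https://merchant.example.com/agent/v1/checkout"

def build_agent_request(body: bytes) -> Request:
    """Build a request that declares agent identity instead of hiding it."""
    return Request(
        MERCHANT_API,
        data=body,
        headers={
            # Identify the agent honestly -- the opposite of spoofing a browser.
            "User-Agent": "ExampleShopAgent/1.0 (+https://example.com/agent)",
            # Hypothetical header declaring which commerce protocol is in use.
            "X-Agent-Protocol": "ucp/0.1",
        },
        method="POST",
    )

req = build_agent_request(b'{"cart_id": "abc123"}')
```

A platform that has opted into the protocol can recognize, rate-limit, or audit this traffic; a platform that has not can cleanly refuse it. That is the consent channel Comet never offered Amazon.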
What This Means for Web3 Agents
The Amazon v. Perplexity ruling sends a particularly important signal to the Web3 ecosystem, where autonomous agents are increasingly executing on-chain transactions, managing DeFi positions, and interacting with decentralized applications.
In Web3, the authorization model is fundamentally different, and potentially more favorable to agents. When an AI agent interacts with a smart contract, it does so through a wallet with explicit cryptographic authorization. There is no ambiguity about whether the "platform" consented: smart contracts are permissionless by design, and the blockchain itself serves as the authorization layer. An agent with a validly signed transaction has, by definition, met the protocol's access requirements.
This creates a sharp contrast with Web2's access model:
- Web2: Platform owns the servers, sets the terms of service, and can revoke access at any time. AI agents must impersonate human users or negotiate API access.
- Web3: Smart contracts define access rules in code. Any entity — human or agent — that meets the cryptographic requirements can interact. Authorization is mathematical, not legal.
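The contrast between the two bullets above can be sketched in code. This is a toy model, not real cryptography: actual chains verify ECDSA signatures over secp256k1 against a public key, while the hash-based "signature" here is only a stand-in. What matters is where the access decision lives:

```python
import hashlib

def toy_sign(secret: str, message: str) -> str:
    # Stand-in for ECDSA signing; real verification uses a public key,
    # not the secret itself. Illustrative only.
    return hashlib.sha256((secret + message).encode()).hexdigest()

def toy_verify(secret: str, message: str, signature: str) -> bool:
    return toy_sign(secret, message) == signature

class Web2Platform:
    """Authorization is a revocable grant decided by the operator."""
    def __init__(self) -> None:
        self.allowed: set[str] = set()

    def grant(self, client: str) -> None:
        self.allowed.add(client)

    def revoke(self, client: str) -> None:
        self.allowed.discard(client)

    def access(self, client: str) -> bool:
        return client in self.allowed  # contractual, unilaterally revocable

class ToyContract:
    """Authorization is a check in code; there is no operator to ask."""
    def execute(self, secret: str, message: str, signature: str) -> bool:
        return toy_verify(secret, message, signature)  # mathematical
```

A `Web2Platform` can grant an agent access today and revoke it tomorrow, exactly as Amazon did with Comet; the `ToyContract` executes for any caller, human or agent, whose signature checks out.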
Two architectural patterns for Web3 AI agents avoid the CFAA trap entirely:
- Non-custodial delegation: The agent crafts transactions, but the user's wallet retains signing authority. The agent never holds credentials — it proposes actions that the user (or a smart contract with delegated permissions) approves.
- On-chain identity protocols: Standards like ERC-8004 enable agents to register verifiable on-chain identities, creating a transparent record of which agents are authorized to act and within what parameters.
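The first of these patterns, non-custodial delegation, can be sketched as follows. The class and method names (`ProposedTx`, `Agent`, `Wallet`) are illustrative, not a real agent SDK; the structural point is that the agent only proposes, while policy and signing authority stay with the user's wallet or smart account:

```python
from dataclasses import dataclass

@dataclass
class ProposedTx:
    to: str
    value: int          # amount in the chain's smallest unit
    data: bytes = b""

class Agent:
    """Crafts transactions but never holds keys or credentials."""
    def propose_purchase(self, merchant: str, price: int) -> ProposedTx:
        return ProposedTx(to=merchant, value=price)

class Wallet:
    """Signing authority -- and spending policy -- stays with the user."""
    def __init__(self, spend_limit: int) -> None:
        self.spend_limit = spend_limit

    def approve(self, tx: ProposedTx) -> bool:
        # The user (or a smart account with delegated permissions)
        # enforces policy here; the agent cannot sign on its own.
        return tx.value <= self.spend_limit

agent = Agent()
wallet = Wallet(spend_limit=100)
tx = agent.propose_purchase("0xMerchantAddress", price=80)
```

Because the agent never touches a credential, there is nothing for it to use "without authorization": every executed transaction carries the user's own signature.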
However, Web3 agents are not immune from legal risk. When an agent interacts with a centralized exchange, a fiat on-ramp, or any platform with terms of service, the same CFAA logic applies. The ruling's message is clear: permissionless protocols are safe ground, but the moment an agent touches a permissioned system, platform authorization matters.
The Three Futures of AI Agent Access
The Amazon v. Perplexity case illuminates three possible trajectories for how AI agents will interact with digital platforms:
Scenario 1: Protocol Dominance. Commerce protocols like UCP and ACP become the standard. Platforms publish structured APIs, agents operate within sanctioned channels, and unauthorized scraping becomes legally and technically obsolete. This benefits large platforms that can dictate terms and disadvantages scrappy startups that rely on open-web access.
Scenario 2: Regulatory Carve-Out. Legislators create specific exemptions for AI agents acting on behalf of users, similar to how screen readers and accessibility tools receive legal protection. The argument: if a user has the right to access their own data, delegating that right to an AI agent should not create criminal liability. The EU, which currently lacks provisions for autonomous purchasing agents in its AI Act, may move first.
Scenario 3: The Web3 Bypass. Permissionless protocols capture an increasing share of commerce as developers route around the CFAA problem entirely. If interacting with Amazon requires platform permission but interacting with a decentralized marketplace requires only a wallet signature, rational builders will choose the path with less legal risk.
The most likely outcome is some combination of all three: protocol-based access for major platforms, regulatory updates that clarify agent rights, and a growing role for permissionless systems where authorization is embedded in code rather than contested in court.
What Developers Should Do Now
For teams building AI agents that interact with third-party platforms, the Amazon v. Perplexity ruling demands immediate attention:
- Audit your access patterns. If your agent uses user credentials to access any platform that hasn't explicitly authorized agent access, you have CFAA exposure.
- Adopt commerce protocols. Integrating with UCP, ACP, or platform-specific APIs puts your agent on unambiguously authorized ground: the platform has consented in advance.
- Don't circumvent blocks. If a platform tells you to stop, stop. Continuing after explicit warnings — as Perplexity did — is the strongest evidence of unauthorized access.
- Consider on-chain alternatives. For financial transactions, DeFi protocols offer a legally cleaner model where authorization is cryptographic rather than contractual.
- Watch the 9th Circuit. The full appellate decision will determine whether this precedent hardens or softens. Plan for both outcomes.
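One low-effort way to act on the "don't circumvent blocks" point is to honor a platform's published robots.txt before an agent touches any path. The sketch below uses Python's standard `urllib.robotparser` against a sample policy (the disallowed paths are made up); a real agent would fetch the platform's live robots.txt instead of parsing a local string:

```python
from urllib.robotparser import RobotFileParser

# Sample policy with hypothetical disallowed paths; a real agent would
# download https://<platform>/robots.txt and parse that.
robots_txt = """\
User-agent: *
Disallow: /gp/cart
Disallow: /account
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())
rp.modified()  # mark the policy as loaded so can_fetch consults it

def may_fetch(agent_name: str, url: str) -> bool:
    """False means the platform's published policy forbids this path."""
    return rp.can_fetch(agent_name, url)
```

Note the `rp.modified()` call: without it, `can_fetch` conservatively returns False for every URL because the parser believes no robots.txt has been read yet. Respecting this file is not a complete legal shield, but ignoring it is exactly the kind of evidence that sank Perplexity.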
The Bigger Picture
The Amazon v. Perplexity ruling is not really about shopping bots. It's about who controls the interface layer between users and digital services in an age of autonomous agents. For forty years, that interface was a human sitting at a keyboard — and the legal system was built around that assumption. Now that AI agents are becoming the primary software interface, the law must decide whether the user's right to access a service includes the right to delegate that access to a machine.
The court's current answer — that it does not — will be tested, appealed, and eventually legislated. But the signal to builders is already clear: the era of building agents that access platforms without permission is over. The future belongs to protocols, APIs, and permissionless systems where authorization is granted by design, not contested after the fact.
For developers building AI agents that interact with blockchain protocols, BlockEden.xyz provides enterprise-grade RPC and API infrastructure across 20+ chains — giving your agents authorized, reliable access to on-chain data and transaction capabilities without the legal ambiguity of credential-based platform access.