
Your AI Agent Just Became a Criminal: How Amazon's Perplexity Ruling Rewrites the Rules for Autonomous Software

9 min read
Dora Noda
Software Engineer

A federal judge in San Francisco just drew a line that every developer building AI agents needs to understand. On March 9, 2026, Judge Maxine M. Chesney ruled that Perplexity's Comet browser violated both the federal Computer Fraud and Abuse Act (CFAA) and California's Comprehensive Computer Data Access and Fraud Act by accessing Amazon accounts on behalf of users — even though those users explicitly granted permission. The critical distinction: user authorization is not the same as platform authorization.

This ruling doesn't just affect Perplexity. It potentially criminalizes an entire class of AI agent behavior that hundreds of startups, crypto protocols, and Web3 projects are building right now.

What Perplexity's Comet Actually Did

Perplexity launched Comet in July 2025 as an AI-first browser built on Chromium. Unlike a traditional browser extension or chatbot overlay, Comet embedded large language models directly into the browser's core architecture, enabling what the company called "agentic task execution." Users could instruct Comet to comparison-shop across websites, log into their accounts, add items to carts, and complete purchases — all autonomously.

The conflict escalated well before the case reached a courtroom. According to court filings, Amazon warned Perplexity at least five times, starting in November 2024, to stop Comet from accessing Amazon's systems. In August 2025, Amazon deployed a technical barrier to block Comet; Perplexity shipped a software update within 24 hours to circumvent it.

Amazon filed its federal lawsuit on November 4, 2025, arguing that Perplexity deliberately disguised Comet's AI agent as a regular Google Chrome browser session to evade detection. The preliminary injunction granted in March 2026 not only barred Comet from accessing Amazon's password-protected systems but also required Perplexity to destroy any Amazon data it had collected.

The ruling's most consequential finding can be distilled into a single sentence from Judge Chesney's opinion: Comet accessed Amazon accounts "with the Amazon user's permission, but without authorization by Amazon."

This distinction sounds simple, but its implications are profound. For decades, the CFAA's "authorization" requirement has been contested legal territory. The Supreme Court narrowed the CFAA's scope in Van Buren v. United States (2021), but that case dealt with an employee exceeding authorized access on a system he was otherwise permitted to use. Amazon v. Perplexity asks a fundamentally different question: when an AI agent acts on a user's behalf, whose authorization counts?

Judge Chesney's answer is clear: the platform's. The question is no longer simply whether a user consents to having an AI act on their behalf. The question is whether the platform where that action takes place has consented to an AI agent showing up in the first place.

This creates three critical red lines for developers:

  1. Account credential access: Any AI agent that uses customer credentials to log into a third-party platform without that platform's explicit consent risks CFAA liability.
  2. Password-protected sections: Accessing areas behind authentication gates — even with valid user credentials — constitutes unauthorized access if the platform hasn't sanctioned the agent.
  3. Continuing after platform warnings: Perplexity's decision to circumvent Amazon's technical blocks after receiving explicit warnings significantly strengthened Amazon's case.

Why This Matters Beyond Shopping

The immediate framing of this case as an "agentic commerce" dispute understates its reach. The legal principle — that platform authorization trumps user authorization — applies equally to AI agents operating across financial services, social media, healthcare portals, enterprise SaaS, and critically, decentralized finance.

Consider the current landscape. More than 68% of new DeFi protocols launched in Q1 2026 included autonomous AI agents. AI agents now represent approximately 18% of total prediction market volume. Forty-one percent of crypto hedge funds are actively using or testing on-chain AI agents. The entire thesis of agentic AI in Web3 — autonomous agents executing swaps, managing portfolios, and interacting with protocols — depends on answering the authorization question correctly.

The difference in Web3 is that many protocols are permissionless by design. A decentralized exchange doesn't have terms of service in the traditional sense. Smart contracts execute based on valid transactions, not on whether the caller is a human or a bot. But the Perplexity ruling creates a precedent that could extend to any platform with even minimal access controls — API rate limits, terms of service restrictions, or authentication requirements.

For centralized exchanges and CeFi platforms, the implications are immediate. An AI trading agent that logs into a user's Coinbase or Binance account using stored credentials — even with the user's full consent — could face the same legal exposure as Comet.

The Fork in the Road: APIs vs. Credential Access

The ruling effectively forces the AI agent ecosystem toward one of two architectural models.

Model 1: Platform-Sanctioned APIs

The "safe" path requires AI agents to operate exclusively through official APIs and partnership agreements. This is already taking shape:

  • Google's Universal Commerce Protocol (UCP), announced in January 2026 with over 20 partners including Shopify, Target, and Walmart, creates an open standard for agentic commerce that works with existing retail infrastructure.
  • OpenAI and Stripe's Agentic Commerce Protocol (ACP) powers Instant Checkout within ChatGPT, focusing specifically on the checkout layer.
  • Bybit's AI Trading Skills expose 253 API endpoints for natural-language trading, enabling agents to execute without credential access.

These protocols share a common architecture: the platform explicitly authorizes agent interactions through defined interfaces, rate limits, and permission scopes. The agent never touches user credentials directly.
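To make that shared architecture concrete, here is a minimal sketch of the sanctioned-access pattern in Python. It is not drawn from UCP, ACP, or Bybit's actual APIs; the class names, scope strings, and rate-limit mechanics are all illustrative assumptions. The point it demonstrates is structural: the platform issues the grant, the grant defines what the agent may do, and the agent never holds user credentials.

```python
import time
from dataclasses import dataclass


@dataclass
class ScopedGrant:
    """A platform-issued grant: the platform, not the user, defines
    what the agent may do and for how long. (Illustrative fields.)"""
    agent_id: str
    scopes: frozenset          # e.g. {"cart:write", "checkout:execute"}
    rate_limit_per_min: int
    expires_at: float          # Unix timestamp

    def allows(self, scope: str) -> bool:
        return scope in self.scopes and time.time() < self.expires_at


class SanctionedAgentClient:
    """An agent client that acts only through platform-granted scopes.
    It never touches user credentials, so every action is traceable
    to an explicit, revocable authorization."""

    def __init__(self, grant: ScopedGrant):
        self.grant = grant
        self.calls_this_min = 0

    def perform(self, scope: str, action):
        # Refuse anything the platform has not explicitly sanctioned.
        if not self.grant.allows(scope):
            raise PermissionError(f"platform has not authorized scope {scope!r}")
        # Respect the platform's rate limit rather than evading it.
        if self.calls_this_min >= self.grant.rate_limit_per_min:
            raise RuntimeError("platform rate limit exceeded")
        self.calls_this_min += 1
        return action()
```

The key design choice: denial is the default. An agent built this way cannot drift into the Comet pattern, because any action outside the granted scopes fails loudly instead of falling back to credential-based access.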

Model 2: Credential-Based Access (Now Legally Perilous)

The alternative is what Perplexity attempted: AI agents using user-provided credentials to access platforms as if they were the user. After the ruling, that model carries potential criminal liability under the CFAA, regardless of user consent.

For Web3, this fork is particularly significant. Decentralized protocols naturally favor the API model — smart contract interactions are inherently permissionless and don't require credential spoofing. But hybrid architectures that bridge centralized and decentralized systems (connecting a user's bank account to a DeFi protocol, for example) will need to navigate this distinction carefully.

China's "Controlled Openness" as a Counterpoint

While US courts define AI agent boundaries through litigation, China is approaching the problem through regulatory architecture. The Chinese mobile ecosystem's fragmentation — where different apps and services don't interoperate or share data — creates a structural challenge for AI agents that must operate across multiple superapps and devices.

ByteDance's Doubao AI agent, for instance, faces potential antitrust restrictions preventing it from directing users to Douyin's e-commerce platform. Alipay's Zhixiabao "AI Life Manager" operates within Ant Group's walled garden. The emerging Chinese model treats AI agents as potential "digital gatekeepers," applying a framework of controlled openness where access is managed to prevent monopolistic behavior.

The contrast is instructive. The US approach, exemplified by the Perplexity ruling, gives platforms the power to block AI agents entirely through terms of service. China's approach potentially constrains platforms' ability to restrict agent access if doing so creates anticompetitive effects. Neither model fully resolves the tension between user autonomy and platform control.

The Ninth Circuit Wild Card

Perplexity appealed the preliminary injunction on March 11, 2026. The Ninth Circuit's review represents a high-stakes inflection point for the entire AI agent industry.

If the appellate court upholds the injunction, the CFAA becomes a powerful tool for any platform to block AI agents, regardless of user consent. Terms of service violations could constitute federal computer fraud. Every AI agent developer would need platform-by-platform authorization agreements — fundamentally shifting power toward incumbent platforms.

If the Ninth Circuit reverses, it signals that user authorization is sufficient and that platforms cannot invoke fraud statutes to block tools that users have explicitly enabled. This would preserve the "personal AI agent" vision where users control how they interact with the web, but it would also limit platforms' ability to protect their ecosystems from unwanted automated access.

Legal scholars note that neither outcome perfectly serves all interests. A user who grants their AI agent access to their Amazon account is exercising a form of consumer autonomy. But Amazon has legitimate interests in controlling how its systems are accessed, preventing scraping, and maintaining the integrity of its marketplace.

What Developers Should Do Now

The ruling creates immediate action items for anyone building AI agents:

Audit your access patterns. If your agent accesses any third-party platform using user credentials, you need to evaluate whether the platform has explicitly authorized that access. Implicit acceptance (the platform hasn't blocked you yet) is not the same as authorization.
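One way to run such an audit is to map the ruling's red lines onto a rough triage of your integrations. The sketch below is a thought model, not legal advice; the field names and risk tiers are my own framing of the factors the court weighed (credential use, platform sanction, prior warnings).

```python
from dataclasses import dataclass


@dataclass
class Integration:
    platform: str
    uses_user_credentials: bool      # agent logs in as the user
    platform_sanctioned: bool        # explicit API / partnership authorization
    received_platform_warning: bool  # cease-and-desist, technical block, etc.


def risk_level(i: Integration) -> str:
    """Triage an integration against the ruling's red lines."""
    if i.uses_user_credentials and i.received_platform_warning:
        # Perplexity-style exposure: continuing after explicit warnings.
        return "critical"
    if i.uses_user_credentials and not i.platform_sanctioned:
        # Credential access without the platform's consent.
        return "high"
    if not i.platform_sanctioned:
        # Tolerated so far, but implicit tolerance is not authorization.
        return "review"
    return "low"  # sanctioned API path
```

Running every third-party integration through a checklist like this makes the "have we actually been authorized?" question explicit instead of assumed.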

Adopt API-first architectures. Design agents to interact through official APIs, MCP (Model Context Protocol) integrations, and platform-sanctioned endpoints. Google's UCP and OpenAI's ACP provide emerging standards.

Document authorization chains. Maintain clear records of which platforms have authorized your agent's access, through what mechanisms, and under what constraints.

Watch the Web3 advantage. Permissionless smart contract interactions don't face the same CFAA exposure as credential-based access to centralized platforms. Protocols built on public blockchains operate with a fundamentally different authorization model — valid transactions are authorized by the protocol's own rules, not by terms of service.
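The contrast with credential-based access is visible in the code itself. Below is a sketch of an agent reading chain state through standard JSON-RPC, the interface public blockchain nodes expose; the endpoint URL is a hypothetical placeholder, and `eth_blockNumber` is the standard Ethereum method for the latest block height. There is no login step: every caller gets the same interface under the protocol's own rules.

```python
import json
from urllib import request


def build_rpc_payload(method: str, params: list, req_id: int = 1) -> bytes:
    """Build a standard JSON-RPC 2.0 request body."""
    return json.dumps({"jsonrpc": "2.0", "id": req_id,
                       "method": method, "params": params}).encode()


def eth_block_number(endpoint: str) -> int:
    """Ask a node for the latest block height. No credentials, no session
    spoofing: authorization is the protocol's rules, identical for every caller."""
    req = request.Request(endpoint,
                          data=build_rpc_payload("eth_blockNumber", []),
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req, timeout=10) as resp:
        result = json.loads(resp.read())["result"]  # hex string, e.g. "0x12ab..."
    return int(result, 16)


# endpoint = "https://your-rpc-endpoint.example"  # hypothetical placeholder
# latest = eth_block_number(endpoint)
```

Note what is absent: no stored password, no user-agent masquerade, no per-platform permission negotiation. That absence is exactly the "fundamentally different authorization model" described above.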

Monitor the Ninth Circuit. The appellate ruling will likely arrive in Q3 or Q4 2026 and will either entrench or reverse the current precedent.

The Bigger Picture: Who Controls the Agent Layer?

The Amazon v. Perplexity case ultimately asks a question that will define the next decade of computing: who controls the layer between users and platforms?

In the pre-AI web, the answer was clear — the browser. Users chose Chrome, Firefox, or Safari, and those browsers faithfully rendered whatever the platform served. The browser was a neutral intermediary.

AI agents fundamentally change this dynamic. An AI agent doesn't just render a platform's interface — it interprets, navigates, compares, and acts within it. It collapses entire shopping workflows into single commands. It can aggregate data across competing platforms in ways those platforms never intended.

The Perplexity ruling sides with platforms in this power struggle. But the technological trajectory — agentic commerce protocols, permissionless blockchain interactions, and user demand for AI-powered autonomy — points toward a future where agents operate as authorized intermediaries rather than unauthorized intruders.

The question is whether that future arrives through partnership (open protocols and API standards) or through legal battles. The 2026 ruling suggests we're getting both.

For developers building AI agents that interact with blockchain infrastructure, permissionless architectures offer a natural advantage. BlockEden.xyz provides RPC and API endpoints for 30+ blockchain networks, enabling agents to interact with on-chain protocols through sanctioned, open interfaces — no credential spoofing required.