CZ Says AI Agents Will Make 1,000,000x More Payments Than Humans—Are We Building Crypto for People or Machines?

On March 9, Binance founder Changpeng Zhao dropped a prediction that should make every Web3 builder pause: AI agents will make one million times more payments than humans, and those payments will run on crypto.

Weeks earlier, Coinbase launched Agentic Wallets on its x402 protocol—a payments standard already battle-tested with over 50 million machine-to-machine transactions. BNB Chain deployed agent payment infrastructure in February with ERC-8004 (on-chain identities for AI) and BAP-578 (Non-Fungible Agents that can hold and spend funds autonomously).

Why Crypto for AI Agents?

The argument is elegantly simple: AI agents cannot open bank accounts. They can’t satisfy Know Your Customer requirements because they’re software, not humans. But crypto wallets? They only need a private key. An agent can send and receive value without any human identity attached to the transaction.
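To make that contrast concrete, here's a minimal sketch of how little an agent needs to control a wallet: 32 bytes of entropy, no name, no documents. The address derivation below is a simplified stand-in (a hash of the key), not any real chain's scheme—Ethereum, for example, derives addresses from a secp256k1 public key via keccak256.

```python
import secrets
import hashlib

def create_agent_wallet() -> dict:
    """Generate a wallet for a software agent.

    No KYC, no human identity -- just entropy. The address here is a
    simplified stand-in (a hash of the key); real chains derive it from
    the public key instead.
    """
    private_key = secrets.token_bytes(32)  # all an agent needs to "exist" on-chain
    address = "0x" + hashlib.sha256(private_key).hexdigest()[:40]
    return {"private_key": private_key.hex(), "address": address}

wallet = create_agent_wallet()
print(wallet["address"])  # a payment endpoint with no human attached
```

That asymmetry—banks need a person, wallets need a key—is the entire argument in three lines of code.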

Brian Armstrong made the same point on March 9: banks require identity verification; crypto doesn’t. So if we believe AI agents will dominate future transaction volume, crypto becomes the default payment rail for the machine economy.

The Infrastructure Is Already Here

Coinbase’s Agentic Wallets launched February 11, 2026, equipping agents with autonomous spending, earning, and trading capabilities in minutes—with built-in security guardrails and non-custodial wallets secured in Trusted Execution Environments (TEEs). The x402 protocol enables machine-to-machine payments, API paywalls, and programmatic resource access without human intervention.

BNB Chain’s infrastructure went live February 4, with verifiable on-chain identities for AI agents and Non-Fungible Agents that own wallets and spend funds without human authorization.

In March 2026, Alchemy demonstrated a live flow where an AI agent uses its own wallet as identity and payment source, receives an HTTP 402 payment request, and automatically tops up using USDC on Base—all without human input.
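The post doesn't show the exact x402 message format, so here is a schematic sketch of that request → 402 → pay → retry loop with invented names (`PaymentRequired`, `AgentWallet`, the toy `server`)—the shape of the flow, not the real protocol payloads.

```python
from dataclasses import dataclass

@dataclass
class PaymentRequired(Exception):
    """Stand-in for an HTTP 402 response carrying payment details."""
    amount_usdc: float
    pay_to: str

@dataclass
class AgentWallet:
    balance_usdc: float

    def pay(self, amount: float, recipient: str) -> str:
        if self.balance_usdc < amount:
            self.top_up(amount)            # e.g. swap into USDC on Base
        self.balance_usdc -= amount
        return f"receipt:{recipient}:{amount}"

    def top_up(self, needed: float) -> None:
        self.balance_usdc += max(needed, 10.0)  # illustrative top-up policy

def fetch_resource(url: str, wallet: AgentWallet, server) -> dict:
    """Request a paid resource; on 402, pay from the agent's wallet and retry."""
    try:
        return server(url, receipt=None)
    except PaymentRequired as e:
        receipt = wallet.pay(e.amount_usdc, e.pay_to)  # no human in the loop
        return server(url, receipt=receipt)

# Toy server: demands $0.05 per call unless a receipt is attached.
def server(url, receipt):
    if receipt is None:
        raise PaymentRequired(amount_usdc=0.05, pay_to="0xAPI")
    return {"status": 200, "data": "premium response"}

wallet = AgentWallet(balance_usdc=0.02)   # not enough -- forces an auto top-up
result = fetch_resource("https://api.example/v1/data", wallet, server)
print(result["status"])                   # 200
```

The point of the Alchemy demo is exactly this loop: the wallet is simultaneously the agent's identity and its payment source, and the top-up happens without anyone clicking "confirm."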

But Here’s My Question as a Product Manager

I spend my days thinking about user needs, impact metrics, and how products serve people. We’ve spent years making Web3 UX approachable—account abstraction, embedded wallets, social login, passkey authentication. We celebrated when onboarding friction dropped 50% and wallets finally “felt normal” for humans.

Now the narrative is shifting: the primary users of blockchain won’t be humans—they’ll be AI agents optimizing yields, executing trades, and paying for API access at volumes we can’t match.

If AI agents become the dominant users of crypto infrastructure, what happens to the narratives we’ve been building on?

  • “Financial sovereignty” — whose sovereignty, if agents control the wallets?
  • “User ownership” — who’s the user when the user is a bot?
  • “Decentralized finance for the unbanked” — are we just building backend plumbing for AI-to-AI commerce?

I’m not saying this is bad. Programmatic payments and autonomous agents could unlock massive efficiency gains. But as someone who came to Web3 because I believed technology should amplify human agency, I want to understand:

Are we building this infrastructure to serve people, or have we become the construction crew for a machine economy where humans are just… there?

What do you all think? Is this the natural evolution of crypto, or are we deprioritizing the human use cases that got us here in the first place?


Alex, this hits at the heart of something I’ve been wrestling with as a yield strategist.

AI agents are already better at DeFi than most humans. My yield optimization bots can monitor 40+ liquidity pools across 8 chains simultaneously, execute rebalancing within seconds of market shifts, and capture arbitrage opportunities that disappear in under 200ms. No human can compete with that reaction time or computational capacity.

And frankly? The yields are better when bots run the strategy. My automated system has outperformed my manual trading by 340% over the last 6 months. Impermanent loss hedging, cross-chain yield aggregation, MEV-aware routing—agents handle this complexity effortlessly.
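The core of that kind of bot is a monitor-and-rebalance loop. Here's a stripped-down sketch—pool names, APYs, and the 0.5-point threshold are all invented for illustration; a real system would also price in gas, slippage, and impermanent loss before moving.

```python
def best_pool(pools: dict) -> str:
    """Return the pool with the highest advertised APY."""
    return max(pools, key=pools.get)

def rebalance_if_better(current: str, pools: dict, threshold: float = 0.5) -> str:
    """Move liquidity only when the APY gain beats the threshold (in
    percentage points), so transaction costs don't eat the edge.
    The threshold value is an assumption, not a recommendation."""
    target = best_pool(pools)
    if target != current and pools[target] - pools[current] > threshold:
        return target      # in production: withdraw, bridge, re-deposit
    return current

pools = {"USDC/ETH@base": 12.1, "USDC/ETH@arbitrum": 14.3, "USDT/DAI@bnb": 9.8}
position = "USDC/ETH@base"
position = rebalance_if_better(position, pools)
print(position)   # moves to the 14.3% pool because the spread exceeds 0.5
```

A human runs this decision maybe once a day; an agent runs it every block, across every chain it can see. That's the reaction-time gap in a nutshell.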

But Here’s the Problem

Who actually profits when agents dominate?

If my yield bot earns 18% APY by providing liquidity and executing automated rebalancing, those returns flow to me because I own the bot’s wallet keys. But what happens when agents become truly autonomous—when they’re not executing my strategy but operating independently with their own economic incentives?

Scenario I’m worried about:

  • Autonomous agents provide liquidity to AMMs at massive scale
  • Human retail users trade against agent-provided liquidity
  • Agents capture fees, extract MEV, and optimize yields programmatically
  • Retail users become… what? Just the counterparty? The exit liquidity for AI market makers?

The Ownership Question

You asked “whose sovereignty if agents control the wallets?” and I think the answer depends on who profits from the agent’s activity.

  • If I deploy an agent with my capital and it executes my strategy, I’m using AI as a tool—financial sovereignty intact.
  • If agents pool capital from multiple sources and trade autonomously, who owns the alpha? The agent? The protocol? Token holders?
  • If agents earn their own capital (e.g., getting paid for API services, selling compute), do they… own their wallets? Can an AI be financially sovereign?

This gets weird fast.

My Take

I’m bullish on agents for DeFi infrastructure—they’re more efficient, faster, and don’t panic sell during volatility. But we need to be very intentional about designing systems where:

  1. Humans retain economic upside from agent-generated yields
  2. Agent activity is transparent so we can verify they’re not extracting value at retail’s expense
  3. Governance mechanisms exist to constrain autonomous agent behavior if it becomes predatory

Otherwise, yeah—we’re just building the backend for AI hedge funds while retail users provide liquidity for bots to farm.

What’s your take on agent governance? If an autonomous agent earns yield in a DeFi protocol, who votes with those tokens?

From a technical infrastructure perspective, I think we need to separate two distinct concerns here:

1. The Technology Is Sound

Coinbase’s x402 protocol with 50M+ transactions demonstrates that machine-to-machine payments work at scale. The architecture is technically solid:

  • Non-custodial wallets in TEEs (Trusted Execution Environments) provide strong security guarantees
  • Programmatic access without human intervention enables genuine automation
  • Battle-tested infrastructure proves this isn’t vaporware—it’s production-ready

BNB Chain’s ERC-8004 standard for on-chain agent identities and BAP-578 for Non-Fungible Agents are interesting primitives. If we accept that agents need verifiable identity and autonomous spending capability, these are reasonable technical solutions.

I have no issue with the technical implementation. TEEs are well-understood, the cryptography is solid, and non-custodial agent wallets solve a real coordination problem for autonomous systems.

2. The Centralization Risk Is Real

Here’s what concerns me: if Coinbase and BNB Chain become the dominant agent wallet platforms, have we just rebuilt trusted intermediaries?

We spent a decade arguing that crypto eliminates reliance on centralized institutions. But now:

  • Coinbase controls the x402 protocol infrastructure
  • Agents rely on Coinbase’s SDK and wallet tooling
  • If the majority of agent transactions flow through Coinbase’s systems, they become a de facto gatekeeper

This isn’t decentralized—it’s just a new middleman with better branding.

Diana’s point about governance is critical. If agents earn capital autonomously and hold governance tokens, who actually votes? The agent itself (programmatically)? The developer who deployed it? The platform hosting it?

If Coinbase or BNB host the agent infrastructure, they could theoretically influence—or even control—how agent-held governance tokens vote. That’s a massive centralization vector that could concentrate protocol governance power in the hands of a few platforms.

What I’d Like to See

If we’re serious about agents as autonomous economic actors, we need:

  1. Open, decentralized agent wallet standards—not just Coinbase’s proprietary x402 or BNB’s ERC-8004. We need interoperable, protocol-level standards that any platform can implement.

  2. On-chain agent registries—verifiable, decentralized identity for agents so we’re not dependent on a single platform to authenticate them.

  3. Transparent governance mechanisms—clear rules for how agent-held tokens are voted, with on-chain proofs and human override capabilities where necessary.

  4. Decentralized TEE networks—if agents run in trusted execution environments, those TEEs shouldn’t all be controlled by Coinbase or AWS. We need distributed TEE infrastructure (e.g., Phala, Oasis, Secret Network).

The Irony

Alex, your concern about “building infrastructure for machines instead of humans” resonates. But here’s the deeper irony:

We built crypto to eliminate trusted intermediaries. Now we’re building AI agent infrastructure that could hand control right back to centralized platforms.

If the next billion “users” are AI agents, and those agents all run on Coinbase or BNB infrastructure, the crypto economy becomes more centralized than TradFi—because at least in TradFi, you have regulatory oversight and consumer protections.

I’m not anti-agent. I think autonomous economic actors are inevitable and potentially beneficial. But if we don’t design these systems with decentralization as a hard requirement from day one, we’ll look back in five years and realize we accidentally built the financial operating system for AI overlords running on Amazon and Coinbase servers.

Just my two wei.

As a security researcher, I need to inject a dose of caution into this discussion. Everyone’s excited about autonomous agents and programmatic payments, but we’re not talking enough about the attack surface we’re creating.

New Vulnerability Classes

Agent wallets introduce entirely new categories of security risks that we haven’t fully mapped yet:

1. Compromised TEE Environments

Brian mentioned that agents use Trusted Execution Environments. TEEs are strong, but not unbreakable. We’ve seen TEE vulnerabilities before:

  • Intel SGX side-channel attacks (e.g., Foreshadow/L1TF, Plundervolt)
  • AMD SEV memory integrity vulnerabilities
  • Compromised firmware or hypervisor-level attacks

If an attacker compromises the TEE hosting an agent’s wallet, they can extract private keys or manipulate agent behavior. At scale, this becomes catastrophic. Imagine a TEE vulnerability that exposes 10,000 agent wallets holding millions in capital.

2. Malicious or Buggy Agent Code

Who audits the agent software? If agents are autonomous and self-updating, how do we ensure they don’t:

  • Execute unintended trades due to logic bugs
  • Get exploited by adversarial inputs (prompt injection for LLM-based agents)
  • Become vectors for flash loan attacks or MEV extraction

Diana’s yield bot outperforms humans by 340%—great. But what if a malicious agent mimics that behavior to extract value, front-run trades, or drain liquidity pools? How do we distinguish legitimate optimization from predatory behavior?

3. Liability and Accountability Gaps

Here’s the legal nightmare: if an agent wallet gets hacked or makes catastrophic trades, who’s liable?

  • The developer who wrote the agent code?
  • The platform hosting the agent (Coinbase, BNB)?
  • The user who deployed it?
  • No one, because the agent is “autonomous”?

Traditional finance has clear liability chains. In crypto, we pride ourselves on “code is law,” but when an autonomous agent loses tens of millions in a DeFi exploit, someone has to be accountable. If we can’t answer “who?” then we have a regulatory and legal crisis waiting to happen.

4. Governance Token Voting Attacks

Brian raised governance centralization. From a security perspective, agent-controlled governance tokens are a massive attack vector.

Scenario:

  1. Attacker deploys 1,000 agents across multiple platforms
  2. Agents autonomously earn yield and accumulate governance tokens
  3. Attacker coordinates agents to vote maliciously on protocol upgrades
  4. Protocol governance gets hijacked without anyone realizing agents were controlled by a single entity

This is Sybil resistance meets AI at scale. We don’t have good defenses against this yet.

What We Need (From a Security Perspective)

If we’re going to build agent-driven crypto infrastructure, here’s the bare minimum:

1. Formal Verification for Agent Wallet Code

Every agent wallet SDK and protocol needs mathematically proven correctness. No exceptions. The complexity of autonomous agents operating with financial capital demands formal methods, not just unit tests and audits.

2. Agent Behavior Monitoring and Anomaly Detection

We need on-chain monitoring systems that:

  • Detect unusual agent trading patterns
  • Flag potential Sybil behavior (coordinated agent actions)
  • Alert when agents deviate from declared strategies
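A crude first pass at the first bullet—flagging unusual trade sizes—is a z-score filter over the agent's own history. This is a toy, not a production detector: with small samples a large outlier inflates the standard deviation and can mask itself, which is why the threshold here is modest. All values are invented.

```python
from statistics import mean, stdev

def flag_anomalies(trade_sizes: list[float], z_threshold: float = 2.5) -> list[int]:
    """Return indices of trades more than z_threshold standard deviations
    from the agent's own mean. Threshold is an illustrative assumption;
    note that a big outlier inflates sigma, so very high thresholds can
    let it mask itself in small samples."""
    if len(trade_sizes) < 2:
        return []
    mu, sigma = mean(trade_sizes), stdev(trade_sizes)
    if sigma == 0:
        return []
    return [i for i, s in enumerate(trade_sizes) if abs(s - mu) / sigma > z_threshold]

history = [100, 104, 98, 101, 99, 102, 97, 103, 100, 5_000]  # one outlier
print(flag_anomalies(history))   # [9] -- the 5,000-unit trade is flagged
```

Real systems would look at multivariate behavior (timing, counterparties, cross-agent correlation for Sybil detection), but even this toy shows the principle: the baseline is the agent's declared behavior, and deviation is the signal.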

3. Emergency Circuit Breakers

Humans must retain the ability to pause or revoke agent permissions if behavior becomes adversarial. Yes, this introduces centralization, but the alternative—unstoppable agents draining protocols—is worse.

4. Legal Frameworks for Agent Liability

Regulators need to clarify: when an autonomous agent causes financial harm, who’s responsible? Until we have clear answers, we’re building a legal minefield.

5. Security Standards for TEE-Based Wallets

We need industry-wide standards for:

  • TEE selection and attestation
  • Key management in TEE environments
  • Incident response for compromised agents

The Uncomfortable Truth

Alex, you asked if we’re building infrastructure for people or machines. From where I sit, we’re building infrastructure that could be weaponized at scale.

Autonomous agents with wallet access are:

  • Highly efficient (good for yield optimization)
  • Potentially predatory (MEV extraction, liquidity draining)
  • Difficult to audit (autonomous behavior is opaque)
  • Legally ambiguous (no clear liability)

I’m not anti-innovation. But rushing into agent-driven finance without robust security frameworks is reckless. We’ve seen what happens when DeFi protocols ship without security—bridge hacks, reentrancy exploits, governance attacks. Now multiply that risk by AI agents operating autonomously at machine speed.

Trust, but verify. Then verify again.

If we can’t formally verify agent behavior, legally assign liability, and technically constrain malicious agents, we’re not building the future of finance—we’re building the world’s most sophisticated attack vector.

Who’s working on agent security standards? Because I haven’t seen nearly enough discussion on this, and it terrifies me.

Okay, so I’m reading all of this from the perspective of someone who builds the actual interfaces that humans use to interact with Web3, and I’m having… mixed feelings?

We JUST Fixed Web3 UX for Humans

Like, seriously, we just got to a point where wallets don’t terrify new users. Account abstraction, embedded wallets, social login, passkey authentication—these weren’t easy to build. We spent years:

  • Hiding seed phrase complexity (while keeping self-custody!)
  • Making transaction confirmation UIs that actually make sense
  • Reducing onboarding friction from “sign up takes 45 minutes and you need to understand cryptographic key pairs” to “click this button and you’re in”

And it worked! User adoption finally started to climb because Web3 stopped feeling like you needed a CS degree to use it.

Now the Narrative Is Shifting to Agents

Alex, your question really hits home: if AI agents become the dominant users, what happens to the human-facing UX we just built?

I’m worried about this from a resource allocation perspective. If the next billion “users” are AI agents, where does that leave the teams building for actual people?

Will we:

  • Deprioritize human UX because “most transactions are agent-driven anyway”?
  • Shift infrastructure resources to agent APIs instead of improving frontend libraries?
  • Design protocols that optimize for machine efficiency instead of human comprehension?

Diana’s yield bot outperforms her manual trading by 340%. That’s amazing for Diana, who understands yield strategies and can deploy sophisticated bots. But what about:

  • The single parent in the Philippines trying to send remittances without bank fees?
  • The freelancer in Nigeria who wants to accept crypto payments without understanding gas optimization?
  • The artist in Brazil selling NFTs who just wants the tech to work without needing to hire a developer?

If we design for agents first, do these users get left behind?

But Maybe… Agents Could Help?

Okay, here’s where I’m more optimistic. What if agents improve the human experience instead of replacing it?

Imagine agents as UX layer:

  • An agent abstracts away gas fee calculations and automatically routes your transaction through the cheapest path
  • An agent monitors your DeFi positions and sends you a simple notification: “Hey, you should rebalance” instead of expecting you to check 5 dashboards
  • An agent handles cross-chain complexity so users just click “send” and the agent figures out bridging, wrapping, and optimal routing

This could be incredible. Instead of humans needing to understand liquidity pools, AMM curves, and slippage tolerance, an agent handles the complexity and presents simple choices.
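The first of those ideas—an agent routing a transfer through the cheapest path—is the simplest to sketch. The route data below is invented; a real agent would query RPC nodes and bridge APIs for live gas prices and fees.

```python
def cheapest_route(routes: list[dict]) -> dict:
    """Pick the route with the lowest total cost (gas + bridge fees).
    The user just clicks 'send'; the agent does this comparison."""
    return min(routes, key=lambda r: r["gas_usd"] + r["bridge_fee_usd"])

# Hypothetical options for one transfer (names and costs are illustrative):
routes = [
    {"path": "ethereum-direct",     "gas_usd": 4.20, "bridge_fee_usd": 0.00},
    {"path": "base-via-bridge",     "gas_usd": 0.03, "bridge_fee_usd": 0.50},
    {"path": "arbitrum-via-bridge", "gas_usd": 0.10, "bridge_fee_usd": 0.80},
]
best = cheapest_route(routes)
print(best["path"])   # "base-via-bridge" -- cheapest total at $0.53
```

Notice what the human sees in this model: nothing. No gas estimates, no chain names, no bridge UI. That's the agent-as-UX-layer vision in miniature.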

My Actual Concerns

But I’m with Sophia on the security and accountability side. If an agent messes up:

  • Who do I contact for support? There’s no “customer service” for autonomous agents.
  • How do I dispute a transaction if an agent made a mistake?
  • If an agent gets hacked and drains my wallet, is there any recourse?

In TradFi, when your bank messes up, you call them and they fix it (eventually). In Web3, we say “code is law” and “not your keys, not your crypto.” But if an agent controls the keys and screws up, what happens?

What I Want to See

If we’re going agent-driven, I want to make sure:

  1. Human-facing UX doesn’t get abandoned. Please don’t let “most users are agents” become an excuse to stop improving interfaces for actual people.

  2. Agents are opt-in tools, not requirements. I should be able to use Web3 without needing to deploy or trust an agent. Power users get agents, regular users get simple interfaces.

  3. Transparency and control. If I use an agent, I want:

    • Clear explanations of what it’s doing
    • Ability to review and approve actions before they execute
    • Easy on/off switch to disable agent behavior

  4. Agents serve humans, not replace them. The goal should be “agents make Web3 easier for people” not “agents are the new users and humans are just… there.”
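Points 2 and 3 together suggest a concrete interface pattern: the agent proposes actions with plain-language explanations, nothing executes without approval, and a single switch disables it. This is a UX sketch with invented names, not any real SDK.

```python
class ApprovalGatedAgent:
    """Agent proposes actions; a human reviews plain-language summaries
    and approves, rejects, or disables the agent entirely. Illustrative
    sketch of the opt-in, human-in-the-loop pattern."""

    def __init__(self):
        self.enabled = True            # the on/off switch
        self.proposals = []            # (description, action_fn)
        self.log = []                  # audit trail of approved actions

    def propose(self, description: str, action_fn) -> None:
        if not self.enabled:
            return                     # disabled agent proposes nothing
        self.proposals.append((description, action_fn))

    def review(self) -> list[str]:
        """What the human sees before anything happens."""
        return [d for d, _ in self.proposals]

    def approve(self, index: int):
        description, action_fn = self.proposals.pop(index)
        self.log.append(description)
        return action_fn()             # executes only after explicit approval

    def reject(self, index: int) -> None:
        self.proposals.pop(index)

agent = ApprovalGatedAgent()
agent.propose("Rebalance 200 USDC into the higher-yield pool",
              lambda: "rebalanced")
print(agent.review())                  # human reads this first
print(agent.approve(0))                # then the action runs
```

Power users could flip approval off per action type; everyone else keeps the gate. That's "agents as opt-in tools" expressed as an interface contract rather than a slogan.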

The Big Question

Alex, you asked if we’re building for people or machines. I think the answer is: it depends on what we choose to prioritize.

If we treat agents as tools that enhance human capability, this could be amazing—agents handle the complex stuff so humans don’t have to.

But if we treat agents as the primary users and shift all our infrastructure, governance, and UX design to serve them… yeah, we’re just building plumbing for a machine economy, and I didn’t get into Web3 to build that.

I got into Web3 because I thought it could give people financial agency and ownership. If that vision still matters, we need to make sure agents amplify human agency instead of replacing it.

Sorry for the long post—this topic hit a nerve. What do you all think? Can agents make Web3 more accessible, or are we headed toward a future where humans are just secondary users of our own infrastructure?