Solana's $650B February Volume: 65% Is AI Agents, Not Humans—Are We Building Financial Infrastructure for Bots?

Been analyzing market data for our startup’s infrastructure decisions, and I came across something that honestly made me rethink our entire product strategy.

The Numbers Are Wild

$650 billion in stablecoin transactions. That’s what Solana processed in February 2026—more than double the previous record and leading all blockchains. As someone who’s building in this space, these are the kinds of numbers that make you pay attention.

But here’s the kicker that’s got me up at night: 65% of that volume comes from AI agents, not humans.

Breaking Down the Agent Economy

Since the x402 protocol launched on Solana last summer, we’ve seen:

  • 35M+ transactions through the protocol
  • $10M+ in volume processed via agent-to-agent payments
  • 1.78 million jobs completed by autonomous agents (Virtuals Protocol ecosystem alone)
  • 65% market share for Solana in all agentic payments

The technical reasons make sense: ~400ms block times and $0.00025 transaction fees mean AI agents can autonomously pay for API access, data feeds, compute resources—all the micro-transactions that humans would never bother with.
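The mechanics behind those micropayments can be sketched roughly like this—a simplified mock of an x402-style "402, then pay, then retry" handshake. The header name, payload shape, and `pay_usdc` helper here are illustrative assumptions, not the real wire format:

```python
import json

PRICE_USDC = 0.02  # hypothetical per-query price for a data feed


def server(request_headers):
    """Mock API endpoint: serve data only if a payment proof is attached."""
    if "X-PAYMENT" not in request_headers:
        # No payment: reply 402 with what we accept
        return 402, {"accepts": [{"asset": "USDC", "amount": PRICE_USDC}]}
    return 200, {"price_feed": {"SOL/USD": 148.32}}


def pay_usdc(amount):
    """Stand-in for signing a USDC transfer; a real agent would use a wallet SDK."""
    return json.dumps({"asset": "USDC", "amount": amount, "sig": "0xmocksig"})


def agent_fetch():
    """Agent loop: request, get 402, pay, retry—no human in the loop."""
    status, body = server({})
    if status == 402:
        payment = pay_usdc(body["accepts"][0]["amount"])
        status, body = server({"X-PAYMENT": payment})
    return status, body
```

The point is the loop at the bottom: a human would never click through a payment dialog thousands of times a day, but for an agent the 402 round-trip is just another branch.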

What This Means for Product Strategy

From a business perspective, this is simultaneously exciting and unsettling:

The opportunity: If agents are becoming the primary users, there’s a massive market for agent-focused infrastructure. Monitoring tools, compliance layers, security frameworks, analytics—all the services agents need that don’t exist yet.

The concern: If we’re building products for humans but agents are the actual users, are we solving the wrong problem? Our pitch deck talks about “empowering users”—but what if most users in 2 years aren’t human?

Agent Transaction Patterns

Looking at the data (shoutout to the data engineers who make this visible):

Agent transactions:

  • Consistent 24/7 activity (no weekends, no sleep)
  • Predictable patterns (same operations, same timing)
  • Micro-transactions ($0.50 to $5, but millions of them)
  • Completely detached from “market hours” or human psychology

Human DeFi transactions:

  • Sporadic and reactive (emotion-driven timing)
  • Unpredictable variety
  • Higher value per transaction, lower frequency
  • Strongly correlated with news and social sentiment
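The timing difference alone is mechanical enough to separate the two populations with something as crude as inter-arrival regularity. A toy heuristic (not anyone's actual pipeline—the 0.3 cutoff is made up for illustration):

```python
import statistics


def looks_like_agent(timestamps, cv_threshold=0.3):
    """Classify a transaction stream by timing regularity.

    Agents fire on fixed schedules, so the coefficient of variation
    (stdev / mean) of inter-arrival gaps is low; humans are bursty
    and reactive, so theirs is high. Threshold is illustrative only.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2:
        return False  # not enough data to judge
    cv = statistics.stdev(gaps) / statistics.mean(gaps)
    return cv < cv_threshold
```

A bot rebalancing once a minute produces near-zero variance; a human reacting to a headline produces clusters and dead air.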

The Business Model Question

Here’s what keeps me up at night as a founder:

The x402 protocol (built by Coinbase) essentially enables any API to require instant USDC payment before serving content. It’s elegant. It works. But who’s the customer?

If agents are doing the transactions, but humans are paying for the agent services… we’re building B2B2Bot infrastructure. The end user is technically human, but the actual product user is software.

That changes everything about UX, pricing models, support, compliance—everything.

The Existential Question

Solana Foundation predicts 99% of onchain transactions in 2 years will be agent-driven. NEAR’s co-founder is calling AI agents the “primary users of blockchain.” World (Sam Altman’s project) just launched AgentKit to prove there’s a “real person behind every AI transaction”—which implies that without that proof, we wouldn’t know.

So here’s my question to this community:

Are we building infrastructure for humans or for bots?

And if the answer is “bots serving humans,” how do we make sure the value actually flows back to people instead of just accumulating in the hands of whoever controls the AI agents?

Because from where I’m sitting as a founder trying to build something sustainable, I genuinely don’t know if I should be designing my product for the humans who will pay for it, or the agents who will actually use it.

What do you all think? Has anyone else grappled with this in their product strategy?



This is exactly the kind of conversation we need to be having right now, not after the first major exploit.

The Security Nightmare Nobody’s Talking About

From a security research perspective, AI agents represent a fundamentally new threat model. Let me break down why this keeps me up at night:

1. Agents Are Permanent, High-Value Targets

Unlike human users who might hold crypto temporarily, agents are “always online” with persistent access to private keys. They can’t:

  • Detect social engineering attempts
  • Apply human judgment to “does this look suspicious?”
  • Recognize when they’re being phished or manipulated

An agent that controls $10K in a wallet doesn’t have bad days, doesn’t get lazy about security, but also can’t adapt to novel attack patterns.

2. Attack Amplification at Scale

If I compromise one human’s wallet, I steal what they have. If I compromise one agent template that’s been deployed 10,000 times… you see where this is going.

The math is terrifying: 35M+ x402 transactions means potentially millions of agent instances. Compromise the agent logic at deployment = automated theft loops.
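Back-of-the-envelope, with made-up balances (the point is the multiplier, not the figures):

```python
# Illustrative numbers only—one compromised human vs. one compromised template.
human_wallet = 10_000          # a single phished human loses one wallet
template_deployments = 10_000  # a single backdoored agent template, widely deployed
avg_agent_balance = 500        # hypothetical average balance per agent instance

single_theft = human_wallet
template_theft = template_deployments * avg_agent_balance

blast_radius = template_theft // single_theft  # 500x the damage from one compromise
```

Same attacker effort, three orders of magnitude more loot—that asymmetry is what makes agent templates the juiciest target.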

3. World AgentKit: Good Start, But…

The “proof of human backing” via World ID is elegant cryptography and addresses Sybil resistance. But it doesn’t prevent:

  • Compromised humans deploying malicious agents
  • Legitimate agents with exploitable bugs
  • Agent logic manipulation after deployment
  • Key management vulnerabilities in agent wallet infrastructure

⚠️ Critical question: Who’s liable when an AI agent gets exploited? The human deployer? The AI company (OpenAI/Anthropic)? The protocol?

4. Smart Contract Interaction Risks

Agents will interact with DeFi protocols without human judgment. Imagine:

  • Agent calls malicious contract thinking it’s legitimate
  • Bug in agent-facing protocol = automated exploit loops (agent keeps retrying)
  • MEV bots targeting predictable agent behavior
  • Flash loan attacks specifically designed to trick agent logic

What We Need (Urgently)

Security frameworks specific to agent transactions:

  1. Spending caps - per transaction AND per time period
  2. Human approval for transactions above threshold (e.g., $100+)
  3. Anomaly detection - if agent behavior deviates from baseline, pause + notify human
  4. Kill switches - humans can emergency-stop agent activity
  5. Formal verification for all agent-facing smart contracts
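A minimal sketch of items 1, 2, and 4 as a wallet wrapper—an illustrative design, not a production custody layer; every threshold here is an example:

```python
import time


class GuardedWallet:
    """Wraps raw transaction signing with spending caps, a human-approval
    threshold, and a kill switch. Thresholds are illustrative."""

    def __init__(self, per_tx_cap=200.0, hourly_cap=500.0, approval_threshold=100.0):
        self.per_tx_cap = per_tx_cap              # hard cap, no exceptions
        self.hourly_cap = hourly_cap              # rolling 1-hour spend limit
        self.approval_threshold = approval_threshold  # above this, a human must sign off
        self.killed = False
        self.recent = []  # (timestamp, amount) pairs within the last hour

    def kill(self):
        """Emergency stop: human halts all agent activity."""
        self.killed = True

    def authorize(self, amount, human_approved=False, now=None):
        now = now if now is not None else time.time()
        if self.killed:
            return False, "kill switch engaged"
        if amount > self.per_tx_cap:
            return False, "per-transaction cap exceeded"
        if amount > self.approval_threshold and not human_approved:
            return False, "needs human approval"
        # Prune entries older than an hour, then check the rolling cap
        self.recent = [(t, a) for t, a in self.recent if now - t < 3600]
        if sum(a for _, a in self.recent) + amount > self.hourly_cap:
            return False, "hourly cap exceeded"
        self.recent.append((now, amount))
        return True, "ok"
```

Anomaly detection (item 3) would hang off the same `authorize` choke point: every spend request flows through one place where baselines can be checked and a human can be paged.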

I’d love to collaborate with anyone building in this space. We have a narrow window to get agent security right before the first $50M+ agent exploit makes headlines.

The infrastructure is elegant. The economics are compelling. But we’re building the most attractive attack surface in crypto history.

Are we ready for that?



Speaking as someone who runs bots in production that use x402 right now… yeah, we need to talk about this.

Living the Agent Economy Already

My yield optimization strategies have been using x402 to pay for:

  • Real-time oracle data feeds ($0.02 per query, thousands per day)
  • Gas price predictions from specialized APIs
  • Cross-chain arbitrage opportunity alerts
  • MEV protection services

Total cost last month: $2,847 in micropayments through x402
Value generated: $43,200 in optimized yields for our LPs
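For scale, those figures back out to a few thousand queries a day—rough arithmetic, assuming the $0.02 oracle price applied to the whole spend, which it didn't exactly:

```python
# Rough sanity check on the month's micropayment economics.
monthly_spend = 2_847.00     # total x402 spend last month
price_per_query = 0.02       # oracle feed price; other services vary
value_generated = 43_200.00  # optimized yield delivered to LPs

queries_per_month = monthly_spend / price_per_query  # 142,350 queries
queries_per_day = queries_per_month / 30             # ~4,745 — "thousands per day"
roi = value_generated / monthly_spend                # ~15x return on micropayment spend
```

At a 15x return, paying per query is an easy call—which is exactly why this volume will keep compounding.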

The economics work. The tech works. But Steve’s question hits different when you’re actually deploying this at scale.

The Two-Tier Economy Is Already Here

Here’s what I’m seeing in practice:

Agent-accessible opportunities:

  • Arbitrage windows lasting 200-400ms (humans can’t react)
  • Gas optimization strategies requiring real-time mempool analysis
  • Cross-chain yield farming with 15-20 transaction sequences
  • Impermanent loss hedging with delta-neutral positions

Human-accessible opportunities:

  • Buy and hold (lol)
  • Manual yield farming (getting outcompeted by bots)
  • Governance participation (for now)

The uncomfortable truth: In DeFi, agents are already better liquidity providers than humans. They don’t sleep, don’t panic sell, don’t forget to rebalance.

The Product Design Question

This connects to Steve’s point about product strategy:

Should we design protocols for agent users or human users?

Our protocol currently has two interfaces:

  1. Human-friendly UI (beautiful, intuitive, slow)
  2. Agent-friendly API (ugly documentation, hyper-efficient, fast)

Guess which one generates 87% of our volume?

We’re not building “for bots” deliberately—we’re building for efficiency, and bots are just… more efficient. The economics inevitably favor automation.

The Value Capture Problem

But here’s the part that keeps me up at night:

If agents are providing the liquidity, executing the strategies, and capturing the alpha… who benefits?

  • The human who deployed the agent? (If they still control it)
  • The AI company that built the agent framework? (OpenAI, Anthropic, etc.)
  • The protocol that facilitates agent transactions? (x402, Solana, etc.)

In our case, humans (our LPs) benefit because agents optimize their yields. But I can easily imagine a future where agents are autonomously accumulating wealth and humans are just… renting access to agent services.

Are We Okay With This?

Honestly asking: Is this the future we want?

Because from where I sit, the infrastructure works beautifully. The economics are compelling. But we might be building a financial system where humans are increasingly peripheral.

I don’t have answers. But I think we need to be asking these questions now, not after agents control the majority of DeFi liquidity.

Thoughts?



As someone who advises crypto companies on compliance, this thread is giving me flashbacks to every regulatory crisis we’ve faced. We’re about to repeat the same mistakes.

The Legal Questions Nobody Has Answers To

Let me be blunt: Current legal frameworks don’t account for autonomous financial agents. When I advise clients on this, I’m operating in a gray zone that makes 2017-era ICO compliance look straightforward.

Who’s Liable When Agents Transact Autonomously?

Three possibilities, all problematic:

  1. Human deployer is liable - This is the current default (agent is a “tool” like software or a vehicle). But what happens when the agent acts autonomously without human instruction? Can deployers argue “the AI did it, not me”?

  2. AI company is liable (OpenAI, Anthropic, etc.) - If the agent framework has a bug or vulnerability that enables illegal activity, is the AI company responsible? They’ll argue “we just make tools,” but courts may not accept that.

  3. Protocol is liable (Solana, x402, etc.) - Infrastructure providers will argue they’re neutral platforms, but regulators increasingly reject that defense.

Real-world scenario: Agent executes transaction with OFAC-sanctioned address. Who violated sanctions law?

The AML/KYC Nightmare

“Know Your Customer” regulations assume customers are humans. But:

  • How do you KYC an AI agent?
  • World ID provides “proof of human backing” - is that sufficient for AML compliance?
  • What if one human deploys 1,000 agents? Is that structuring? Evasion?

⚖️ The compliance gap: Agents can transact 24/7 across jurisdictions, but AML compliance is built for humans with identities, locations, and accountability.

Broker-Dealer Registration?

If agents autonomously execute trades for profit, are they unregistered broker-dealers under securities law?

FinCEN and SEC haven’t issued guidance, but:

  • Agents that route orders = potential broker function
  • Agents that provide liquidity = potential dealer function
  • Agents that charge fees = potential compensation triggering registration

No one knows. And “we’ll wait for enforcement” is not a strategy.

Cross-Border Jurisdiction

Agent deployed in US, hosted in Singapore, serves EU users, transacts on decentralized protocol. Which law applies?

This isn’t theoretical - Diana’s bots are doing exactly this right now.

Why World AgentKit Matters (But Isn’t Enough)

World’s approach - “proof of human backing” via World ID + zero-knowledge proofs - is genuinely innovative and addresses Sybil resistance.

But from a compliance perspective:

  • ✅ Proves unique human exists
  • ✅ Creates accountability chain
  • ❌ Doesn’t prevent that human from being sanctioned
  • ❌ Doesn’t prevent malicious humans deploying armies of agents
  • ❌ Doesn’t address broker-dealer registration
  • ❌ Doesn’t solve cross-border jurisdiction

The Optimistic Case

Here’s the surprising take: This could force regulatory clarity faster than expected.

Regulators have been slow on crypto because use cases were abstract. But agents transacting autonomously at scale? That’s concrete. Measurable. Unavoidable.

If we’re proactive (industry standards, compliance frameworks, engage regulators early), we might get sensible rules.

If we’re reactive (wait for enforcement, fight in courts), we’ll get heavy-handed restrictions.

What We Should Do Now

  1. Industry standards for agent compliance - minimum viable requirements everyone implements
  2. Engage regulators proactively - show them how x402 works, propose frameworks
  3. Build compliance into agent architecture - not bolt-on, but fundamental design
  4. Create audit trails - every agent decision logged for regulatory review
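Item 4 can start as simply as a hash-chained append-only log, so a regulator (or your own counsel) can verify no entry was altered after the fact. A sketch, not a compliance product:

```python
import hashlib
import json


class AuditTrail:
    """Append-only log of agent decisions; each entry commits to the
    previous one, so post-hoc tampering breaks the chain."""

    def __init__(self):
        self.entries = []
        self.head = "genesis"

    def record(self, decision):
        """Append a decision (any JSON-serializable dict) to the chain."""
        entry = {"prev": self.head, "decision": decision}
        self.head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self.head
        self.entries.append(entry)

    def verify(self):
        """Recompute the whole chain; False if any entry was modified."""
        prev = "genesis"
        for e in self.entries:
            body = {"prev": prev, "decision": e["decision"]}
            h = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if e["prev"] != prev or e["hash"] != h:
                return False
            prev = h
        return True
```

In practice you would anchor the head hash somewhere external (even onchain) periodically, so the operator can't quietly regenerate the whole chain either.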

Diana mentioned her bots generate $43K/month. I guarantee you: if those bots trigger a suspicious activity report (SAR), you’ll spend $100K+ in legal fees explaining it.

Better to build compliance in from day one.

The Uncomfortable Truth

Steve asked: “Are we building for humans or bots?”

From a legal perspective: It doesn’t matter. If agents transact financially, they’re subject to financial regulations. And those regulations assume human accountability.

We need to preserve that accountability chain, even as agents become more autonomous. Otherwise, we’re building financial infrastructure that regulators will simply ban.

Anyone interested in forming a working group on agent compliance standards? I think we need industry-led frameworks before regulators impose their own.

