
67 posts tagged with "AI agents"

AI agents and autonomous systems


MCP + A2A + x402: The Three-Layer Agent Commerce Stack Web3 Developers Can't Ignore

· 12 min read
Dora Noda
Software Engineer

An AI agent wakes up at 3:17 AM, queries a DeFi analytics API, delegates a risk scoring subtask to a specialized partner agent, pays both providers in USDC, and settles the whole workflow on-chain before the coffee finishes brewing. No human clicked anything. No subscription got charged. No API key got emailed around.

That scenario stopped being theoretical in April 2026.

Three standards — Google's Agent-to-Agent (A2A) protocol, Anthropic's Model Context Protocol (MCP), and the x402 payment protocol — converged into production at the same time, forming what developers are now calling the three-layer agent commerce stack. For Web3 engineers, the grace period for ignoring any of them closed quietly last month: agents that don't speak A2A, MCP, and x402 simultaneously are already being routed around by their more interoperable peers.

This is not another "standard wars" drama where one protocol crushes the others. It's the opposite problem. Three complementary standards each solve a different layer of the same blockchain interaction, and none of them is going away. Here's what that actually means for developers building on Web3 in 2026.

Pi Network's 18M KYC Army: How the Sleeper Identity Layer Just Redefined Web3's Most Important Metric

· 14 min read
Dora Noda
Software Engineer

The crypto industry has spent a decade celebrating wallet counts as if they were users. In April 2026, a network most serious analysts wrote off three years ago quietly rewrote the scoreboard: Pi Network confirmed 18 million KYC-verified human beings and 526 million peer validation tasks completed — numbers that, depending on how you squint, either expose Web3's biggest measurement lie or describe the most undervalued identity layer on the planet. The same week, a single clustered group of 5,800 wallets farmed roughly 80% of an airdrop on BNB Chain. The juxtaposition was not a coincidence.

Sybil-resistance, long treated as a niche concern of airdrop farmers and DAO governance nerds, has suddenly become the single most consequential design problem in crypto. The cause is simple: autonomous AI agents can now open wallets, pass behavioral heuristics, and transact on-chain at machine speed. Against that attacker, "one wallet one vote" is worse than useless — it is an engraved invitation. And the networks that can prove their users are actual humans, at scale, with emerging-market coverage, are about to matter a lot more than the networks that can prove their users have a MetaMask extension.

The Numbers That Reframe the Debate

Pi Network's April 2026 milestone announcement reads like a boring operations update until you line it up against the rest of the industry:

  • 18 million KYC-verified Pioneers. Each application passes roughly 30 distinct checks, combining AI pre-screening with human review from a pool of more than 1 million trained validators.
  • 526 million peer validation tasks completed across the platform, with each identity split into small sub-tasks (liveness video, document check, photo match, name verification) and requiring at least two independent validators to agree before approval.
  • 100 million-plus app downloads, outpacing Coinbase and OKX on global install counts, and roughly 60 million active monthly miners.
  • First validator rewards distribution on April 3, 2026, paying out at 22x the current base mining rate — instantly making KYC validation the most lucrative activity on the network.
  • 16.57 million Pioneers already migrated to mainnet at the March 5, 2026 snapshot, topped up by a 10 million Pi foundation contribution to the first-round rewards pool.

Now compare to the other identity layers the industry usually treats as serious:

  • World (formerly Worldcoin) reports around 26 million signed-up users with roughly 12.5 million full Orb iris-scan verifications. Orb Mini deployment is the lever the team is pulling to push past 100 million — a target, not a number on the books.
  • Human Passport (formerly Gitcoin Passport) has crossed 2 million verified users across its credential stack. Strong in grant-funding circles, tiny next to the mobile audience Pi has accumulated.
  • Civic Pass and BrightID continue to serve specific protocol use cases well but have never been designed to scale to the hundreds of millions.

The honest way to read these numbers is that Pi has quietly built the largest KYC-verified human network in Web3 — and it did so in exactly the markets (South and Southeast Asia, Africa, Latin America) that every other proof-of-personhood project either can't reach or explicitly refuses to scan with an Orb.

Why "Verified Humans" Is Suddenly Load-Bearing

For most of crypto's history, the industry's North Star metric was wallet count. More addresses meant more users, which meant more adoption, which meant number go up. The metric worked, if imperfectly, as long as creating a fresh wallet still imposed meaningful friction — downloading an extension, learning about seed phrases, funding for gas.

Three 2026 developments broke that assumption completely.

AI agents now open wallets by themselves. BNB Chain's active AI agent count exploded from roughly 337 at the start of January 2026 to more than 123,000 by mid-March, a 36,000% increase in under three months. Each of those agents has at least one wallet. Many have several. None of them are human. The wallet-count metric did not just get diluted — it stopped measuring the thing it used to measure.

Airdrop Sybil attacks went industrial. In Apriori's token launch on BNB Chain, a single clustered group of 5,800 wallets captured approximately 80% of the supply. Trusta Labs' open-source Sybil-detection framework, OKX's dedicated airdrop protection tooling, and a growing consensus that airdrops should be tied to deposits or volume rather than raw activity all point to the same conclusion: activity-based rewards are broken when attackers can spin up 10,000 perfectly behaved AI agents with unique transaction patterns.

Governance quorum assumptions started to crumble. A DAO vote that passes 70-30 against an "incumbent" position looks legitimate only if the wallets voting represent distinct humans. When a well-resourced attacker can credibly field 50,000 autonomous agents that each cast individually-rational-looking votes, the one-wallet-one-vote model is not secure — it is cosplay as security.

Every one of these failure modes shares a root cause. The industry has been using a cheap, non-unique identifier (the wallet) to do the job of a hard, unique identifier (the human). As long as the gap between those two things was narrow, the approximation worked. AI agents have now yanked those two signals apart by several orders of magnitude, and there is no way back.

What Pi Actually Built (And Why It Works Differently)

Pi Network's identity system was not designed in response to the 2026 AI-agent crisis — it predates it by years. But the design choices that once looked like "mobile-first crypto for the masses" now look like the most pragmatic answer to proof-of-personhood at scale:

Distributed human validation, not biometrics. Where Worldcoin's pitch is "we will ship a hardware device to every country and scan every iris," Pi's pitch is "we will pay Pioneers to validate each other's documents on their existing smartphones." The first model is beautiful in theory and politically catastrophic in practice — multiple governments have banned or suspended Orb operations. The second is boring, incremental, and has already moved 526 million validation tasks through the system.

Split-task review with redundancy. Each KYC application is decomposed into independent sub-tasks: liveness check, document inspection, photo match, name verification. At least two validators must independently agree before approval. This is simultaneously a Sybil-resistance scheme (no single validator can rubber-stamp fakes at scale) and a quality-control system (errors are statistically squeezed out by agreement thresholds).
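As a concrete illustration, the split-task, two-validator agreement rule can be sketched in a few lines of Python. This is a toy model, not Pi's actual implementation; the sub-task names and the `quorum` parameter are assumptions drawn from the description above.

```python
from dataclasses import dataclass, field

# Hypothetical sub-tasks mirroring the split-task scheme described above.
SUBTASKS = ("liveness", "document", "photo_match", "name")

@dataclass
class KycApplication:
    applicant: str
    # verdicts[subtask] = list of (validator_id, approved) pairs
    verdicts: dict = field(default_factory=lambda: {t: [] for t in SUBTASKS})

    def submit_verdict(self, subtask: str, validator: str, approved: bool) -> None:
        # A validator may vote at most once per sub-task.
        if any(v == validator for v, _ in self.verdicts[subtask]):
            raise ValueError(f"{validator} already reviewed {subtask}")
        self.verdicts[subtask].append((validator, approved))

    def subtask_approved(self, subtask: str, quorum: int = 2) -> bool:
        # At least `quorum` independent validators must agree it passes.
        approvals = [v for v, ok in self.verdicts[subtask] if ok]
        return len(approvals) >= quorum

    def approved(self, quorum: int = 2) -> bool:
        # The application passes only if every sub-task meets quorum.
        return all(self.subtask_approved(t, quorum) for t in SUBTASKS)
```

Because every sub-task needs multiple independent approvals, a single compromised validator can neither rubber-stamp an application alone nor veto one that honest validators confirm.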

AI in the inner loop, humans in the outer loop. Pi's Standard KYC process integrates AI pre-screening to halve the queue of applications awaiting human review. Crucially, the AI filters out the obvious cases and hands the ambiguous ones to human validators — inverting the typical Web3 approach of "deploy AI and pray." The humans are the final authority; the AI is a throughput accelerator.

Palm-print biometrics as an optional second layer. Pi is beta-testing palm-print authentication as an additional anti-Sybil layer. Unlike iris scanning, palm prints can be captured by consumer smartphones without dedicated hardware, which matters enormously for the network's emerging-market footprint.

The trade-off most Western commentators miss is that Pi's system is slow by design. A Pioneer might wait weeks or months between starting KYC and full mainnet migration. For a developer who wants to ship an NFT drop next Tuesday, that is infuriating. For a protocol that wants to know whether its 18 million users are 18 million distinct humans and not 200,000 humans running 90 agent-wallets each, it is exactly the right cadence.

The Emerging-Markets Moat Nobody Priced In

Here is the data point that matters most and gets discussed least: Pi Network's user base is concentrated in precisely the regions that the rest of the proof-of-personhood stack cannot reach.

Pi has tens of millions of users across Vietnam, Indonesia, the Philippines, Nigeria, and Latin America — populations that often have limited access to traditional banking, passport documents accepted by Western KYC vendors, or hardware that can run browser-extension wallets smoothly. These same users typically cannot get to an Orb (which requires physical travel to a Worldcoin kiosk) and do not have the crypto literacy to wrangle Gitcoin Passport's stamp ecosystem.

What Pi has done, effectively, is build a KYC network where the onboarding unit of cost is a $50 smartphone and a willingness to spend a few minutes a day opening the app — not a passport, not a $1,200 iPhone, not a visit to a specialized biometric device. For the next billion crypto users, that is the only onboarding model that will actually work at scale.

This matters strategically for any protocol trying to design a genuinely global airdrop, governance vote, or retroactive funding round. A Sybil-resistance layer that accidentally excludes half the world's population is not really Sybil-resistant — it is Western-user-resistant, which is a very different property. Pi's geographic distribution is an asset that competitors will not easily replicate, because the investment required is less technical than operational: years of community building, translated documentation, local validator training, and payment rails that work in countries with 30% mobile-money penetration.

What This Means for Protocol Builders in 2026

If you are a protocol team that plans to run an airdrop, a governance vote, a grant round, or a DeFi access layer in the next 18 months, the Pi milestone has three immediate implications.

Treat proof-of-personhood as a stack, not a vendor choice. No single PoP system covers every use case well. Worldcoin offers strong biometric uniqueness in regions where it operates. Human Passport covers the Western grant-funding circuit with strong integrations. BrightID captures crypto-native social graphs. Pi now owns the emerging-markets KYC-verified-human segment. The right architecture for a serious 2026 airdrop is probably to accept proofs from multiple systems and score accordingly, not to bet the entire anti-Sybil strategy on one source of truth.
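A multi-source scoring layer of the kind described above might look like the following sketch. The weights and the noisy-OR combination rule are purely illustrative assumptions — a real system would calibrate them against measured Sybil rates per proof source.

```python
# Illustrative weights per proof-of-personhood source (assumptions, not
# published figures); tune per use case: airdrop vs. governance vs. grants.
POP_WEIGHTS = {
    "worldcoin_orb": 0.9,   # hardware biometric uniqueness
    "pi_kyc": 0.8,          # document KYC + peer validation
    "human_passport": 0.5,  # aggregated credential stamps
    "brightid": 0.4,        # social-graph verification
}

def humanity_score(proofs: set) -> float:
    """Combine independent proofs into one score in [0, 1).

    Noisy-OR: each additional proof shrinks the remaining probability
    that the wallet is not a distinct human.
    """
    not_human = 1.0
    for source in proofs:
        not_human *= 1.0 - POP_WEIGHTS.get(source, 0.0)
    return 1.0 - not_human

def eligible(proofs: set, threshold: float = 0.75) -> bool:
    # Gate an airdrop claim or governance vote on the combined score.
    return humanity_score(proofs) >= threshold
```

Under these assumed weights, a Pi KYC proof alone clears a 0.75 threshold, a BrightID proof alone does not, and stacking weaker proofs raises the score without any single vendor becoming the sole source of truth.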

Design for "verified human" as a first-class primitive. ERC-8004 on Ethereum mainnet, which went live January 29, 2026, provides an on-chain registry for agent identities with cryptographic attestations. Companion standards for human identity are lagging — not because the demand is missing, but because the politics of a global human-identity registry are complicated. In the meantime, the practical path is to accept portable proofs (Pi, Worldcoin, Human Passport, BrightID) and make "human-only" gating a configurable policy for any access-controlled surface.

Stop treating wallet count as a serious metric. If a protocol reports 500,000 wallets and a competitor reports 50,000 verified humans, the competitor is probably the more valuable network — and certainly the more defensible one against Sybil attacks, governance capture, and regulatory pressure. Investors, founders, and analysts should start explicitly tracking verified-human counts as a parallel KPI to wallet count in every diligence deck.

The Open Questions Pi Still Has to Answer

None of this is a coronation. Pi Network still faces three sharp questions that will determine whether the 18 million KYC number translates into actual infrastructure value.

Can the KYC process scale another 10x? Adding 180 million verified humans requires either an enormous expansion of the validator pool or aggressive AI substitution for human review. Each choice carries risk: a larger validator pool dilutes per-validator rewards and invites quality degradation, while heavier reliance on AI review undermines the whole "distributed human verification" pitch. Pi's answer so far — AI in the inner loop, humans in the outer loop — is clever, but it has not been tested at 10x the current throughput.

Does the PI token accrue the value of the identity layer? Most of Pi's cultural mindshare still treats it as a speculative token play. For the identity thesis to matter economically, PI needs to become the unit of payment for identity-gated services: airdrop allocations priced in PI, governance votes collateralized in PI, access to human-only DeFi pools metered in PI. The mainnet infrastructure to do this exists. The protocol partnerships to make it happen have barely started.

Will mainstream Web3 protocols actually integrate? Pi's emerging-market userbase is its greatest asset, but it also makes Pi foreign to most Ethereum-centric builders. The network that integrates Pi-verified-human proofs for airdrops or governance first will get a defensible distribution advantage in exactly the regions where user acquisition costs are lowest. Nobody has taken that shot yet at scale. The team that does is going to look very clever in 18 months.

The New Shape of Web3 Identity

The broader pattern here is that Web3's identity layer is stratifying — not into a single winner but into a portfolio of primitives, each optimized for a different segment. World owns the Western hardware-biometric market. Human Passport owns credentialed grant-funding identity. Civic serves enterprise on-ramps. BrightID serves crypto-native community governance. Pi owns KYC-verified humans in emerging markets at a scale nobody else comes close to.

The protocols that treat identity as a stack, not a switch, are going to build the most resilient systems. The ones that try to standardize on a single vendor are going to discover in 2027 that their "global" airdrop somehow excluded half the world's humans, or that their "Sybil-resistant" governance was, in fact, dominated by a few well-resourced AI agent farms that happened to pass Orb verification.

The 18 million number is not just a milestone for Pi. It is the first honest signal the industry has that proof-of-personhood is not a research problem anymore — it is a shipping-at-scale problem, and the shipped systems have very different shapes than the research papers predicted.

BlockEden.xyz provides production-grade blockchain RPC infrastructure for teams building identity-aware Web3 products across Sui, Aptos, Ethereum, and BSC. As Sybil-resistance becomes a load-bearing primitive for every serious airdrop, governance system, and AI-agent-gated protocol, explore our API marketplace to build on foundations designed for the verified-human era.


Virtuals Protocol Picks Arbitrum: Why the Largest AI Agent Economy Chose Liquidity Over Distribution

· 10 min read
Dora Noda
Software Engineer

When the platform behind over $400 million in cumulative agent-to-agent commerce decides to deploy on a new chain, Layer 2 rivals pay attention. On March 24, 2026, Virtuals Protocol — the most commercially active AI agent platform in crypto — announced that its Agent Commerce Protocol (ACP) would go live on Arbitrum. The choice is worth unpacking: Virtuals has been a Base-native project since launch, and Base still handles more than 90% of its daily active wallets. So why did the team reach past Coinbase's distribution machine and plant a flag on Arbitrum?

The short answer is liquidity. The longer answer reframes how we should think about where autonomous agents will settle their economic activity — and which Layer 2 is best positioned to host the next wave of machine-to-machine commerce.

The Deal: ACP Goes Live on Arbitrum

ACP is Virtuals' commercial backbone. It provides a standardized framework for AI agents to transact with each other and with humans using smart-contract escrow, cryptographic verification, and an independent evaluation phase. Think of it as Stripe for autonomous software: an agent hires another agent, funds are locked in escrow, work is delivered, a neutral evaluator confirms the outcome, and the payout is released — all without a trusted platform in the middle.
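The escrowed job lifecycle described above — initiate, fund, deliver, evaluate, then release or refund — can be modeled as a small state machine. This is a simplified Python sketch of the flow, not ACP's actual contract code; the role names and states are assumptions drawn from the description.

```python
from enum import Enum, auto

class JobState(Enum):
    INITIATED = auto()
    FUNDED = auto()      # client's payment locked in escrow
    DELIVERED = auto()   # provider submitted the work
    RELEASED = auto()    # evaluator approved; provider gets paid
    REFUNDED = auto()    # evaluator rejected; client gets refunded

class EscrowJob:
    """Toy model of the escrowed agent-to-agent job flow."""

    def __init__(self, client, provider, evaluator, amount):
        self.client, self.provider, self.evaluator = client, provider, evaluator
        self.amount = amount
        self.state = JobState.INITIATED

    def fund(self, who):
        # Only the client can fund, and only once, at the start.
        assert who == self.client and self.state is JobState.INITIATED
        self.state = JobState.FUNDED

    def deliver(self, who):
        # The provider can only deliver against a funded escrow.
        assert who == self.provider and self.state is JobState.FUNDED
        self.state = JobState.DELIVERED

    def evaluate(self, who, approved: bool):
        # Only the neutral evaluator settles, and only after delivery.
        assert who == self.evaluator and self.state is JobState.DELIVERED
        self.state = JobState.RELEASED if approved else JobState.REFUNDED
        return self.provider if approved else self.client  # payout target
```

The point of the three-role split is visible in the sketch: no single party can both perform the work and decide whether it gets paid.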

The Arbitrum integration went live the same day it was announced, with projects confirming operational on-chain payments. That matters because most "multi-chain" announcements in crypto are future-dated deployment promises. Virtuals shipped code, not a roadmap slide.

The numbers behind the move are substantial. ACP has processed over $400 million in cumulative aGDP (agentic gross developer product), with over $39.5 million in protocol revenue flowing to the Virtuals treasury and its agent ecosystem. VIRTUAL, the platform's token, trades at roughly $0.75 with a $492 million market cap and ranks #85 on CoinMarketCap. Virtuals is not a speculative narrative — it is already the largest production agent-commerce venue in crypto.

Why Not Just Stay on Base?

Base has been extraordinarily good to Virtuals. Coinbase's L2 contributes over 90.2% of daily active wallets and roughly $28.4 million in daily agent-related volume for the platform. Base's appeal is obvious: 100M+ Coinbase users sit on the other side of a single on-ramp, and Coinbase's product team has invested heavily in making agent deployment a first-class use case.

But distribution is not the same as liquidity. And agents, as they mature, increasingly need both.

Every time an agent pays another agent, liquidates an inventory position, hedges a treasury, or routes a customer payment to a stablecoin, it touches DEXs, lending markets, and stablecoin pools. Deep liquidity lowers slippage, tightens spreads, and narrows the execution penalty that eats into per-transaction margins. For an agent operating at micro-revenue scale — pennies per job, thousands of jobs a day — slippage is existential.
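A back-of-envelope model shows why. In the sketch below, every number is an illustrative assumption, not a measured figure from any chain:

```python
def net_margin(revenue_per_job: float, cost_per_job: float,
               settled_volume: float, slippage_bps: float,
               gas_fee: float) -> float:
    """Per-job profit after execution costs (all inputs illustrative)."""
    slippage_cost = settled_volume * slippage_bps / 10_000
    return revenue_per_job - cost_per_job - slippage_cost - gas_fee

# A hypothetical agent earning 5 cents per job, settling $2 of stablecoins:
thin_pool = net_margin(0.05, 0.02, 2.0, slippage_bps=30, gas_fee=0.004)
deep_pool = net_margin(0.05, 0.02, 2.0, slippage_bps=5, gas_fee=0.004)
# On the thin pool, slippage alone costs more than the gas fee and eats
# a fifth of the gross margin; across thousands of jobs a day, that
# difference compounds into the agent's entire profit.
```

The same trade executed against a deep pool and a thin pool differs only in slippage, yet at penny-scale margins that single term dominates the agent's economics.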

This is where Arbitrum's profile becomes compelling. The chain processed more than 2.1 billion cumulative transactions in 2025 and holds roughly $16–20 billion in total value locked, representing about 30.86% of the entire L2 DeFi market. Stablecoin supply on Arbitrum grew 80% year-on-year to nearly $10 billion, with USDC representing roughly 58% of on-chain stables. Post-Fusaka, average transaction fees dropped to approximately $0.004.

Translated to agent economics: Arbitrum offers the deepest DEX liquidity, the largest regulated-stablecoin float, and sub-cent finality. Base has users; Arbitrum has markets.

The Base vs. Arbitrum L2 War, Reframed

The Layer 2 competition has been narrated for two years as a consolidation race. Base and Arbitrum together control over 77% of the L2 DeFi ecosystem, and the remaining rollups are fighting for what's left. But the Virtuals integration suggests a more interesting framing: the winning chain for agent commerce may not be the chain with the most users or the most TVL in absolute terms — it may be the chain whose liquidity profile best matches the transaction shape agents actually generate.

Agents do a lot of swapping. They hold stablecoins more than they hold volatile assets. They settle small amounts frequently rather than large amounts rarely. They route through DEXs rather than centralized venues. Arbitrum's stack — Uniswap V4, GMX, Camelot, and the deepest USDC/USDT pools on any L2 — is effectively purpose-built for that workload. Base's stack is tilted more toward consumer apps and on-ramped spot users.

The Virtuals team is not abandoning Base. Base remains its primary home, and the vast majority of agent wallets will continue to live there. But for the subset of agents whose jobs require serious liquidity — DeFi-adjacent agents, trading agents, treasury-management agents, cross-chain payment agents — routing through Arbitrum's commerce layer is a strictly better outcome.

The ERC-8183 Context

The Arbitrum deployment also has an Ethereum-alignment story. Virtuals co-developed ERC-8183 with the Ethereum Foundation's dAI team as the formal standard for AI agent commercial transactions. ERC-8183 defines a "Job" primitive with three roles — client, provider, and evaluator — and uses smart contracts to hold funds through the full lifecycle from initiation to completion.

Arbitrum is Ethereum's largest EVM-equivalent L2. Deploying ACP on Arbitrum positions Virtuals as the reference implementation of ERC-8183 in the Ethereum mainstream, not a Base-specific side-track. It also gives developers a production-grade venue to test the standard before rolling it out to other chains.

That matters for the broader standards race. ERC-8183 competes conceptually with BNB Chain's BAP-578 (the proposed standard for tokenizing agents as on-chain assets), Solana-native frameworks like ElizaOS, and Ethereum's ERC-8004 agent-deployment standard. By planting ACP on Arbitrum, Virtuals increases the probability that ERC-8183 becomes the dominant "how do agents transact" standard while other proposals focus on identity, deployment, or tokenization.

The Competitive Landscape Gets Crowded

Virtuals is not alone in building agent commerce infrastructure. The field is becoming the most watched narrative in the AI-crypto intersection, and the architectural bets are starting to look different.

Coinbase's Agentic Wallets and x402. Coinbase has built a full agent stack: Agentic Wallets for key management, x402 as an HTTP-native payment protocol, and CDP onboarding that plugs into 100M+ Coinbase users. x402 has already processed more than 50 million transactions. The philosophy is agent-agnostic — Coinbase doesn't care which platform built the agent, it wants to be the wallet and payment rail underneath.

Nevermined with Visa and x402. Nevermined stitched together Visa Intelligent Commerce, Coinbase's x402, and its own economic orchestration layer to let agents pay with traditional card rails while settling on-chain. The approach targets publishers, data providers, and API-first businesses who want to monetize agent traffic that currently bypasses their paywalls.

BNB BAP-578. BNB Chain is proposing a chain-level standard for treating agents themselves as tradable on-chain assets. Instead of standardizing how agents transact (ACP) or how they pay (x402), BAP-578 standardizes how agents are held, transferred, and represented in wallets.

Virtuals ACP on Arbitrum. Commerce-protocol-first, liquidity-first, Ethereum-aligned. The thesis is that agents need a venue to do business in, not just a wallet to spend from or a token standard to be represented as.

These are not mutually exclusive. A production agent in 2027 might be deployed on Base, held in a Coinbase Agentic Wallet, represented under BAP-578, and transact through ACP on Arbitrum. But the standards race determines which layer captures the most value — and the team that sets the default commerce protocol probably wins the largest share.

What the Multi-Chain Footprint Signals

Virtuals' chain roster is expanding fast. As of April 2026, the protocol is live on Ethereum mainnet, Base, Solana, Ronin, Arbitrum, and the XRP Ledger, with planned Q2 2026 deployments on BNB Chain and XLayer. That is eight chains by mid-year.

The pattern looks less like a multi-chain hedge and more like a deliberate liquidity-zone strategy. Each chain represents a distinct liquidity pocket — Base for consumer distribution, Arbitrum for DeFi depth, Solana for throughput and memes, Ronin for gaming, XRP Ledger for payments corridors, BNB Chain for Asian market access. Agents can be deployed to the chain that matches their job type, and ACP can route commerce across them.

For the L2 ecosystem, the implication is uncomfortable: the biggest agent platform has explicitly decided that no single chain wins. Agents will route based on economics, not loyalty. Chains that cannot differentiate on specific transaction shapes — stablecoin depth, gaming UX, regulatory clarity, consumer distribution — get skipped.

The Infrastructure Question Builders Should Ask

If you're building an AI agent product in 2026, the Virtuals-to-Arbitrum move reshapes the deployment question. It used to be "which chain has the most users?" That question assumed agents needed consumer distribution. But most production agents today are not consumer-facing — they are back-office, API-driven, or agent-to-agent workflows where the "user" is another piece of software.

For those workloads, the right question is: "where does the money my agent touches actually live?" If the agent swaps stablecoins, settles invoices, routes payments, or hedges positions, that money lives in DeFi pools and stablecoin floats. Arbitrum wins that question today. Base wins the consumer-adjacent question. Solana wins the high-frequency question.

Pick the chain whose liquidity profile matches your agent's workload, not the chain with the prettiest brand deck.

The Bigger Picture

The Virtuals-Arbitrum integration is easy to read as "one more chain deployment" and miss what it actually signals: the autonomous agent economy is starting to make independent, economics-driven infrastructure decisions. It is no longer organized around whichever foundation or ecosystem has the best BD team. It is organizing around where agents can execute their jobs most efficiently.

That shift matters for every infrastructure provider in crypto. The chains, RPC services, wallet providers, and stablecoin issuers that win the agent economy will win because they built the best venue for machine-speed, machine-scale transactions — not because they onboarded the most humans first.

Arbitrum just got a substantial vote of confidence. Base still has the distribution crown. The next twelve months will reveal whether agent commerce consolidates on one winner, fragments permanently across liquidity zones, or — most likely — rewards whichever chain ships the best boring infrastructure: cheap gas, deep stablecoin pools, reliable RPC, and predictable finality.

BlockEden.xyz provides enterprise-grade RPC infrastructure for Arbitrum, Base, Ethereum, Solana, and 20+ other chains powering the agent economy. If you are deploying autonomous agents that need reliable, low-latency access to the chains where liquidity actually lives, explore our API marketplace to build on infrastructure designed for machine-scale workloads.



Walrus Becomes the Brain: How Sui's Storage Protocol Turned Into 2026's Default Memory Layer for AI Agents

· 13 min read
Dora Noda
Software Engineer

Every autonomous AI agent running on-chain today has the same humiliating secret: it forgets almost everything. A trading agent rebalances a $2M treasury on Monday, crushes a complex arbitrage on Tuesday, and by Wednesday it has no coherent memory of either — because the infrastructure to remember doesn't yet exist in a form that fits the way agents actually work. That gap is now the single most important unsolved problem in the $450M on-chain agent economy, and in April 2026 a storage network originally designed for files has positioned itself as the answer.

Walrus Protocol, Mysten Labs' Sui-native decentralized storage network, crossed 450TB of data stored on its one-year anniversary, surpassing Arweave's 385TB and emerging as the dominant write-heavy storage layer in Web3. But the more interesting story isn't the raw tonnage — it's MemWal, the AI memory SDK Walrus shipped on March 25, 2026, which reframes the entire protocol as infrastructure for agents instead of files. For developers building the next wave of autonomous systems, this quietly redraws the decentralized storage map.

The Memory Bottleneck Nobody Wanted to Talk About

LLM-based agents live inside a cruel constraint: the context window. Every reasoning step, every tool call, every observation has to fit inside a few hundred thousand tokens, and anything that doesn't fit simply ceases to exist from the agent's perspective. Human developers paper over this with vector databases, Redis caches, and Postgres tables — centralized infrastructure that works fine until you want the agent to hold its own keys, sign its own transactions, and operate without a trusted backend.

The on-chain agent movement made this problem acute. By Q1 2026, Virtuals Protocol alone was tracking $479M+ in agent-generated economic activity and more than 17,000 on-chain agents holding balances. These agents need state between sessions. They need to remember which counterparties defaulted, which strategies lost money, which users granted them permissions. And they can't just write that to AWS — the whole point of running autonomously on-chain is that there is no "they" to trust with a database password.

The existing decentralized storage options all stumbled on different edges of the problem:

  • IPFS is content-addressed and peer-to-peer, but has no native economic incentive for anyone to keep pinning your data. Files disappear when the last node loses interest.
  • Filecoin fixes incentives with storage deals, but its retrieval latency — often tens of seconds for cold data — is incompatible with an agent that needs to fetch a memory fragment mid-reasoning loop.
  • Arweave offers genuine permanence with a pay-once-store-forever model, but its economics optimize for archival: cheap long-term storage, expensive and awkward small-object writes, no native integration with the compute layer where agents actually live.

None of these were designed with a use case in mind where a million autonomous programs want to write small, structured state blobs every few seconds and read them back with sub-second latency while also anchoring ownership to a wallet-controlled object on a smart-contract chain. Walrus was.

What Walrus Actually Is

Walrus is a decentralized storage and data-availability protocol built on top of Sui by Mysten Labs. It launched its mainnet in 2025 and hit its one-year milestone in early 2026 with some impressive vitals: 100 storage nodes across 19 countries, 4.12 PB of total system capacity with about 39% currently used, and a growing pipeline of protocol integrations. The top validators by stake are concentrated in the US, Finland, Netherlands, Germany, and Lithuania — a geographic distribution that matters for both latency and regulatory resilience.

Under the hood, the magic trick is an erasure-coding scheme called Red Stuff. Instead of replicating each blob across many full copies (the classic Filecoin/S3 approach), Red Stuff splits each blob into slivers and spreads them across 100+ nodes with only a 4.5x replication factor. That means Walrus pays far less for durability than naive replication while still tolerating a supermajority of node failures. Just as importantly, the scheme is self-healing: when a node goes offline, recovering its slice of the data costs bandwidth proportional to only the lost data rather than the whole blob — so the network degrades and repairs gracefully rather than hitting cliffs.
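The storage and repair economics can be made concrete with a back-of-envelope sketch. The shard counts below are illustrative assumptions chosen to land on the ~4.5x factor, not Walrus's actual encoding parameters:

```python
def erasure_overhead(data_shards: int, parity_shards: int) -> float:
    """Bytes stored per byte of payload under erasure coding:
    any `data_shards` of the total shards reconstruct the blob."""
    return (data_shards + parity_shards) / data_shards

def repair_bandwidth_full_copy(blob_gb: float) -> float:
    # Naive replication: a replacement node re-downloads the whole blob.
    return blob_gb

def repair_bandwidth_sliver(blob_gb: float, overhead: float,
                            n_nodes: int) -> float:
    # Self-healing codes: repair cost proportional to the lost sliver
    # only (assumes slivers are evenly sized across nodes).
    return blob_gb * overhead / n_nodes

# Hypothetical parameters: 100 data shards + 350 parity shards → 4.5x,
# versus classic 25-copy replication at a flat 25x overhead.
overhead = erasure_overhead(100, 350)  # → 4.5
```

With 100 nodes each holding one sliver, replacing a failed node for a 100 GB blob costs a few gigabytes of transfer under the sliver model, versus re-shipping the full 100 GB under naive replication — which is the "degrades and repairs gracefully" property in numbers.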

The economic layer is the WAL token. Blob publishers pay per-epoch retention fees denominated in WAL; stakers provide storage bandwidth and earn those fees; Sui objects anchor ownership and access control for every blob. As of mid-April 2026, WAL trades around $0.098 with a market cap of roughly $225M, up 45% in 24 hours after the MemWal announcement cycle. That's still about 87% off the May 2025 all-time high of $0.76, which tells you most of the value accretion is still ahead of the protocol if the AI-agent thesis plays out.

Crucially — and this is the part competitors keep missing — Walrus writes are cheap and fast. You can upload gigabytes at a time because the blob only traverses the network once, and storage nodes operate on slivers a fraction of the original size. That makes small, frequent writes economically viable, which matters enormously if the thing writing is an agent that wants to checkpoint its state every few tool calls.

Enter MemWal: Storage Reframed as Cognition

On March 25, 2026, the Walrus team introduced MemWal, a developer SDK and runtime for building agents with persistent memory. It is currently in beta, but it has already reframed how developers talk about the protocol: Walrus is no longer "the cheap decentralized storage layer," it's "where your agents remember things."

The core abstraction MemWal introduces is the memory space — a structured, purpose-built container that replaces the unstructured log files agents used to dump state into. A trading agent might have three memory spaces: a short-term working-memory space with a few minutes of recent observations, a medium-term portfolio-state space with positions and unrealized P&L, and a long-term counterparty-reputation space that persists across weeks or months of interaction history. Each space has its own retention policy, access permissions, and update cadence.

Under the covers, an agent using the MemWal SDK talks to a backend relayer that handles the batching, encoding, and Sui interaction for blob commits. The relayer pushes data to Walrus for storage and simultaneously updates Sui objects that describe ownership and access control for each memory space. That means an agent's memory isn't just stored — it's owned by a Sui object, which means it can be transferred, delegated, revoked, or composed with other on-chain primitives just like any other asset.
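
To make the memory-space idea concrete, here is a hypothetical sketch of what an agent's tiers might look like in code. MemWal is in beta and its real API will differ: every name here (`MemorySpace`, `commit`, `grantRead`, `retentionEpochs`) is invented for illustration, and the commit simply appends locally where the real SDK would route through the relayer to Walrus and Sui.

```typescript
// Invented model of a MemWal-style memory space; not the actual SDK.

interface MemorySpace<T> {
  name: string;
  retentionEpochs: number; // how long Walrus retention fees are prepaid
  readers: Set<string>;    // Sui addresses with delegated read capability
  log: T[];                // committed state snapshots (one blob per commit)
}

function createSpace<T>(name: string, retentionEpochs: number): MemorySpace<T> {
  return { name, retentionEpochs, readers: new Set(), log: [] };
}

// In the real system this would batch, erasure-code, and anchor the blob to
// a Sui object via the relayer; here it just appends locally.
function commit<T>(space: MemorySpace<T>, snapshot: T): void {
  space.log.push(snapshot);
}

function grantRead<T>(space: MemorySpace<T>, suiAddress: string): void {
  space.readers.add(suiAddress);
}

// The trading agent's tiers from the example above.
type PortfolioState = { positions: number; unrealizedPnl: number };

const working = createSpace<string>("working-memory", 1);
const portfolio = createSpace<PortfolioState>("portfolio-state", 30);

commit(working, "observed ETH/USDC spread widening");
commit(portfolio, { positions: 3, unrealizedPnl: 412.5 });

// Read-only delegation to a partner agent, without exposing working memory.
grantRead(portfolio, "0xRiskScoringAgent");
```

Note how each tier carries its own retention horizon: the working-memory space is paid for one epoch, the portfolio space for thirty, mirroring the per-space retention policies described above.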

Three concrete use cases are already driving early integrations:

  1. Cross-session persistence without an always-on backend. An agent can spin up, load its relevant memory spaces from Walrus via the SDK, reason for a while, commit updates, and shut down — with no centralized server in the loop. The next time it wakes up, whether in the same process or on a different machine, it reconstructs its own state from the chain.

  2. Multi-agent shared context with cryptographic permissions. Because Sui's object model allows fine-grained capability delegation, one agent can grant another read-only access to a specific memory space without exposing the rest of its state. This is the primitive that "agent swarms" like those emerging on ElizaOS have been asking for — a way to let a sentiment-analysis agent read the scraping agent's output without either having to trust a shared database.

  3. Auditable decision trails for regulated agents. Financial agents that execute trades, approve loans, or manage compliance workflows need to produce records that regulators, auditors, and counterparties can verify. A memory space anchored to a Sui object with an immutable commit log is exactly what "verifiable compliance" means in an agent-native system.

The hierarchical design — short-term working memory separated from long-term persistent storage, with cryptographic integrity checks layered in — mirrors the architecture that cognitive-science research has been nudging AI builders toward for years. The difference is that MemWal makes it a protocol primitive rather than a per-application concern.

Why the Incumbents Can't Just Pivot Here

It's tempting to assume Filecoin or Arweave could just add an "agent memory" SDK and compete. The problem is architectural, not marketing.

Filecoin's F3 fast-finality upgrade in 2025 did meaningful work on its latency profile and pushed the network's market cap north of $5B, but the deal-based storage model fundamentally assumes that writes are large, infrequent, and negotiated in advance. Retrieval is getting better, but it's still measured in seconds for cold data, which is outside the budget of an agent reasoning loop. You could force agents to work around it with aggressive caching, but at that point you've rebuilt an off-chain backend.

Arweave's permaweb is philosophically different — it's designed for data that should outlive the creator, which is wonderful for journalism, provenance records, and historical archives, but poor for rapidly updating agent state. The pay-once-store-forever model also doesn't match the actual economic shape of agent memory, where most state is interesting for a few days or weeks and then can be aged out. Arweave's AO computing layer is interesting and deserves watching, but it's a different bet: parallel on-permaweb compute rather than a memory layer for agents running elsewhere.

IPFS remains the closest thing to a lingua franca for Web3 file addressing, but without persistence guarantees, no serious agent developer will put load-bearing state there. The ecosystem of pinning services that grew up around IPFS is a pragmatic patch, not an architectural solution.

Walrus's advantage isn't that it invented a new primitive — erasure coding has existed for decades. It's that the economic model (per-epoch rental rather than perpetual endowment), the latency profile (sub-second reads on small blobs), and the smart-contract integration (Sui objects as ownership anchors) line up with how autonomous agents actually need to behave. The rest of the stack has to jam those properties into existing architectures that were designed for something else.

There's a useful comparison table from the Four Pillars research team that surfaces another non-obvious advantage: cost. Walrus's erasure coding and low replication factor make it roughly 100x cheaper than Filecoin or Arweave per MB of durable storage. For agents that might write hundreds of small state updates per day, that compounds into real money at scale.
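
To see how that compounds, here is a back-of-the-envelope sketch for a single chatty agent. The absolute dollar price is an invented placeholder; only the roughly 100x ratio comes from the comparison above.

```typescript
// Compounding the ~100x per-MB price gap for one state-checkpointing agent.
// The $/MB-year figure is a made-up placeholder, not a real Walrus price.

const writesPerDay = 500;       // small state checkpoints
const bytesPerWrite = 8 * 1024; // 8 KB blobs
const days = 365;

const mbPerYear = (writesPerDay * bytesPerWrite * days) / 1_000_000;

const walrusPerMbYear = 0.0001;                   // hypothetical $/MB-year
const incumbentPerMbYear = walrusPerMbYear * 100; // the quoted ~100x gap

const walrusCost = mbPerYear * walrusPerMbYear;
const incumbentCost = mbPerYear * incumbentPerMbYear;

console.log(Math.round(mbPerYear));      // ~1495 MB written per agent per year
console.log(incumbentCost / walrusCost); // the 100x gap, unchanged at scale
```

Multiply that per-agent volume by the hundreds of thousands of agents discussed later in this post and the ratio stops being a rounding error.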

What This Means for Infrastructure Builders

The emergence of Walrus as an agent-memory layer is part of a broader pattern that anyone building Web3 infrastructure in 2026 needs to internalize. The agent economy is fracturing into specialized substrates, each solving one sharp problem:

  • Coinbase's Agentic Wallet solves custody: where the keys live.
  • Mind Network's x402z handles confidential payments: how agents transact without leaking strategy.
  • Nava Labs tackles intent verification: did the executed action match what the user asked for?
  • ERC-8004 defines identity: who the agent is on-chain.
  • Warden is building the cryptoeconomic settlement layer: how agents post collateral and get slashed for misbehavior.
  • Walrus + MemWal now owns the memory layer: what the agent knows and remembers.

None of these is a winner-take-all market on its own, but together they form the new agentic stack — and the projects that win will be the ones that integrate cleanly across the layers. A developer launching a new on-chain trading agent in 2026 should expect to compose a Sui wallet, a Walrus memory layer, an identity credential, a verification proof, and a payment rail. No single protocol does all five well, and the ones that try usually do none well.

The World Economic Forum's DePIN projection — from $50B in 2025 to $3.5T by 2028 — is the macro wind blowing through all of this. Storage and compute are the biggest components of that projection, and storage is where Walrus is planting its flag most aggressively. The Allium partnership, which brought 65TB of verifiable, institutional-grade blockchain data (Bitcoin, Ethereum, Sui historical records) onto the Walrus platform earlier this year, is the institutional validation the protocol needed: it's not just a toy for Sui-native NFT projects but a viable substrate for serious data workloads.

The Open Questions

None of this is guaranteed. Three things could still derail the thesis:

Sui concentration risk. Walrus is economically tied to Sui through WAL tokenomics and technically tied through object-model integration. If Sui loses relevance as a smart-contract platform — to Aptos, Solana, or an L2 renaissance — Walrus's agent-memory story has to rebuild from a weaker base. So far Sui's developer traction looks healthy, but "so far" is how you describe every crypto platform before its inflection point in either direction.

MemWal adoption curve. The SDK is still in beta. The real test is whether major agent frameworks — ElizaOS, AutoGPT-style systems, the emerging MCP/A2A agent protocols — make MemWal a first-class integration or just one option among several. Without tight framework support, MemWal becomes a niche tool for developers who go out of their way to use Sui.

Commercial centralization pressure. If OpenAI or Anthropic ship a first-party "agent memory" product with tight LLM integration, many developers will take the convenient option over the decentralized one. Walrus's answer has to be that decentralized memory unlocks use cases — agents holding their own assets, multi-party agent collaboration without a trusted operator — that centralized memory cannot. That's true, but the go-to-market requires sustained education.

Building on the New Agentic Stack

The next 18 months will decide whether the agentic Web3 stack ossifies around three or four incumbents or fragments across a dozen competing layers. Walrus's bet is that memory becomes a distinct, claimable layer in that stack — and that the winner of the memory layer is whoever combines programmable ownership, low-latency reads, sustainable economics, and actual developer tooling. By that checklist, it is further ahead than any of its direct competitors today.

For builders who want to ship agent-native products in 2026, the practical recommendation is simple: treat memory as a first-class infrastructure concern, not an afterthought. The agents that remember their users, their strategies, and their mistakes will compound advantages that stateless agents simply cannot.

BlockEden.xyz provides reliable, production-grade Sui RPC infrastructure for teams building on-chain agents and dApps that integrate with Walrus, MemWal, and the broader Sui ecosystem. Explore our Sui API services to build on the same foundations powering the agent-native Web3 stack.

World Chain's 30M Humans vs 123,000 AI Agents: Why Proof of Personhood Just Became DeFi's Most Urgent Primitive

· 11 min read
Dora Noda
Software Engineer

In January 2026, there were roughly 337 active AI agents on blockchain networks. By March 11, that number had exploded past 123,000 — a 36,000% surge in ninety days. Somewhere in that same quarter, World Chain quietly crossed 30 million World ID verifications and began routing roughly 44% of all OP Stack activity through its "humans-only" priority blockspace. Those two curves are about to collide, and when they do, every DeFi protocol, prediction market, airdrop, and DAO governance vote will have to answer a question that sounded academic a year ago: how do you tell a human from a bot when the bot has a wallet, a reputation score, and better uptime than you?

The short version: you can't — unless the chain itself draws the line. That is exactly what Worldcoin's World Chain is trying to become. And it is why Proof of Personhood has gone from niche curiosity to the most contested primitive in Web3 infrastructure.

Escrow Before Execution: Why Nava's $8.3M Bet Could Become the Trust Layer Every AI Agent Needs

· 10 min read
Dora Noda
Software Engineer

Picture an AI agent sitting on your corporate treasury, authorized to rebalance $50 million across a dozen DeFi protocols while you sleep. Now picture it misreading a prompt, interpreting "maximize yield" as "send everything to the highest-APY pool," and discovering — too late — that the pool was a honeypot. This is not a hypothetical. It's the single scenario that is keeping every CFO awake and every institutional crypto deployment stuck in committee.

On April 14, 2026, a small team of ex-EigenLayer engineers closed an $8.3 million seed round aimed squarely at that nightmare. Nava Labs, co-led by Polychain and Archetype, emerged from stealth with a deceptively simple pitch: don't trust an agent's signature — hold its money in escrow until an on-chain verifier confirms the transaction actually matches what the user asked for. The bet is that the next $450 billion of enterprise software revenue won't flow through agents until someone builds the kill switch.

The $45M AI Agent Exploit That Changed DeFi Security Forever

· 8 min read
Dora Noda
Software Engineer

When an autonomous AI trading agent drained $45 million from DeFi protocols in early 2026, the attack didn't exploit a single line of smart contract code. Instead, attackers poisoned the oracle data feeds that AI agents trusted implicitly, turning the agents' own speed and autonomy into weapons against the protocols they were designed to protect. Welcome to the era where the most dangerous vulnerability in crypto isn't in the code — it's in the AI.

ERC-8211 Explained: The Ethereum Standard Teaching AI Agents to Think Before They Transact

· 9 min read
Dora Noda
Software Engineer

Imagine telling a DeFi bot to "swap all my WETH for USDC, supply it into Aave, but only if my final balance stays above $5,000." Today, that instruction requires a developer to hard-code every parameter before signing — the exact WETH balance, the expected USDC output, the Aave deposit amount — creating a brittle transaction that fails the moment market conditions shift between the block it was signed and the block it lands on-chain. ERC-8211, published on April 6, 2026, by Biconomy and the Ethereum Foundation, eliminates this brittleness entirely. It is the first Ethereum standard that lets AI agents read live chain state, validate conditions, and execute multi-step strategies in a single atomic transaction — turning static batch calls into intelligent, self-adjusting workflows.

The timing is not coincidental. Over 17,000 AI agents are now live on Virtuals Protocol alone. Coinbase's AgentKit powers autonomous wallets across multiple LLM providers. NEAR's co-founder has declared that "the users of blockchain will be AI agents." But until now, these agents have been forced to interact with DeFi through the same rigid transaction formats designed for humans clicking buttons on a frontend. ERC-8211 gives them something fundamentally different: the ability to compose decisions on-chain, at execution time, with built-in safety rails.

The Problem: Static Batching Was Never Built for Autonomous Agents

Multi-call contracts like Multicall3 and ERC-4337 bundlers already let wallets batch multiple transactions into one. But every parameter must be locked at signing time. If an AI agent signs a batch to swap 2.5 WETH for USDC and supply the proceeds into Aave, the 2.5 WETH figure is frozen — even if the agent's actual balance changed between signing and execution due to a pending transfer arriving or a fee deduction.

This creates three cascading problems for autonomous agents:

  • Stale state: By the time a batched transaction is included in a block, the on-chain state it assumed may no longer hold. A price shift of 0.3% can cause a swap to revert, wasting gas and leaving the strategy half-executed.
  • Over-specification: Agents must pre-compute every intermediate value (exact output amounts, slippage thresholds, deposit quantities) before signing. For a five-step leverage loop, this means predicting five sequential outputs — any one of which can invalidate the rest.
  • No conditional logic: Static batches are all-or-nothing. There is no way to say "proceed with step three only if the result of step two exceeds a threshold." An agent cannot express safety constraints within the batch itself.

The result is that today's AI agents execute DeFi strategies with the flexibility of a printed boarding pass — every detail must be correct before departure, and any change requires starting over.
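
A minimal sketch of that brittleness, with all names invented: the signed amount is frozen at signing time, so any balance drift between signing and inclusion forces a revert.

```typescript
// Toy illustration of the stale-state problem with traditional batching.
// SignedCall and execute are invented stand-ins, not a real bundler API.

interface SignedCall { target: string; amountWei: bigint }

function execute(call: SignedCall, balanceAtInclusion: bigint): string {
  if (balanceAtInclusion < call.amountWei) {
    return "revert: insufficient balance"; // gas burned, strategy stalls
  }
  return "ok";
}

// Agent signs a swap of its full 2.5 WETH balance...
const signed: SignedCall = {
  target: "UniswapRouter",
  amountWei: 2_500_000_000_000_000_000n,
};

// ...but a fee deduction lands first, so only 2.4 WETH remains at inclusion.
console.log(execute(signed, 2_400_000_000_000_000_000n)); // revert: insufficient balance
```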

How ERC-8211 Works: Fetchers, Constraints, and Predicates

ERC-8211 introduces what Biconomy calls "smart batching" — a contract-layer encoding standard where each parameter in a batch declares how to obtain its value and what conditions that value must satisfy. The standard is built on three primitives:

Fetchers

Every input parameter carries a fetcher type that determines how its value is sourced at execution time, not at signing time. Three fetcher types are available:

  • RAW_BYTES: The value is hard-coded, identical to traditional batching.
  • STATIC_CALL: The value is read from a live on-chain contract call — checking a balance, querying an oracle price, or reading a pool's reserves.
  • BALANCE: The value is the native token or ERC-20 balance of the executing account at the moment of execution.

A routing destination then determines where the resolved value goes: into the call's target address, its value field, or its calldata.

Constraints

Every resolved value can carry inline constraints — logical checks validated on-chain before the call proceeds. Supported constraint types include EQ (equals), GTE (greater than or equal), LTE (less than or equal), and IN (membership in a set). If any constraint fails, the entire batch reverts atomically.

In practice, this means an agent can say: "Fetch my WETH balance (BALANCE fetcher), confirm it is GTE 1.0 WETH (constraint), then pass the resolved value into the swap calldata (routing)."

Predicates

Entries with target = address(0) act as pure assertion checkpoints. They encode a boolean condition on chain state — for example, verifying that a wallet's USDC balance remains above a safety floor after a leverage loop — without executing any external call. If the predicate fails, the batch reverts.

Together, these three primitives transform a batch from a static script into a reactive program: "Swap my full WETH balance for USDC, then supply exactly what arrived into Aave, but only if my final balance exceeds my safety floor." All in one transaction, all resolved at execution time.
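
The three primitives can be modeled as a toy in-memory resolver. The real standard defines an on-chain encoding and Solidity interfaces; the enums, field names, and closure-based reads below are illustrative stand-ins, with a closure playing the role of a STATIC_CALL or BALANCE fetch.

```typescript
// Toy model of ERC-8211 entry resolution; not the standard's actual encoding.

enum Fetcher { RAW_BYTES, STATIC_CALL, BALANCE }
enum Op { EQ, GTE, LTE }

interface ChainState { wethBalance: bigint; usdcBalance: bigint }

interface Entry {
  fetcher: Fetcher;
  raw?: bigint;                           // used when fetcher is RAW_BYTES
  read?: (s: ChainState) => bigint;       // resolved at execution time
  constraint?: { op: Op; bound: bigint }; // inline check, validated on-chain
}

function resolve(entry: Entry, state: ChainState): bigint {
  const value =
    entry.fetcher === Fetcher.RAW_BYTES ? entry.raw! : entry.read!(state);
  if (entry.constraint) {
    const { op, bound } = entry.constraint;
    const ok =
      op === Op.EQ ? value === bound :
      op === Op.GTE ? value >= bound :
      value <= bound;
    if (!ok) throw new Error("constraint failed: entire batch reverts");
  }
  return value;
}

// "Fetch my WETH balance, confirm it is >= 1 WETH, route it into the swap."
const state: ChainState = {
  wethBalance: 2_500_000_000_000_000_000n, // 2.5 WETH at execution time
  usdcBalance: 0n,
};

const swapAmount = resolve(
  {
    fetcher: Fetcher.BALANCE,
    read: s => s.wethBalance,
    constraint: { op: Op.GTE, bound: 1_000_000_000_000_000_000n },
  },
  state,
);
// swapAmount is whatever the balance is at execution time, not signing time.
```

Contrast this with the frozen boarding-pass batch: here the agent never predicts the 2.5 WETH figure at all; it declares how to fetch it and what must hold for execution to proceed.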

The Emerging Agent Protocol Stack

ERC-8211 does not exist in isolation. It slots into an increasingly coherent protocol stack that the Ethereum Foundation has been assembling specifically for autonomous agents:

| Layer | Standard | Function | Key Builder |
| --- | --- | --- | --- |
| Identity | ERC-8004 | Agent discovery, trust, and reputation scoring | Ethereum Foundation |
| Commerce | ERC-8183 | Job lifecycle management — escrow, delivery proof, settlement | Virtuals Protocol |
| Execution | ERC-8211 | Smart batching — conditional, state-aware on-chain execution | Biconomy |
| Payment | x402 | HTTP-native stablecoin micropayments for agent services | Coinbase + Cloudflare |

The layering is deliberate: ERC-8004 identifies who is transacting, ERC-8183 governs what work is being exchanged, ERC-8211 handles how the work executes on-chain, and x402 manages how payments flow between agents. Together, they form what industry observers have started calling the "TCP/IP moment for on-chain AI" — a layered stack where each protocol handles one concern cleanly.

ERC-8183 is particularly complementary. Its Job primitive — where a client agent hires a provider agent, escrowed funds are held, and an evaluator attests to delivery — generates exactly the kind of multi-step, conditional on-chain actions that ERC-8211 is designed to execute. An AI agent accepting a job through ERC-8183 might need to perform a series of DeFi operations (swap, supply, borrow) as part of fulfilling the work. ERC-8211 ensures those operations execute correctly even if market conditions change between job acceptance and execution.

Competing Approaches: AgentKit, NEAR Chain Signatures, and the Fragmentation Risk

ERC-8211's smart batching is not the only framework vying to become the standard execution layer for AI agents:

Coinbase AgentKit provides wallet infrastructure and on-chain action primitives for AI agents, with native support for OpenAI, Anthropic, and Llama models. In March 2026, World (Sam Altman's identity project) launched an AgentKit integration with x402 payments and World ID verification, enabling agents to carry cryptographic proof of human backing. AgentKit excels at wallet management and simple transactions but does not currently offer the conditional, state-aware execution that ERC-8211 provides.

NEAR Chain Signatures takes a different architectural approach: agents get their own NEAR accounts with private keys stored in Trusted Execution Environments (TEEs), and through Chain Signatures technology, they can sign transactions on any blockchain — Ethereum, Bitcoin, Solana — from a single NEAR-based identity. This solves the multi-chain problem elegantly but operates at the infrastructure layer rather than the execution semantics layer.

Visa's Trusted Agent Protocol and Google's AP2 (Agent Payment Protocol 2.0) address the payment and merchant-verification side, helping traditional commerce recognize and process AI agent transactions. They complement rather than compete with ERC-8211's on-chain execution focus.

The fragmentation risk is real. If AgentKit builds its own conditional execution primitives, or if NEAR develops a competing batch-execution standard, agents could face the same interoperability challenges that plagued early DeFi — multiple standards solving the same problem, none achieving critical mass. ERC-8211's advantage is its compatibility with existing account abstraction infrastructure (ERC-4337, ERC-7683) and its minimal footprint: it requires no protocol fork, no new opcode, and works with any smart account implementation.

Why This Matters: The 400,000-Agent Economy Needs On-Chain Composability

The numbers paint a clear picture of urgency. Over 400,000 AI agents are now operating across blockchain networks, according to Chainalysis estimates. Virtuals Protocol alone has crossed $39.5 million in cumulative revenue from its 17,000+ agents. Coinbase's AgentKit supports autonomous wallets across every major LLM. The agent economy is not speculative — it is generating real revenue and executing real transactions today.

But these agents are constrained by infrastructure designed for human users. A human signing a swap on Uniswap can check the price, adjust slippage, and confirm — all within seconds. An autonomous agent operating at scale cannot afford this manual feedback loop. It needs to express complex strategies as self-contained, self-validating transaction bundles that execute correctly regardless of what happens between signing and inclusion.

ERC-8211's impact extends beyond DeFi automation. Consider these scenarios:

  • Autonomous treasury management: A DAO treasury agent that rebalances across yield protocols, with predicate checks ensuring no single protocol holds more than 30% of funds — all in one atomic transaction.
  • MEV-resistant execution: By resolving values at execution time rather than signing time, smart batches reduce the information available to MEV searchers who exploit stale parameters in pending transactions.
  • Cross-protocol arbitrage: An agent that detects a price discrepancy between Uniswap and Curve can execute the arbitrage atomically with constraints ensuring minimum profit thresholds, eliminating the risk of executing one leg and failing on the other.
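
The treasury scenario in the first bullet reduces to a pure assertion over chain state. A sketch, with invented data shapes; only the predicate idea (a target = address(0) checkpoint that reverts the batch on failure) comes from the standard:

```typescript
// Invented illustration of a predicate checkpoint: assert that no yield
// protocol holds more than 30% of a DAO treasury after rebalancing.

const allocations = new Map<string, number>([
  ["Aave", 0.28],
  ["Compound", 0.25],
  ["Morpho", 0.22],
  ["Spark", 0.25],
]);

// Predicate: every allocation share must be <= 30% of the treasury.
const capPredicate = (shares: number[]) => shares.every(s => s <= 0.30);

const predicateHolds = capPredicate(Array.from(allocations.values()));

// In a real ERC-8211 batch, a false predicate reverts everything atomically.
if (!predicateHolds) throw new Error("predicate failed: batch reverts");
```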

The Road Ahead: From Standard to Infrastructure

ERC-8211 is still an ERC proposal, not a finalized standard. Its reference implementation is open-source and live in demo form, but adoption depends on wallet providers, bundler operators, and DeFi protocols integrating the smart batching interface. The standard's account-agnostic design — it works with ERC-4337 smart accounts, ERC-7683 cross-chain intents, and traditional EOAs through executor contracts — removes the biggest adoption barrier, but integration still requires active development.

The four-standard agent stack (ERC-8004 + ERC-8183 + ERC-8211 + x402) represents a coherent vision, but coherent visions in crypto have historically fragmented under competitive pressure. Whether the stack consolidates into a de facto standard or splinters into competing implementations will depend on which protocols ship production integrations first.

What is not in doubt is the direction. The blockchain's primary users are shifting from humans clicking through frontends to autonomous agents executing programmatic strategies. ERC-8211 is the first serious attempt to give those agents a transaction format that matches their capabilities — one that thinks before it transacts.

Building AI agents that interact with DeFi protocols across multiple chains? BlockEden.xyz provides high-performance RPC endpoints and data APIs for Ethereum, Sui, Aptos, and 20+ networks — the infrastructure layer your agents need for reliable on-chain reads and execution. Explore our API marketplace to get started.

Pyth Data Marketplace Goes Live: Six TradFi Giants Bring Institutional Data On-Chain

· 8 min read
Dora Noda
Software Engineer

For decades, accessing institutional-grade financial data meant paying six-figure annual licenses to Bloomberg, Refinitiv, or S&P Global—and even then, the data arrived through proprietary terminals and rigid APIs designed for a pre-internet era. On April 9, 2026, Pyth Network quietly launched a product that could rewrite those economics entirely: the Pyth Data Marketplace, a blockchain-native distribution layer where traditional financial institutions publish proprietary market data directly on-chain.

The launch partners aren't crypto-native startups. They're Euronext, Fidelity Investments, OTC Markets Group, SGX FX, Tradeweb, and Exchange Data International (EDI)—firms that collectively touch trillions of dollars in daily trading volume. Their decision to distribute data through a blockchain oracle network marks a structural shift in how the $30 billion financial data industry thinks about distribution.