
201 posts tagged with "Infrastructure"

Blockchain infrastructure and node services


Walrus Becomes the Brain: How Sui's Storage Protocol Turned Into 2026's Default Memory Layer for AI Agents

· 13 min read
Dora Noda
Software Engineer

Every autonomous AI agent running on-chain today has the same humiliating secret: it forgets almost everything. A trading agent rebalances a $2M treasury on Monday, crushes a complex arbitrage on Tuesday, and by Wednesday it has no coherent memory of either — because the infrastructure to remember doesn't yet exist in a form that fits the way agents actually work. That gap is now the single most important unsolved problem in the $450B on-chain agent economy, and in April 2026 a storage network originally designed for files has positioned itself as the answer.

Walrus Protocol, Mysten Labs' Sui-native decentralized storage network, crossed 450TB of data stored on its one-year anniversary, surpassing Arweave's 385TB and emerging as the dominant write-heavy storage layer in Web3. But the more interesting story isn't the raw tonnage — it's MemWal, the AI memory SDK Walrus shipped on March 25, 2026, which reframes the entire protocol as infrastructure for agents instead of files. For developers building the next wave of autonomous systems, this quietly redraws the decentralized storage map.

The Memory Bottleneck Nobody Wanted to Talk About

LLM-based agents live inside a cruel constraint: the context window. Every reasoning step, every tool call, every observation has to fit inside a few hundred thousand tokens, and anything that doesn't fit simply ceases to exist from the agent's perspective. Human developers paper over this with vector databases, Redis caches, and Postgres tables — centralized infrastructure that works fine until you want the agent to hold its own keys, sign its own transactions, and operate without a trusted backend.

The on-chain agent movement made this problem acute. By Q1 2026, Virtuals Protocol alone was tracking $479M+ in agent-generated economic activity and more than 17,000 on-chain agents holding balances. These agents need state between sessions. They need to remember which counterparties defaulted, which strategies lost money, which users granted them permissions. And they can't just write that to AWS — the whole point of running autonomously on-chain is that there is no "they" to trust with a database password.

The existing decentralized storage options all stumbled on different edges of the problem:

  • IPFS is content-addressed and peer-to-peer, but has no native economic incentive for anyone to keep pinning your data. Files disappear when the last node loses interest.
  • Filecoin fixes incentives with storage deals, but its retrieval latency — often tens of seconds for cold data — is incompatible with an agent that needs to fetch a memory fragment mid-reasoning loop.
  • Arweave offers genuine permanence with a pay-once-store-forever model, but its economics optimize for archival: cheap long-term storage, expensive and awkward small-object writes, no native integration with the compute layer where agents actually live.

None of these was designed for a world in which a million autonomous programs write small, structured state blobs every few seconds, read them back with sub-second latency, and anchor ownership to a wallet-controlled object on a smart-contract chain. Walrus was.

What Walrus Actually Is

Walrus is a decentralized storage and data-availability protocol built on top of Sui by Mysten Labs. It launched its mainnet in 2025 and hit its one-year milestone in early 2026 with some impressive vitals: 100 storage nodes across 19 countries, 4.12 PB of total system capacity with about 39% currently used, and a growing pipeline of protocol integrations. The top validators by stake are concentrated in the US, Finland, Netherlands, Germany, and Lithuania — a geographic distribution that matters for both latency and regulatory resilience.

Under the hood, the magic trick is an erasure-coding scheme called Red Stuff. Instead of replicating each blob across many full copies (the classic Filecoin/S3 approach), Red Stuff splits each blob into slivers and spreads them across 100+ nodes with only a 4.5x replication factor. That means Walrus pays far less for durability than naive replication while still tolerating a supermajority of node failures. Just as importantly, the scheme is self-healing: when a node goes offline, recovering its slice of the data costs bandwidth proportional to only the lost data rather than the whole blob — so the network degrades and repairs gracefully rather than hitting cliffs.

The economic layer is the WAL token. Blob publishers pay per-epoch retention fees denominated in WAL; storage nodes and the stakers backing them provide capacity and earn those fees; Sui objects anchor ownership and access control for every blob. As of mid-April 2026, WAL trades around $0.098 with a market cap of roughly $225M, up 45% in 24 hours after the MemWal announcement cycle. That's still about 87% off the May 2025 all-time high of $0.76, which tells you most of the value accretion is still ahead of the protocol if the AI-agent thesis plays out.

Crucially — and this is the part competitors keep missing — Walrus writes are cheap and fast. You can upload gigabytes at a time because the blob only traverses the network once, and storage nodes operate on slivers a fraction of the original size. That makes small, frequent writes economically viable, which matters enormously if the thing writing is an agent that wants to checkpoint its state every few tool calls.

Enter MemWal: Storage Reframed as Cognition

On March 25, 2026, the Walrus team introduced MemWal, a developer SDK and runtime for building agents with persistent memory. It is currently in beta, but it has already reframed how developers talk about the protocol: Walrus is no longer "the cheap decentralized storage layer," it's "where your agents remember things."

The core abstraction MemWal introduces is the memory space — a structured, purpose-built container that replaces the unstructured log files agents used to dump state into. A trading agent might have three memory spaces: a short-term working-memory space with a few minutes of recent observations, a medium-term portfolio-state space with positions and unrealized P&L, and a long-term counterparty-reputation space that persists across weeks or months of interaction history. Each space has its own retention policy, access permissions, and update cadence.

Under the covers, an agent using the MemWal SDK talks to a backend relayer that handles the batching, encoding, and Sui interaction for blob commits. The relayer pushes data to Walrus for storage and simultaneously updates Sui objects that describe ownership and access control for each memory space. That means an agent's memory isn't just stored — it's owned by a Sui object, which means it can be transferred, delegated, revoked, or composed with other on-chain primitives just like any other asset.
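That flow can be sketched in a simplified model. Every class and method name below is invented for illustration (the real MemWal SDK is in beta and its API is not reproduced here); the point is only the shape: commit a memory space through a relayer that stores the blob and anchors an ownership record, then reconstruct the same state in a later session:

```python
# Hypothetical sketch of the memory-space flow described above.
# These names are NOT the real MemWal SDK API.
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class MemorySpace:
    name: str
    retention_epochs: int                  # per-epoch rental, not forever
    state: dict = field(default_factory=dict)

    def commit_payload(self) -> bytes:
        return json.dumps({"name": self.name, "state": self.state},
                          sort_keys=True).encode()

class FakeRelayer:
    """Stands in for the batching/encoding/Sui-interaction backend."""
    def __init__(self):
        self.blobs: dict[str, bytes] = {}   # blob_id -> data ("Walrus")
        self.objects: dict[str, str] = {}   # space name -> blob_id ("Sui object")

    def commit(self, space: MemorySpace) -> str:
        data = space.commit_payload()
        blob_id = hashlib.sha256(data).hexdigest()[:16]
        self.blobs[blob_id] = data          # store the blob
        self.objects[space.name] = blob_id  # anchor ownership on-chain
        return blob_id

    def load(self, name: str) -> MemorySpace:
        data = json.loads(self.blobs[self.objects[name]])
        return MemorySpace(name, retention_epochs=0, state=data["state"])

relayer = FakeRelayer()
working = MemorySpace("working-memory", retention_epochs=1,
                      state={"last_obs": "ETH funding flipped negative"})
relayer.commit(working)
# A later session, possibly on a different machine, reconstructs the state:
restored = relayer.load("working-memory")
assert restored.state == working.state
```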

Three concrete use cases are already driving early integrations:

  1. Cross-session persistence without an always-on backend. An agent can spin up, load its relevant memory spaces from Walrus via the SDK, reason for a while, commit updates, and shut down — with no centralized server in the loop. The next time it wakes up, either in the same process or a different machine, it reconstructs its own state from the chain.

  2. Multi-agent shared context with cryptographic permissions. Because Sui's object model allows fine-grained capability delegation, one agent can grant another read-only access to a specific memory space without exposing the rest of its state. This is the primitive that "agent swarms" like those emerging on ElizaOS have been asking for — a way to let a sentiment-analysis agent read the scraping agent's output without either having to trust a shared database.

  3. Auditable decision trails for regulated agents. Financial agents that execute trades, approve loans, or manage compliance workflows need to produce records that regulators, auditors, and counterparties can verify. A memory space anchored to a Sui object with an immutable commit log is exactly what "verifiable compliance" means in an agent-native system.

The hierarchical design — short-term working memory separated from long-term persistent storage, with cryptographic integrity checks layered in — mirrors the architecture that cognitive-science research has been nudging AI builders toward for years. The difference is that MemWal makes it a protocol primitive rather than a per-application concern.

Why the Incumbents Can't Just Pivot Here

It's tempting to assume Filecoin or Arweave could just add an "agent memory" SDK and compete. The problem is architectural, not marketing.

Filecoin's F3 fast-finality upgrade in 2025 did meaningful work on its latency profile and pushed the network's market cap north of $5B, but the deal-based storage model fundamentally assumes that writes are large, infrequent, and negotiated in advance. Retrieval is getting better, but it's still measured in seconds for cold data, which is outside the budget of an agent reasoning loop. You could force agents to work around it with aggressive caching, but at that point you've rebuilt an off-chain backend.

Arweave's permaweb is philosophically different — it's designed for data that should outlive the creator, which is wonderful for journalism, provenance records, and historical archives, but a poor fit for rapidly updating agent state. The pay-once-store-forever model also doesn't match the actual economic shape of agent memory, where most state is interesting for a few days or weeks and then can be aged out. Arweave's AO computing layer is interesting and deserves watching, but it's a different bet: parallel on-permaweb compute rather than a memory layer for agents running elsewhere.

IPFS remains the closest thing to a lingua franca for Web3 file addressing, but without persistence guarantees, no serious agent developer will put load-bearing state there. The ecosystem of pinning services that grew up around IPFS is a pragmatic patch, not an architectural solution.

Walrus's advantage isn't that it invented a new primitive — erasure coding has existed for decades. It's that the economic model (per-epoch rental rather than perpetual endowment), the latency profile (sub-second reads on small blobs), and the smart-contract integration (Sui objects as ownership anchors) line up with how autonomous agents actually need to behave. The rest of the stack has to jam those properties into existing architectures that were designed for something else.

There's a useful comparison table from the Four Pillars research team that surfaces another non-obvious advantage: cost. Walrus's erasure coding and low replication factor make it roughly 100x cheaper than Filecoin or Arweave per MB of durable storage. For agents that might write hundreds of small state updates per day, that compounds into real money at scale.
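A rough back-of-the-envelope shows how that multiplier compounds. The 100x ratio is the figure cited above; the absolute per-MB price and the per-agent write volume below are assumed purely for illustration:

```python
# Illustrative cost compounding for a chatty agent. The 100x ratio is from
# the comparison cited in the text; the per-MB price is an ASSUMED figure.
assumed_price_per_mb = 0.0001                          # $/MB-month (assumption)
expensive_price_per_mb = assumed_price_per_mb * 100    # the ~100x-costlier option

writes_per_day = 300       # "hundreds of small state updates per day"
kb_per_write = 4           # small structured state blob
mb_per_month = writes_per_day * 30 * kb_per_write / 1024

cheap = mb_per_month * assumed_price_per_mb
costly = mb_per_month * expensive_price_per_mb
print(f"{mb_per_month:.1f} MB/month -> ${cheap:.4f} vs ${costly:.2f} per agent")
# At fleet scale, the gap becomes a real budget line:
print(f"fleet of 10k agents: ${cheap * 10_000:,.0f} vs ${costly * 10_000:,.0f}")
```

Whatever the true absolute prices, a constant 100x factor applied to frequent small writes is the difference between a rounding error and a dominant operating cost.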

What This Means for Infrastructure Builders

The emergence of Walrus as an agent-memory layer is part of a broader pattern that anyone building Web3 infrastructure in 2026 needs to internalize. The agent economy is fracturing into specialized substrates, each solving one sharp problem:

  • Coinbase's Agentic Wallet solves custody: where the keys live.
  • Mind Network's x402z handles confidential payments: how agents transact without leaking strategy.
  • Nava Labs tackles intent verification: did the executed action match what the user asked for?
  • ERC-8004 defines identity: who the agent is on-chain.
  • Warden is building the cryptoeconomic settlement layer: how agents post collateral and get slashed for misbehavior.
  • Walrus + MemWal now owns the memory layer: what the agent knows and remembers.

None of these is a winner-take-all market on its own, but together they form the new agentic stack — and the projects that win will be the ones that integrate cleanly across the layers. A developer launching a new on-chain trading agent in 2026 should expect to compose a Sui wallet, a Walrus memory layer, an identity credential, a verification proof, and a payment rail. No single protocol does all five well, and the ones that try usually do none well.

The World Economic Forum's DePIN projection — from $50B in 2025 to $3.5T by 2028 — is the macro wind blowing through all of this. Storage and compute are the biggest components of that projection, and storage is where Walrus is planting its flag most aggressively. The Allium partnership, which brought 65TB of verifiable, institutional-grade blockchain data (Bitcoin, Ethereum, Sui historical records) onto the Walrus platform earlier this year, is the institutional validation the protocol needed: it's not just a toy for Sui-native NFT projects but a viable substrate for serious data workloads.

The Open Questions

None of this is guaranteed. Three things could still derail the thesis:

Sui concentration risk. Walrus is economically tied to Sui through WAL tokenomics and technically tied through object-model integration. If Sui loses relevance as a smart-contract platform — to Aptos, Solana, or an L2 renaissance — Walrus's agent-memory story has to rebuild from a weaker base. So far Sui's developer traction looks healthy, but "so far" is how you describe every crypto platform before its inflection point in either direction.

MemWal adoption curve. The SDK is still in beta. The real test is whether major agent frameworks — ElizaOS, AutoGPT-style systems, the emerging MCP/A2A agent protocols — make MemWal a first-class integration or just one option among several. Without tight framework support, MemWal becomes a niche tool for developers who go out of their way to use Sui.

Commercial centralization pressure. If OpenAI or Anthropic ship a first-party "agent memory" product with tight LLM integration, many developers will take the convenient option over the decentralized one. Walrus's answer has to be that decentralized memory unlocks use cases — agents holding their own assets, multi-party agent collaboration without a trusted operator — that centralized memory cannot. That's true, but the go-to-market requires sustained education.

Building on the New Agentic Stack

The next 18 months will decide whether the agentic Web3 stack ossifies around three or four incumbents or fragments across a dozen competing layers. Walrus's bet is that memory becomes a distinct, claimable layer in that stack — and that the winner of the memory layer is whoever combines programmable ownership, low-latency reads, sustainable economics, and actual developer tooling. By that checklist, it is further ahead than any of its direct competitors today.

For builders who want to ship agent-native products in 2026, the practical recommendation is simple: treat memory as a first-class infrastructure concern, not an afterthought. The agents that remember their users, their strategies, and their mistakes will compound advantages that stateless agents simply cannot.

BlockEden.xyz provides reliable, production-grade Sui RPC infrastructure for teams building on-chain agents and dApps that integrate with Walrus, MemWal, and the broader Sui ecosystem. Explore our Sui API services to build on the same foundations powering the agent-native Web3 stack.


Chainlink Puts €2 Trillion of European Equities On-Chain: Why SIX Group's DataLink Deal Rewires Tokenization

· 10 min read
Dora Noda
Software Engineer

For years, the biggest problem with tokenized European equities was not regulation, liquidity, or custody. It was the data. On-chain builders could tokenize a wrapper of Nestlé or Santander, but they were forced to reference prices from American sources, aggregators, or synthetic feeds of unknown provenance. Any institutional counterparty asked the same question — "whose tape are you quoting?" — and the answer was never satisfying.

On April 16, 2026, that answer changed. SIX, the group that operates SIX Swiss Exchange and BME Spanish Exchanges, announced a direct integration with Chainlink that puts equity reference data for Swiss and Spanish blue chips — a combined €2 trillion in market capitalization — natively on-chain. Available instantly to 2,600+ applications across 75+ public and private blockchains, the deal quietly dismantles one of the last structural barriers to tokenizing European capital markets.

Cysic Venus Open-Sources the ZK Proving Stack Making Ethereum Real-Time Verification Economical

· 11 min read
Dora Noda
Software Engineer

Seven point four seconds. That is how long it now takes to generate a zero-knowledge proof for an entire Ethereum mainnet block on a 24-GPU cluster running Cysic's new Venus prover. A year ago, the same task required 200 high-end cards and ten seconds to hit real-time parity. The collapse of that gap — roughly an order of magnitude in hardware cost while breaking below Ethereum's twelve-second slot time — is the quietest inflection point in crypto infrastructure this quarter. And it is happening precisely as Fusaka's PeerDAS upgrade throws open the data availability floodgates, turning proof generation into the single remaining bottleneck between Ethereum and a hundred-rollup future.

On April 8, 2026, Cysic open-sourced Venus, a hardware-optimized proving backend built on top of ZisK, the zkVM originally developed by Polygon Hermez. The release was not marketed with the usual token unlock choreography. It was dropped on GitHub with a technical note claiming a nine-percent end-to-end improvement over ZisK 0.16.1 and an invitation to contribute. That understatement conceals the real story: ZK proving has quietly crossed from research project to commodity compute, and the infrastructure stack that wins the next two years will not look like what most L2 teams are currently building toward.

The Bottleneck Nobody Priced In

For three years, Ethereum's scaling debate has fixated on data availability. Blobs, EIP-4844, PeerDAS, danksharding — every roadmap conversation assumed that once Ethereum could cheaply post rollup data, L2s would inherit the cost reduction automatically. That assumption quietly broke in late 2025. Fusaka shipped on December 3, 2025, and PeerDAS arrived with it, promising 48 blobs per block and a path to 12,000 transactions per second. Data availability, for the first time in Ethereum's history, stopped being the tightest constraint on the system.

The new tightest constraint is proof generation. ZK rollups need cryptographic attestations that their state transitions are valid. Generating those proofs is expensive compute work that happens off-chain, on specialized hardware. Optimistic rollups, which settle disputes through a challenge window rather than mathematical proof, skip this cost entirely — which is why the top ZK L2s currently sit at roughly $3.3 billion in total value locked, while optimistic rollups have passed $40 billion. The twelve-to-one gap is not a narrative problem. It is a prover economics problem.

Succinct's internal research put the math bluntly. To prove every Ethereum block in real time with SP1 Turbo required a cluster of 160-200 RTX 4090 GPUs — a capital outlay of $300,000 to $400,000 per proving cluster, consuming grid-scale electricity. Any L2 wanting to run its own prover faced a choice between centralizing proof generation with a handful of operators who could afford that stack, or accepting multi-minute proving latencies that broke the user experience. Neither option delivered the "ZK endgame" that Vitalik has been sketching since 2021.

How Venus Actually Works

Venus is interesting less for what it is than for what it represents. Cysic did not invent a new proof system. The underlying cryptography comes from ZisK, which descended from years of work by Jordi Baylina and the Polygon team. What Cysic did was re-architect the execution layer so that proof generation becomes an explicit computation graph — a directed acyclic graph of operations that can be scheduled end-to-end across heterogeneous hardware.

In practice, this means the CPU-GPU synchronization overhead that dominated prior zkVMs gets optimized away at the scheduling layer. The prover does not stop and wait for a GPU kernel to finish before dispatching the next operation. The graph is known in advance, so data movement, memory allocation, and kernel launches can be pipelined. That is where the nine-percent improvement over ZisK 0.16.1 comes from — not a breakthrough in polynomial math, but an engineering win in how the math touches silicon.
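The scheduling idea generalizes beyond Venus. A miniature model (not Cysic's code; the op names and costs are made up) shows why knowing the whole graph up front shrinks total time: an op can start as soon as its dependencies and its own device are free, instead of waiting behind a global barrier after every kernel:

```python
# Toy comparison of barrier-synchronized vs pipelined DAG scheduling.
# Op names, devices, and costs are invented for illustration only.
ops = {                       # op -> (device, cost_ms, dependencies)
    "load":    ("cpu", 2, []),
    "ntt_a":   ("gpu", 5, ["load"]),
    "ntt_b":   ("gpu", 5, ["load"]),
    "hash":    ("cpu", 4, ["load"]),     # independent of the NTTs
    "combine": ("gpu", 3, ["ntt_a", "ntt_b", "hash"]),
}

def makespan(pipelined: bool) -> int:
    finish: dict[str, int] = {}
    device_free = {"cpu": 0, "gpu": 0}
    for name, (dev, cost, deps) in ops.items():   # insertion order is topological
        ready = max((finish[d] for d in deps), default=0)
        if pipelined:
            start = max(ready, device_free[dev])  # wait only for deps + own device
        else:
            start = max(max(finish.values(), default=0), ready)  # global barrier
        finish[name] = start + cost
        device_free[dev] = finish[name]
    return max(finish.values())

print("barrier-synchronized:", makespan(pipelined=False), "ms")  # 19
print("pipelined:           ", makespan(pipelined=True), "ms")   # 15
```

Here the CPU-side hash overlaps the GPU NTTs once the scheduler stops inserting barriers; the single-digit-percent win Venus reports over ZisK 0.16.1 is this class of saving at production scale.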

More importantly, the same computation graph runs on FPGAs and, eventually, on Cysic's dedicated ZK ASIC. The company has publicly claimed its ASIC can perform 1.33 million Keccak hash function evaluations per second, a hundred-fold improvement over typical GPU workloads, with roughly fiftyfold better energy efficiency. Internal estimates suggest a single purpose-built ZK Pro unit could replace roughly 50 GPUs while drawing a fraction of the power. If those numbers hold in production, the economics of proving shift from renting warehouse space full of RTX cards to operating a compact rack of specialized chips.

The Race to Sub-Twelve-Second Proving

Venus did not arrive in a vacuum. Over the last twelve months, three teams have converged on the same milestone: proving Ethereum blocks in under the twelve-second slot time that defines real-time verification.

Succinct hit it first in public. SP1 Hypercube, announced in May 2025, proved 93 percent of a 10,000-block mainnet sample in real time using a 200-card RTX 4090 cluster. A November 2025 revision pushed the success rate to 99.7 percent using just sixteen RTX 5090 GPUs — a hardware cost reduction of roughly 90 percent in six months. The system is now live on Ethereum mainnet, producing proofs for every block as they are mined.

Cysic's number is even tighter on cost. Seven point four seconds with 24 GPUs puts end-to-end proving comfortably inside the slot time on commodity hardware. The current Venus release is open source, not audited for production, and still under active development. But the engineering trajectory suggests that a sub-ten-second proof on a consumer-grade cluster is now a matter of software tuning rather than fundamental architecture.

Per-proof costs have collapsed in lockstep. Industry benchmarks place the current best-case cost at roughly two cents per Ethereum block proof using 16x RTX 5090 hardware. The target for mass adoption is below one cent. A year ago, that same proof cost closer to a dollar. Three years ago, it was literally uneconomic — the gas fees on the settled rollup would not cover the prover's electricity bill. This is the kind of cost curve that quietly kills entire product categories, and it is accelerating.

The Marketplace Wars Are Already Here

Cheap, fast proving does not automatically become accessible. Someone has to operate the hardware, match demand, price proof jobs, and settle payments. Three different architectural bets are now competing for that middleware layer.

Boundless, launched on mainnet by RISC Zero in September 2025, runs an auction marketplace. GPU operators bid to produce proofs, and the system routes work to the lowest cost qualified prover. The model borrows from spot compute markets like AWS Spot Instances and promises to drive proof costs toward marginal hardware cost. Boundless recently added Bitcoin settlement, which lets Ethereum and Base proofs verify on the Bitcoin base layer — a niche but meaningful expansion of where ZK attestations can live.
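The auction mechanic can be sketched in a few lines. This is not Boundless's actual matching logic, just the generic reverse-auction shape: among provers that meet a job's latency requirement, the lowest bid wins.

```python
# Minimal reverse auction for proof jobs (illustrative, not Boundless code).
from dataclasses import dataclass

@dataclass
class Bid:
    prover: str
    price_cents: float      # price to produce the proof
    latency_s: float        # promised proving latency

def match(bids: list[Bid], max_latency_s: float) -> Bid:
    """Route the job to the cheapest prover that meets the latency budget."""
    qualified = [b for b in bids if b.latency_s <= max_latency_s]
    if not qualified:
        raise ValueError("no prover meets the latency requirement")
    return min(qualified, key=lambda b: b.price_cents)

bids = [
    Bid("gpu-farm-a", price_cents=2.1, latency_s=9.0),
    Bid("gpu-farm-b", price_cents=1.8, latency_s=14.0),  # cheap but too slow
    Bid("asic-rack",  price_cents=1.9, latency_s=6.5),
]
winner = match(bids, max_latency_s=12.0)  # must fit Ethereum's slot time
print(winner.prover, winner.price_cents)
```

The interesting design tension is visible even in the toy: a pure spot market optimizes price, but a consumer-facing rollup also needs the latency constraint enforced, which is the gap Succinct's curated network targets.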

Succinct's Prover Network takes a different bet. Rather than pure auction, it operates a routing protocol with approved high-performance provers handling specific workloads. Cysic joined the network as a multi-node prover operator, running GPU clusters tuned for SP1 Hypercube production traffic. The arrangement suggests Succinct sees value in reliability and latency guarantees that a pure spot market cannot provide for consumer-facing rollups.

Cysic itself launched its mainnet and CYS token on December 11, 2025, and has since processed over ten million ZK proofs integrated with Scroll, Aleo, Succinct, ETHProof, and others. The network's pitch is "ComputeFi" — turning proving capacity into a liquid, onchain asset that operators can tokenize and stake. Whether this becomes a third major marketplace or settles into a supplier role for the two larger networks is the open question of 2026.

Why This Matters for Rollup Economics

The punchline sits three layers down from the infrastructure news, in the unit economics of actual L2s. Today, a zkEVM rollup spends a meaningful fraction of its per-transaction costs on proof generation. Those costs get passed through to users as gas fees or eaten by the rollup operator as margin. Either way, they widen the gap between what a ZK rollup can charge and what an optimistic rollup charges for the same transaction.

If proof costs drop to sub-cent levels and proving latency fits inside Ethereum's slot time, that gap closes. A ZK rollup stops needing to charge a security premium. The user-facing experience becomes indistinguishable from an optimistic rollup — except that withdrawals settle in minutes rather than the seven-day challenge window that still friction-taxes every optimistic bridge.

That flip matters structurally because the largest pools of institutional liquidity still cite the optimistic-rollup withdrawal delay as a reason to stay on L1. Real-time ZK proving with marketplace-driven pricing removes the last functional argument against ZK-first rollup architecture. Every L2 team currently shipping an optimistic stack will face a serious technical review in 2026. Several will migrate, or at minimum ship a ZK fork of their sequencer.

What Still Might Break

The Venus release is honest about its limitations. The code has not been audited for production use. Running unaudited prover software in a live rollup is the kind of decision that sinks careers if a soundness bug creates an invalid proof the verifier accepts. Expect production deployment to lag the open-source release by months, not weeks.

The hardware story also concentrates risk. If ASIC-based proving delivers the promised fiftyfold efficiency gain, a handful of fabricators will dominate prover hardware the way Bitmain dominated Bitcoin mining. That dynamic cuts against the decentralization narrative that justified ZK rollups in the first place. Cysic's ASIC roadmap is an answer to a compute problem, but it is a fresh question about who owns the chips that secure the world's largest smart contract platform.

Finally, real-time proving only matters if the rest of the stack keeps up. Data availability sampling via PeerDAS needs to actually work at production scale, not just in testnet benchmarks. Sequencer decentralization remains an unresolved problem across every major L2. Proving is necessary but not sufficient for the endgame, and the industry has a history of declaring victory on one layer while quietly papering over breakdowns in adjacent layers.

The Near-Term Inflection

Zoom out and the pattern becomes clear. In May 2025, real-time Ethereum proving required a $400,000 GPU cluster and a nine-figure research budget. In April 2026, it runs on 24 commodity cards with open-source software. The next eighteen months will compress the cost curve further — toward ASIC economics, toward cent-level per-proof pricing, toward proof generation as a utility service rather than a bespoke infrastructure project.

For builders, the practical implication is that ZK-based architectures which were uneconomic in 2024 are worth re-evaluating now. Privacy-preserving transaction protocols, verifiable AI inference, cross-chain messaging with mathematical rather than multisig security, onchain identity with zero-knowledge credential disclosure — all of these sat behind a prover cost wall that is no longer there.

The Cysic Venus release, read alone, is a modest engineering update to an open-source proving backend. Read in the context of Succinct's Hypercube shipping to mainnet, Boundless running live proof auctions, and Fusaka's PeerDAS clearing the data availability bottleneck — it is the point where ZK infrastructure stops being the constraint and starts being the substrate. Every rollup thesis written before that transition needs a rewrite.

BlockEden.xyz provides enterprise-grade RPC and data infrastructure across 27+ chains including Ethereum L2s, Scroll, and Aptos. As real-time proving reshapes the L2 landscape, explore our API marketplace to build on reliable foundations for the ZK-native era.



Ethereum's Glamsterdam Upgrade: How ePBS and EIP-7732 End the Flashbots Era and Rewrite MEV

· 9 min read
Dora Noda
Software Engineer

Two companies currently decide which transactions land on Ethereum. Titan Builder and Beaverbuild together construct over half of mainnet blocks, and adding Rsync and Flashbots pushes the top four's combined share to roughly 85%. For a network whose brand rests on decentralization, that is an uncomfortable number — and it is about to change.

The Glamsterdam hard fork, scheduled for the first half of 2026, brings Enshrined Proposer-Builder Separation (ePBS) — formalized as EIP-7732 — into Ethereum's consensus layer. After three years of MEV-Boost running as off-chain middleware, block production is finally being absorbed into the protocol itself. The winners and losers of that shift will define the next cycle of Ethereum infrastructure.

The Duopoly Problem Glamsterdam Is Trying To Solve

To understand why ePBS matters, start with the market it is replacing.

MEV-Boost, the relay system Flashbots shipped after The Merge, was meant to be a temporary fix. It let validators outsource block construction to specialized builders who could squeeze more value out of each slot, then redistribute that value back to the proposer. It worked almost too well. Within two years, over 90% of Ethereum blocks were built via MEV-Boost, and the construction market calcified around a handful of players.

The 2025 numbers from relayscan.io tell the story bluntly:

  • Titan Builder: ~46.5% of blocks, ~$19.7M profit
  • Rsync Builder: ~15.6%
  • Flashbots: ~12.8%
  • Beaverbuild: ~9.4%

A Herfindahl-Hirschman Index reading near 3,892 places the builder market well beyond the U.S. Department of Justice's threshold of 1,800 for "highly concentrated." Titan's profit margin under exclusive order flow deals reportedly exceeds 17%, while Flashbots — which originally seeded the entire MEV-Boost ecosystem — barely breaks even on block building today.
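The HHI arithmetic is easy to check against the shares listed above: the index is the sum of squared market shares in percentage points, and the four named builders alone already clear the 1,800 line (the 3,892 figure is relayscan's full-market reading, which includes the remaining builders and a different snapshot).

```python
# HHI = sum of squared market shares (percentage points), using the four
# builder shares quoted in the table above.
shares = {"Titan": 46.5, "Rsync": 15.6, "Flashbots": 12.8, "Beaverbuild": 9.4}
hhi_top4 = sum(s ** 2 for s in shares.values())
print(round(hhi_top4))          # contribution of the top four alone
assert hhi_top4 > 1800          # already "highly concentrated" by DOJ standards
```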

That is the market ePBS aims to dismantle at the protocol level.

What EIP-7732 Actually Changes

EIP-7732 is deceptively surgical. It is a consensus-layer-only upgrade that decouples execution validation from consensus validation, both logically and temporally. In plain terms, the proposer no longer needs to see the full block's execution payload before committing to it.

Here is the new flow:

  1. Builders assemble execution payloads off-chain and broadcast SignedExecutionPayloadBid commitments containing only a blockhash and a payment value.
  2. The proposer selects the highest bid and embeds the commitment in the beacon block — without seeing the transactions inside.
  3. A new subset of validators, the Payload Timeliness Committee (PTC), attests whether the builder revealed the committed payload on time with the correct blockhash.
  4. Execution validation is postponed until the next slot's beacon block validation.
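The commit-reveal logic in steps 1 through 4 reduces to a hash check, which a toy model makes explicit. Real ePBS commitments are BLS-signed consensus objects with slot-timing rules; a bare sha256 stands in here, purely to show that the proposer can commit to a payload it has never seen:

```python
# Toy model of the ePBS bid-commit-reveal flow (sha256 stands in for the
# real signed commitment; no consensus timing rules are modeled).
import hashlib

def blockhash(payload: bytes) -> str:
    return hashlib.sha256(payload).hexdigest()

# 1. Builder assembles a payload off-chain, broadcasts only hash + bid value.
payload = b"txs: [swap, liquidation, transfer, ...]"
bid = {"blockhash": blockhash(payload), "value_eth": 0.42, "builder": "titan"}

# 2. Proposer embeds the winning commitment without seeing the transactions.
committed = bid["blockhash"]

# 3. Builder reveals; the PTC attests that the reveal matches the commitment.
def ptc_attest(revealed: bytes, commitment: str) -> bool:
    return blockhash(revealed) == commitment

assert ptc_attest(payload, committed)               # honest reveal passes
assert not ptc_attest(b"different txs", committed)  # swapped payload fails
```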

The critical engineering insight is that the full execution payload no longer rides on the consensus critical path. Network propagation speeds up, validators shoulder less computational load per slot, and — the part every MEV researcher has been waiting for — the relay becomes redundant. The builder commits cryptographically; the protocol itself enforces the promise.

Why This Guts The Relay Business

Today, relays exist because proposers cannot trust builders directly. A relay like Flashbots or Titan Relay holds the full block, verifies it, and only reveals it to the proposer after the proposer signs the header — preventing the proposer from stealing the builder's MEV.

ePBS makes that trust relationship native to the protocol. The PTC handles timeliness enforcement. The consensus rules handle payment. The entire middleware layer Flashbots built to coordinate block building — the most important piece of Ethereum infrastructure outside the client software itself — becomes economically unnecessary.

This is why the CoinDesk coverage framed Glamsterdam as a fight about MEV fairness, not just performance. The question is not whether MEV disappears. MEV is a mathematical consequence of ordered transactions with public mempools. The question is who captures it and on what terms.

The Censorship Math Changes Too

The relay oligopoly did not just concentrate power; it concentrated compliance. At peak, roughly 72% of MEV-Boost blocks were classified as OFAC-compliant because the largest relays filtered sanctioned addresses. That number has since declined to around 30% of relayed blocks as non-censoring relays gained share, but the architecture still gives a handful of US-based companies veto power over which Ethereum transactions get proposed.

ePBS does not mandate censorship resistance. But by removing the relay bottleneck, it removes the natural enforcement point. Builders who censor now have to compete against builders who do not on raw auction price — and on a trustless bid-reveal market, price tends to win. Expect the OFAC-compliant share to drop further after Glamsterdam ships, simply because the easiest place to impose policy has been eliminated.

Jito, Base, and Three Ways To Price A Block

Ethereum is not the first chain to confront MEV markets, and it is worth comparing ePBS against the two other models that dominate 2026.

Solana's Jito approach. Over 94% of Solana stake runs the Jito-Solana client. Tips flow directly to validators through an explicit auction — no relay, no builder-proposer split. MEV contributes 15-25% of total validator rewards, and the connection to stakers via JitoSOL is direct. The upside is transparency; the downside is that Solana's leader schedule concentrates MEV extraction windows in ways that still produce sandwich attacks on DEX traders.

Base's sequencer model. Coinbase operates the single sequencer on Base and captures sequencer revenue directly. There is no MEV auction to third parties because there are no third parties. This maximizes revenue capture for the L2 operator but sacrifices the decentralization story entirely — a tradeoff that works for Coinbase-scale balance sheets and nobody else.

Ethereum's ePBS. A trustless bid-reveal auction between builders and proposers, mediated by consensus. In theory this combines Jito's transparency with the credibly neutral distribution Ethereum's ideology requires. In practice, nobody knows yet whether builder concentration simply reasserts itself under new rules, or whether the removal of exclusive-order-flow agreements genuinely reopens the market.

The $500M Question For DeFi Users

Researchers estimate DeFi users lose north of $500 million annually to sandwich attacks, frontrunning, and JIT liquidity extraction — with sandwich attacks alone responsible for 51% of MEV volume in 2025. EigenPhi's data from late 2025 found over 72,000 sandwich attacks targeting 35,000 victims on Ethereum in a single 30-day window. A single Uniswap v3 stablecoin swap in March 2025 saw $220,764 of USDC compressed into $5,271 of USDT — a 98% loss to the victim.
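The arithmetic on that Uniswap trade is brutal enough to check by hand:

```python
# Victim's realized loss on the March 2025 Uniswap v3 sandwich cited above.
swap_in_usdc = 220_764
swap_out_usdt = 5_271

loss_pct = (1 - swap_out_usdt / swap_in_usdc) * 100
print(f"{loss_pct:.1f}%")  # 97.6%, rounded to 98% in the text
```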

Does ePBS reduce this? Directly, no. The attack surface — public mempools plus arbitrary transaction ordering — remains. But ePBS reshapes the ecosystem around MEV protection:

  • Private mempool services like MEV-Blocker ($5B+ in protected transactions routed historically) and CowSwap's coincidence-of-wants batching retain their value, because the protocol still does not hide user intent.
  • Encrypted mempools like EIP-8105's "Universal Enshrined Encrypted Mempool" become the logical follow-on proposal, tackling the order visibility that ePBS leaves untouched.
  • SUAVE and decentralized sequencing remain relevant as application-layer MEV protection rather than infrastructure monopolies.

The short version: ePBS fixes who gets paid for ordering transactions, not whether users can be exploited through ordering. The second fight is just beginning.

What Builders Should Actually Watch

Three signals will tell you whether ePBS delivers on its decentralization promise or quietly reproduces the old oligopoly:

  1. HHI after six months. If the builder HHI remains above 2,500 post-ePBS, the concentration problem was about economies of scale, not middleware, and no amount of protocol surgery will help. If it falls below 1,800, ePBS worked as advertised.

  2. Exclusive order flow agreements. Current builder margins depend on private deals with Uniswap, Banana Gun, and other high-value order flow sources. ePBS does not directly outlaw these, but it changes the leverage. Watch whether flagship integrations migrate to BuilderNet-style open consortia or stay exclusive.

  3. Non-censoring block share. Post-Glamsterdam, the relay-based censorship chokepoint is gone. If OFAC-compliance share stays above 50% anyway, it reveals that compliance pressure on Ethereum is structural rather than infrastructural.

The Infrastructure Reality Check

Glamsterdam will reshape how Ethereum orders transactions, but it will not touch what most infrastructure providers actually do: run nodes, serve RPCs, index state. The block-building layer has always been a rarefied slice of the stack. For developers building on top of Ethereum, the practical impact of ePBS is indirect — slightly faster propagation, modestly more credible neutrality, and a likely shift in which MEV protection services matter most.

BlockEden.xyz provides enterprise-grade API infrastructure for Ethereum, Sui, Aptos, and 20+ other chains, with SLA-backed RPC endpoints that insulate your application from consensus-layer changes. Explore our API marketplace to build on infrastructure designed to outlast any single upgrade.


Google's Quantum AI Whitepaper Maps Five Attack Paths That Put $100B of Ethereum at Risk

· 12 min read
Dora Noda
Software Engineer

One key cracked every nine minutes. The top 1,000 Ethereum wallets emptied in under nine days. A 20-fold collapse in the qubit count needed to break the cryptography that secures more than $100 billion of on-chain value. These are not the projections of a doomsday Twitter thread — they come from a 57-page whitepaper Google Quantum AI published on March 30, 2026, co-authored with Ethereum Foundation researcher Justin Drake and Stanford cryptographer Dan Boneh.

For a decade, "quantum risk" lived in the same intellectual neighborhood as asteroid strikes — real, catastrophic, but distant enough that no one had to act. The Google paper relocated the threat. It mapped five concrete attack paths against Ethereum, named the wallets, named the contracts, and gave engineers a number — fewer than 500,000 physical qubits — that maps directly onto the published roadmaps of IBM, Google, and a half-dozen well-funded startups. Q-Day, in other words, just acquired a calendar invite.

A 57-Page Paper That Changes the Threat Model

The paper, titled "Securing Elliptic Curve Cryptocurrencies against Quantum Vulnerabilities," is the first time a major quantum hardware lab has done the unglamorous engineering work of translating Shor's algorithm from a 1994 theoretical attack into a step-by-step blueprint against the elliptic-curve discrete logarithm problem (ECDLP) that secures Bitcoin, Ethereum, and virtually every chain that signs transactions with secp256k1 or secp256r1.

Three things make the paper land harder than prior estimates.

First, the qubit count. Earlier academic work pegged the resource requirement for breaking 256-bit ECDLP at multiple millions of physical qubits. The Google authors knock that down to fewer than 500,000 — a 20-fold reduction driven by improved circuit synthesis, better error-correction overhead, and tighter routing of magic states. IBM has publicly committed to a 100,000-qubit machine by 2029. Google has not published a comparable target, but its in-house roadmap is widely understood to be similar in slope. Half a million qubits is no longer a number that requires hand-waving toward the 2050s.

Second, the runtime. The paper estimates that once a sufficient machine exists, recovering a single private key from a public key takes on the order of nine minutes of quantum runtime — not days, not hours. That number matters enormously, because it determines how many high-value targets an attacker can drain inside the window between detection and response.

Third, and most consequential for Ethereum specifically, the authors do not stop at "ECDSA is broken." They walk through the protocol stack and identify five distinct attack surfaces, each with named victims.

The Five Attack Paths Against Ethereum

The paper organizes Ethereum's quantum exposure into five vectors, deliberately avoiding the lazy framing of "all crypto dies on the same day."

1. Externally Owned Account (EOA) compromise. Once an Ethereum address has signed even a single transaction, its public key is permanent and visible on-chain. A quantum attacker derives the private key in roughly nine minutes, then drains the wallet. Google's analysis identifies the top 1,000 wallets by ETH balance — collectively holding about 20.5 million ETH — as the most economically rational targets. At nine minutes per key, an attacker clears the entire list in under nine days.
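The drain-time estimate follows directly from the paper's per-key figure; raw quantum runtime alone comes to just over six days, which is where the "under nine days" headline (runtime plus operational overhead) comes from:

```python
# Time to work through the top 1,000 Ethereum wallets at ~9 minutes of
# quantum runtime per key, per the Google paper's estimate.
minutes_per_key = 9
targets = 1_000

days = (minutes_per_key * targets) / (60 * 24)
print(days)  # 6.25 days of raw runtime
```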

2. Admin-controlled smart contract takeover. Ethereum's stablecoin economy and most production DeFi protocols rely on multisigs, upgrade keys, and minter roles controlled by EOAs. The paper enumerates 70-plus admin-controlled contracts, including the upgrade or minter keys behind major stablecoins. Compromising those keys does not just steal a balance — it lets the attacker mint, freeze, or rewrite the contract logic. Google estimates roughly $200 billion in stablecoins and tokenized assets sit downstream of these vulnerable keys.

3. Proof-of-stake validator key compromise. Ethereum's consensus layer uses BLS signatures, which are also based on elliptic-curve assumptions and equally broken by Shor's algorithm. An attacker who recovers enough validator private keys can, in principle, equivocate, finalize conflicting blocks, or stall finality. The exposure here is not stolen ETH — it is the integrity of the chain itself.

4. Layer 2 settlement compromise. The paper extends the analysis to major rollups. Optimistic rollups depend on EOA-signed proposer and challenger keys; ZK rollups depend on operator keys for sequencing and proving. Compromising those keys does not break the underlying validity proofs, but it does let an attacker steal sequencer fees, censor exits, or — in the worst case — rug the bridge that holds canonical L2 deposits.

5. Permanent forgery of historical data availability. This is the path that cryptographers find most disturbing. The original Ethereum trusted setup (and the KZG ceremony powering EIP-4844 blobs) relies on assumptions that a sufficiently powerful quantum machine can break by reconstructing setup secrets from public artifacts. The result is not theft — it is a permanent ability to forge historical state proofs that look valid forever. There is no rotation that fixes data already published.

The five paths collectively put more than $100 billion at immediate risk, and an order of magnitude more at structural risk if confidence in chain integrity collapses.

Ethereum Is More Exposed Than Bitcoin

A subtle but important conclusion of the paper: Ethereum's quantum exposure runs deeper than Bitcoin's, despite both chains using the same secp256k1 curve.

The reason is the account model. Bitcoin's UTXO design, particularly post-Taproot, supports addresses derived from a hash of the public key — meaning the public key is only revealed at spend time. A user who never reuses an address has a one-shot exposure window measured in the seconds between broadcast and confirmation. Funds parked in unspent, untouched addresses are quantum-safe by construction.

Ethereum has no such property. The moment an EOA signs its first transaction, its public key is on-chain forever. There is no "fresh address" pattern that hides it. A wallet that has transacted even once is a static target whose vulnerability does not decay over time. The 20.5 million ETH in the top 1,000 wallets is not just theoretically exposed — it is permanently fingerprinted on a public ledger waiting for a sufficiently powerful machine.

Worse, Ethereum cannot rotate keys without abandoning the account. Sending funds to a new address creates a new account with a new public key, but anything still associated with the old address — ENS names, contract permissions, vesting positions, governance allowlists — does not move with the funds. The migration cost is not just the gas to move tokens; it is the cost of unwinding every relationship the old address has accumulated.

The 2029 Deadline and Ethereum's Multi-Fork Roadmap

In parallel with the Google paper, the Ethereum Foundation launched pq.ethereum.org in March 2026 as the canonical hub for post-quantum research, the roadmap, open-source client repos, and weekly devnet results. More than 10 client teams are now running interoperability devnets focused on post-quantum primitives, and the community has converged on a target of completing L1 protocol-layer upgrades by 2029 — the same year Google has set for migrating its own authentication services off ECDSA.

The roadmap is staged across four upcoming hard forks rather than one big-bang fork. Roughly:

  • Fork 1 — Post-Quantum Key Registry. A native registry that lets accounts publish a post-quantum public key alongside their ECDSA key, enabling opt-in PQ co-signing without breaking existing tooling.
  • Fork 2 — Account Abstraction Hooks. Building on EIP-8141's "Frame Transaction" abstraction, accounts can specify validation logic that no longer assumes ECDSA, providing a native off-ramp toward lattice-based schemes such as ML-DSA (Dilithium) or hash-based SLH-DSA (SPHINCS+).
  • Fork 3 — PQ Consensus. Validator BLS signatures are replaced with a post-quantum aggregation scheme, the largest engineering lift in the entire roadmap because of the signature-size implications for block propagation.
  • Fork 4 — PQ Data Availability. A new trusted setup or transparent setup for blob commitments that does not depend on ECC assumptions, closing the historical-forgery vector.
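The opt-in co-signing idea behind the first two forks can be sketched abstractly: the account keeps its legacy key for compatibility but is only quantum-safe once a registered post-quantum key must also sign. The sketch below uses HMAC tags as stand-ins for real signatures; the production schemes would be ECDSA alongside ML-DSA or SLH-DSA:

```python
# Conceptual sketch of hybrid validation: a transaction is valid only if
# BOTH the legacy key and the registered post-quantum key authorize it.
# HMAC stands in for real signature schemes purely for illustration.
import hashlib
import hmac

def sign(key: bytes, tx: bytes) -> bytes:
    return hmac.new(key, tx, hashlib.sha256).digest()

def hybrid_valid(tx: bytes, legacy_key: bytes, pq_key: bytes,
                 sig_legacy: bytes, sig_pq: bytes) -> bool:
    return (hmac.compare_digest(sig_legacy, sign(legacy_key, tx))
            and hmac.compare_digest(sig_pq, sign(pq_key, tx)))

tx = b"transfer 1 ETH"
legacy_key, pq_key = b"legacy-secret", b"pq-secret"
ok = hybrid_valid(tx, legacy_key, pq_key,
                  sign(legacy_key, tx), sign(pq_key, tx))
print(ok)  # True
```

A forged or missing PQ signature fails validation even when the legacy signature checks out, which is exactly the property the registry fork is meant to make available before ECDSA is retired.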

Vitalik Buterin signaled the urgency in late February 2026 when he wrote that "validator signatures, data storage, accounts, and proofs all need to be updated" — naming all four forks in a single sentence and implicitly conceding that piecemeal upgrades will not suffice.

The challenge is not the cryptography. NIST has already standardized ML-KEM, ML-DSA, and SLH-DSA. The challenge is rolling those primitives through a live $300B+ network without breaking thousands of dapps that hard-code ECDSA assumptions, and without leaving billions of dollars of dormant ETH stranded in wallets whose owners never migrate.

The Frozen-or-Stolen Dilemma

Both Ethereum and Bitcoin face a governance question that no purely technical roadmap resolves: what happens to coins in vulnerable addresses whose owners never migrate?

The Ethereum Foundation's own FAQ frames the choice in plain terms: do nothing, or freeze. Doing nothing means that on Q-Day, an attacker drains every dormant address with a known public key — including the genesis-era wallets, the legacy ICO buyers, the lost-key holders, and a meaningful slice of Vitalik's own historical contributions to public goods funding. Freezing means social-consensus action to invalidate withdrawals from any address that has not migrated by a deadline.

Bitcoin's BIP 361, "Post Quantum Migration and Legacy Signature Sunset," lays out the same trilemma in a three-phase framework. Co-author Ethan Heilman has publicly estimated that a full Bitcoin migration to a quantum-resistant signature scheme would take seven years from the day rough consensus forms — which means BIP 361 needs to be substantively merged in 2026 to hit the 2033 horizon, and probably much sooner to hit 2029.

Neither chain has a precedent for mass coin invalidation. Ethereum did roll back the DAO hack in 2016, but that was a single-event reversal, not the deliberate freezing of millions of unrelated wallets based on cryptographic posture. The decision will inevitably read as a referendum on whether immutability or solvency is the chain's deeper commitment.

What This Means for Builders Right Now

The 2029 deadline can feel comfortably distant, but the decisions that determine whether a project is ready or scrambling get made in 2026 and 2027. A few practical implications surface immediately.

Smart contract architects should audit for ECDSA assumptions. Any contract that hard-codes ecrecover, embeds an immutable signer address, or depends on EOA-signed proposer keys needs an upgrade path. Contracts deployed without admin keys today look elegant; in a post-quantum world, they may look unrecoverable.
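A first pass at that audit can be as simple as scanning the codebase for hard-coded signature-recovery calls. A minimal sketch; the contracts/ path is hypothetical:

```python
# Flag Solidity files that call ecrecover or OpenZeppelin-style
# ECDSA.recover -- each hit marks validation logic that assumes
# elliptic-curve signatures and needs a post-quantum upgrade path.
import re
from pathlib import Path

ECDSA_PATTERN = re.compile(r"\becrecover\s*\(|ECDSA\.recover\b")

def find_ecdsa_assumptions(root: str) -> list[tuple[str, int]]:
    root_path = Path(root)
    if not root_path.is_dir():
        return []
    hits = []
    for path in root_path.rglob("*.sol"):
        for lineno, line in enumerate(path.read_text().splitlines(), 1):
            if ECDSA_PATTERN.search(line):
                hits.append((str(path), lineno))
    return hits

for file, lineno in find_ecdsa_assumptions("contracts/"):
    print(f"{file}:{lineno} hard-codes an ECDSA assumption")
```

A clean scan doesn't prove quantum readiness, but a dirty one gives an upgrade worklist before any fork ships.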

Custodians need to begin key-rotation hygiene now. A custody provider with billions under management cannot rotate every wallet in a single Q-Day weekend. Rotation, segregation by exposure tier, and pre-positioned PQ-ready cold storage are 2026 problems, not 2028 ones.

Bridge operators face the highest urgency. Bridges concentrate value behind a small number of multisig keys. The first economically rational quantum attack will not target a randomly chosen wallet — it will target the most valuable single key in the ecosystem. Bridges should be the first to implement hybrid PQ + ECDSA signing.

Application teams should track the four-fork roadmap. Each Ethereum hard fork in the PQ sequence will introduce new transaction types and validation semantics. Wallets, indexers, block explorers, and node operators that lag the upgrade window will degrade gracefully if they planned for it and break catastrophically if they did not.

BlockEden.xyz operates production RPC and indexing infrastructure across Ethereum, Sui, Aptos, and a dozen other chains, and tracks each network's post-quantum migration roadmap so application developers don't have to. Explore our API marketplace to build on infrastructure designed to survive the next decade of cryptographic transitions, not just the current one.

The Quiet Revolution in Threat Modeling

The deepest contribution of the Google paper may be sociological rather than technical. For ten years, "quantum-resistant" was a marketing claim that mostly attached to projects no one used. The serious chains treated PQ migration as a problem for the next generation of researchers. The 57 pages from Google, Justin Drake, and Dan Boneh shifted that posture in a single publication.

Three quantum-cryptography papers have landed in three months. A consensus has formed that the resource gap between current quantum hardware and a cryptographically relevant machine is closing faster than the gap between current chain protocols and post-quantum readiness. The intersection of those two curves — somewhere between 2029 and 2032, depending on whose estimate proves correct — is the most important deadline crypto infrastructure has ever faced.

The chains that treat 2026 as a year for serious engineering work, not vague reassurance, will still be standing on the other side. The ones that wait for the first headline about a stolen Vitalik wallet will not have time to react.


Delete Three Forever: Why Only One of MegaETH, Monad, Eclipse, or Berachain Will Matter by 2027

· 11 min read
Dora Noda
Software Engineer

Four chains. One seat at the table. In the last eighteen months, Monad, MegaETH, Eclipse, and Berachain have each promised to make Ethereum feel instant — and each has raised hundreds of millions to prove it. By Q2 2026, the marketing has cooled and the metrics are talking. Monad's TVL cleared $355M while its daily fees struggled to break $3,000. MegaETH shipped a mainnet built for 100,000 TPS and spent its first day averaging 29. Eclipse cut 65% of staff and watched ecosystem TVL collapse 95% from peak. Berachain's flagship integration, Dolomite, quietly trimmed its DAO-governed BERA allocation from 35% to 20%.

Pendle's Quiet Coup: How a $9B Yield Protocol Built DeFi's First Real Bond Market

· 10 min read
Dora Noda
Software Engineer

On a Tuesday in January 2026, Pendle's smart contract repository went read-only. No press release. No confetti. Just a GitHub commit flipping the flag — the protocol-level equivalent of a bond issuer locking the indenture and walking away from the notary's office. For a DeFi sector that ships breaking upgrades every quarter, the move was almost brutal in its confidence: we're done iterating on the primitive; now we scale it.

That quiet switch is arguably the most important infrastructure signal of 2026's fixed-income thesis. Because while everyone was watching BlackRock's BUIDL and Ondo's OUSG stretch tokenized Treasuries past $10 billion, Pendle was solving a different problem entirely — not how to wrap a T-bill in an ERC-20, but how to turn any on-chain yield into a zero-coupon bond. The result is the first venue where a crypto-native asset like stETH trades with the same rate-locking, duration-matching, and institutional-friendly properties that TradFi has enjoyed for five decades.

Bitcoin's $1.3T Quantum Clock: The 9-Minute ECDSA Break and BIP-360 Race to Save 6.9M BTC

· 11 min read
Dora Noda
Software Engineer

Nine minutes. That is the window a 57-page Google Quantum AI paper says a future quantum computer would need to reverse-engineer a Bitcoin private key from an exposed public key — short enough to fit inside a single block confirmation, long enough to rewrite the risk profile of the entire $1.3 trillion network. The paper, co-authored with researchers from Stanford and the Ethereum Foundation and published on March 30, 2026, did something subtler than predict the apocalypse. It shrank the number that matters. The resources needed to break ECDSA dropped by a factor of 20 compared to prior estimates. Google now internally targets post-quantum migration by 2029.

The $0.000001 Transaction That Changes Everything: Circle's USDC Nanopayments and the Machine Economy

· 9 min read
Dora Noda
Software Engineer

When a robot dog autonomously identified its drained battery, located the nearest charging station, and paid for its own electricity with a fraction of a cent in USDC — all without human involvement — it wasn't a science fiction demo. It was February 2026, and the machine economy had quietly arrived.

Circle's launch of USDC Nanopayments on testnet in March 2026 formalized what that robot dog demonstrated in the wild: for the first time, the financial plumbing exists to let machines pay machines, at costs so small they barely register as money at all. Transfers as tiny as $0.000001 — one millionth of a dollar — with zero gas fees. The economics of the machine economy suddenly work.