
9 posts tagged with "Scalability"

Blockchain scaling solutions and performance


Boundless by RISC Zero: Can the Decentralized Proof Market Solve ZK's $97M Bottleneck?

· 9 min read
Dora Noda
Software Engineer

Zero-knowledge rollups were supposed to be the future of blockchain scaling. Instead, they've become hostages to a $97 million centralized prover market where a handful of companies extract 60-70% of fees — while users wait minutes for proofs that should take seconds.

Boundless, RISC Zero's decentralized proof marketplace that launched on mainnet in September 2025, claims to have cracked this problem. By turning ZK proof generation into an open market where GPU operators compete for work, Boundless promises to make verifiable computation "as cheap as execution." But can a token-incentivized network really break the centralization death spiral that's kept ZK technology expensive and inaccessible?

The $97 Million Bottleneck: Why ZK Proofs Are Still Expensive

The promise of zero-knowledge rollups was elegant: execute transactions off-chain, generate a cryptographic proof of correct execution, and verify that proof on Ethereum for a fraction of the cost. In theory, this would deliver Ethereum-level security at sub-cent transaction costs.

Reality proved messier.

A single ZK proof for a batch of 4,000 transactions takes two to five minutes to generate on a high-end A100 GPU, costing $0.04 to $0.17 in cloud computing fees alone. That's before factoring in the specialized software, engineering expertise, and redundant infrastructure needed to run a reliable proving service.

The result? Over 90% of ZK-L2s rely on a handful of prover-as-a-service providers. This centralization introduces exactly the risks that blockchain was designed to eliminate: censorship, MEV extraction, single points of failure, and web2-style rent extraction.

The Technical Challenge

The bottleneck isn't network congestion — it's the mathematics itself. ZK proving relies on multi-scalar multiplications (MSMs) and number-theoretic transforms (NTTs) over elliptic curves. These operations are fundamentally different from the matrix math that makes GPUs excellent for AI workloads.

After years of MSM optimization, NTTs now account for up to 90% of proof generation latency on GPUs. The cryptography community has hit diminishing returns on software optimization alone.
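
To make the bottleneck concrete, here is a toy, unoptimized NTT in TypeScript. The parameters (p = 257 with a 16th root of unity) are illustrative only; production provers work over much larger fields and use O(n log n) FFT-style algorithms on GPUs, but the core operation is the same modular polynomial evaluation shown here.

```typescript
// Naive O(n^2) number-theoretic transform over a tiny prime field, for illustration only.
const P = 257n;     // toy prime modulus
const OMEGA = 249n; // primitive 16th root of unity mod 257 (3^((257-1)/16) mod 257)

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let result = 1n;
  base %= mod;
  while (exp > 0n) {
    if (exp & 1n) result = (result * base) % mod;
    base = (base * base) % mod;
    exp >>= 1n;
  }
  return result;
}

// Evaluate the polynomial at every power of OMEGA: X[k] = sum_j x[j] * omega^(j*k) mod P.
function naiveNtt(coeffs: bigint[]): bigint[] {
  return coeffs.map((_, k) =>
    coeffs.reduce(
      (acc, xj, j) => (acc + xj * modPow(OMEGA, BigInt(j * k), P)) % P,
      0n,
    ),
  );
}

// A length-16 input vector; real prover workloads run transforms of length 2^20 and beyond.
console.log(naiveNtt([1n, 2n, 3n, 4n, 0n, 0n, 0n, 0n, 0n, 0n, 0n, 0n, 0n, 0n, 0n, 0n]));
```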

Enter Boundless: The Open Proof Market

Boundless attempts to solve this problem by decoupling proof generation from blockchain consensus entirely. Instead of each rollup running its own prover infrastructure, Boundless creates a marketplace where:

  1. Requestors submit proof requests (from any chain)
  2. Provers compete to generate proofs using GPUs and commodity hardware
  3. Settlement happens on the destination chain specified by the requestor

The key innovation is "Proof of Verifiable Work" (PoVW) — a mechanism that rewards provers not for useless hashes (like Bitcoin mining) but for generating useful ZK proofs. Each proof carries cryptographic metadata proving how much computation went into it, creating a transparent record of work.
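
The sketch below models that lifecycle as plain TypeScript data structures. The type names, fields, and the simple reverse auction are illustrative assumptions, not the actual Boundless SDK or on-chain contracts; they only make the requestor/prover/settlement flow concrete.

```typescript
// Hypothetical model of an open proof market's request/bid lifecycle.
interface ProofRequest {
  requestor: string;         // address submitting the request
  programId: string;         // identifier of the RISC-V program to prove
  inputHash: string;         // commitment to the program input
  maxFeeWei: bigint;         // ceiling the requestor is willing to pay
  deadline: number;          // unix timestamp by which the proof must settle
  settlementChainId: number; // chain where the proof will be verified
}

interface ProverBid {
  prover: string;
  feeWei: bigint;        // price the prover asks
  provingCycles: bigint; // claimed computational work (the basis for PoVW-style rewards)
}

// A simple reverse auction stands in for the marketplace's actual pricing mechanism:
// the cheapest bid under the requestor's ceiling wins the job.
function selectWinningBid(request: ProofRequest, bids: ProverBid[]): ProverBid | null {
  const eligible = bids.filter((b) => b.feeWei <= request.maxFeeWei);
  if (eligible.length === 0) return null;
  return eligible.reduce((best, b) => (b.feeWei < best.feeWei ? b : best));
}
```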

How It Actually Works

Under the hood, Boundless builds on RISC Zero's zkVM — a zero-knowledge virtual machine that can execute any program compiled for the RISC-V instruction set. This means developers can write applications in Rust, C++, or any language that compiles to RISC-V, then generate proofs of correct execution without learning specialized ZK circuits.

The three-layer architecture includes:

  • zkVM Layer: Executes arbitrary programs and generates STARK proofs
  • Recursion Layer: Aggregates multiple STARKs into compact proofs
  • Settlement Layer: Converts proofs to Groth16 format for on-chain verification

This design allows Boundless to generate proofs that are small enough (around 200KB) for economical on-chain verification while supporting complex computations.

The ZKC Token: Mining Proofs Instead of Hashes

Boundless introduced ZK Coin (ZKC) as the native token powering its proof market. Unlike typical utility tokens, ZKC is actively mined through proof generation — provers earn ZKC rewards proportional to the computational work they contribute.

Tokenomics Overview

  • Total Supply: 1 billion ZKC (with 7% inflation in Year 1, tapering to 3% by Year 8)
  • Ecosystem Growth: 41.6% allocated to adoption initiatives
  • Strategic Partners: 21.5% with 1-year cliff and 2-year vesting
  • Community: 8.3% for token sale and airdrops
  • Current Price: ~$0.12 (down from $0.29 ICO price)

The inflationary model has sparked debate. Proponents argue ongoing emissions are necessary to incentivize a healthy prover network. Critics point out that 7% annual inflation creates constant sell pressure, potentially limiting ZKC's value appreciation even as the network grows.
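
For a rough sense of what that emission schedule implies, the snippet below projects supply growth assuming the year-1 rate of 7% tapers linearly to 3% by year 8. The actual curve is not specified here, so treat the output as an order-of-magnitude sketch.

```typescript
// Rough ZKC emission model: 1B genesis supply, 7% year-1 inflation tapering linearly to 3% by year 8.
const GENESIS_SUPPLY = 1_000_000_000;

function inflationRate(year: number): number {
  if (year >= 8) return 0.03;
  return 0.07 - ((0.07 - 0.03) * (year - 1)) / 7; // assumed linear taper
}

let supply = GENESIS_SUPPLY;
for (let year = 1; year <= 8; year++) {
  const minted = supply * inflationRate(year);
  supply += minted;
  console.log(
    `Year ${year}: rate ${(inflationRate(year) * 100).toFixed(1)}%, ` +
      `minted ~${Math.round(minted / 1e6)}M ZKC, supply ~${Math.round(supply / 1e6)}M ZKC`,
  );
}
```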

Market Turbulence

ZKC's first months weren't smooth. In October 2025, South Korean exchange Upbit flagged the token with an "investment warning," triggering a 46% price crash. Upbit lifted the warning after Boundless clarified its tokenomics, but the episode highlighted the volatility risks of infrastructure tokens tied to emerging markets.

Mainnet Reality: Who's Actually Using Boundless?

Since launching mainnet beta on Base in July 2025 and full mainnet in September, Boundless has secured notable integrations:

Wormhole Integration

Wormhole is integrating Boundless to add ZK verification to Ethereum consensus, making cross-chain transfers more secure. Instead of relying purely on multi-sig guardians, Wormhole NTT (Native Token Transfers) can now include optional ZK proofs for users who want cryptographic guarantees.

Citrea Bitcoin L2

Citrea, a Bitcoin Layer-2 zk-rollup built by Chainway Labs, uses RISC Zero's zkVM to generate validity proofs posted to Bitcoin via BitVM. This enables EVM-equivalent programmability on Bitcoin while using BTC for settlement and data availability.

Google Cloud Partnership

Through its Verifiable AI Program, Boundless partnered with Google Cloud to enable ZK-powered AI proofs. Developers can build applications that prove AI model outputs without revealing inputs — a crucial capability for privacy-preserving machine learning.

Stellar Bridge

In September 2025, Nethermind deployed RISC Zero verifiers for Stellar zk Bridge integration, enabling cross-chain proofs between Stellar's low-cost payment network and Ethereum's security guarantees.

The Competition: Succinct SP1 and the zkVM Wars

Boundless isn't the only player racing to solve ZK's scalability problem. Succinct Labs' SP1 zkVM has emerged as a major competitor, sparking a benchmarking war between the two teams.

RISC Zero's Claims

RISC Zero asserts that properly configured zkVM deployments are "at least 7x less expensive than SP1" and up to 60x cheaper for small workloads. They point to tighter proof sizes and more efficient GPU utilization.

Succinct's Response

Succinct counters that RISC Zero's benchmarks "misleadingly compared CPU performance to GPU results." Their SP1 Hypercube prover claims $0.02 proofs with ~2 minute latency — though it remains closed source.

Independent Analysis

A Fenbushi Capital comparison found RISC Zero demonstrated "superior speed and efficiency across all benchmark categories in GPU environments," but noted SP1 excels in developer adoption, powering projects like Celestia's Blobstream with $3.14B in total value secured versus RISC Zero's $239M.

The real competitive advantage may not be raw performance but ecosystem lock-in. Boundless plans to support competing zkVMs including SP1, ZKsync's Boojum, and Jolt — positioning itself as a protocol-agnostic proof marketplace rather than a single-vendor solution.

2026 Roadmap: What's Next for Boundless

RISC Zero's roadmap for Boundless includes several ambitious targets:

Ecosystem Expansion (Q4 2025 - 2026)

  • Extend ZK proof support to Solana
  • Bitcoin integration via BitVM
  • Additional L2 deployments

Hybrid Rollup Upgrades

The most significant technical milestone is transitioning optimistic rollups (like Optimism and Base chains) to use validity proofs for faster finality. Instead of waiting 7 days for fraud proof windows, OP chains could settle in minutes.

Multi-zkVM Support

Support for competing zkVMs is on the roadmap, allowing developers to switch between RISC Zero, SP1, or other proving systems without leaving the marketplace.

Decentralization Completion

RISC Zero terminated its hosted proof service in December 2025, forcing all proof generation through the decentralized Boundless network. This marked a significant commitment to the decentralization thesis — but also means the network's reliability now depends entirely on independent provers.

The Bigger Picture: Will Decentralized Proving Become the Standard?

The success of Boundless hinges on a fundamental bet: that proof generation will commoditize the way cloud computing did. If that thesis holds, having the most efficient prover network matters less than having the largest and most liquid marketplace.

Several factors support this view:

  1. Hardware commoditization: ZK-specific ASICs from companies like Cysic promise 50x energy efficiency improvements, potentially lowering barriers to entry
  2. Proof aggregation: Networks like Boundless can batch proofs from multiple applications, amortizing fixed costs
  3. Cross-chain demand: As more chains adopt ZK verification, demand for proof generation could outpace any single provider's capacity

But risks remain:

  1. Centralization creep: Early prover networks tend toward concentration as economies of scale favor large operators
  2. Token dependency: If ZKC price collapses, prover incentives evaporate — potentially causing a death spiral
  3. Technical complexity: Running a competitive prover requires significant expertise, potentially limiting decentralization in practice

What This Means for Developers

For builders considering ZK integration, Boundless represents a pragmatic middle ground:

  • No infrastructure overhead: Submit proof requests via API without running your own provers
  • Multi-chain settlement: Generate proofs once, verify on any supported chain
  • Language flexibility: Write in Rust or any RISC-V compatible language instead of learning ZK DSLs

The trade-off is dependency on a token-incentivized network whose long-term stability remains unproven. For production applications, many teams may prefer Boundless for testnet and experimentation while maintaining fallback prover infrastructure for critical workloads.

Conclusion

Boundless represents the most ambitious attempt yet to solve ZK's centralization problem. By turning proof generation into an open market incentivized by ZKC tokens, RISC Zero is betting that competition will drive costs down faster than any single vendor could achieve alone.

The mainnet launch, major integrations with Wormhole and Citrea, and commitment to supporting rival zkVMs suggest serious technical capability. But the inflationary tokenomics, exchange volatility, and unproven decentralization at scale leave important questions unanswered.

For the ZK ecosystem, Boundless's success or failure will signal whether decentralized infrastructure can compete with centralized efficiency — or whether the blockchain industry's scaling future remains in the hands of a few well-funded prover services.


Building applications that need ZK verification across multiple chains? BlockEden.xyz provides enterprise RPC endpoints and APIs for Ethereum, Base, and 20+ networks — the reliable connectivity layer your cross-chain ZK applications need.

Ethereum vs Solana 2026: The Battle Reshapes After Pectra and Firedancer

· 11 min read
Dora Noda
Software Engineer

In 2025, two seismic upgrades landed: Ethereum's Pectra hard fork on May 7 and Solana's Firedancer validator client on December 12. For the first time in years, the performance narrative isn't hypothetical—it's measurable, deployed, and fundamentally reshaping the Ethereum vs Solana debate.

The old talking points are obsolete. Ethereum isn't just "slow but decentralized" anymore, and Solana isn't just "fast but risky." Both chains delivered their most ambitious infrastructure upgrades since The Merge and the network restart crisis, respectively. The question isn't which chain is "better"—it's which architecture wins specific use cases in a multi-chain world where L2s process 40,000 TPS and Solana aims for 1 million.

Let's dissect what actually changed, what the data shows, and where each chain stands heading into 2026.

Pectra: Ethereum's Biggest Upgrade Since The Merge

Ethereum's Pectra upgrade combined the Prague execution layer and Electra consensus layer updates, delivering 11 EIPs focused on three core improvements: account abstraction, validator efficiency, and L2 scalability.

Account Abstraction Goes Mainstream

EIP-7702 introduces temporary smart contract functionality to Externally Owned Accounts (EOAs), enabling gas abstraction (pay fees in any token), batched transactions, and customizable security—all without permanently converting to a contract account. This bridges the UX gap between EOAs and smart wallets, making Ethereum accessible to users who don't want to manage gas tokens or sign every transaction individually.

For developers, this means building wallet experiences that rival Web2 apps: social recovery, sponsored transactions, and automated workflows—without forcing users into smart wallet migration. The upgrade eliminates a major onboarding friction point that has plagued Ethereum since inception.
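
The sketch below shows the shape of the data an EIP-7702 transaction carries, following the published EIP: a new transaction type whose authorization list holds signed delegations. The TypeScript field names are illustrative, not a specific library's API; in practice you would use a maintained client library rather than hand-rolling these structures.

```typescript
// Illustrative shape of an EIP-7702 delegation and the transaction that carries it.
interface Authorization {
  chainId: number; // chain the delegation is valid on (0 means any chain, per the EIP)
  address: string; // contract whose code the EOA temporarily delegates to
  nonce: bigint;   // EOA nonce at signing time, preventing replay
  // secp256k1 signature over the tuple above
  yParity: 0 | 1;
  r: string;
  s: string;
}

interface Eip7702Transaction {
  type: "0x04";                       // SET_CODE_TX_TYPE introduced with Pectra
  to: string;                         // commonly the delegating EOA, so the call runs its delegated code
  data: string;                       // e.g. a batched "execute([call1, call2])" payload
  authorizationList: Authorization[]; // one or more signed delegations
}
```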

Validator Staking Overhaul

Pectra raised the maximum effective balance from 32 ETH to 2,048 ETH per validator—a 64x increase. For institutional stakers running thousands of validators, this change dramatically simplifies operations. Instead of managing 1,000 separate 32 ETH validators, institutions can consolidate into ~16 validators staking 2,048 ETH each.

Deposit activation time dropped from hours to approximately 13 minutes due to simpler processing. Validator queue times, which previously stretched to weeks during high-demand periods, are now negligible. Staking became operationally cheaper and faster—critical for attracting institutional capital that views validator management overhead as a barrier.

Blob Throughput Doubles

Ethereum increased the target blob count from 3 to 6 per block, with a maximum of 9 (up from 6). This effectively doubles the data availability bandwidth for L2 rollups, which rely on blobs to post transaction data affordably.

Combined with PeerDAS (activated December 3, 2025), which expands blob capacity from 6 to 48 per block by distributing blob data across nodes, Layer 2 fees are expected to drop an additional 50-70% through 2026 on top of the 70-95% reduction achieved post-Dencun. Data availability currently represents 90% of L2 operating costs, so this change directly impacts rollup economics.
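
A quick back-of-the-envelope calculation shows what those blob counts mean in raw data-availability bandwidth, given that each EIP-4844 blob is 128 KiB and a block arrives roughly every 12 seconds:

```typescript
// DA bandwidth implied by a given blob target: blobs per block * 128 KiB / 12-second slots.
const BLOB_KIB = 128;
const SLOT_SECONDS = 12;

function daBandwidthKiBps(targetBlobsPerBlock: number): number {
  return (targetBlobsPerBlock * BLOB_KIB) / SLOT_SECONDS;
}

console.log(daBandwidthKiBps(3));  // pre-Pectra target: ~32 KiB/s
console.log(daBandwidthKiBps(6));  // Pectra target:     ~64 KiB/s
console.log(daBandwidthKiBps(48)); // PeerDAS ceiling:  ~512 KiB/s
```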

What Didn't Change

Ethereum's base layer still processes 15-30 TPS. Pectra didn't touch Layer 1 throughput—because it doesn't need to. Ethereum's scaling thesis is modular: L1 provides security and data availability, while L2s (Arbitrum, Optimism, Base) handle execution. Arbitrum already achieves 40,000 TPS theoretically, and PeerDAS aims to push combined L2 capacity toward 100,000+ TPS.

The trade-off remains: Ethereum prioritizes decentralization (8,000+ nodes) and security, accepting lower L1 throughput in exchange for credible neutrality and censorship resistance.

Firedancer: Solana's Path to 1 Million TPS

Solana's Firedancer validator client, developed by Jump Crypto and written in C for hardware-level optimization, went live on mainnet December 12, 2025, after 100 days of testing and 50,000 blocks produced. This isn't a protocol upgrade—it's a complete reimplementation of the validator software designed to eliminate bottlenecks in the original Agave client (formerly the Solana Labs client).

Architecture: Parallel Processing at Scale

Unlike Agave's monolithic architecture, Firedancer uses a "tile-based" modular design where different validator tasks (consensus, transaction processing, networking) run in parallel across CPU cores. This allows Firedancer to extract maximum performance from commodity hardware without requiring specialized infrastructure.

The results are measurable: Kevin Bowers, Chief Scientist at Jump Trading Group, demonstrated over 1 million transactions per second on commodity hardware at Breakpoint 2024. While real-world conditions haven't reached that yet, early adopters report significant improvements.

Real-World Performance Gains

Figment's flagship Solana validator migrated to Firedancer and reported:

  • 18-28 basis points higher staking rewards compared to Agave-based validators
  • 15% reduction in missed voting credits (improved consensus participation)
  • Vote latency optimized at 1.002 slots (near-instantaneous consensus contributions)

The rewards boost comes primarily from better MEV capture and more efficient transaction processing—Firedancer's parallel architecture allows validators to process more transactions per block, increasing fee revenue.

As of late 2025, the hybrid "Frankendancer" client (combining Firedancer's networking and block-production stack with Agave's runtime) captured over 26% of validator market share within weeks of mainnet launch. Full Firedancer adoption is expected to accelerate through 2026 as remaining edge cases are resolved.

The 1 Million TPS Timeline

Firedancer's 1 million TPS capability was demonstrated in controlled environments, not production. Solana currently processes 3,000-5,000 real-world TPS, with peak capacity around 4,700 TPS. Reaching 1 million TPS requires not just Firedancer, but network-wide adoption and complementary upgrades like Alpenglow (expected Q1 2026).

The path forward involves:

  1. Full Firedancer migration across all validators (currently ~26% hybrid, 0% full Firedancer)
  2. Alpenglow upgrade to optimize consensus and state management
  3. Network hardware improvements as validators upgrade infrastructure

Realistically, 1 million TPS is a 2027-2028 target, not 2026. However, Firedancer's immediate impact—doubling or tripling effective throughput—is already measurable and positions Solana to handle consumer-scale applications today.

Head-to-Head: Where Each Chain Wins in 2026

Transaction Speed and Cost

Solana: 3,000-5,000 real-world TPS, with $0.00025 average transaction cost. Firedancer adoption should push this toward 10,000+ TPS by mid-2026 as more validators migrate.

Ethereum L1: 15-30 TPS, with variable gas fees ($1-50+ depending on congestion). L2 solutions (Arbitrum, Optimism, Base) achieve 40,000 TPS theoretically, with transaction costs of $0.10-1.00—still 400-4,000x more expensive than Solana.

Winner: Solana for raw throughput and cost efficiency. Ethereum L2s are faster than Ethereum L1 but remain orders of magnitude more expensive than Solana for high-frequency use cases (payments, gaming, social).

Decentralization and Security

Ethereum: ~8,000 nodes running roughly one million validators (each representing a 32+ ETH stake), with client diversity (Geth, Nethermind, Besu, Erigon) across geographically distributed operators. Pectra's 2,048 ETH staking limit improves institutional efficiency but doesn't compromise decentralization—large stakers still run multiple validators.

Solana: ~3,500 validators, with Firedancer introducing client diversity for the first time. Historically, Solana ran exclusively on the Labs client (now Agave), creating single-point-of-failure risks. Firedancer's 26% adoption is a positive step, but full client diversity remains years away.

Winner: Ethereum maintains a structural decentralization advantage through client diversity, geographic distribution, and a larger validator set. Solana's history of network outages (most recently the February 2024 halt) reflects centralization trade-offs, though Firedancer mitigates single-client risk.

Developer Ecosystem and Liquidity

Ethereum: $50B+ TVL across DeFi protocols, with established infrastructure for RWA tokenization (BlackRock's BUIDL), NFT markets, and institutional integrations. Solidity remains the dominant smart contract language, with the largest developer community and audit ecosystem.

Solana: $8B+ TVL (growing rapidly), with dominance in consumer-facing apps (Tensor for NFTs, Jupiter for DEX aggregation, Phantom wallet). Rust-based development attracts high-performance engineers but has a steeper learning curve than Solidity.

Winner: Ethereum for DeFi depth and institutional trust; Solana for consumer apps and payment rails. These are increasingly divergent use cases, not direct competition.

Upgrade Path and Roadmap

Ethereum: The Fusaka upgrade (activated December 2025) introduced PeerDAS, and follow-on blob parameter forks through 2026 will expand blob capacity to 48 per block, pushing L2s toward 100,000+ combined TPS. Long-term, "The Surge" aims to enable L2s to scale indefinitely while maintaining L1 as the settlement layer.

Solana: Alpenglow (Q1 2026) will optimize consensus and state management. Firedancer's full rollout should complete by late 2026, with 1 million TPS feasible by 2027-2028 if network-wide migration succeeds.

Winner: Ethereum has a clearer, more predictable roadmap. Solana's roadmap depends heavily on Firedancer adoption rates and potential edge cases that emerge during migration.

The Real Debate: Monolithic vs Modular

The Ethereum vs Solana comparison increasingly misses the point. These chains solve different problems:

Ethereum's modular thesis: L1 provides security and data availability; L2s handle execution. This separates concerns, allowing L2s to specialize (Arbitrum for DeFi, Base for consumer apps, Optimism for governance experiments) while inheriting Ethereum's security. The trade-off is complexity—users must bridge between L2s, and liquidity fragments across chains.

Solana's monolithic thesis: One unified state machine maximizes composability. Every app shares the same liquidity pool, and atomic transactions span the entire network. The trade-off is centralization risk—higher hardware requirements (validators need powerful machines) and single-client dependency (mitigated but not eliminated by Firedancer).

Neither approach is "correct." Ethereum dominates high-value, low-frequency use cases (DeFi, RWA tokenization) where security justifies higher costs. Solana dominates high-frequency, low-value use cases (payments, gaming, social) where speed and cost are paramount.

What Developers Should Know

If you're building in 2026, here's the decision framework:

Choose Ethereum (+ L2) if:

  • Your application requires maximum security and decentralization (DeFi protocols, custody solutions)
  • You're targeting institutional users or RWA tokenization
  • You need access to Ethereum's $50B+ TVL and liquidity depth
  • Your users tolerate $0.10-1.00 transaction costs

Choose Solana if:

  • Your application requires high-frequency transactions (payments, gaming, social)
  • Transaction costs must be sub-cent ($0.00025 avg)
  • You're building consumer-facing apps where UX latency matters (~400ms Solana confirmation vs 12-second Ethereum block times)
  • You prioritize composability over modular complexity

Consider both if:

  • You're building cross-chain infrastructure (bridges, aggregators, wallets)
  • Your application has distinct high-value and high-frequency components (DeFi protocol + consumer payment layer)

Looking Ahead: 2026 and Beyond

The performance gap is narrowing, but not converging. Pectra positioned Ethereum to scale L2s toward 100,000+ TPS, while Firedancer set Solana on a path toward 1 million TPS. Both chains delivered on multi-year technical roadmaps, and both face new challenges:

Ethereum's challenge: L2 fragmentation. Users must bridge between dozens of L2s (Arbitrum, Optimism, Base, zkSync, Starknet), fragmenting liquidity and complicating UX. Shared sequencing and native L2 interoperability are 2026-2027 priorities to address this.

Solana's challenge: Proving decentralization at scale. Firedancer introduces client diversity, but Solana must demonstrate that 10,000+ TPS (and eventually 1 million TPS) doesn't require hardware centralization or sacrifice censorship resistance.

The real winner? Developers and users who finally have credible, production-ready options for both high-security and high-performance applications. The blockchain trilemma isn't solved—it's bifurcated into two specialized solutions.

BlockEden.xyz provides enterprise-grade API infrastructure for both Ethereum (L1 and L2s) and Solana, with dedicated nodes optimized for Pectra and Firedancer. Explore our API marketplace to build on infrastructure designed to scale with both ecosystems.


BNB Chain's Fermi Upgrade: What 0.45-Second Blocks Mean for DeFi, Gaming, and High-Frequency Trading

· 9 min read
Dora Noda
Software Engineer

On January 14, 2026, BNB Chain will activate the Fermi hard fork, slashing block times from 0.75 seconds to 0.45 seconds. That's roughly the blink of an eye—and it represents the culmination of an aggressive scaling roadmap that has transformed BSC from a three-second-block chain to one of the fastest EVM-compatible networks in production.

The implications extend far beyond bragging rights. With finality now achievable in just 1.125 seconds and throughput targets of 5,000 DEX swaps per second, BNB Chain is positioning itself as the infrastructure layer for applications where milliseconds translate directly to money—or lost opportunities.


The Evolution: From 3 Seconds to 0.45 Seconds in Under a Year

BNB Chain's block time reduction has been methodical and aggressive. Here's the progression:

| Upgrade | Date | Block Time | Finality |
|---|---|---|---|
| Pre-upgrade baseline | - | 3.0 seconds | ~7.5 seconds |
| Lorentz Hard Fork | April 2025 | 1.5 seconds | ~3.75 seconds |
| Maxwell Hard Fork | June 30, 2025 | 0.75 seconds | ~1.875 seconds |
| Fermi Hard Fork | January 14, 2026 | 0.45 seconds | ~1.125 seconds |
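
One pattern worth noting: every finality figure in the table works out to 2.5 block intervals, so finality scales down in lockstep with block time. The snippet below makes that arithmetic explicit; the 2.5x ratio is inferred from the table, not an official protocol constant.

```typescript
// Finality in the table above is consistently ~2.5 block intervals across every upgrade.
const FINALITY_BLOCKS = 2.5;

for (const blockTimeSeconds of [3.0, 1.5, 0.75, 0.45]) {
  const finality = blockTimeSeconds * FINALITY_BLOCKS;
  console.log(`${blockTimeSeconds}s blocks -> ~${finality.toFixed(3)}s finality`);
}
```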

Each upgrade required careful engineering to maintain network stability while doubling or nearly doubling performance. The Maxwell upgrade alone, powered by BEP-524, BEP-563, and BEP-564, improved peer-to-peer messaging between validators, allowed faster block proposal communication, and created a more stable validator network to reduce the risk of missed votes or sync delays.

Fermi continues this trajectory with five BEPs:

  • BEP-590: Extended voting rules for fast finality stability
  • BEP-619: The actual block interval reduction to 0.45 seconds
  • BEP-592: Non-consensus based block-level access list
  • BEP-593: Incremental snapshot
  • BEP-610: EVM super instruction implementation

The result: a chain that processed 31 million daily transactions at peak (October 5, 2025) while maintaining zero downtime and handling up to five trillion gas daily.


Why Sub-Second Blocks Matter: The DeFi Perspective

For decentralized finance, block time isn't just a technical metric—it's the heartbeat of every trade, liquidation, and yield strategy. Faster blocks create compounding advantages.

Reduced Slippage and Better Price Discovery

When blocks occur every 0.45 seconds instead of every 3 seconds, the price oracle updates 6-7x more frequently. For DEX traders, this means:

  • Tighter spreads as arbitrageurs keep prices aligned more quickly
  • Reduced slippage on larger orders as the order book updates more frequently
  • Better execution quality for retail traders competing against sophisticated actors

Enhanced Liquidation Efficiency

Lending protocols like Venus or Radiant depend on timely liquidations to maintain solvency. With 0.45-second blocks:

  • Liquidation bots can respond to price movements almost instantly
  • The window between a position becoming undercollateralized and liquidation shrinks dramatically
  • Protocol bad debt risk decreases, enabling more aggressive capital efficiency

MEV Reduction

Here's where it gets interesting. BNB Chain reports a 95% reduction in malicious MEV—specifically sandwich attacks—through a combination of faster blocks and the Good Will Alliance security enhancements.

The logic is straightforward: sandwich attacks require bots to detect pending transactions, front-run them, and then back-run them. With only 450 milliseconds between blocks, there's far less time for bots to detect, analyze, and exploit pending transactions. The attack window has shrunk from seconds to fractions of a second.

Fast finality compounds this advantage. With confirmation times under 2 seconds (1.125 seconds with Fermi), the window for any form of transaction manipulation narrows substantially.


Gaming and Real-Time Applications: The New Frontier

The 0.45-second block time opens possibilities that simply weren't practical with slower chains.

Responsive In-Game Economies

Blockchain games have struggled with latency. A three-second block time means a minimum three-second delay between player action and on-chain confirmation. For competitive games, that's unplayable. For casual games, it's annoying.

At 0.45 seconds:

  • Item trades can confirm in under 1.5 seconds (including finality)
  • In-game economies can respond to player actions in near-real-time
  • Competitive game state updates become feasible for more game types

Live Betting and Prediction Markets

Prediction markets and betting applications require rapid settlement. The difference between 3-second and 0.45-second blocks is the difference between "tolerable" and "feels instant" for end users. Markets can:

  • Accept bets closer to event outcomes
  • Settle positions more quickly
  • Enable more dynamic, in-play betting experiences

High-Frequency Automated Agents

The infrastructure is increasingly well-suited for automated trading systems, arbitrage bots, and AI agents executing on-chain strategies. BNB Chain explicitly notes that the network is designed for "high-frequency trading bots, MEV strategies, arbitrage systems, and gaming applications where microseconds matter."
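
For a bot like that, the first practical step is simply observing blocks as fast as the network delivers them. The sketch below uses the standard eth_subscribe("newHeads") JSON-RPC subscription over WebSocket, which BNB Chain nodes expose; the endpoint is a placeholder and the `ws` npm package is assumed.

```typescript
// Minimal block-latency monitor: subscribe to new block headers and measure inter-block gaps.
import WebSocket from "ws";

const ws = new WebSocket("wss://your-bsc-node.example/ws"); // placeholder endpoint
let lastSeenMs = 0;

ws.on("open", () => {
  // Standard Ethereum JSON-RPC subscription, supported by BSC nodes.
  ws.send(JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_subscribe", params: ["newHeads"] }));
});

ws.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg.method !== "eth_subscription") return; // skip the subscription-id acknowledgement
  const blockNumber = parseInt(msg.params.result.number, 16);
  const now = Date.now();
  if (lastSeenMs) {
    console.log(`block ${blockNumber} arrived ${now - lastSeenMs} ms after the previous one`);
  }
  lastSeenMs = now;
});
```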


The 2026 Roadmap: 1 Gigagas and Beyond

Fermi is not the end state. BNB Chain's 2026 roadmap targets ambitious goals:

1 Gigagas Per Second: A 10x increase in throughput capacity, designed to support up to 5,000 DEX swaps per second. This would put BNB Chain's raw capacity ahead of most competing L1s and many L2s.

Sub-150ms Finality: The longer-term vision calls for a next-generation L1 with finality under 150 milliseconds—faster than human perception, competitive with centralized exchanges.

20,000+ TPS for Complex Transactions: Not just simple transfers, but complex smart contract interactions at scale.

Native Privacy for 200+ Million Users: A significant expansion of privacy-preserving capabilities at the network level.

The explicit goal is to "rival centralized platforms" in user experience while maintaining decentralized guarantees.


Validator and Node Operator Implications

The Fermi upgrade isn't free. Faster blocks mean more work per unit time, creating new requirements for infrastructure operators.

Hardware Requirements

Validators must upgrade to v1.6.4 or later before the January 14 activation. The upgrade involves:

  • Snapshot regeneration (approximately 5 hours on BNB Chain's reference hardware)
  • Log indexing updates
  • Temporary performance impact during the upgrade process

Network Bandwidth

With blocks arriving 40% faster (0.45s vs 0.75s), the network must propagate more data more quickly. BEP-563's improved peer-to-peer messaging helps, but operators should expect increased bandwidth requirements.

State Growth

More transactions per second means faster state growth. While BEP-593's incremental snapshot system helps manage this, node operators should plan for increased storage requirements over time.


Competitive Positioning: Where Does BNB Chain Stand?

The sub-second block landscape is increasingly crowded:

| Chain | Block Time | Finality | Notes |
|---|---|---|---|
| BNB Chain (Fermi) | 0.45s | ~1.125s | EVM compatible, 5T+ gas/day proven |
| Solana | ~0.4s | ~12s (with vote lag) | Higher theoretical TPS, different trade-offs |
| Sui | ~0.5s | ~0.5s | Object-centric model, newer ecosystem |
| Aptos | ~0.9s | ~0.9s | Move-based, parallel execution |
| Avalanche C-Chain | ~2s | ~2s | Subnet architecture |
| Ethereum L1 | ~12s | ~15min | Different design philosophy |

BNB Chain's competitive advantage lies in the combination of:

  1. EVM compatibility: Direct porting from Ethereum/other EVM chains
  2. Proven scale: 31M daily transactions, 5T daily gas, zero downtime
  3. Ecosystem depth: Established DeFi, gaming, and infrastructure projects
  4. MEV mitigation: 95% reduction in sandwich attacks

The trade-off is centralization. BNB Chain's Proof of Staked Authority (PoSA) consensus uses a smaller validator set than fully decentralized networks, which enables the speed but raises different trust assumptions.


What Builders Should Know

For developers building on BNB Chain, Fermi creates both opportunities and requirements:

Opportunities

  • Latency-sensitive applications: Games, trading bots, and real-time applications become more viable
  • Better UX: Sub-2-second confirmation times enable smoother user experiences
  • MEV-resistant designs: Less exposure to sandwich attacks simplifies some protocol designs
  • Higher throughput: More transactions per second means more users without congestion

Requirements

  • Block producer assumptions: With faster blocks, code that assumes block timing may need updates
  • Oracle update frequency: Protocols may want to leverage faster block times for more frequent price updates
  • Gas estimation: Block gas dynamics may shift with faster block production
  • RPC infrastructure: Applications may need higher-performance RPC providers to keep up with faster block production

Conclusion: Speed as Strategy

BNB Chain's progression from 3-second to 0.45-second blocks over roughly 18 months represents one of the most aggressive scaling trajectories in production blockchain infrastructure. The Fermi upgrade on January 14, 2026, is the latest step in a roadmap that explicitly aims to compete with centralized platforms on user experience.

For DeFi protocols, this means tighter markets, better liquidations, and reduced MEV. For gaming applications, it means near-real-time on-chain interactions. For high-frequency traders and automated systems, it means microsecond advantages become meaningful.

The question isn't whether faster blocks are useful—they clearly are. The question is whether BNB Chain's centralization trade-offs remain acceptable to users and builders as the network scales toward its 1 gigagas and sub-150ms finality goals.

For applications where speed matters more than maximum decentralization, BNB Chain is making a compelling case. The Fermi upgrade is the latest proof point in that argument.



Modular Blockchain Wars: Celestia vs EigenDA vs Avail and the Rollup Economics Breakdown

· 9 min read
Dora Noda
Software Engineer

Data availability is the new battleground for blockchain dominance—and the stakes have never been higher. As Layer 2 TVL climbs past $47 billion and rollup transactions eclipse Ethereum mainnet by a factor of four, the question of where to store transaction data has become the most consequential infrastructure decision in crypto.

Three protocols are racing to become the backbone of the modular blockchain era: Celestia, the pioneer that proved the concept; EigenDA, the Ethereum-aligned challenger leveraging $19 billion in restaked assets; and Avail, the universal DA layer aiming to connect every ecosystem. The winner won't just capture fees—they'll define how the next generation of blockchains are built.


The Economics That Started a War

Here's the brutal math that launched the modular blockchain movement: posting data to Ethereum costs approximately $100 per megabyte. Even with the introduction of EIP-4844's blobs, that figure only dropped to $20.56 per MB—still prohibitively expensive for high-throughput applications.

Enter Celestia, with data availability at roughly $0.81 per MB. That's a 99% cost reduction that fundamentally changed what's economically viable on-chain.
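
To see what that gap means for a running rollup, here is a worked example using the per-MB figures above for a hypothetical rollup posting 1 GiB of transaction data per day:

```typescript
// Daily DA bill for a hypothetical rollup posting 1 GiB (1,024 MB) of data per day,
// using the per-MB prices quoted in this post.
const MB_PER_DAY = 1024;

const costPerMbUsd = {
  ethereumCalldata: 100.0, // ~$100/MB
  ethereumBlobs: 20.56,    // ~$20.56/MB post-EIP-4844
  celestia: 0.81,          // ~$0.81/MB
};

for (const [layer, price] of Object.entries(costPerMbUsd)) {
  console.log(`${layer}: ~$${Math.round(price * MB_PER_DAY).toLocaleString()} per day`);
}
// ethereumCalldata: ~$102,400/day, ethereumBlobs: ~$21,053/day, celestia: ~$829/day
```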

For rollups, data availability isn't a nice-to-have—it's their largest variable cost. Every transaction a rollup processes must be posted somewhere for verification. When that somewhere charges a 100x premium, the entire business model suffers. Rollups must either:

  1. Pass costs to users (killing adoption)
  2. Subsidize costs indefinitely (killing sustainability)
  3. Find cheaper DA (killing nothing)

By 2025, the market has spoken decisively: over 80% of Layer 2 activity now relies on dedicated DA layers rather than Ethereum's base layer.


Celestia: The First-Mover Advantage

Celestia was built from scratch for a single purpose: being a plug-and-play consensus and data layer. It doesn't support smart contracts or dApps. Instead, it offers blobspace—the ability for protocols to publish large chunks of data without executing any logic.

The technical innovation that makes this work is Data Availability Sampling (DAS). Rather than requiring every node to download every block, DAS allows lightweight nodes to confirm data availability by randomly sampling tiny pieces. This seemingly simple change unlocks massive scalability without sacrificing decentralization.

By the Numbers (2025)

Celestia's ecosystem has exploded:

  • 56+ rollups deployed (37 mainnet, 19 testnet)
  • 160+ gigabytes of blob data processed to date
  • Eclipse alone has posted over 83 GB through the network
  • 128 MB blocks enabled after the November 2025 Matcha upgrade
  • 21.33 MB/s throughput achieved in testnet conditions (16x mainnet capacity)

The network's namespace activity hit an all-time high on December 26, 2025—ironically, while TIA experienced a 90% yearly price decline. Usage and token price have decoupled spectacularly, raising questions about value capture in pure DA protocols.

Finality characteristics: Celestia creates blocks every 6 seconds with Tendermint consensus. However, because it uses fraud proofs rather than validity proofs, true DA finality requires a ~10 minute challenge period.

Decentralization trade-offs: With 100 validators and a Nakamoto Coefficient of 6, Celestia offers meaningful decentralization but remains susceptible to validator centralization risks inherent to delegated proof-of-stake systems.


EigenDA: The Ethereum Alignment Play

EigenDA takes a fundamentally different approach. Rather than building a new blockchain, it leverages Ethereum's existing security through restaking. Validators who stake ETH on Ethereum can "restake" it to secure additional services—including data availability.

This design offers two killer features:

Economic security at scale: EigenDA is backed by $335+ million in restaked assets specifically allocated to DA services, drawing from EigenLayer's $19 billion+ TVL pool. No new trust assumptions, no new token to secure.

Raw throughput: EigenDA claims 100 MB/s on mainnet—achievable because it separates data dispersal from consensus. While Celestia processes at roughly 1.33 MB/s live (8 MB blocks / 6 seconds), EigenDA can move data an order of magnitude faster.

Adoption Momentum

Major rollups have committed to EigenDA:

  • Mantle Network: Upgraded from MantleDA (10 operators) to EigenDA (200+ operators), reporting up to 80% cost reduction
  • Celo: Leveraging EigenDA for their L2 transition
  • ZKsync Elastic Network: Designated EigenDA as preferred alternative DA solution for its customizable rollup ecosystem

The operator network now exceeds 200 nodes with over 40,000 individual restakers delegating ETH.

The centralization critique: Unlike Celestia and Avail, EigenDA operates as a Data Availability Committee rather than a publicly verified blockchain. End users cannot independently verify data availability—they rely on economic guarantees and slashing risks. For applications where pure decentralization matters more than throughput, this is a meaningful trade-off.

Finality characteristics: EigenDA inherits Ethereum's finality timeline—between 12 and 15 minutes, significantly longer than Celestia's native 6-second blocks.


Avail: The Universal Connector

Avail emerged from Polygon but was designed from day one to be chain-agnostic. While Celestia and EigenDA focus primarily on Ethereum ecosystem rollups, Avail positions itself as the universal DA layer connecting every major blockchain.

The technical differentiator is how Avail implements data availability sampling. While Celestia relies on fraud proofs (requiring a challenge period for full security), Avail combines validity proofs with DAS through KZG commitments. This provides faster cryptographic guarantees of data availability.

2025 Milestones

Avail's year has been marked by aggressive expansion:

  • 70+ partnerships secured including major L2 players
  • Arbitrum, Optimism, Polygon, StarkWare, and zkSync announced integrations following mainnet launch
  • 10+ rollups currently in production
  • $75 million raised including $45M Series A from Founders Fund, Dragonfly Capital, and Cyber Capital
  • Avail Nexus launched November 2025, enabling cross-chain coordination across 11+ ecosystems

The Nexus upgrade is particularly significant. It introduced a ZK-powered cross-chain coordination layer that lets applications interact with assets across Ethereum, Solana (coming soon), TRON, Polygon, Base, Arbitrum, Optimism, and BNB without manual bridging.

The Infinity Blocks roadmap targets 10 GB block capacity—an order of magnitude beyond any current competitor.

Current constraints: Avail's mainnet runs at 4 MB per 20-second block (0.2 MB/s), the lowest throughput of the three major DA layers. However, testing has proven capability for 128 MB blocks, suggesting significant headroom for growth.


The Rollup Economics Breakdown

For rollup operators, choosing a DA layer is one of the most consequential decisions they'll make. Here's how the math works:

Cost Comparison (Per MB, 2025)

| DA Solution | Cost per MB | Notes |
|---|---|---|
| Ethereum L1 (calldata) | ~$100 | Legacy approach |
| Ethereum Blobs (EIP-4844) | ~$20.56 | Post-Pectra with 6 blob target |
| Celestia | ~$0.81 | PayForBlob model |
| EigenDA | Tiered | Reserved bandwidth pricing |
| Avail | Formula-based | Base + length + weight |

Throughput Comparison

| DA Solution | Live Throughput | Theoretical Max |
|---|---|---|
| EigenDA | 15 MB/s (claimed 100 MB/s) | 100 MB/s |
| Celestia | ~1.33 MB/s | 21.33 MB/s (tested) |
| Avail | ~0.2 MB/s | 128 MB blocks (tested) |

Finality Characteristics

| DA Solution | Block Time | Effective Finality |
|---|---|---|
| Celestia | 6 seconds | ~10 minutes (fraud proof window) |
| EigenDA | N/A (uses Ethereum) | 12-15 minutes |
| Avail | 20 seconds | Faster (validity proofs) |

Trust Model

| DA Solution | Verification | Trust Assumption |
|---|---|---|
| Celestia | Public DAS | 1-of-N honest light node |
| EigenDA | DAC | Economic (slashing risk) |
| Avail | Public DAS + KZG | Cryptographic validity |

Security Considerations: The DA-Saturation Attack

Recent research has identified a new vulnerability class specific to modular rollups: DA-saturation attacks. When DA costs are externally priced (by the parent L1) but locally consumed (by the L2), malicious actors can saturate a rollup's DA capacity at artificially low cost.

This decoupling of pricing and consumption is intrinsic to the modular architecture and opens attack vectors absent from monolithic chains. Rollups using alternative DA layers should implement safeguards such as the following (a minimal rate-limiting sketch appears after this list):

  • Independent capacity pricing mechanisms
  • Rate limiting for suspicious data patterns
  • Economic reserves for DA spikes
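
The token bucket below sketches the rate-limiting item from the list above: it caps DA bytes per sender on the sequencer side, independently of what the external DA layer charges. The class name and parameters are illustrative assumptions, not recommendations for any particular rollup stack.

```typescript
// Sequencer-side token bucket limiting how many DA bytes each sender can contribute,
// decoupled from the (externally priced) DA layer fee. Illustrative parameters only.
class DaRateLimiter {
  private buckets = new Map<string, { tokens: number; last: number }>();

  constructor(
    private capacityBytes = 256 * 1024,    // burst allowance per sender
    private refillBytesPerSec = 16 * 1024, // sustained allowance per sender
  ) {}

  allow(sender: string, payloadBytes: number, nowMs = Date.now()): boolean {
    const b = this.buckets.get(sender) ?? { tokens: this.capacityBytes, last: nowMs };
    // Refill proportionally to elapsed time, capped at the bucket capacity.
    b.tokens = Math.min(
      this.capacityBytes,
      b.tokens + ((nowMs - b.last) / 1000) * this.refillBytesPerSec,
    );
    b.last = nowMs;
    this.buckets.set(sender, b);
    if (payloadBytes > b.tokens) return false; // reject or surcharge this contribution
    b.tokens -= payloadBytes;
    return true;
  }
}
```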

Strategic Implications: Who Wins?

The DA wars aren't winner-take-all—at least not yet. Each protocol has carved out distinct positioning:

Celestia wins if you value:

  • Proven production track record (50+ rollups)
  • Deep ecosystem integration (OP Stack, Arbitrum Orbit, Polygon CDK)
  • Transparent per-blob pricing
  • Strong developer tooling

EigenDA wins if you value:

  • Maximum throughput (100 MB/s)
  • Ethereum security alignment via restaking
  • Predictable capacity-based pricing
  • Institutional-grade economic guarantees

Avail wins if you value:

  • Cross-chain universality (11+ ecosystems)
  • Validity proof-based DA verification
  • Long-term throughput roadmap (10 GB blocks)
  • Chain-agnostic architecture

The Road Ahead

By 2026, the DA layer landscape will look dramatically different:

Celestia is targeting 1 GB blocks with its continued network upgrades. The inflation reduction from Matcha (2.5%) and Lotus (33% lower issuance) suggests a long-term play for sustainable economics.

EigenDA benefits from EigenLayer's growing restaking economy. The proposed Incentives Committee and fee-sharing model could create powerful flywheel effects for EIGEN holders.

Avail aims for 10 GB blocks with Infinity Blocks, potentially leapfrogging competitors on pure capacity while maintaining its cross-chain positioning.

The meta-trend is clear: DA capacity is becoming abundant, competition is driving costs toward zero, and the real value capture may shift from charging for blobspace to controlling the coordination layer that routes data between chains.

For rollup builders, the takeaway is straightforward: DA costs are no longer a meaningful constraint on what you can build. The modular blockchain thesis has won. Now it's just a question of which modular stack captures the most value.



Ethereum 2026 Upgrades: How PeerDAS and zkEVMs Finally Cracked the Blockchain Trilemma

· 9 min read
Dora Noda
Software Engineer

"The trilemma has been solved—not on paper, but with live running code."

Those words from Vitalik Buterin on January 3, 2026, marked a watershed moment in blockchain history. For nearly a decade, the blockchain trilemma—the seemingly impossible task of achieving scalability, security, and decentralization simultaneously—had haunted every serious protocol designer. Now, with PeerDAS running on mainnet and zkEVMs reaching production-grade performance, Ethereum claims to have done what many thought impossible.

But what exactly changed? And what does this mean for developers, users, and the broader crypto ecosystem heading into 2026?


The Fusaka Upgrade: Ethereum's Biggest Leap Since the Merge

On December 3, 2025, at slot 13,164,544 (21:49:11 UTC), Ethereum activated the Fusaka network upgrade—its second major code change of the year and arguably its most consequential since the Merge. The upgrade introduced PeerDAS (Peer Data Availability Sampling), a networking protocol that fundamentally transforms how Ethereum handles data.

Before Fusaka, every Ethereum node had to download and store all blob data—the temporary data packets that rollups use to post transaction batches to Layer 1. This requirement created a bottleneck: increasing data throughput meant demanding more from every node operator, threatening decentralization.

PeerDAS changes this equation entirely. Now, each node is responsible for only 1/8th of the total blob data, with the network using erasure coding to ensure any 50% of pieces can reconstruct the full dataset. Validators who previously downloaded 750 MB of blob data per day now need only about 112 MB—an 85% reduction in bandwidth requirements.

The immediate results speak for themselves:

  • Layer 2 transaction fees dropped 40-60% within the first month
  • Blob targets increased from 6 to 10 per block (with 21 coming in January 2026)
  • The L2 ecosystem can now theoretically handle 100,000+ TPS—exceeding Visa's peak capacity of roughly 65,000 TPS

How PeerDAS Actually Works: Data Availability Without the Download

The genius of PeerDAS lies in sampling. Instead of downloading everything, nodes verify that data exists by requesting random portions. Here's the technical breakdown:

Extended blob data is divided into 128 pieces called columns. Each regular node participates in at least 8 randomly chosen column subnets. Because the data was extended using erasure coding before distribution, any half of the columns is enough to reconstruct the whole dataset. A node that successfully samples its 8 random columns (about 6.25% of the data) therefore gains overwhelming statistical confidence that the full data was made available.

Think of it like checking a jigsaw puzzle: you don't need to assemble every piece to verify the box isn't missing half of them. A carefully chosen sample tells you what you need to know.
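
The sampling math is easy to verify. In the worst case for an attacker, who must withhold enough columns to prevent reconstruction (at least half of the 128), each random sample has at most a 50% chance of landing on an available column:

```typescript
// Probability that unavailable data slips past availability sampling: with at least half of the
// 128 columns withheld, each random sample succeeds with probability at most 0.5.
function escapeProbability(samples: number): number {
  return Math.pow(0.5, samples);
}

console.log(escapeProbability(8));  // one node's 8 columns: ~0.39% chance of being fooled
console.log(escapeProbability(32)); // a few nodes comparing notes: ~2.3e-10
```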

This design achieves something remarkable: theoretical 8x scaling compared to the previous "everyone downloads everything" model, without increasing hardware requirements for node operators. Solo stakers running validator nodes from home can still participate—decentralization preserved.

The upgrade also includes EIP-7918, which ties blob base fees to L1 gas demand. This prevents fees from dropping to meaningless 1-wei levels, stabilizing validator rewards and reducing spam from rollups gaming the fee market.


zkEVMs: From Theory to "Production-Quality Performance"

While PeerDAS handles data availability, the second half of Ethereum's trilemma solution involves zkEVMs—zero-knowledge Ethereum Virtual Machines that allow blocks to be validated using cryptographic proofs instead of re-execution.

The progress here has been staggering. In July 2025, the Ethereum Foundation published "Shipping an L1 zkEVM #1: Realtime Proving," formally introducing the roadmap for ZK-based validation. Within months, the ecosystem crushed its targets:

  • Proving latency: Dropped from 16 minutes to 16 seconds
  • Proving costs: Collapsed by 45x
  • Block coverage: 99% of all Ethereum blocks proven in under 10 seconds on target hardware

These numbers represent a fundamental shift. The main participating teams—SP1 Turbo (Succinct Labs), Pico (Brevis), RISC Zero, ZisK, Airbender (zkSync), OpenVM (Axiom), and Jolt (a16z)—have collectively demonstrated that real-time proving isn't just possible, it's practical.

The ultimate goal is what Vitalik calls "Validate instead of Execute." Validators would verify a small cryptographic proof rather than re-computing every transaction. This decouples security from computational intensity, allowing the network to process far more throughput while maintaining (or even improving) its security guarantees.


The zkEVM Type System: Understanding the Trade-offs

Not all zkEVMs are created equal. Vitalik's 2022 classification system remains essential for understanding the design space:

Type 1 (Full Ethereum Equivalence): These zkEVMs are identical to Ethereum at the bytecode level—the "holy grail" but also the slowest to generate proofs. Existing apps and tools work out of the box with zero modifications. Taiko exemplifies this approach.

Type 2 (Full EVM Compatibility): These prioritize EVM equivalence while making minor modifications to improve proof generation. They might replace Ethereum's Keccak-based Merkle Patricia tree with ZK-friendlier hash functions like Poseidon. Scroll and Linea take this path.

Type 2.5 (Semi-Compatibility): Slight modifications to gas costs and precompiles in exchange for meaningful performance gains. Polygon zkEVM and Kakarot operate here.

Type 3 (Partial Compatibility): Greater departures from strict EVM compatibility to enable easier development and proof generation. Most Ethereum applications work, but some require rewrites.

The December 2025 announcement from the Ethereum Foundation set clear milestones: teams must achieve 128-bit provable security by year-end 2026. Security, not just performance, is now the gating factor for wider zkEVM adoption.


The 2026-2030 Roadmap: What Comes Next

Buterin's January 2026 post outlined a detailed roadmap for Ethereum's continued evolution:

2026 Milestones:

  • Large gas limit increases independent of zkEVMs, enabled by BALs (block-level access lists) and ePBS (enshrined Proposer-Builder Separation)
  • First opportunities to run a zkEVM node
  • BPO2 fork (January 2026) raising gas limit from 60M to 80M
  • Max blobs reaching 21 per block

2026-2028 Phase:

  • Gas repricings to better reflect actual computational costs
  • Changes to state structure
  • Execution payload migration into blobs
  • Other adjustments to make higher gas limits safe

2027-2030 Phase:

  • zkEVMs become the primary validation method
  • Initial zkEVM operation alongside standard EVM in Layer 2 rollups
  • Potential evolution to zkEVMs as default validators for Layer 1 blocks
  • Full backward compatibility for all existing applications maintained

The "Lean Ethereum Plan" spanning 2026-2035 aims for quantum resistance and sustained 10,000+ TPS at the base layer, with Layer 2s pushing aggregate throughput even higher.


What This Means for Developers and Users

For developers building on Ethereum, the implications are significant:

Lower costs: With L2 fees dropping 40-60% post-Fusaka and potentially 90%+ reductions as blob counts scale in 2026, previously uneconomical applications become viable. Micro-transactions, frequent state updates, and complex smart contract interactions all benefit.

Preserved tooling: The focus on EVM equivalence means existing development stacks remain relevant. Solidity, Hardhat, Foundry—the tools developers know continue to work as zkEVM adoption grows.

New verification models: As zkEVMs mature, applications can leverage cryptographic proofs for previously impossible use cases. Trustless bridges, verifiable off-chain computation, and privacy-preserving logic all become more practical.

For users, the benefits are more immediate:

Faster finality: ZK proofs can provide cryptographic finality without waiting for challenge periods, reducing settlement times for cross-chain operations.

Lower fees: The combination of data availability scaling and execution efficiency improvements flows directly to end users through reduced transaction costs.

Same security model: Importantly, none of these improvements require trusting new parties. The security derives from mathematics—cryptographic proofs and erasure coding guarantees—not from new validator sets or committee assumptions.


The Remaining Challenges

Despite the triumphant framing, significant work remains. Buterin himself acknowledged that "safety is what remains" for zkEVMs. The Ethereum Foundation's security-focused 2026 roadmap reflects this reality.

Proving security: Achieving 128-bit provable security across all zkEVM implementations requires rigorous cryptographic auditing and formal verification. The complexity of these systems creates substantial attack surface.

Prover centralization: Currently, ZK proving is computationally intensive enough that only specialized entities can economically produce proofs. While decentralized prover networks are in development, premature zkEVM rollout risks creating new centralization vectors.

State bloat: Even with execution efficiency improvements, Ethereum's state continues to grow. The roadmap includes state expiry and Verkle Trees (planned for the Hegota upgrade in late 2026), but these are complex changes that could disrupt existing applications.

Coordination complexity: The number of moving pieces—PeerDAS, zkEVMs, BALs, ePBS, blob parameter adjustments, gas repricings—creates coordination challenges. Each upgrade must be sequenced carefully to avoid regressions.


Conclusion: A New Era for Ethereum

The blockchain trilemma defined a decade of protocol design. It shaped Bitcoin's conservative approach, justified countless "Ethereum killers," and drove billions in alternative L1 investment. Now, with live code running on mainnet, Ethereum claims to have navigated the trilemma through clever engineering rather than fundamental compromise.

The combination of PeerDAS and zkEVMs represents something genuinely new: a system where nodes can verify more data while downloading less, where execution can be proven rather than re-computed, and where scalability improvements strengthen rather than weaken decentralization.

Will this hold up under the stress of real-world adoption? Will zkEVM security prove robust enough for L1 integration? Will the coordination challenges of the 2026-2030 roadmap be met? These questions remain open.

But for the first time, the path from current Ethereum to a truly scalable, secure, decentralized network runs through deployed technology rather than theoretical whitepapers. That distinction—live code versus academic papers—may prove to be the most significant shift in blockchain history since the invention of proof-of-stake.

The trilemma, it seems, has met its match.



EigenCloud: Rebuilding Web3's Trust Foundation Through Verifiable Cloud Infrastructure

· 19 min read
Dora Noda
Software Engineer

EigenCloud represents the most ambitious attempt to solve blockchain's fundamental scalability-versus-trust tradeoff. By combining $17.5 billion in restaked assets, a novel fork-based token mechanism, and three verifiable primitives—EigenDA, EigenCompute, and EigenVerify—Eigen Labs has constructed what it calls "crypto's AWS moment": a platform where any developer can access cloud-scale computation with cryptographic proof of correct execution. The June 2025 rebranding from EigenLayer to EigenCloud signaled a strategic pivot from infrastructure protocol to full-stack verifiable cloud, backed by $70 million from a16z crypto and partnerships with Google, LayerZero, and Coinbase. This transformation aims to expand the addressable market from 25,000 crypto developers to the 20+ million software developers worldwide who need both programmability and trust.

The Eigen ecosystem trilogy: from security fragmentation to trust marketplace

The Eigen ecosystem addresses a structural problem that has constrained blockchain innovation since Ethereum's inception: every new protocol requiring decentralized validation must bootstrap its own security from scratch. Oracles, bridges, data availability layers, and sequencers each built isolated validator networks, fragmenting the total capital available for security across dozens of competing services. This fragmentation meant that attackers needed only compromise the weakest link—a $50 million bridge—rather than the $114 billion securing Ethereum itself.

Eigen Labs' solution unfolds across three architectural layers that work in concert. The Protocol Layer (EigenLayer) creates a marketplace where Ethereum's staked ETH can simultaneously secure multiple services, transforming isolated security islands into a pooled trust network. The Token Layer (EIGEN) introduces an entirely new cryptoeconomic primitive—intersubjective staking—that enables slashing for faults that code cannot prove but humans universally recognize. The Platform Layer (EigenCloud) abstracts this infrastructure into developer-friendly primitives: 100 MB/s data availability through EigenDA, verifiable off-chain computation through EigenCompute, and programmable dispute resolution through EigenVerify.

The three layers create what Eigen Labs calls a "trust stack"—each primitive building upon the security guarantees of the layers below. An AI agent running on EigenCompute can store its execution traces on EigenDA, face challenges through EigenVerify, and ultimately fall back on EIGEN token forking as the nuclear option for disputed outcomes.


Protocol Layer: how EigenLayer creates a trust marketplace

The dilemma of isolated security islands

Before EigenLayer, launching a decentralized service required solving an expensive bootstrapping problem. A new oracle network needed to attract validators, design tokenomics, implement slashing conditions, and convince stakers that rewards justified the risks—all before delivering any actual product. The costs were substantial: Chainlink maintains its own LINK-staked security; each bridge operated independent validator sets; data availability layers like Celestia launched entire blockchains.

This fragmentation created perverse economics. The cost to attack any individual service was determined by its isolated stake, not the aggregate security of the ecosystem. A bridge securing $100 million with $10 million in staked collateral remained vulnerable even while billions sat idle in Ethereum validators.

The solution: making ETH work for multiple services simultaneously

EigenLayer introduced restaking—a mechanism allowing Ethereum validators to extend their staked ETH to secure additional services called Actively Validated Services (AVSs). The protocol supports two restaking paths:

Native restaking requires running an Ethereum validator (32 ETH minimum) and pointing withdrawal credentials to an EigenPod smart contract. The validator's stake gains dual functionality: securing Ethereum consensus while simultaneously backing AVS guarantees.

Liquid Staking Token (LST) restaking accepts derivatives like Lido's stETH, Mantle's mETH, or Coinbase's cbETH. Users deposit these tokens into EigenLayer's StrategyManager contract, enabling participation without running validator infrastructure. No minimum exists—participation starts at fractions of an ETH through liquid restaking protocols like EtherFi and Renzo.

The current restaking composition shows 83.7% native ETH and 16.3% liquid staking tokens, representing over 6.25 million ETH locked in the protocol.

Market engine: the triangular game theory

Three stakeholder classes participate in EigenLayer's marketplace, each with distinct incentives:

Restakers provide capital and earn stacked yields: base Ethereum staking returns (~4% APR) plus AVS-specific rewards paid in EIGEN, WETH, or native tokens like ARPA. Current combined yields reach approximately 4.24% in EIGEN plus base rewards. The risk: exposure to additional slashing conditions from every AVS their delegated operators serve.

Operators run node infrastructure and execute AVS validation tasks. They earn default 10% commissions (configurable from 0-100%) on delegated rewards plus direct AVS payments. Over 2,000 operators have registered, with 500+ actively validating AVSs. Operators choose which AVSs to support based on risk-adjusted returns, creating a competitive marketplace.

AVSs consume pooled security without bootstrapping independent validator networks. They define slashing conditions, set reward structures, and compete for operator attention through attractive economics. Currently 40+ AVSs operate on mainnet with 162 in development, totaling 190+ across the ecosystem.

This triangular structure creates natural price discovery: AVSs offering insufficient rewards struggle to attract operators; operators with poor track records lose delegations; restakers optimize by selecting trustworthy operators supporting valuable AVSs.

Protocol operational flow

The delegation mechanism follows a structured flow:

  1. Stake: Users stake ETH on Ethereum or acquire LSTs
  2. Opt-in: Deposit into EigenLayer contracts (EigenPod for native, StrategyManager for LSTs)
  3. Delegate: Select an operator to manage validation
  4. Register: Operators register with EigenLayer and choose AVSs
  5. Validate: Operators run AVS software and perform attestation tasks
  6. Rewards: AVSs distribute rewards weekly via on-chain merkle roots
  7. Claim: Stakers and operators claim after a 1-week delay

Withdrawals require a 7-day waiting period (14 days for slashing-enabled stakes), allowing time for fault detection before funds exit.
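For concreteness, steps 2 and 3 of this flow might look roughly like the ethers.js (v6) sketch below. The addresses are placeholders and the ABI fragments are paraphrased from EigenLayer's published interfaces (`depositIntoStrategy`, `delegateTo`), so treat this as an illustrative sketch rather than a production integration.

```typescript
import { ethers } from "ethers";

// Placeholder addresses: substitute the real StrategyManager, DelegationManager,
// strategy, LST token, and operator addresses for the network you target.
const STRATEGY_MANAGER   = "0x...";
const DELEGATION_MANAGER = "0x...";
const STETH_STRATEGY     = "0x...";
const STETH_TOKEN        = "0x...";
const OPERATOR           = "0x...";

// Minimal human-readable ABI fragments paraphrased from EigenLayer's core contracts.
const strategyManagerAbi = [
  "function depositIntoStrategy(address strategy, address token, uint256 amount) returns (uint256 shares)",
];
const delegationManagerAbi = [
  "function delegateTo(address operator, (bytes signature, uint256 expiry) approverSignatureAndExpiry, bytes32 approverSalt)",
];
const erc20Abi = ["function approve(address spender, uint256 amount) returns (bool)"];

async function restakeAndDelegate(signer: ethers.Signer, amount: bigint): Promise<void> {
  const steth = new ethers.Contract(STETH_TOKEN, erc20Abi, signer);
  const strategyManager = new ethers.Contract(STRATEGY_MANAGER, strategyManagerAbi, signer);
  const delegation = new ethers.Contract(DELEGATION_MANAGER, delegationManagerAbi, signer);

  // Step 2 (opt-in): approve and deposit the LST, receiving strategy shares in return.
  await (await steth.approve(STRATEGY_MANAGER, amount)).wait();
  await (await strategyManager.depositIntoStrategy(STETH_STRATEGY, STETH_TOKEN, amount)).wait();

  // Step 3 (delegate): point the resulting shares at a chosen operator.
  // Operators without a delegation approver accept an empty signature.
  await (await delegation.delegateTo(OPERATOR, { signature: "0x", expiry: 0 }, ethers.ZeroHash)).wait();
}
```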

Protocol effectiveness and market performance

EigenLayer's growth trajectory demonstrates market validation:

  • Current TVL: ~$17.51 billion (December 2025)
  • Peak TVL: $20.09 billion (June 2024), making it the second-largest DeFi protocol behind Lido
  • Unique staking addresses: 80,000+
  • Restakers qualified for incentives: 140,000+
  • Total rewards distributed: $128.02 million+

The April 17, 2025 slashing activation marked a critical milestone—the protocol became "feature-complete" with economic enforcement. Slashing uses Unique Stake Allocation, allowing operators to designate specific stake portions for individual AVSs, isolating slashing risk across services. A Veto Committee can investigate and overturn unjust slashing, providing additional safeguards.


Token Layer: how EIGEN solves the subjectivity problem

The dilemma of code-unprovable errors

Traditional blockchain slashing works only for objectively attributable faults—behaviors provable through cryptography or mathematics. Double-signing a block, producing invalid state transitions, or failing liveness checks can all be verified on-chain. But many critical failures defy algorithmic detection:

  • An oracle reporting false prices
  • A data availability layer refusing to serve data
  • An AI model producing manipulated outputs
  • A sequencer censoring specific transactions

These intersubjective faults share a defining characteristic: any two reasonable observers would agree the fault occurred, yet no smart contract can prove it.

The solution: forking as punishment

EIGEN introduces a radical mechanism—slashing-by-forking—that leverages social consensus rather than algorithmic verification. When operators commit intersubjective faults, the token itself forks:

Step 1: Fault detection. A bEIGEN staker observes malicious behavior and raises an alert.

Step 2: Social deliberation. Consensus participants discuss the issue. Honest observers converge on whether fault occurred.

Step 3: Challenge initiation. A challenger deploys three contracts: a new bEIGEN token contract (the fork), a Challenge Contract for future forks, and a Fork-Distributor Contract identifying malicious operators. The challenger submits a significant bond in EIGEN to deter frivolous challenges.

Step 4: Token selection. Two versions of EIGEN now exist. Users and AVSs freely choose which to support. If consensus confirms misbehavior, only the forked token retains value—malicious stakers lose their entire allocation.

Step 5: Resolution. The bond is rewarded if the challenge succeeds, burned if rejected. The EIGEN wrapper contract upgrades to point to the new canonical fork.

The dual-token architecture

EIGEN uses two tokens to isolate forking complexity from DeFi applications:

| Token | Purpose | Forking behavior |
| --- | --- | --- |
| EIGEN | Trading, DeFi, collateral | Fork-unaware—protected from complexity |
| bEIGEN | Staking, securing AVSs | Subject to intersubjective forking |

Users wrap EIGEN into bEIGEN for staking; after withdrawal, bEIGEN unwraps back to EIGEN. During forks, bEIGEN splits (bEIGENv1 → bEIGENv2) while EIGEN holders not staking can redeem without exposure to fork mechanics.
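The fork split can be pictured with a toy balance model. This is purely illustrative bookkeeping (no real contract logic): honest bEIGEN balances carry over to the new fork, while stakers that social consensus deems malicious are simply not reproduced on it.

```typescript
// Toy model of an intersubjective fork: bEIGENv1 balances are copied into
// bEIGENv2, except for addresses judged malicious by social consensus.
type Balances = Map<string, bigint>;

function forkToken(bEigenV1: Balances, malicious: Set<string>): Balances {
  const bEigenV2: Balances = new Map();
  for (const [holder, balance] of bEigenV1) {
    // Malicious stakers' allocations are not reproduced on the fork, which is
    // what "losing their entire allocation" means economically if users and
    // AVSs adopt bEIGENv2 as canonical.
    if (!malicious.has(holder)) bEigenV2.set(holder, balance);
  }
  return bEigenV2;
}

// Example: Carol is judged to have committed an intersubjective fault.
const v1: Balances = new Map([["alice", 100n], ["bob", 50n], ["carol", 200n]]);
console.log(forkToken(v1, new Set(["carol"]))); // carol is absent on the fork
```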

Token economics

Initial supply: 1,673,646,668 EIGEN (encoding "1. Open Innovation" on a telephone keypad)

Allocation breakdown:

  • Community (45%): 15% stakedrops, 15% community initiatives, 15% R&D/ecosystem
  • Investors (29.5%): ~504.73M tokens with monthly unlocks post-cliff
  • Early contributors (25.5%): ~458.55M tokens with monthly unlocks post-cliff

Vesting: Investors and core contributors face 1-year lockup from token transferability (September 30, 2024), then 4% monthly unlocks over 3 years.

Inflation: 4% annual inflation distributed via Programmatic Incentives to stakers and operators, currently ~1.29 million EIGEN weekly.

Current market status (December 2025):

  • Price: ~$0.50-0.60
  • Market cap: ~$245-320 million
  • Circulating supply: ~485 million EIGEN
  • All-time high: $5.65 (December 17, 2024)—current price represents ~90% decline from ATH

Governance and community voice

EigenLayer governance remains in a "meta-setup phase" where researchers and community shape parameters for full protocol actuation. Key mechanisms include:

  • Free-market governance: Operators determine risk/reward by opting in/out of AVSs
  • Veto committees: Protect against unwarranted slashing
  • Protocol Council: Reviews EigenLayer Improvement Proposals (ELIPs)
  • Token-based governance: EIGEN holders vote on fork support during disputes—the forking process itself constitutes governance

Platform Layer: EigenCloud's strategic transformation

EigenCloud verifiability stack: three primitives building trust infrastructure

The June 2025 rebrand to EigenCloud signaled Eigen Labs' pivot from restaking protocol to verifiable cloud platform. The vision: combine cloud-scale programmability with crypto-grade verification, targeting the $10+ trillion public cloud market where both performance and trust matter.

The architecture maps directly to familiar cloud services:

| EigenCloud | AWS equivalent | Function |
| --- | --- | --- |
| EigenDA | S3 | Data availability (100 MB/s) |
| EigenCompute | Lambda/ECS | Verifiable off-chain execution |
| EigenVerify | N/A | Programmable dispute resolution |

The EIGEN token secures the entire trust pipeline through cryptoeconomic mechanisms.


EigenDA: the cost killer and throughput engine for rollups

Problem background: Rollups post transaction data to Ethereum for security, but calldata costs consume 80-90% of operational expenses. Arbitrum and Optimism have spent tens of millions on data availability. Ethereum's combined throughput of ~83 KB/s creates a fundamental bottleneck as rollup adoption grows.

Solution architecture: EigenDA moves data availability to a non-blockchain structure while maintaining Ethereum security through restaking. The insight: DA doesn't require independent consensus—Ethereum handles coordination while EigenDA operators manage data dispersal directly.

The technical implementation uses Reed-Solomon erasure coding for information-theoretically minimal overhead and KZG commitments for validity guarantees without fraud-proof waiting periods. Key components include:

  • Dispersers: Encode blobs, generate KZG proofs, distribute chunks, aggregate attestations
  • Validator nodes: Verify chunks against commitments, store portions, return signatures
  • Retrieval nodes: Collect shards and reconstruct original data
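The dispersal idea can be sketched with a deliberately simplified erasure code: split a blob into k data chunks plus one XOR parity chunk, so any single missing chunk can be rebuilt from the rest. Real EigenDA uses Reed-Solomon coding (which tolerates many missing chunks) plus KZG commitments, so this is only a toy illustration of why retrieval nodes can reconstruct a blob without every operator cooperating.

```typescript
// Toy erasure coding: k data chunks + 1 XOR parity chunk.
// Any one missing chunk (data or parity) can be recovered from the others.
function encode(blob: Uint8Array, k: number): Uint8Array[] {
  const size = Math.ceil(blob.length / k);
  const chunks = Array.from({ length: k }, (_, i) => {
    const c = new Uint8Array(size);
    c.set(blob.subarray(i * size, (i + 1) * size));
    return c;
  });
  const parity = new Uint8Array(size);
  for (const c of chunks) c.forEach((b, j) => (parity[j] ^= b));
  return [...chunks, parity]; // distribute these k+1 chunks to operators
}

function recoverMissing(received: (Uint8Array | null)[]): Uint8Array {
  // XOR of all present chunks reproduces the single missing one.
  const size = received.find((c) => c !== null)!.length;
  const out = new Uint8Array(size);
  for (const c of received) if (c) c.forEach((b, j) => (out[j] ^= b));
  return out;
}

// Usage: lose any one chunk and rebuild it from the rest.
const shards = encode(new TextEncoder().encode("rollup batch data"), 4);
const lostIndex = 2;
const rebuilt = recoverMissing(shards.map((c, i) => (i === lostIndex ? null : c)));
console.log(rebuilt.every((b, j) => b === shards[lostIndex][j])); // true
```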

Results: EigenDA V2 launched July 2025 with industry-leading specifications:

| Metric | EigenDA V2 | Celestia | Ethereum blobs |
| --- | --- | --- | --- |
| Throughput | 100 MB/s | ~1.33 MB/s | ~0.032 MB/s |
| Latency | 5 seconds average | 6 sec block + 10 min fraud proof | 12 seconds |
| Cost | ~98.91% reduction vs calldata | ~$0.07/MB | ~$3.83/MB |

At 100 MB/s, EigenDA can process 800,000+ ERC-20 transfers per second—12.8x Visa's peak throughput.

Ecosystem security: 4.3 million ETH staked (March 2025), 245 operators, 127,000+ unique staking wallets, over $9.1 billion in restaked capital.

Current integrations: Fuel (first rollup achieving stage 2 decentralization), Aevo, Mantle, Celo, MegaETH, AltLayer, Conduit, Gelato, Movement Labs, and others. 75% of all assets on Ethereum L2s with alternative DA use EigenDA.

Pricing (10x reduction announced May 2025):

  • Free tier: 1.28 KiB/s for 12 months
  • On-demand: 0.015 ETH/GB
  • Reserved bandwidth: 70 ETH/year for 256 KiB/s

EigenCompute: the cryptographic shield for cloud-scale computing

Problem background: Blockchains are trustworthy but not scalable; clouds are scalable but not trustworthy. Complex AI inference, data processing, and algorithmic trading require cloud resources, but traditional providers offer no guarantee that code ran unmodified or that outputs weren't tampered with.

Solution: EigenCompute enables developers to run arbitrary code off-chain within Trusted Execution Environments (TEEs) while maintaining blockchain-level verification guarantees. Applications deploy as Docker containers—any language that runs in Docker (TypeScript, Rust, Go, Python) works.

The architecture provides:

  • On-chain commitment: Agent strategy, code container hash, and data sources stored verifiably
  • Slashing-enabled collateral: Operators stake assets slashable for execution deviation
  • Attestation infrastructure: TEEs provide hardware-based proof that code ran unmodified
  • Audit trail: Every execution recorded to EigenDA
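A minimal sketch of the "on-chain commitment" idea: before trusting an agent's output, a client can recompute the hash of the code bundle or container digest the agent claims to run and compare it with the commitment stored on-chain. The commitment shape below is hypothetical; only the hash comparison itself is the point, and fetching the commitment from chain is out of scope.

```typescript
import { createHash } from "node:crypto";

// Hypothetical shape of what an EigenCompute-style commitment might contain.
interface OnChainCommitment {
  codeHash: string;      // hex hash of the container/code bundle
  dataSources: string[]; // declared inputs the agent is allowed to read
}

// Recompute the hash of the artifact you were given and compare it with the
// commitment fetched from chain.
function matchesCommitment(codeBundle: Uint8Array, commitment: OnChainCommitment): boolean {
  const computed = createHash("sha256").update(codeBundle).digest("hex");
  return computed === commitment.codeHash.toLowerCase().replace(/^0x/, "");
}
```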

Flexible trust models: EigenCompute's roadmap includes multiple verification approaches:

  1. TEEs (current mainnet alpha)—Intel SGX/TDX, AMD SEV-SNP
  2. Cryptoeconomic security (upcoming GA)—EIGEN-backed slashing
  3. Zero-knowledge proofs (future)—trustless mathematical verification

Developer experience: The EigenCloud CLI (eigenx) provides scaffolding, local devnet testing, and one-command deployment to Base Sepolia testnet. Sample applications include chat interfaces, trading agents, escrow systems, and the x402 payment protocol starter kit.


EigenAI: extending verifiability to AI inference

The AI trust gap: Traditional AI providers offer no cryptographic guarantee that prompts weren't modified, responses weren't altered, or models are the claimed versions. This makes AI unsuitable for high-stakes applications like trading, contract negotiation, or DeFi governance.

EigenAI's breakthrough: Deterministic LLM inference at scale. The team claims bit-exact deterministic execution of LLM inference on GPUs—widely considered impossible or impractical. Re-executing prompt X with model Y produces exactly output Z; any discrepancy is cryptographic evidence of tampering.

Technical approach: Deep optimization across GPU types, CUDA kernels, inference engines, and token generation enables consistent deterministic behavior with sufficiently low overhead for practical UX.

Current specifications:

  • OpenAI-compatible API (drop-in replacement)
  • Currently supports gpt-oss-120b-f16 (120B parameter model)
  • Tool calling supported
  • Additional models including embedding models on near-term roadmap
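Because the API is OpenAI-compatible, a client can probe the determinism claim directly: send the same prompt twice at temperature 0 and compare hashed outputs. The base URL below is a placeholder; the request shape follows the standard chat-completions format, and the model name comes from the specs above.

```typescript
import { createHash } from "node:crypto";

const BASE_URL = "https://example-eigenai-endpoint/v1"; // placeholder endpoint
const MODEL = "gpt-oss-120b-f16";

async function completionHash(prompt: string, apiKey: string): Promise<string> {
  const res = await fetch(`${BASE_URL}/chat/completions`, {
    method: "POST",
    headers: { "content-type": "application/json", authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({
      model: MODEL,
      temperature: 0,
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const body = await res.json();
  const text: string = body.choices[0].message.content;
  return createHash("sha256").update(text).digest("hex");
}

// If inference is bit-exact deterministic, the two hashes must match;
// a mismatch is evidence that the prompt, model, or output was altered.
async function probeDeterminism(prompt: string, apiKey: string): Promise<boolean> {
  const [h1, h2] = await Promise.all([
    completionHash(prompt, apiKey),
    completionHash(prompt, apiKey),
  ]);
  return h1 === h2;
}
```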

Applications being built:

  • FereAI: Trading agents with verifiable decision-making
  • elizaOS: 50,000+ agents with cryptographic attestations
  • Dapper Labs (Miquela): Virtual influencer with untamperable "brain"
  • Collective Memory: 1.6M+ images/videos processed with verified AI
  • Humans vs AI: 70K+ weekly active users in prediction market games

EigenVerify: the ultimate arbiter of trust

Core positioning: EigenVerify functions as the "ultimate, impartial dispute resolution court" for EigenCloud. When execution disputes arise, EigenVerify examines evidence and delivers definitive judgments backed by economic enforcement.

Dual verification modes:

Objective verification: For deterministic computation, anyone can challenge by triggering re-execution with identical inputs. If outputs differ, cryptographic evidence proves fault. Secured by restaked ETH.

Intersubjective verification: For tasks where rational humans would agree but algorithms cannot verify—"Who won the election?" "Does this image contain a cat?"—EigenVerify uses majority consensus among staked validators. The EIGEN fork mechanism serves as the nuclear backstop. Secured by EIGEN staking.

AI-adjudicated verification (newer mode): Disputes resolved by verifiable AI systems, combining algorithmic objectivity with judgment flexibility.
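For deterministic tasks, objective verification reduces to re-execution and comparison. The sketch below captures that core check under the assumption that the task is a pure function of its inputs; the fault-evidence shape is illustrative, not EigenVerify's actual interface.

```typescript
// Objective verification for a deterministic task: re-run and compare.
interface FaultEvidence<I, O> {
  input: I;
  claimed: O;
  recomputed: O;
}

function verifyObjective<I, O>(
  task: (input: I) => O, // deterministic task definition
  input: I,
  claimedOutput: O,
): FaultEvidence<I, O> | null {
  const recomputed = task(input);
  const same = JSON.stringify(recomputed) === JSON.stringify(claimedOutput);
  // A mismatch is self-evident proof of fault: anyone can repeat this check,
  // so the operator's stake can be slashed without subjective judgment.
  return same ? null : { input, claimed: claimedOutput, recomputed };
}

// Example: an operator claims 2 + 2 = 5.
console.log(verifyObjective((x: [number, number]) => x[0] + x[1], [2, 2], 5));
```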

Synergy with other primitives: EigenCompute orchestrates container deployment; execution results record to EigenDA for audit trails; EigenVerify handles disputes; the EIGEN token provides ultimate security through forkability. Developers select verification modes through a "trust dial" balancing speed, cost, and security:

  • Instant: Fastest, lowest security
  • Optimistic: Standard security with challenge period
  • Forkable: Full intersubjective guarantees
  • Eventual: Maximum security with cryptographic proofs

Status: Devnet live Q2 2025, mainnet targeted Q3 2025.


Ecosystem layout: from $17B+ TVL to strategic partnerships

AVS ecosystem map

The AVS ecosystem spans multiple categories:

Data availability: EigenDA (59M EIGEN and 3.44M ETH restaked, 215 operators, 97,000+ unique stakers)

Oracle networks: Eoracle (first Ethereum-native oracle)

Rollup infrastructure: AltLayer MACH (fast finality), Xterio MACH (gaming), Lagrange State Committees (ZK light client with 3.18M ETH restaked)

Interoperability: Hyperlane (interchain messaging), LayerZero DVN (cross-chain validation)

DePIN coordination: Witness Chain (Proof-of-Location, Proof-of-Bandwidth)

Infrastructure: Infura DIN (decentralized infrastructure), ARPA Network (trustless randomization)

Partnership with Google: A2A + MCP + EigenCloud

Announced September 16, 2025, EigenCloud joined as launch partner for Google Cloud's Agent Payments Protocol (AP2).

Technical integration: The A2A (Agent-to-Agent) protocol enables autonomous AI agents to discover and interact across platforms. AP2 extends A2A using HTTP 402 ("payment required") via the x402 standard for blockchain-agnostic payments. EigenCloud provides:

  • Verifiable payment service: Abstracts asset conversion, bridging, and network complexity with restaked operator accountability
  • Work verification: EigenCompute enables TEE or deterministic execution with attestations and ZK proofs
  • Cryptographic accountability: "Mandates"—tamper-proof, cryptographically signed digital contracts
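A client-side sketch of the HTTP 402 flow that x402 builds on: request a paid resource, receive a 402 with payment requirements, settle the payment, and retry with a payment proof attached. The header name and the `settlePayment` helper are illustrative placeholders, not the exact x402 wire format.

```typescript
// Illustrative x402-style client loop. Header names and the `settlePayment`
// helper are placeholders; the real spec defines the exact envelope.
async function fetchPaidResource(
  url: string,
  settlePayment: (requirements: unknown) => Promise<string>, // returns a payment proof
): Promise<Response> {
  const first = await fetch(url);
  if (first.status !== 402) return first; // resource was free or already paid

  // The 402 body advertises what must be paid (asset, amount, destination).
  const requirements = await first.json();
  const proof = await settlePayment(requirements);

  // Retry with the payment proof attached so the server can verify and serve.
  return fetch(url, { headers: { "x-payment-proof": proof } });
}
```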

Partnership scope: Consortium of 60+ organizations including Coinbase, Ethereum Foundation, MetaMask, Mastercard, PayPal, American Express, and Adobe.

Strategic significance: Positions EigenCloud as infrastructure backbone for the AI agent economy projected to grow 45% annually.

Partnership with Recall: verifiable AI model evaluation

Announced October 16, 2025, Recall integrated EigenCloud for end-to-end verifiable AI benchmarking.

Skills marketplace concept: Communities fund skills they need, crowdsource AI with those capabilities, and get rewarded for identifying top performers. AI models compete in head-to-head competitions verified by EigenCloud's deterministic inference.

Integration details: EigenAI provides cryptographic proof that models produce specific outputs for given inputs; EigenCompute ensures performance results are transparent, reproducible, and provable using TEEs.

Prior results: Recall tested 50 AI models across 8 skill markets, generating 7,000+ competitions with 150,000+ participants submitting 7.5 million predictions.

Strategic significance: Creates "first end-to-end framework for delivering cryptographically provable and transparent rankings for frontier AI models"—replacing marketing-driven benchmarks with verifiable performance data.

Partnership with LayerZero: EigenZero decentralized verification

Framework announced October 2, 2024; EigenZero launched November 13, 2025.

Technical architecture: The CryptoEconomic DVN Framework allows any team to deploy Decentralized Verifier Network AVSs accepting ETH, ZRO, and EIGEN as staking assets. EigenZero implements optimistic verification with an 11-day challenge period and economic slashing for verification failures.

Security model: Shifts from "trust-based systems to economically quantifiable security that can be audited on-chain." DVNs must back commitments with staked assets rather than reputation alone.

Current specifications: $5 million ZRO stake for EigenZero; LayerZero supports 80+ blockchains with 600+ applications and 35 DVN entities including Google Cloud.

Strategic significance: Establishes restaking as the security standard for cross-chain interoperability—addressing persistent vulnerabilities in messaging protocols.

Other significant partnerships

Coinbase: Day-one mainnet operator; AgentKit integration enabling agents running on EigenCompute with EigenAI inference.

elizaOS: Leading open-source AI framework (17K GitHub stars, 50K+ agents) integrated EigenCloud for cryptographically guaranteed inference and secure TEE workflows.

Infura DIN: Decentralized Infrastructure Network now runs on EigenLayer, allowing Ethereum stakers to secure services and earn rewards.

Securitize/BlackRock: Validating pricing data for BlackRock's $2B tokenized treasury fund BUIDL—first enterprise implementation.


Risk analysis: technical trade-offs and market dynamics

Technical risks

Smart contract vulnerabilities: Audits identified reentrancy risks in StrategyBase, incomplete slashing logic implementation, and complex interdependencies between base contracts and AVS middleware. A $2 million bug bounty program acknowledges ongoing vulnerability risks.

Cascading slashing failures: Validators exposed to multiple AVSs face simultaneous slashing conditions. If significant stake is penalized, several services could degrade simultaneously—creating "too big to fail" systemic risk.

Crypto-economic attack vectors: If $6M in restaked ETH secures 10 modules each with $1M locked value, the attack cost (roughly $3M of slashed stake, since an attacker only needs to acquire and then forfeit about half of the pool to corrupt it) may be lower than the potential gain ($10M across modules), making the system economically insecure.
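The comparison can be made explicit. Under the simple assumption that corrupting consensus requires (and forfeits) half of the pooled stake, the system is only cryptoeconomically safe while that forfeited amount exceeds what can be extracted across every module at once. A minimal sketch:

```typescript
// Cost-of-corruption vs. profit-from-corruption under a shared-security model.
// Assumes corrupting consensus requires (and forfeits) half the pooled stake.
function isCryptoeconomicallySafe(pooledStake: number, moduleValues: number[]): boolean {
  const costOfCorruption = pooledStake / 2;                        // stake slashed in an attack
  const profitFromCorruption = moduleValues.reduce((a, b) => a + b, 0);
  return costOfCorruption > profitFromCorruption;
}

// The example from the text: $6M restaked securing 10 modules worth $1M each.
console.log(isCryptoeconomicallySafe(6_000_000, Array(10).fill(1_000_000))); // false
```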

TEE security issues

EigenCompute's mainnet alpha relies on Trusted Execution Environments with documented vulnerabilities:

  • Foreshadow (2018): Combines speculative execution and buffer overflow to bypass SGX
  • SGAxe (2020): Leaks attestation keys from SGX's private quoting enclave
  • Tee.fail (2024): DDR5 row-buffer timing side-channel affecting Intel SGX/TDX and AMD SEV-SNP

TEE vulnerabilities remain a significant attack surface during the transition period before cryptoeconomic security and ZK proofs are fully implemented.

Limitations of deterministic AI

EigenAI claims bit-exact deterministic LLM inference, but limitations persist:

  • TEE dependency: Current verification inherits SGX/TDX vulnerability surface
  • ZK proofs: Promised "eventually" but not yet implemented at scale
  • Overhead: Deterministic inference adds computational costs
  • zkML limitations: Traditional zero-knowledge machine learning proofs remain resource-intensive

Market and competitive risks

Restaking competition:

| Protocol | TVL | Key differentiator |
| --- | --- | --- |
| EigenLayer | $17-19B | Institutional focus, verifiable cloud |
| Symbiotic | $1.7B | Permissionless, immutable contracts |
| Karak | $740-826M | Multi-asset, nation-state positioning |

Symbiotic shipped full slashing functionality first (January 2025), reached $200M TVL in 24 hours, and uses immutable, non-upgradeable contracts that eliminate governance risk.

Data availability competition: EigenDA's DAC architecture introduces trust assumptions absent in Celestia's blockchain-based DAS verification. Celestia offers lower costs (~$3.41/MB) and deeper ecosystem integration (50+ rollups). Aevo's migration to Celestia reduced DA costs by 90%+.

Regulatory risks

Securities classification: SEC's May 2025 guidance explicitly excluded liquid staking, restaking, and liquid restaking from safe harbor provisions. The Kraken precedent ($30M fine for staking services) raises compliance concerns. Liquid Restaking Tokens could face securities classification given layered claims on future money.

Geographic restrictions: EIGEN airdrop banned US and Canada-based users, creating complex compliance frameworks. Wealthsimple's risk disclosure notes "legal and regulatory risks associated with EIGEN."

Security incidents

October 2024 email hack: 1.67 million EIGEN ($5.7M) stolen via compromised email thread intercepting investor token transfer communication—not a smart contract exploit but undermining "verifiable cloud" positioning.

October 2024 X account hack: Official account compromised with phishing links; one victim lost $800,000.


Future outlook: from infrastructure to digital society endgame

Application scenario prospects

EigenCloud enables previously impossible application categories:

Verifiable AI agents: Autonomous systems managing real capital with cryptographic proof of correct behavior. The Google AP2 partnership positions EigenCloud as backbone for agentic economy payments.

Institutional DeFi: Complex trading algorithms with off-chain computation but on-chain accountability. Securitize/BlackRock BUIDL integration demonstrates enterprise adoption pathway.

Permissionless prediction markets: Markets resolving on any real-world outcome with intersubjective dispute handling and cryptoeconomic finality.

Verifiable social media: Token rewards tied to cryptographically verified engagement; community notes with economic consequences for misinformation.

Gaming and entertainment: Provable randomness for casinos; location-based rewards with cryptoeconomic verification; verifiable esports tournaments with automated escrow.

Development path analysis

The roadmap progression reflects increasing decentralization and security:

Near-term (Q1-Q2 2026): EigenVerify mainnet launch; EigenCompute GA with full slashing; additional LLM models; on-chain API for EigenAI.

Medium-term (2026-2027): ZK proof integration for trustless verification; cross-chain AVS deployment across major L2s; full investor/contributor token unlock.

Long-term vision: The stated goal—"Bitcoin disrupted money, Ethereum made it programmable, EigenCloud makes verifiability programmable for any developer building any application in any industry"—targets the $10+ trillion public cloud market.

Critical success factors

EigenCloud's trajectory depends on several factors:

  1. TEE-to-ZK transition: Successfully migrating verification from vulnerable TEEs to cryptographic proofs
  2. Competitive defense: Maintaining market share against Symbiotic's faster feature delivery and Celestia's cost advantages
  3. Regulatory navigation: Achieving compliance clarity for restaking and LRTs
  4. Institutional adoption: Converting partnerships (Google, Coinbase, BlackRock) into meaningful revenue

The ecosystem currently secures $2B+ in application value with $12B+ in staked assets—a 6x overcollateralization ratio providing substantial security margin. With 190+ AVSs in development and the fastest-growing developer ecosystem in crypto according to Electric Capital, EigenCloud has established significant first-mover advantages. Whether those advantages compound into durable network effects or erode under competitive and regulatory pressure remains the central question for the ecosystem's next phase.

Directed Acyclic Graph (DAG) in Blockchain

· 47 min read
Dora Noda
Software Engineer

What is a DAG and How Does it Differ from a Blockchain?

A Directed Acyclic Graph (DAG) is a type of data structure consisting of vertices (nodes) connected by directed edges that never form a cycle. In the context of distributed ledgers, a DAG-based ledger organizes transactions or events in a web-like graph rather than a single sequential chain. This means that unlike a traditional blockchain where each new block references only one predecessor (forming a linear chain), a node in a DAG may reference multiple previous transactions or blocks. As a result, many transactions can be confirmed in parallel, rather than strictly one-by-one in chronological blocks.

To illustrate the difference, if a blockchain looks like a long chain of blocks (each block containing many transactions), a DAG-based ledger looks more like a tree or web of individual transactions. Every new transaction in a DAG can attach to (and thereby validate) one or more earlier transactions, instead of waiting to be packaged into the next single block. This structural difference leads to several key distinctions:

  • Parallel Validation: In blockchains, miners/validators add one block at a time to the chain, so transactions are confirmed in batches per new block. In DAGs, multiple transactions (or small “blocks” of transactions) can be added concurrently, since each can attach to different parts of the graph. This parallelization means DAG networks don’t have to wait for a single long chain to grow one block at a time.
  • No Global Sequential Order: A blockchain inherently creates a total order of transactions (every block has a definite place in one sequence). A DAG ledger, by contrast, forms a partial order of transactions. There is no single “latest block” that all transactions queue for; instead, many tips of the graph can coexist and be extended simultaneously. Consensus protocols are then needed to eventually sort out or agree on the order or validity of transactions in the DAG.
  • Transaction Confirmation: In a blockchain, transactions are confirmed when they are included in a mined/validated block and that block becomes part of the accepted chain (often after more blocks are added on top). In DAG systems, a new transaction itself helps confirm previous transactions by referencing them. For example, in IOTA’s Tangle (a DAG), each transaction must approve two previous transactions, effectively having users collaboratively validate each other’s transactions. This removes the strict division between “transaction creators” and “validators” that exists in blockchain mining – every participant issuing a transaction also does a bit of validation work.

Importantly, a blockchain is actually a special case of a DAG – a DAG that has been constrained to a single chain of blocks. Both are forms of distributed ledger technology (DLT) and share goals like immutability and decentralization. However, DAG-based ledgers are “blockless” or multi-parent in structure, which gives them different properties in practice. Traditional blockchains like Bitcoin and Ethereum use sequential blocks and often discard any competing blocks (forks), whereas DAG ledgers attempt to incorporate and arrange all transactions without discarding any, as long as they’re not conflicting. This fundamental difference lays the groundwork for the contrasts in performance and design detailed below.
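The structural difference can be made concrete with a small data model: each vertex may reference several parents, and because the references never form a cycle, a topological sort always exists (any valid processing order lists parents before children). This is a generic sketch, not any particular project's data format.

```typescript
// Minimal DAG ledger model: a vertex can approve multiple earlier vertices,
// whereas a blockchain block approves exactly one parent.
interface Vertex {
  id: string;
  parents: string[]; // ids/hashes of the transactions this one approves
}

// Kahn's algorithm: produces an order in which every parent precedes its
// children, the "partial order made total" that DAG consensus must agree on.
function topologicalOrder(vertices: Vertex[]): string[] {
  const children = new Map<string, string[]>();
  const pending = new Map<string, number>(); // unprocessed-parent counts
  for (const v of vertices) {
    pending.set(v.id, v.parents.length);
    for (const p of v.parents) children.set(p, [...(children.get(p) ?? []), v.id]);
  }
  const ready = vertices.filter((v) => v.parents.length === 0).map((v) => v.id);
  const order: string[] = [];
  while (ready.length > 0) {
    const id = ready.shift()!;
    order.push(id);
    for (const child of children.get(id) ?? []) {
      pending.set(child, pending.get(child)! - 1);
      if (pending.get(child) === 0) ready.push(child);
    }
  }
  return order; // fewer ids than vertices would indicate a cycle or missing parent
}

// Genesis, then two transactions attached in parallel, then one approving both.
console.log(topologicalOrder([
  { id: "g", parents: [] },
  { id: "a", parents: ["g"] },
  { id: "b", parents: ["g"] },
  { id: "c", parents: ["a", "b"] },
])); // e.g. [ 'g', 'a', 'b', 'c' ]
```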

Technical Comparison: DAG vs. Blockchain Architecture

To better understand DAGs vs blockchains, we can compare their architectures and validation processes:

  • Data Structure: Blockchains store data in blocks linked in a linear sequence (each block contains many transactions and points to a single previous block, forming one long chain). DAG ledgers use a graph structure: each node in the graph represents a transaction or an event block, and it can link to multiple previous nodes. This directed graph has no cycles, meaning if you follow the links “backwards” you can never loop back to a transaction you started from. The lack of cycles allows a topological ordering of transactions (a way to sort them so that every reference comes after the referenced transaction). In short, blockchains = one-dimensional chain, DAGs = multi-dimensional graph.
  • Throughput and Concurrency: Because of the structural differences, blockchains and DAGs handle throughput differently. A blockchain, even under optimal conditions, adds blocks one by one (often waiting for each block to be validated and propagated network-wide before the next one). This inherently limits transaction throughput – for example, Bitcoin averages 5–7 transactions per second (TPS) and Ethereum ~15–30 TPS under the classic proof-of-work design. DAG-based systems, by contrast, allow many new transactions/blocks to enter the ledger concurrently. Multiple branches of transactions can grow simultaneously and later mesh together, dramatically increasing potential throughput. Some modern DAG networks claim throughput in the thousands of TPS, approaching or exceeding traditional payment networks in capacity.
  • Transaction Validation Process: In blockchain networks, transactions wait in a mempool and are validated when a miner or validator packages them into a new block, then other nodes verify that block against the history. In DAG networks, validation is often more continuous and decentralized: each new transaction carries out a validation action by referencing (approving) earlier transactions. For example, each transaction in IOTA’s Tangle must confirm two previous transactions by checking their validity and doing a small proof-of-work, thereby “voting” for those transactions. In Nano’s block-lattice DAG, each account’s transactions form their own chain and are validated via votes by representative nodes (more on this later). The net effect is that DAGs spread out the work of validation: rather than a single block producer validating a batch of transactions, every participant or many validators concurrently validate different transactions.
  • Consensus Mechanism: Both blockchains and DAGs need a way for the network to agree on the state of the ledger (which transactions are confirmed and in what order). In blockchains, consensus often comes from Proof of Work or Proof of Stake producing the next block and the rule of “longest (or heaviest) chain wins”. In DAG ledgers, consensus can be more complex since there isn’t a single chain. Different DAG projects use different approaches: some use gossip protocols and virtual voting (as in Hedera Hashgraph) to come to agreement on transaction order, others use Markov Chain Monte Carlo tip selection (IOTA’s early approach) or other voting schemes to decide which branches of the graph are preferred. We will discuss specific consensus methods in DAG systems in a later section. Generally, reaching network-wide agreement in a DAG can be faster in terms of throughput, but it requires careful design to handle conflicts (like double-spend attempts) since multiple transactions can exist in parallel before final ordering.
  • Fork Handling: In a blockchain, a “fork” (two blocks mined at nearly the same time) results in one branch eventually winning (longest chain) and the other being orphaned (discarded), which wastes any work done on the orphan. In a DAG, the philosophy is to accept forks as additional branches of the graph rather than waste them. The DAG will incorporate both forks; the consensus algorithm then determines which transactions end up confirmed (or how conflicting transactions are resolved) without throwing away all of one branch. This means no mining power or effort is wasted on stale blocks, contributing to efficiency. For example, Conflux’s Tree-Graph (a PoW DAG) attempts to include all blocks in the ledger and orders them, rather than orphaning any, thereby utilizing 100% of produced blocks.

In summary, blockchains offer a simpler, strictly ordered structure where validation is block-by-block, whereas DAGs provide a more complex graph structure allowing asynchronous and parallel transaction processing. DAG-based ledgers must employ additional consensus logic to manage this complexity, but they promise significantly higher throughput and efficiency by utilizing the network’s full capacity rather than forcing a single-file queue of blocks.

Benefits of DAG-Based Blockchain Systems

DAG architectures were introduced primarily to overcome the limitations of traditional blockchains in scalability, speed, and cost. Here are the key benefits of DAG-based distributed ledgers:

  • High Scalability & Throughput: DAG networks can achieve high transaction throughput because they handle many transactions in parallel. Since there is no single chain bottleneck, the TPS (transactions per second) can scale with network activity. In fact, some DAG protocols have demonstrated throughput on the order of thousands of TPS. For example, Hedera Hashgraph has the capacity to process 10,000+ transactions per second in the base layer, far outpacing Bitcoin or Ethereum. In practice, Hedera has demonstrated finalizing transactions in about 3–5 seconds, compared to the minutes or longer confirmation times on PoW blockchains. Even DAG-based smart contract platforms like Fantom have achieved near-instant finality (~1–2 seconds) for transactions under normal loads. This scalability makes DAGs attractive for applications requiring high volume, such as IoT microtransactions or real-time data streams.
  • Low Transaction Costs (Feeless or Minimal Fees): Many DAG-based ledgers boast negligible fees or even feeless transactions. By design, they often don’t rely on miners expecting block rewards or fees; for instance, in IOTA and Nano, there are no mandatory transaction fees – a crucial property for micro-payments in IoT and everyday use. Where fees exist (e.g., Hedera or Fantom), they tend to be very low and predictable, since the network can handle load without bidding wars for limited block space. Hedera transactions cost around $0.0001 (a ten-thousandth of a dollar) in fees, a tiny fraction of typical blockchain fees. Such low costs open the door to use cases like high-frequency transactions or tiny payments which would be infeasible on fee-heavy chains. Also, because DAGs include all valid transactions rather than dropping some in case of forks, there’s less “wasted” work – which indirectly helps keep costs down by utilizing resources efficiently.
  • Fast Confirmation and Low Latency: In DAG ledgers, transactions don’t need to wait for inclusion in a global block, so confirmation can be faster. Many DAG systems achieve quick finality – the point at which a transaction is considered permanently confirmed. For example, Hedera Hashgraph's consensus typically finalizes transactions within a few seconds with 100% certainty (ABFT finality). Nano's network often sees transactions confirmed in <1 second thanks to its lightweight voting process. This low latency enhances user experience, making transactions appear nearly instant, which is important for real-world payments and interactive applications.
  • Energy Efficiency: DAG-based networks often do not require the intensive proof-of-work mining that many blockchains use, making them far more energy-efficient. Even compared to proof-of-stake blockchains, some DAG networks use minimal energy per transaction. For instance, a single Hedera Hashgraph transaction consumes on the order of 0.0001 kWh (kilowatt-hour) of energy. This is several orders of magnitude less than Bitcoin (which can be hundreds of kWh per transaction) or even many PoS chains. The efficiency comes from eliminating wasteful computations (no mining race) and from not discarding any transaction attempts. If blockchain networks were to switch to DAG-based models universally, the energy savings could be monumental. The carbon footprint of DAG networks like Hedera is so low that its overall network is carbon-negative when offsets are considered. Such energy efficiency is increasingly crucial for sustainable Web3 infrastructure.
  • No Mining & Democratized Validation: In many DAG models, there is no distinct miner/validator role that ordinary users can’t perform. For example, every IOTA user who issues a transaction is also helping validate two others, essentially decentralizing the validation work to the edges of the network. This can reduce the need for powerful mining hardware or staking large amounts of capital to participate in consensus, potentially making the network more accessible. (However, some DAG networks do still use validators or coordinators – see the discussion on consensus and decentralization later.)
  • Smooth Handling of High Traffic: Blockchains often suffer from mempool backlogs and fee spikes under high load (since only one block at a time can clear transactions). DAG networks, due to their parallel nature, generally handle traffic spikes more gracefully. As more transactions flood the network, they simply create more parallel branches in the DAG, which the system can process concurrently. There is less of a hard cap on throughput (scalability is more “horizontal”). This leads to better scalability under load, with fewer delays and only modest increases in confirmation times or fees, up to the capacity of the nodes’ network and processing power. In essence, a DAG can absorb bursts of transactions without congesting as quickly, making it suitable for use cases that involve bursts of activity (e.g., IoT devices all sending data at once, or a viral DApp event).

In summary, DAG-based ledgers promise faster, cheaper, and more scalable transactions than the classical blockchain approach. They aim to support mass adoption scenarios (micropayments, IoT, high-frequency trading, etc.) that current mainstream blockchains struggle with due to throughput and cost constraints. These benefits, however, come with certain trade-offs and implementation challenges, which we will address in later sections.

Consensus Mechanisms in DAG-Based Platforms

Because DAG ledgers don’t naturally produce a single chain of blocks, they require innovative consensus mechanisms to validate transactions and ensure everyone agrees on the ledger state. Different projects have developed different solutions tailored to their DAG architecture. Here we outline some notable consensus approaches used by DAG-based platforms:

  • IOTA’s Tangle – Tip Selection and Weighted Voting: IOTA’s Tangle is a DAG of transactions designed for the Internet of Things (IoT). In IOTA’s original model, there are no miners; instead, every new transaction must do a small Proof of Work and approve two previous transactions (these are the “tips” of the graph). This tip selection is often done via a Markov Chain Monte Carlo (MCMC) algorithm that probabilistically chooses which tips to approve, favoring the heaviest subtangle to prevent fragmentation. Consensus in early IOTA was partly achieved by this cumulative weight of approvals – the more future transactions indirectly approve yours, the more “confirmed” it becomes. However, to secure the network in its infancy, IOTA relied on a temporary centralized Coordinator node that issued periodic milestone transactions to finalize the Tangle. This was a major criticism (centralization) and is being removed in the upgrade known as “Coordicide” (IOTA 2.0). In IOTA 2.0, a new consensus model applies a leaderless Nakamoto-style consensus on a DAG. Essentially, nodes perform on-tangle voting: when a node attaches a new block, that block implicitly votes on the validity of the transactions it references. A committee of validator nodes (chosen via a staking mechanism) issues validation blocks as votes, and a transaction is confirmed when it accumulates enough weighted approvals (a concept called approval weight). This approach combines the idea of the heaviest DAG (similar to longest chain) with explicit voting to achieve consensus without a coordinator. In short, IOTA’s consensus evolved from tip selection + Coordinator to a fully decentralized voting on DAG branches by nodes, aiming for security and quick agreement on the ledger state.
  • Hedera Hashgraph – Gossip and Virtual Voting (aBFT): Hedera Hashgraph uses a DAG of events coupled with an asynchronous Byzantine Fault-Tolerant (aBFT) consensus algorithm. The core idea is “gossip about gossip”: each node rapidly gossips signed information about transactions and about its gossip history to other nodes. This creates a Hashgraph (the DAG of events) where every node eventually knows what every other node has gossiped, including the structure of who heard what and when. Using this DAG of events, Hedera implements virtual voting. Instead of sending out actual vote messages for ordering transactions, nodes simulate a voting algorithm locally by analyzing the graph of gossip connections. Leemon Baird’s Hashgraph algorithm can deterministically calculate how a theoretical round of votes on transaction order would go, by looking at the “gossip network” history recorded in the DAG. This yields a consensus timestamp and a total order of transactions that is fair and final (transactions are ordered by the median time they were received by the network). Hashgraph’s consensus is leaderless and achieves aBFT, meaning it can tolerate up to 1/3 of nodes being malicious without compromising consensus. In practice, Hedera’s network is governed by a set of 39 known organization-run nodes (the Hedera Council), so it’s permissioned but geographically distributed. The benefit is extremely fast and secure consensus: Hedera can reach finality in seconds with guaranteed consistency. The Hashgraph consensus mechanism is patented but has been open-sourced as of 2024, and it showcases how DAG + innovative consensus (gossip & virtual voting) can replace a traditional blockchain protocol.
  • Fantom’s Lachesis – Leaderless PoS aBFT: Fantom is a smart contract platform that uses a DAG-based consensus called Lachesis. Lachesis is an aBFT Proof-of-Stake protocol inspired by Hashgraph. In Fantom, each validator node assembles received transactions into an event block and adds it to its own local DAG of events. These event blocks contain transactions and references to earlier events. Validators gossip these event blocks to each other asynchronously – there’s no single sequence in which blocks must be produced or agreed upon. As event blocks propagate, the validators periodically identify certain events as milestones (or “root event blocks”) once a supermajority of nodes have seen them. Lachesis then orders these finalized events and commits them to a final Opera Chain (a traditional blockchain data structure) that acts as the ledger of confirmed blocks. In essence, the DAG of event blocks allows Fantom to achieve consensus asynchronously and very fast, then the final outcome is a linear chain for compatibility. This yields about 1–2 second finality for transactions on Fantom. Lachesis has no miners or leaders proposing blocks; all validators contribute event blocks and the protocol deterministically orders them. The consensus is secured by a Proof-of-Stake model (validators must stake FTM tokens and are weighted by stake). Lachesis is also aBFT, tolerating up to 1/3 faulty nodes. By combining DAG concurrency with a final chain output, Fantom achieves high throughput (several thousand TPS in tests) while remaining EVM-compatible for smart contracts. It’s a good example of using a DAG internally to boost performance, without exposing a DAG’s complexity to the application layer (developers still see a normal chain of transactions in the end).
  • Nano’s Open Representative Voting (ORV): Nano is a payment cryptocurrency that uses a unique DAG structure called a block-lattice. In Nano, each account has its own blockchain (account-chain) that only the account owner can update. All these individual chains form a DAG, since transactions from different accounts link asynchronously (a send in one account-chain references a receive in another, etc.). Consensus in Nano is achieved via a mechanism called Open Representative Voting (ORV). Users designate a representative node for their account (this is a weight delegation, not locking up funds), and these representatives vote on the validity of transactions. Every transaction is settled individually (there are no blocks bundling multiple txns) and is considered confirmed when a supermajority (e.g. >67%) of the voting weight (from representatives) agrees on it. Since honest account owners won’t double-spend their own funds, forks are rare and usually only caused by malicious attempts, which reps can quickly vote to reject. Finality is typically achieved in under a second for each transaction. ORV is similar to Proof-of-Stake in that voting weight is based on account balances (stake), but there is no staking reward or fee – representatives are voluntary nodes. The lack of mining and block production means Nano can operate feelessly and efficiently. However, it relies on a set of trusted representatives being online to vote, and there’s an implicit centralization in which nodes accumulate large voting weight (though users can switch reps anytime, maintaining decentralization control in the hands of users). Nano’s consensus is lightweight and optimized for speed and energy efficiency, aligning with its goal of being a fast, feeless digital cash.
  • Other Notable Approaches: Several other DAG-based consensus protocols exist. Hedera Hashgraph and Fantom Lachesis we covered; beyond those:
    • Avalanche Consensus (Avalanche/X-Chain): Avalanche uses a DAG-based consensus where validators repeatedly sample each other in a randomized process to decide which transactions or blocks to prefer. The Avalanche X-Chain (exchange chain) is a DAG of transactions (UTXOs) and achieves consensus via this network sampling method. Avalanche’s protocol is probabilistic but extremely fast and scalable – it can finalize transactions in ~1 second and reportedly handle up to 4,500 TPS per subnet. Avalanche’s approach is unique in combining DAG data structures with a metastable consensus (Snowball protocol), and it’s secured by Proof-of-Stake (anyone can be a validator with sufficient stake).
    • Conflux Tree-Graph: Conflux is a platform that extended Bitcoin’s PoW into a DAG of blocks. It uses a Tree-Graph structure where blocks reference not just one parent but all known previous blocks (no orphaning). This allows Conflux to use Proof-of-Work mining but keep all forks as part of the ledger, leading to much higher throughput than a typical chain. Conflux can thus achieve on the order of 3–6k TPS in theory, using PoW, by having miners produce blocks continually without waiting for a single chain. Its consensus then orders these blocks and resolves conflicts by a heaviest subtree rule. This is an example of a hybrid PoW DAG.
    • Hashgraph Variants and Academic Protocols: There are numerous academic DAG protocols (some implemented in newer projects): SPECTRE and PHANTOM (blockDAG protocols aimed at high throughput and fast confirmation, from DAGlabs), Aleph Zero (a DAG aBFT consensus used in Aleph Zero blockchain), Parallel Chains / Prism (research projects splitting transaction confirmation into parallel subchains and DAGs), and recent advancements like Sui’s Narwhal & Bullshark which use a DAG mempool for high throughput and a separate consensus for finality. While not all of these have large-scale deployments, they indicate a rich field of research. Many of these protocols differentiate between availability (writing lots of data fast to a DAG) and consistency (agreeing on one history), trying to get the best of both.

Each DAG platform tailors its consensus to its needs – whether it’s feeless microtransactions, smart contract execution, or interoperability. A common theme, however, is avoiding a single serial bottleneck: DAG consensus mechanisms strive to allow lots of concurrent activity and then use clever algorithms (gossip, voting, sampling, etc.) to sort things out, rather than constraining the network to a single block producer at a time.
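As a toy illustration of the cumulative/approval-weight idea used by Tangle-style consensus described above, the sketch below counts how many later transactions directly or indirectly approve a given one; a transaction is treated as confirmed once that weight crosses a threshold. Real protocols weight approvals by stake or mana and add explicit voting, so this is only the skeleton of the idea.

```typescript
// Toy approval weight: how many transactions reference `target`, directly or
// transitively? (Real systems weight approvers by stake/mana, not by count.)
interface Tx {
  id: string;
  approves: string[]; // earlier transactions this one references
}

function approvalWeight(txs: Tx[], target: string): number {
  const approvers = new Set<string>();
  let changed = true;
  while (changed) {
    changed = false;
    for (const tx of txs) {
      if (approvers.has(tx.id)) continue;
      // tx approves the target if it references the target or any known approver.
      if (tx.approves.some((p) => p === target || approvers.has(p))) {
        approvers.add(tx.id);
        changed = true;
      }
    }
  }
  return approvers.size;
}

const tangle: Tx[] = [
  { id: "t1", approves: [] },
  { id: "t2", approves: ["t1"] },
  { id: "t3", approves: ["t1"] },
  { id: "t4", approves: ["t2", "t3"] },
];
const CONFIRMATION_THRESHOLD = 3;
console.log(approvalWeight(tangle, "t1") >= CONFIRMATION_THRESHOLD); // true (t2, t3, t4)
```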

Case Studies: Examples of DAG-Based Blockchain Projects

Several projects have implemented DAG-based ledgers, each with unique design choices and target use cases. Below we examine some prominent DAG-based platforms:

  • IOTA (The Tangle): IOTA is one of the first DAG-based cryptocurrencies, designed for the Internet of Things. Its ledger, called the Tangle, is a DAG of transactions where each new transaction confirms two previous ones. IOTA’s goal is to enable feeless microtransactions between IoT devices (paying tiny amounts for data or services). It launched in 2016, and to bootstrap security it used a Coordinator node (run by the IOTA Foundation) to prevent attacks on the early network. IOTA has been working on “Coordicide” to fully decentralize the network by introducing a voting consensus (as described earlier) where nodes vote on conflicting transactions using a leaderless Nakamoto consensus on the heaviest DAG. In terms of performance, IOTA can, in theory, achieve very high throughput (the protocol doesn’t set a hard TPS limit; more activity actually helps it confirm transactions faster). In practice, testnets have demonstrated hundreds of TPS, and the upcoming IOTA 2.0 is expected to scale well for IoT demand. Use cases for IOTA revolve around IoT and data integrity: e.g., sensor data streaming with integrity proofs, vehicle-to-vehicle payments, supply chain tracking, and even decentralized identity (the IOTA Identity framework allows issuing and verifying digital credentials/DIDs on the Tangle). IOTA does not natively support smart contracts on its base layer, but the project has introduced a parallel Smart Contracts framework and tokens on a secondary layer to enable more complex DApp functionality. A notable feature of IOTA is its zero fees, which is enabled by requiring a small PoW by the sender instead of charging a fee – this makes it attractive for high-volume, low-value transactions (e.g., a sensor sending data every few seconds for a negligible cost).
  • Hedera Hashgraph (HBAR): Hedera is a public distributed ledger that uses the Hashgraph consensus algorithm (invented by Dr. Leemon Baird). Hedera started in 2018 and is governed by a council of large organizations (Google, IBM, Boeing, and others) who run the initial set of nodes. Unlike most others, Hedera is permissioned in governance (only approved council members run consensus nodes currently, up to 39 nodes) though anyone can use the network. Its Hashgraph DAG enables very high throughput and fast finality – Hedera can process over 10,000 TPS with finality in 3-5 seconds under optimal conditions. It achieves this with the aBFT gossip-based consensus described earlier. Hedera emphasizes enterprise and Web3 use cases that need reliability at scale: its network offers services for tokenization (Hedera Token Service), a Consensus Service for tamper-proof event logging, and a Smart Contract service (which is EVM-compatible). Notable applications on Hedera include supply chain provenance (e.g., Avery Dennison’s apparel tracking), high-volume NFT minting (low fees make minting NFTs inexpensive), payments and micropayments (like ad tech micropayments), and even decentralized identity solutions. Hedera has a DID method registered with W3C and frameworks like Hedera Guardian to support verifiable credentials and regulatory compliance (for example, tracking carbon credits). A key feature is Hedera’s strong performance combined with claimed stability (the Hashgraph algorithm guarantees no forks and mathematically proven fairness in ordering). The trade-off is that Hedera is less decentralized in node count than open networks (by design, with its governance model), though the council nodes are located globally and the plan is to eventually increase openness. In summary, Hedera Hashgraph is a prime example of a DAG-based DLT targeting enterprise-grade applications, with an emphasis on high throughput, security, and governance.
  • Fantom (FTM): Fantom is a smart contract platform (Layer-1 blockchain) that employs a DAG-based consensus called Lachesis. Launched in 2019, Fantom gained popularity especially in the DeFi boom of 2021-2022 as an Ethereum-compatible chain with much higher performance. Fantom’s Opera network runs the Lachesis aBFT consensus (detailed above), where validators keep a local DAG of event blocks and achieve consensus asynchronously, then finalize transactions in a main chain. This gives Fantom a typical time-to-finality of ~1 second for transactions and the ability to handle thousands of transactions per second in throughput. Fantom is EVM-compatible, meaning developers can deploy Solidity smart contracts and use the same tooling as Ethereum, which greatly helped its adoption in DeFi. Indeed, Fantom became home to numerous DeFi projects (DEXes, lending protocols, yield farms) attracted by its speed and low fees. It also hosts NFT projects and gaming DApps – essentially any Web3 application that benefits from fast, cheap transactions. A noteworthy point is that Fantom achieved a high level of decentralization for a DAG platform: it has dozens of independent validators securing the network (permissionless, anyone can run a validator with the minimum stake), unlike some DAG networks that restrict validators. This positions Fantom as a credible alternative to more traditional blockchains for decentralized applications, leveraging DAG tech under the hood to break the performance bottleneck. The network’s FTM token is used for staking, governance and fees (which are only a few cents per transaction, much lower than Ethereum gas fees). Fantom demonstrated that DAG-based consensus can be integrated with smart contract platforms to achieve both speed and compatibility.
  • Nano (XNO): Nano is a lightweight cryptocurrency launched in 2015 (originally as RaiBlocks) that uses a DAG block-lattice structure. Nano’s primary focus is peer-to-peer digital cash: instant, feeless transactions with minimal resource usage. In Nano, each account has its own chain of transactions, and transfers between accounts are handled via a send block on the sender’s chain and a receive block on the recipient’s chain. This asynchronous design means the network can process transactions independently and in parallel. Consensus is achieved by Open Representative Voting (ORV), where the community appoints representative nodes by delegation of balance weight. Representatives vote on conflicting transactions (which are rare, usually only in double-spend attempts), and once a quorum (67% weight) agrees, the transaction is cemented (irreversibly confirmed). Nano’s typical confirmation times are well below a second, making it feel instantaneous in everyday use. Because there are no mining rewards or fees, running a Nano node or representative is a voluntary effort, but the network’s design minimizes load (each transaction is only 200 bytes and can be processed quickly). Nano’s DAG approach and consensus allow it to be extremely energy-efficient – there is a tiny PoW performed by senders (mainly as an anti-spam measure), but it’s trivial compared to PoW blockchains. The use cases for Nano are simple by design: it’s meant for currency transfers, from everyday purchases to remittances, where speed and zero fees are the selling points. Nano does not support smart contracts or complex scripting; it focuses on doing one thing very well. A challenge for Nano’s model is that it relies on the honest majority of representatives; since there are no monetary incentives, the security model is based on the assumption that large token holders will act in the network’s best interest. So far, Nano has maintained a fairly decentralized set of principal representatives and has seen use in merchant payments, tipping, and other micropayment scenarios online.
  • Hedera vs IOTA vs Fantom vs Nano (At a Glance): The table below summarizes some key characteristics of these DAG-based projects:
| Project (Year) | Data Structure & Consensus | Performance (Throughput & Finality) | Notable Features / Use Cases |
| --- | --- | --- | --- |
| IOTA (2016) | DAG of transactions ("Tangle"); each tx approves 2 others. Originally coordinator-secured; moving to decentralized, leaderless consensus (vote on heaviest DAG, no miners). | Theoretically high TPS (scales with activity); ~10 s confirmation on an active network (faster as load increases). Ongoing research to improve finality. Feeless transactions. | IoT micropayments and data integrity (feeless microtransactions), supply chain, sensor data, automotive, decentralized identity (IOTA Identity DID method). No base-layer smart contracts (separate layers for that). |
| Hedera Hashgraph (2018) | DAG of events (Hashgraph); gossip-about-gossip + virtual voting consensus (aBFT), run by ~29–39 council nodes (PoS weighted). No miners; timestamps for ordering. | ~10,000 TPS max; finality in 3–5 seconds. Extremely low energy per tx (~0.0001 kWh). Very low fixed fees ($0.0001 per transfer). | Enterprise and Web3 applications: tokenization (HTS), NFTs and content services, payments, supply chain tracking, healthcare data, gaming, etc. Council governance by large corporations; EVM-compatible smart contracts (Solidity). Focus on high throughput and security for businesses. |
| Fantom (FTM) (2019) | DAG of validator event blocks; Lachesis aBFT PoS consensus (leaderless). Each validator builds a DAG of events, which are confirmed and stitched into a final blockchain (the Opera chain). | Empirically a few hundred TPS in DeFi usage; 1–2 second finality typical. Capable of thousands of TPS in benchmarks. Low fees (fractions of a cent). | DeFi and smart contracts on a high-speed L1. EVM-compatible (runs Solidity DApps). Supports DEXes, lending, NFT marketplaces (fast trading, cheap minting). DAG consensus hidden behind a developer-friendly blockchain interface. Staking open to anyone (decentralized validator set). |
| Nano (XNO) (2015) | DAG of account-chains (block-lattice); each tx is its own block. Open Representative Voting for consensus (dPoS-like voting on conflicts). No mining, no fees. | Hundreds of TPS feasible (limited mainly by network I/O). <1 s confirmation for typical transactions. Entirely feeless. Extremely low resource usage (efficient for IoT/mobile). | Digital currency for instant payments. Ideal for micropayments, tipping, and retail transactions where fees and latency must be minimal. Not designed for smart contracts; focuses on simple transfers. Very low power consumption ("green" cryptocurrency). Community-run representatives (no central authority). |

(Table: Comparison of selected DAG-based ledger projects and their characteristics. TPS = transactions per second.)
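
To make the block-lattice model described in the Nano entry above more concrete, here is a minimal Python sketch of per-account chains with paired send/receive blocks and an Open-Representative-Voting-style quorum check. It is illustrative only: the `Account` and `Block` types and the fixed 67% threshold are simplifications for this post, not Nano's actual node implementation.

```python
import hashlib
from dataclasses import dataclass, field

QUORUM = 0.67  # Nano cements a block once >= 67% of online voting weight agrees

@dataclass
class Block:
    account: str
    kind: str    # "send" or "receive"
    amount: int
    link: str    # hash of the paired send block (for receives)
    prev: str    # hash of the previous block on this account's chain

    def hash(self) -> str:
        data = f"{self.account}|{self.kind}|{self.amount}|{self.link}|{self.prev}"
        return hashlib.blake2b(data.encode(), digest_size=16).hexdigest()

@dataclass
class Account:
    name: str
    balance: int
    chain: list = field(default_factory=list)   # this account's own chain

    def head(self) -> str:
        return self.chain[-1].hash() if self.chain else "GENESIS"

def transfer(sender: Account, receiver: Account, amount: int) -> None:
    """A transfer is two blocks: a send on the sender's chain, a receive on the receiver's."""
    send = Block(sender.name, "send", amount, link="", prev=sender.head())
    sender.chain.append(send)
    sender.balance -= amount
    recv = Block(receiver.name, "receive", amount, link=send.hash(), prev=receiver.head())
    receiver.chain.append(recv)
    receiver.balance += amount

def cemented(votes_for: int, total_online_weight: int) -> bool:
    """ORV-style check: a block is irreversible once a 67% weight quorum votes for it."""
    return votes_for / total_online_weight >= QUORUM

alice, bob = Account("alice", 100), Account("bob", 0)
transfer(alice, bob, 25)
print(alice.balance, bob.balance)                        # 75 25
print(cemented(votes_for=70, total_online_weight=100))   # True
```

Because the two accounts' chains only touch at the send/receive pair, unrelated transfers never contend with each other, which is exactly why the lattice can confirm transactions in parallel.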

Other DAG-based projects not detailed above include Obyte (Byteball) – a DAG ledger for conditional payments and data storage, IoT Chain (ITC) – an IoT-focused DAG project, Avalanche – which we discussed as using DAG in its consensus and has become a major DeFi platform, Conflux – a high-throughput PoW DAG in China, and academic prototypes like SPECTRE/PHANTOM. Each explores the design space of DAG ledgers in different ways, but the four examples above (IOTA, Hedera, Fantom, Nano) illustrate the diversity: from feeless IoT transactions to enterprise networks and DeFi smart contract chains, all leveraging DAG structures.

Use Cases of DAG Technology in the Web3 Ecosystem

DAG-based blockchain systems unlock certain use cases particularly well, thanks to their high performance and unique properties. Here are some current and potential use cases where DAGs are making an impact in Web3:

  • Internet of Things (IoT): IoT involves millions of devices transmitting data and potentially transacting with each other (machine-to-machine payments). DAG ledgers like IOTA were explicitly designed for this scenario. With feeless microtransactions and the ability to handle high frequencies of small payments, a DAG ledger can enable IoT devices to pay for services or bandwidth on the fly. For example, a smart electric car might automatically pay a charging station a few cents worth of electricity, or sensors could sell data to a platform in real time. IOTA’s Tangle has been used in smart city pilots, supply chain IoT integrations (tracking goods and environmental conditions), and decentralized data marketplaces where sensor data is immutably logged and traded. The scalability of DAGs addresses the huge volume that widespread IoT networks generate, and their low cost suits micropayment economics.
  • Decentralized Finance (DeFi): DeFi applications like decentralized exchanges (DEXs), lending platforms, and payment networks benefit from high throughput and low latency. DAG-based smart contract platforms (e.g. Fantom, and to an extent Avalanche’s X-Chain for simple asset transfers) offer an advantage in that trades can settle faster and fees remain low even during high demand. In 2021, Fantom saw a surge of DeFi activity (yield farming, automated market makers, etc.) and was able to handle it with much lower congestion than Ethereum at the time. Additionally, DAG networks’ quick finality reduces the risk of trade execution uncertainty (on slow chains, users wait many blocks for finality which can introduce risk in fast-paced trading). Another angle is decentralized payment networks – Nano, for example, can be viewed as part of the DeFi spectrum, enabling peer-to-peer transfers and potentially being a micropayment rail for layer-2 of other systems. DAG’s performance could also support high-frequency trading or complex multi-step DeFi transactions executing more smoothly.
  • Non-Fungible Tokens (NFTs) and Gaming: The NFT boom has highlighted the need for low-cost minting and transfers. On Ethereum, minting NFTs became costly when gas fees spiked. DAG networks like Hedera and Fantom have been pitched as alternatives where minting an NFT costs a tiny fraction of a cent, making them viable for in-game assets, collectibles, or large-scale drops. Hedera’s Token Service allows native token and NFT issuance with the network’s low, predictable fees, and has been used by content platforms and even enterprises (e.g., music artists issuing tokens or universities tracking degrees). In gaming, where micro-transactions are common, a fast DAG ledger could handle frequent asset trades or reward distributions without slowing the game or bankrupting players in fees. The high throughput ensures that even if a popular game or NFT collection draws in millions of users, the network can handle the load (whereas we’ve seen games on Ethereum clog the network in the past). For instance, an NFT-based game on Fantom can update state quickly enough to provide near-real-time responsiveness.
  • Decentralized Identity (DID) and Credentials: Identity systems benefit from an immutable ledger to anchor identities, credentials, and attestations. DAG networks are being explored for this because they offer scalability for potentially billions of identity transactions (every login, certificate issuance, etc.) and low cost, which is crucial if, say, each citizen’s ID interactions were recorded. IOTA Identity is one example: it provides a DID method did:iota where identity documents are referenced on the Tangle. This can be used for self-sovereign identity: users control their identity documents, and verifiers can retrieve proofs from the DAG. Hedera is also active in the DID space – it has a DID specification and has been used in projects like securing tamper-proof logs of college degrees, COVID vaccination certificates, or supply chain compliance documents (via the Hedera Consensus Service as an ID anchoring service). The advantages of DAGs here are that writing data is cheap and fast, so updating an identity state (like rotating keys, adding a credential) doesn’t face the cost or delay hurdles of a busy blockchain. Additionally, the finality and ordering guarantees can be important for audits (Hashgraph, for example, provides a trusted timestamp order of events which is useful in compliance logging).
  • Supply Chain and Data Integrity: Beyond identity, any use case that involves logging a high volume of data entries can leverage DAG DLTs. Supply chain tracking is a notable one – products moving through a supply chain generate many events (manufactured, shipped, inspected, etc.). Projects have used Hedera and IOTA to log these events on a DAG ledger for immutability and transparency. The high throughput ensures the ledger won’t become a bottleneck even if every item in a large supply network is being scanned and recorded. Moreover, the low or zero fees mean you can record even low-value events on-chain without incurring major costs. Another example is IoT data integrity: energy grids or telecommunications might log device readings on a DAG ledger to later prove that data wasn’t tampered with. Constellation Network’s DAG (another DAG project) focuses on big data validation for enterprises and government (like US Air Force drone data integrity) – highlighting how a scalable DAG can handle big data streams in a trusted way.
  • Payments and Remittances: Fast and feeless transactions make DAG cryptocurrencies like Nano and IOTA well-suited for payment use cases. Nano has seen adoption in scenarios like online tipping (where a user can send a few cents to a content creator instantly) and international remittances (where speed and zero fees make a big difference compared to waiting hours and paying percent-level fees). DAG networks can serve as high-speed payment rails for integrating with point-of-sale systems or mobile payment apps. For instance, a coffee shop could use a DAG-based crypto for payments and not worry about latency or cost (the user experience can rival contactless credit card payments). Hedera’s HBAR is also used in some payment trials (due to its fast finality and low fee, some fintech applications consider it for settlement). Additionally, because DAG networks often have higher capacity, they can maintain performance even during global shopping events or spikes in usage, which is valuable for payment reliability.
  • Real-time Datafeeds and Oracles: Oracles (services that feed external data to blockchain smart contracts) require writing many data points to a ledger. A DAG ledger could act as a high-throughput oracle network, recording price feeds, weather data, IoT sensor readings, etc., with a guarantee of ordering and timestamp. The Hedera Consensus Service, for example, is used by some oracle providers to timestamp data before feeding it into other chains. The speed ensures that data is fresh, and the throughput means even rapid data streams can be handled. In decentralized Web3 analytics or advertising, where every click or impression might be logged for transparency, a DAG backend can cope with the event volume.

In all these use cases, the common thread is that DAG networks aim to provide the scalability, speed, and cost-efficiency that broaden the scope of what we can decentralize. They are particularly useful where high frequency or high volume transactions occur (IoT, microtransactions, machine data) or where user experience demands fast, seamless interactions (gaming, payments). That said, not every use case will migrate to DAG-based ledgers – sometimes the maturity and security of traditional blockchains, or simply network effects (e.g. Ethereum’s huge developer base), outweigh raw performance needs. Nonetheless, DAGs are carving out a niche in the Web3 stack for scenarios that strain conventional chains.

Limitations and Challenges of DAG-Based Networks

While DAG-based distributed ledgers offer enticing advantages, they also come with trade-offs and challenges. It’s important to critically examine these limitations:

  • Maturity and Security: The majority of DAG consensus algorithms are relatively new and less battle-tested compared to Bitcoin or Ethereum’s well-studied blockchain protocols. This can mean unknown security vulnerabilities or attack vectors might exist. The complexity of DAG systems potentially opens new avenues for attacks – for example, an attacker might try to spam or bloat the DAG with conflicting subtangles, or take advantage of the parallel structure to double-spend before the network reaches consensus. Academic analyses note that increased complexity introduces a broader range of vulnerabilities compared to simpler linear chains. Some DAG networks have suffered issues: e.g., early on, IOTA’s network experienced a few instances where it had to be paused due to irregularities/hacks (one incident in 2020 involved stolen funds and the Coordinator was shut off temporarily to resolve it). These incidents underline that the security models are still being refined. Moreover, finality in some DAGs is probabilistic – e.g., pre-Coordicide IOTA had no absolute finality, only increasing confirmation confidence – which can be tricky for certain applications (though newer DAGs like Hashgraph and Fantom provide instant finality with aBFT guarantees).
  • Consensus Complexity: Achieving consensus in a DAG often involves complicated algorithms (gossip protocols, virtual voting, random sampling, etc.). This complexity can translate to larger codebases and more complicated implementations, increasing the risk of software bugs. It also makes the system harder for developers to understand. A blockchain’s longest-chain rule is conceptually simple, whereas, say, Hashgraph’s virtual voting or Avalanche’s repeated random sampling are not immediately intuitive. The complexity can slow down adoption: developers and enterprises may be hesitant to trust a system they find harder to comprehend or audit. As one study pointed out, partial-order based systems (DAGs) require more effort to integrate with existing infrastructure and developer mindsets. Tools and libraries for DAG networks are also less mature in many cases, meaning the developer experience might be rougher than on Ethereum or Bitcoin.
  • Decentralization Trade-offs: Some current DAG implementations sacrifice some degree of decentralization to achieve their performance. For instance, Hedera’s reliance on a fixed council of 39 nodes means the network is not open to anyone to participate in consensus, which has drawn criticism despite its technical strengths. IOTA, for a long time, relied on a central Coordinator to prevent attacks, which was a single point of failure/control. Nano’s consensus relies on a small number of principal representatives holding most voting weight (as of 2023, the top few reps often control a large portion of online voting weight), which could be seen as a concentration of power – though this is somewhat analogous to mining pools in PoW. In general, blockchains are currently perceived as easier to decentralize widely (thousands of nodes) than some DAG networks. The reasons are varied: some DAG algorithms might have higher node bandwidth requirements (making it harder for many nodes to participate fully), or the project’s design might intentionally keep a permissioned structure initially. This isn’t an inherent limitation of DAGs per se, but rather of specific implementations. It’s possible to have a highly decentralized DAG network, but in practice many haven’t reached the node counts of major blockchains yet.
  • Need for Volume (Security vs Throughput): Some DAG networks paradoxically require high transaction volume to function optimally. For example, IOTA’s security model becomes robust when lots of honest transactions are constantly confirming each other (raising the cumulative weight of honest subtangles). If the network activity is very low, the DAG can suffer from laziness – tips not getting approved quickly, or an attacker finding it easier to try and override parts of the DAG. In contrast, a traditional blockchain like Bitcoin doesn’t require a minimum number of transactions to remain secure (even if few transactions occur, miners are still competing to extend the chain). Thus, DAGs often thrive under load but might stagnate under sparse usage, unless special measures are taken (like IOTA’s coordinator or background “maintenance” transactions). This means performance can be inconsistent – great when usage is high, but possibly slower confirmation in off-peak times or low-use scenarios.
  • Ordering and Compatibility: Because DAGs produce a partial order of events that eventually needs to be consistent, the consensus algorithms can be quite intricate. In smart contract contexts, total ordering of transactions is required to avoid double-spending and to maintain deterministic execution. DAG systems like Fantom solve this by building an ordering layer (the final Opera Chain), but not all DAG systems support complex smart contracts easily. The state management and programming model can be challenging on a pure DAG. For example, if two transactions are non-conflicting, they can confirm in parallel on a DAG – that’s fine. But if they do conflict (say, two txns spending the same output or two trades on the same order), the network must decide one and drop the other. Ensuring that all nodes make the same decision in a decentralized way is harder without a single chain ordering everything. This is why many DAG projects initially avoided smart contracts or global state and focused on payments (where conflicts are simpler to detect via UTXOs or account balances). Interfacing DAG ledgers with existing blockchain ecosystems can also be non-trivial; for example, connecting an EVM to a DAG required Fantom to create a mechanism to linearize the DAG for the EVM execution. These complexities mean that not every use case can be immediately implemented on a DAG without careful design.
  • Storage and Sync: A potential issue is that if a DAG ledger allows a high volume of parallel transactions, the ledger can grow quickly. Efficient algorithms for pruning the DAG (removing old transactions that are no longer needed for security) are important, as well as for letting light nodes operate (light clients need ways to confirm transactions without storing the entire DAG). Research has identified the reachability challenge: ensuring new transactions can reach and reference earlier ones efficiently, and figuring out how to truncate history safely in a DAG. While blockchains also face growth issues, the DAG’s structure might complicate things like calculating balances or proofs for partial state, since the ledger isn’t a simple list of blocks. This is largely a technical challenge that can be addressed, but it adds to the overhead of designing a robust DAG system.
  • Perception and Network Effects: Outside of pure technical issues, DAG projects face the challenge of proving themselves in a blockchain-dominated space. Many developers and users are simply more comfortable with blockchain L1s, and network effects (more users, more dApps, more tooling on existing chains) can be hard to overcome. DAGs are sometimes marketed with bold claims (“blockchain killer”, etc.), which can invite skepticism. For example, a project might claim unlimited scalability – but users will wait to see it demonstrated under real conditions. Until DAG networks host “killer apps” or large user bases, they may be seen as experimental. Additionally, getting listed on exchanges, custody solutions, wallets – the whole infrastructure that already supports major blockchains – is an ongoing effort for each new DAG platform. So there’s a bootstrapping challenge: despite technical merits, adoption can lag due to ecosystem inertia.
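
The "need for volume" point above is easiest to see with a toy cumulative-weight calculation: in a Tangle-style DAG, a transaction's confirmation confidence grows with the number of later transactions that directly or indirectly approve it, so a quiet network accumulates weight slowly. The sketch below, using an invented five-transaction DAG, is a simplification of that idea rather than IOTA's actual algorithm.

```python
# Toy Tangle: each transaction lists the earlier transactions it approves.
# Cumulative weight of tx X = itself plus every transaction that references X
# directly or transitively. More traffic means weight grows faster.
approvals = {
    "tx1": [],              # earliest transaction
    "tx2": ["tx1"],
    "tx3": ["tx1"],
    "tx4": ["tx2", "tx3"],
    "tx5": ["tx4", "tx2"],
}

def cumulative_weight(tx: str) -> int:
    approvers = {tx}
    changed = True
    while changed:                      # transitive closure of "approves"
        changed = False
        for t, parents in approvals.items():
            if t not in approvers and any(p in approvers for p in parents):
                approvers.add(t)
                changed = True
    return len(approvers)

for tx in approvals:
    print(tx, cumulative_weight(tx))
# tx1 has the highest weight (5) because every later transaction approves it;
# fresh tips start at 1 and stay there until new traffic arrives.
```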

In summary, DAG-based ledgers trade simplicity for performance, and that comes with growing pains. The complexity of consensus, potential centralization in some implementations, and the need to gain trust equivalent to older blockchain systems are hurdles to overcome. The research community is actively studying these issues – for instance, a 2024 systematization-of-knowledge paper on DAG protocols notes the increasing variety of designs and the need for holistic understanding of their trade-offs. As DAG projects mature, we can expect many of these challenges (like removal of coordinators, open participation, better dev tools) to be addressed, but they are important to consider when evaluating DAG vs blockchain for a given application.

Current Adoption and Future Outlook

The adoption of DAG-based blockchain technology is still in its early stages relative to the widespread use of traditional blockchains. As of 2025, only a handful of public distributed ledgers use DAGs at scale – notable ones being Hedera Hashgraph, IOTA, Fantom, Nano, Avalanche (for part of its system), and a few others. Blockchains (linear chains) remain the dominant architecture in deployed systems. However, interest in DAGs has been steadily increasing in both industry and academia. We can identify a few trends and the outlook for DAG in blockchain:

  • Growing Number of DAG Projects and Research: There is a visible uptick in the number of new projects exploring DAG or hybrid architectures. For example, recent platforms like Aleph Zero (a privacy-focused network) use a DAG consensus for fast ordering, and Sui and Aptos (Move-language chains) incorporate DAG-based mempool or parallel execution engines to scale performance. Academic research into DAG-based consensus is flourishing – protocols like SPECTRE, PHANTOM, GhostDAG, and newer ones are pushing the boundaries, and comprehensive analyses (SoK papers) are being published to classify and evaluate DAG approaches. This indicates a healthy exploration and the emergence of best practices. As research identifies solutions to earlier weaknesses (for instance, how to achieve fairness, how to prune DAGs, how to secure DAGs under dynamic conditions), we’ll likely see these innovations trickle into implementations.
  • Hybrid Models in Mainstream Use: An interesting trend is that even traditional blockchains are adopting DAG concepts to improve performance. Avalanche is a prime example of a hybrid: it presents itself as a blockchain platform, but at its core uses a DAG consensus. It has gained significant adoption in DeFi and NFT circles, showing that users sometimes adopt a DAG-based system without even realizing it, as long as it meets their needs (fast and cheap). This trend may continue: DAG as an internal engine while exposing a familiar blockchain interface could be a winning strategy, easing developers in. Fantom did this with its Opera chain, and other projects might follow suit, effectively making DAG tech an unseen backbone for next-gen chains.
  • Enterprise and Niche Adoption: Enterprises that require high throughput, predictable costs, and are comfortable with more permissioned networks have been inclined to explore DAG ledgers. Hedera’s Governing Council model attracted big companies; they in turn drive use cases like asset tokenization for financial services, or tracking software licenses, etc., on Hedera. We’re seeing consortia consider DAG-based DLT for things like telecommunications settlements, advertising impression tracking, or interbank transfers, where the volume is high and they need finality. IOTA has been involved in European Union funded projects for infrastructure, digital identity pilots, and industrial IoT – these are more long-term adoption paths, but they show that DAGs are on the radar beyond just the crypto community. If some of these trials prove successful and scalable, we could see sector-specific adoption of DAG networks (e.g., an IoT consortium all using a DAG ledger to share and monetize data).
  • Community and Decentralization Progress: Early criticisms of DAG networks (central coordinators, permissioned validators) are gradually being addressed. IOTA’s Coordicide will, if successful, remove the central coordinator and transition IOTA to a fully decentralized network with a form of staking and community-run validators. Hedera has open-sourced its code and hinted at plans to further decentralize governance in the long run (beyond the initial council). Nano’s community continuously works on decentralizing representative distribution (encouraging more users to run reps or split their delegations). These moves are important for the credibility and trust in DAG networks, aligning them more with the ethos of blockchain. As decentralization increases, it’s likely that more crypto-native users and developers will be willing to build on or contribute to DAG projects, which can accelerate growth.
  • Interoperability and Layer-2 Use: We might also see DAGs being used as scaling layers or interoperable networks rather than standalone ecosystems. For example, a DAG ledger could serve as a high-speed layer-2 for Ethereum, periodically anchoring batched results to Ethereum for security. Alternatively, DAG networks could be linked via bridges to existing blockchains, allowing assets to flow where it’s cheapest to transact. If the UX can be made seamless, users might transact on a DAG network (enjoying high speed) while still relying on a base blockchain for settlement or security – getting the best of both worlds. Some projects consider this kind of layered approach.
  • Future Outlook – Complement, not Replacement (for now): It’s telling that even proponents often say DAG is an “alternative” or complement to blockchain rather than an outright replacement. In the near future, we can expect heterogeneous networks: some will be blockchain-based, some DAG-based, each optimized for different scenarios. DAGs might power the high-frequency backbone of Web3 (handling the grunt work of microtransactions and data logging), while blockchains might remain preferred for settlement, extremely high-value transactions, or where simplicity and robustness are paramount. Over a longer horizon, if DAG-based systems continue to prove themselves and if they can demonstrate equal or greater security and decentralization, it’s conceivable they could become the dominant paradigm for distributed ledgers. The energy efficiency angle also aligns DAGs well with global sustainability pressures, potentially making them more politically and socially acceptable in the long run. The carbon footprint benefits of DAG networks, combined with their performance advantages, could be a major driver if regulatory environments emphasize green technology.
  • Community Sentiment: There is a segment of the crypto community that is very excited about DAGs – seeing them as the next evolutionary step of DLT. You’ll often hear phrases like “DAGs are the future; blockchains will eventually be seen as the dial-up internet compared to DAG’s broadband.” This enthusiasm has to be balanced with practical results, but it suggests that talent and investments are flowing into this area. On the other hand, skeptics remain, pointing out that decentralization and security shouldn’t be compromised for speed – so DAG projects will have to demonstrate that they can have the best of both worlds.

In conclusion, the future outlook for DAG in blockchain is cautiously optimistic. Right now, blockchains still dominate, but DAG-based platforms are carving out their space and proving their capabilities in specific domains. As research resolves current challenges, we’ll likely see more convergence of ideas – with blockchains adopting DAG-inspired improvements and DAG networks adopting the lessons of blockchains on governance and security. Web3 researchers and developers would do well to keep an eye on DAG advancements, as they represent a significant branch of the DLT evolution tree. The coming years may see a diverse ecosystem of interoperable ledgers where DAGs play a vital role in scaling and special-purpose applications, moving us closer to the vision of a scalable, decentralized web.

In the words of one Hedera publication: DAG-based ledgers are “a promising step forward” in the evolution of digital currencies and decentralized tech – not a silver bullet to replace blockchains outright, but an important innovation that will work alongside and inspire improvements in the distributed ledger landscape as a whole.

Sources: The information in this report is drawn from a variety of credible sources, including academic research on DAG-based consensus, official documentation and whitepapers from projects like IOTA, Hedera Hashgraph, Fantom, and Nano, as well as technical blogs and articles that provide insights into DAG vs blockchain differences. These references support the comparative analysis, benefits, and case studies discussed above. The continued dialogue in the Web3 research community suggests that DAGs will remain a hot topic as we seek to solve the trilemma of scalability, security, and decentralization in the next generation of blockchain technology.

MegaETH: The 100,000 TPS Layer-2 Aiming to Supercharge Ethereum

· 9 min read

The Speed Revolution Ethereum Has Been Waiting For?

In the high-stakes world of blockchain scaling solutions, a new contender has emerged that's generating both excitement and controversy. MegaETH is positioning itself as Ethereum's answer to ultra-fast chains like Solana—promising sub-millisecond latency and an astonishing 100,000 transactions per second (TPS).


But these claims come with significant trade-offs. MegaETH is making calculated sacrifices to "Make Ethereum Great Again," raising important questions about the balance between performance, security, and decentralization.

As infrastructure providers who've seen many promising solutions come and go, we at BlockEden.xyz have conducted this analysis to help developers and builders understand what makes MegaETH unique—and what risks to consider before building on it.

What Makes MegaETH Different?

MegaETH is an Ethereum Layer-2 solution that has reimagined blockchain architecture with a singular focus: real-time performance.

While most L2 solutions improve on Ethereum's ~15 TPS by a factor of 10-100x, MegaETH aims for 1,000-10,000x improvement—speeds that would put it in a category of its own.

Revolutionary Technical Approach

MegaETH achieves its extraordinary speed through radical engineering decisions:

  1. Single Sequencer Architecture: Unlike most L2s that use multiple sequencers or plan to decentralize, MegaETH uses a single sequencer for ordering transactions, deliberately choosing performance over decentralization.

  2. Optimized State Trie: A completely redesigned state storage system that can handle terabyte-level state data efficiently, even on nodes with limited RAM.

  3. JIT Bytecode Compilation: Just-in-time compilation of Ethereum smart contract bytecode, bringing execution closer to "bare-metal" speed.

  4. Parallel Execution Pipeline: A multi-core approach that processes transactions in parallel streams to maximize throughput.

  5. Micro Blocks: Targeting ~1ms block times through continuous "streaming" block production rather than batch processing.

  6. EigenDA Integration: Using EigenLayer's data availability solution instead of posting all data to Ethereum L1, reducing costs while maintaining security through Ethereum-aligned validation.

This architecture delivers performance metrics that seem almost impossible for a blockchain:

  • Real-time latency (~10 ms target, with ~1 ms micro blocks as the longer-term goal)
  • 100,000+ TPS throughput
  • EVM compatibility for easy application porting
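
Because MegaETH keeps the standard EVM JSON-RPC interface, developers can sanity-check the latency claims with ordinary tooling. The sketch below polls `eth_blockNumber` and estimates the average block interval; the endpoint URL is a placeholder, not an official MegaETH RPC address.

```python
import json
import time
import urllib.request

RPC_URL = "https://testnet.example-megaeth-rpc.xyz"  # placeholder endpoint

def rpc(method: str, params=None):
    """Minimal JSON-RPC call using only the standard library."""
    payload = json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": method, "params": params or []}).encode()
    req = urllib.request.Request(RPC_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())["result"]

def estimate_block_time(sample_seconds: float = 5.0) -> float:
    """Estimate the average block interval by counting new blocks over a window."""
    start_block = int(rpc("eth_blockNumber"), 16)
    time.sleep(sample_seconds)
    end_block = int(rpc("eth_blockNumber"), 16)
    produced = max(end_block - start_block, 1)
    return sample_seconds / produced

if __name__ == "__main__":
    print(f"~{estimate_block_time() * 1000:.1f} ms per block")
```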

Testing the Claims: MegaETH's Current Status

As of March 2025, MegaETH's public testnet is live. The initial deployment began on March 6th with a phased rollout, starting with infrastructure partners and dApp teams before opening to broader user onboarding.

Early testnet metrics show:

  • ~1.68 Giga-gas per second throughput
  • ~15ms block times (significantly faster than other L2s)
  • Support for parallel execution that will eventually push performance even higher

The team has indicated that the testnet is running in a somewhat throttled mode, with plans to enable additional parallelization that could double gas throughput to around 3.36 Ggas/sec, moving toward their ultimate target of 10 Ggas/sec (10 billion gas per second).
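
Gas-per-second figures translate into rough TPS estimates once you assume an average gas cost per transaction. The quick calculation below uses the standard 21,000 gas of a plain ETH transfer and a heavier 150,000-gas figure as a stand-in for a contract-heavy DeFi transaction; both per-transaction costs are assumptions for illustration, not MegaETH-published averages.

```python
# Rough TPS implied by a given gas throughput, for two assumed transaction sizes.
SIMPLE_TRANSFER_GAS = 21_000     # minimum cost of a plain ETH transfer
DEFI_SWAP_GAS = 150_000          # assumed average for a contract-heavy transaction

for label, gas_per_sec in [("testnet today", 1.68e9),
                           ("with extra parallelism", 3.36e9),
                           ("long-term target", 10e9)]:
    simple_tps = gas_per_sec / SIMPLE_TRANSFER_GAS
    defi_tps = gas_per_sec / DEFI_SWAP_GAS
    print(f"{label}: ~{simple_tps:,.0f} TPS (transfers), ~{defi_tps:,.0f} TPS (DeFi)")
# testnet today: ~80,000 TPS (transfers), ~11,200 TPS (DeFi)
```

In other words, the current ~1.68 Ggas/sec already corresponds to roughly 80,000 simple transfers per second, and the 10 Ggas/sec goal is what backs the 100,000+ TPS headline even for heavier workloads.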

The Security and Trust Model

MegaETH's approach to security represents a significant departure from blockchain orthodoxy. Unlike Ethereum's trust-minimized design with thousands of validating nodes, MegaETH embraces a centralized execution layer with Ethereum as its security backstop.

The "Can't Be Evil" Philosophy

MegaETH employs an optimistic rollup security model with some unique characteristics:

  1. Fraud Proof System: Like other optimistic rollups, MegaETH allows observers to challenge invalid state transitions through fraud proofs submitted to Ethereum.

  2. Verifier Nodes: Independent nodes replicate the sequencer's computations and would initiate fraud proofs if discrepancies are found.

  3. Ethereum Settlement: All transactions are eventually settled on Ethereum, inheriting its security for final state.

This creates what the team calls a "can't be evil" mechanism—the sequencer can't produce invalid blocks or alter state incorrectly without being caught and punished.
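
The verifier role in this model boils down to "re-execute and compare." The sketch below shows that core loop in simplified form: a verifier replays the sequencer's batch against the previous state and raises a challenge if its locally computed state root disagrees with the one the sequencer posted. Real optimistic-rollup fraud proofs are far more involved (interactive bisection, one-step proofs verified on L1); this is only the intuition.

```python
import hashlib
import json

def state_root(state: dict) -> str:
    """Stand-in for a Merkle/Patricia state root: a hash of the sorted state."""
    return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

def apply_batch(state: dict, batch: list[dict]) -> dict:
    """Deterministically replay a batch of simple transfers."""
    new_state = dict(state)
    for tx in batch:
        new_state[tx["from"]] = new_state.get(tx["from"], 0) - tx["amount"]
        new_state[tx["to"]] = new_state.get(tx["to"], 0) + tx["amount"]
    return new_state

def verify(prev_state: dict, batch: list[dict], claimed_root: str) -> str:
    """Verifier node: re-execute the batch and compare state roots."""
    local_root = state_root(apply_batch(prev_state, batch))
    if local_root == claimed_root:
        return "accept"
    return "submit fraud proof to L1"   # discrepancy found, challenge the sequencer

prev = {"alice": 100, "bob": 0}
batch = [{"from": "alice", "to": "bob", "amount": 40}]
honest_root = state_root(apply_batch(prev, batch))
print(verify(prev, batch, honest_root))    # accept
print(verify(prev, batch, "bogus-root"))   # submit fraud proof to L1
```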

The Centralization Trade-off

The controversial aspect: MegaETH runs with a single sequencer and explicitly has "no plans to ever decentralize the sequencer." This brings two significant risks:

  1. Liveness Risk: If the sequencer goes offline, the network could halt until it recovers or a new sequencer is appointed.

  2. Censorship Risk: The sequencer could theoretically censor certain transactions or users in the short term (though users could ultimately exit via L1).

MegaETH argues these risks are acceptable because:

  • The L2 is anchored to Ethereum for final security
  • Data availability is handled by multiple nodes in EigenDA
  • Any censorship or fraud can be seen and challenged by the community

Use Cases: When Ultra-Fast Execution Matters

MegaETH's real-time capabilities unlock use cases that were previously impractical on slower blockchains:

1. High-Frequency Trading and DeFi

MegaETH enables DEXs with near-instant trade execution and order book updates. Projects already building include:

  • GTE: A real-time spot DEX combining central limit order books and AMM liquidity
  • Teko Finance: A money market for leveraged lending with rapid margin updates
  • Cap: A stablecoin and yield engine that arbitrages across markets
  • Avon: A lending protocol with orderbook-based loan matching

These DeFi applications rely on MegaETH's throughput to operate with minimal slippage and to sustain high-frequency updates.

2. Gaming and Metaverse

The sub-second finality makes fully on-chain games viable without waiting for confirmations:

  • Awe: An open-world 3D game with on-chain actions
  • Biomes: An on-chain metaverse similar to Minecraft
  • Mega Buddies and Mega Cheetah: Collectible avatar series

Such applications can deliver real-time feedback in blockchain games, enabling fast-paced gameplay and on-chain PvP battles.

3. Enterprise Applications

MegaETH's performance makes it suitable for enterprise applications requiring high throughput:

  • Instantaneous payments infrastructure
  • Real-time risk management systems
  • Supply chain verification with immediate finality
  • High-frequency auction systems

The key advantage in all these cases is the ability to run compute-intensive applications with immediate feedback while still being connected to Ethereum's ecosystem.

The Team Behind MegaETH

MegaETH was co-founded by a team with impressive credentials:

  • Li Yilong: PhD in computer science from Stanford specializing in low-latency computing systems
  • Yang Lei: PhD from MIT researching decentralized systems and Ethereum connectivity
  • Shuyao Kong: Former Head of Global Business Development at ConsenSys

The project has attracted notable backers, including Ethereum co-founders Vitalik Buterin and Joseph Lubin as angel investors. Vitalik's involvement is particularly noteworthy, as he rarely invests in specific projects.

Other investors include Sreeram Kannan (founder of EigenLayer), VC firms like Dragonfly Capital, Figment Capital, and Robot Ventures, and influential community figures such as Cobie.

Token Strategy: The Soulbound NFT Approach

MegaETH introduced an innovative token distribution method through "soulbound NFTs" called "The Fluffle." In February 2025, they created 10,000 non-transferable NFTs representing at least 5% of the total MegaETH token supply.

Key aspects of the tokenomics:

  • 5,000 NFTs were sold at 1 ETH each (raising ~$13-14 million)
  • The other 5,000 NFTs were allocated to ecosystem projects and builders
  • The NFTs are soulbound (cannot be transferred), ensuring long-term alignment
  • Implied valuation of around $540 million, extremely high for a pre-launch project
  • The team has raised approximately $30-40 million in venture funding

Eventually, the MegaETH token is expected to serve as the native currency for transaction fees and possibly for staking and governance.
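
The implied-valuation figure follows from simple arithmetic. The sketch below reproduces it under two stated assumptions: that all 10,000 Fluffle NFTs are valued at the 1 ETH sale price, and that ETH traded around $2,700 at the time of the sale. Both numbers are assumptions used for illustration.

```python
# Back-of-the-envelope implied valuation of the MegaETH token from the NFT sale.
NFT_COUNT = 10_000       # total soulbound NFTs ("The Fluffle")
PRICE_ETH = 1            # public sale price per NFT
ETH_USD = 2_700          # assumed ETH price at the time of the sale
SUPPLY_SHARE = 0.05      # the NFTs represent at least 5% of total token supply

raised_usd = 5_000 * PRICE_ETH * ETH_USD            # only 5,000 NFTs were sold
nft_tranche_value = NFT_COUNT * PRICE_ETH * ETH_USD # value of the full 5% tranche
implied_valuation = nft_tranche_value / SUPPLY_SHARE

print(f"raised: ~${raised_usd / 1e6:.1f}M")                     # ~$13.5M
print(f"implied valuation: ~${implied_valuation / 1e6:.0f}M")   # ~$540M
```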

How MegaETH Compares to Competitors

vs. Other Ethereum L2s

Compared to Optimism, Arbitrum, and Base, MegaETH is significantly faster but makes bigger compromises on decentralization:

  • Performance: MegaETH targets 100,000+ TPS and ~10 ms latency, versus Arbitrum's roughly 250 ms transaction times and far lower throughput
  • Decentralization: MegaETH uses a single sequencer vs. other L2s' plans for decentralized sequencers
  • Data Availability: MegaETH uses EigenDA vs. other L2s posting data directly to Ethereum

vs. Solana and High-Performance L1s

MegaETH aims to "beat Solana at its own game" while leveraging Ethereum's security:

  • Throughput: MegaETH targets 100k+ TPS vs. Solana's theoretical 65k TPS (typically a few thousand in practice)
  • Latency: MegaETH ~10 ms vs. Solana's ~400 ms finality
  • Decentralization: MegaETH has 1 sequencer vs. Solana's ~1,900 validators

vs. ZK-Rollups (StarkNet, zkSync)

While ZK-rollups offer stronger security guarantees through validity proofs:

  • Speed: MegaETH offers faster user experience without waiting for ZK proof generation
  • Trustlessness: ZK-rollups don't require trust in a sequencer's honesty, providing stronger security
  • Future Plans: MegaETH may eventually integrate ZK proofs, becoming a hybrid solution

MegaETH's positioning is clear: it's the fastest option within the Ethereum ecosystem, sacrificing some decentralization to achieve Web2-like speeds.

The Infrastructure Perspective: What Builders Should Consider

As an infrastructure provider connecting developers to blockchain nodes, BlockEden.xyz sees both opportunities and challenges in MegaETH's approach:

Potential Benefits for Builders

  1. Exceptional User Experience: Applications can offer instant feedback and high throughput, creating Web2-like responsiveness.

  2. EVM Compatibility: Existing Ethereum dApps can port over with minimal changes, unlocking performance without rewrites.

  3. Cost Efficiency: High throughput means lower per-transaction costs for users and applications.

  4. Ethereum Security Backstop: Despite centralization at the execution layer, Ethereum settlement provides a security foundation.

Risk Considerations

  1. Single Point of Failure: The centralized sequencer creates liveness risk—if it goes down, so does your application.

  2. Censorship Vulnerability: Applications could face transaction censorship without immediate recourse.

  3. Early-Stage Technology: MegaETH's novel architecture hasn't been battle-tested at scale with real value.

  4. Dependency on EigenDA: Using a newer data availability solution adds an additional trust assumption.

Infrastructure Requirements

Supporting MegaETH's throughput will require robust infrastructure:

  • High-capacity RPC nodes capable of handling the firehose of data
  • Advanced indexing solutions for real-time data access
  • Specialized monitoring for the unique architecture
  • Reliable bridge monitoring for cross-chain operations

Conclusion: Revolution or Compromise?

MegaETH represents a bold experiment in blockchain scaling—one that deliberately prioritizes performance over decentralization. Whether this approach succeeds depends on whether the market values speed more than decentralized execution.

The coming months will be critical as MegaETH transitions from testnet to mainnet. If it delivers on its performance promises while maintaining sufficient security, it could fundamentally reshape how we think about blockchain scaling. If it stumbles, it will reinforce why decentralization remains a core blockchain value.

For now, MegaETH stands as one of the most ambitious Ethereum scaling solutions to date. Its willingness to challenge orthodoxy has already sparked important conversations about what trade-offs are acceptable in pursuit of mainstream blockchain adoption.

At BlockEden.xyz, we're committed to supporting developers wherever they build, including high-performance networks like MegaETH. Our reliable node infrastructure and API services are designed to help applications thrive across the multi-chain ecosystem, regardless of which approach to scaling ultimately prevails.


Looking to build on MegaETH or need reliable node infrastructure for high-throughput applications? Contact Email: info@BlockEden.xyz to learn how we can support your development with our 99.9% uptime guarantee and specialized RPC services across 27+ blockchains.

Scaling Blockchains: How Caldera and the RaaS Revolution Are Shaping Web3's Future

· 7 min read

The Web3 Scaling Problem

The blockchain industry faces a persistent challenge: how do we scale to support millions of users without sacrificing security or decentralization?

Ethereum, the leading smart contract platform, processes roughly 15 transactions per second on its base layer. During periods of high demand, this limitation has led to exorbitant gas fees—sometimes exceeding $100 per transaction during NFT mints or DeFi farming frenzies.

This scaling bottleneck presents an existential threat to Web3 adoption. Users accustomed to the instant responsiveness of Web2 applications won't tolerate paying $50 and waiting 3 minutes just to swap tokens or mint an NFT.

Enter the solution that's rapidly reshaping blockchain architecture: Rollups-as-a-Service (RaaS).


Understanding Rollups-as-a-Service (RaaS)

RaaS platforms enable developers to deploy their own custom blockchain rollups without the complexity of building everything from scratch. These services transform what would normally require a specialized engineering team and months of development into a streamlined, sometimes one-click deployment process.

Why does this matter? Because rollups are the key to blockchain scaling.

Rollups work by:

  • Processing transactions off the main chain (Layer 1)
  • Batching these transactions together
  • Submitting compressed proofs of these transactions back to the main chain

The result? Drastically increased throughput and significantly reduced costs while inheriting security from the underlying Layer 1 blockchain (like Ethereum).
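
A stripped-down model of this flow fits in a few lines: transactions execute off-chain, get batched, and only a compact commitment (here a single hash standing in for compressed calldata or a provable state root) is posted to L1, so the fixed L1 cost is amortized across the whole batch. This is a conceptual sketch under assumed numbers, not any specific rollup's implementation.

```python
import hashlib
import json

L1_POST_COST_USD = 5.00   # assumed fixed cost to post one batch commitment to L1

def execute_off_chain(txs: list[dict]) -> dict:
    """Apply transactions off-chain (the rollup's execution layer)."""
    balances: dict[str, int] = {}
    for tx in txs:
        balances[tx["from"]] = balances.get(tx["from"], 0) - tx["amount"]
        balances[tx["to"]] = balances.get(tx["to"], 0) + tx["amount"]
    return balances

def batch_commitment(txs: list[dict], state: dict) -> str:
    """Compress a whole batch into one commitment posted back to the main chain."""
    blob = json.dumps({"txs": txs, "state": state}, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

txs = [{"from": f"user{i}", "to": "dex", "amount": 1} for i in range(2_000)]
state = execute_off_chain(txs)
print("posted to L1:", batch_commitment(txs, state)[:16], "...")
print(f"amortized L1 cost: ${L1_POST_COST_USD / len(txs):.4f} per transaction")
```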

"Rollups don't compete with Ethereum—they extend it. They're like specialized Express lanes built on top of Ethereum's highway."

This approach to scaling is so promising that Ethereum officially adopted a "rollup-centric roadmap" in 2020, acknowledging that the future isn't a single monolithic chain, but rather an ecosystem of interconnected, purpose-built rollups.

Caldera: Leading the RaaS Revolution

Among the emerging RaaS providers, Caldera stands out as a frontrunner. Founded in 2023 and having raised $25M from prominent investors including Dragonfly, Sequoia Capital, and Lattice, Caldera has quickly positioned itself as a leading infrastructure provider in the rollup space.

What Makes Caldera Different?

Caldera distinguishes itself in several key ways:

  1. Multi-Framework Support: Unlike competitors who focus on a single rollup framework, Caldera supports major frameworks like Optimism's OP Stack and Arbitrum's Orbit/Nitro technology, giving developers flexibility in their technical approach.

  2. End-to-End Infrastructure: When you deploy with Caldera, you get a complete suite of components: reliable RPC nodes, block explorers, indexing services, and bridge interfaces.

  3. Rich Integration Ecosystem: Caldera comes pre-integrated with 40+ Web3 tools and services, including oracles, faucets, wallets, and cross-chain bridges (LayerZero, Axelar, Wormhole, Connext, and more).

  4. The Metalayer Network: Perhaps Caldera's most ambitious innovation is its Metalayer—a network that connects all Caldera-powered rollups into a unified ecosystem, allowing them to share liquidity and messages seamlessly.

  5. Multi-VM Support: In late 2024, Caldera became the first RaaS to support the Solana Virtual Machine (SVM) on Ethereum, enabling Solana-like high-performance chains that still settle to Ethereum's secure base layer.

Caldera's approach is creating what they call an "everything layer" for rollups—a cohesive network where different rollups can interoperate rather than exist as isolated islands.

Real-World Adoption: Who's Using Caldera?

Caldera has gained significant traction, with over 75 rollups in production as of late 2024. Some notable projects include:

  • Manta Pacific: A highly scalable network for deploying zero-knowledge applications that uses Caldera's OP Stack combined with Celestia for data availability.

  • RARI Chain: Rarible's NFT-focused rollup that processes transactions in under a second and enforces NFT royalties at the protocol level.

  • Kinto: A regulatory-compliant DeFi platform with on-chain KYC/AML and account abstraction capabilities.

  • Injective's inEVM: An EVM-compatible rollup that extends Injective's interoperability, connecting the Cosmos ecosystem with Ethereum-based dApps.

These projects highlight how application-specific rollups enable customization not possible on general-purpose Layer 1s. By late 2024, Caldera's collective rollups had reportedly processed over 300 million transactions for 6+ million unique wallets, with nearly $1 billion in total value locked (TVL).

How RaaS Compares: Caldera vs. Competitors

The RaaS landscape is becoming increasingly competitive, with several notable players:

Conduit

  • Focuses exclusively on Optimism and Arbitrum ecosystems
  • Emphasizes a fully self-serve, no-code experience
  • Powers approximately 20% of Ethereum's mainnet rollups, including Zora

AltLayer

  • Offers "Flashlayers"—disposable, on-demand rollups for temporary needs
  • Focuses on elastic scaling for specific events or high-traffic periods
  • Demonstrated impressive throughput during gaming events (180,000+ daily transactions)

Sovereign Labs

  • Building a Rollup SDK focused on zero-knowledge technologies
  • Aims to enable ZK-rollups on any base blockchain, not just Ethereum
  • Still in development, positioning for the next wave of multi-chain ZK deployment

While these competitors excel in specific niches, Caldera's comprehensive approach—combining a unified rollup network, multi-VM support, and a focus on developer experience—has helped establish it as a market leader.

The Future of RaaS and Blockchain Scaling

RaaS is poised to reshape the blockchain landscape in profound ways:

1. The Proliferation of App-Specific Chains

Industry research suggests we're moving toward a future with potentially millions of rollups, each serving specific applications or communities. With RaaS lowering deployment barriers, every significant dApp could have its own optimized chain.

2. Interoperability as the Critical Challenge

As rollups multiply, the ability to communicate and share value between them becomes crucial. Caldera's Metalayer represents an early attempt to solve this challenge—creating a unified experience across a web of rollups.

3. From Isolated Chains to Networked Ecosystems

The end goal is a seamless multi-chain experience where users hardly need to know which chain they're on. Value and data would flow freely through an interconnected web of specialized rollups, all secured by robust Layer 1 networks.

4. Cloud-Like Blockchain Infrastructure

RaaS is effectively turning blockchain infrastructure into a cloud-like service. Caldera's "Rollup Engine" allows dynamic upgrades and modular components, treating rollups like configurable cloud services that can scale on demand.
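
What "configurable cloud service" means in practice is that a rollup becomes a declarative spec rather than a from-scratch engineering project. The sketch below shows a hypothetical deployment descriptor; the field names and values are invented for illustration and are not Caldera's actual configuration format or API.

```python
from dataclasses import dataclass

@dataclass
class RollupSpec:
    """Hypothetical declarative description of an app-specific rollup."""
    name: str
    framework: str          # e.g. "op-stack" or "arbitrum-orbit"
    settlement_layer: str   # where proofs / state roots are posted
    data_availability: str  # e.g. "ethereum-blobs" or "celestia"
    gas_token: str
    block_time_ms: int
    bridges: list[str]

game_chain = RollupSpec(
    name="example-game-chain",
    framework="op-stack",
    settlement_layer="ethereum",
    data_availability="celestia",
    gas_token="ETH",
    block_time_ms=250,
    bridges=["layerzero", "axelar"],
)

# A RaaS provider would take a spec like this and provision the sequencer,
# RPC nodes, block explorer, and bridge contracts behind the scenes.
print(game_chain)
```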

What This Means for Developers and BlockEden.xyz

At BlockEden.xyz, we see enormous potential in the RaaS revolution. As an infrastructure provider connecting developers to blockchain nodes securely, we're positioned to play a crucial role in this evolving landscape.

The proliferation of rollups means developers need reliable node infrastructure more than ever. A future with thousands of application-specific chains demands robust RPC services with high availability—precisely what BlockEden.xyz specializes in providing.

We're particularly excited about the opportunities in:

  1. Specialized RPC Services for Rollups: As rollups adopt unique features and optimizations, specialized infrastructure becomes crucial.

  2. Cross-Chain Data Indexing: With value flowing between multiple rollups, developers need tools to track and analyze cross-chain activities.

  3. Enhanced Developer Tools: As rollup deployment becomes simpler, the need for sophisticated monitoring, debugging, and analytics tools grows.

  4. Unified API Access: Developers working across multiple rollups need simplified, unified access to diverse blockchain networks.

Conclusion: The Modular Blockchain Future

The rise of Rollups-as-a-Service represents a fundamental shift in how we think about blockchain scaling. Rather than forcing all applications onto a single chain, we're moving toward a modular future with specialized chains for specific use cases, all interconnected and secured by robust Layer 1 networks.

Caldera's approach—creating a unified network of rollups with shared liquidity and seamless messaging—offers a glimpse of this future. By making rollup deployment as simple as spinning up a cloud server, RaaS providers are democratizing access to blockchain infrastructure.

At BlockEden.xyz, we're committed to supporting this evolution by providing the reliable node infrastructure and developer tools needed to build in this multi-chain future. As we often say, the future of Web3 isn't a single chain—it's thousands of specialized chains working together.


Looking to build on a rollup or need reliable node infrastructure for your blockchain project? Contact Email: info@BlockEden.xyz to learn how we can support your development with our 99.9% uptime guarantee and specialized RPC services across 27+ blockchains.