L2 Throughput: 200→4,800 TPS in Two Years. But Did We Trade Composability for Scale?

Two years ago, Ethereum Layer 2s were processing around 200 TPS combined. Today? We’re hitting nearly 4,800 TPS across the L2 ecosystem. That’s a 24x increase, and it’s largely thanks to EIP-4844 (proto-danksharding) which launched with the Dencun upgrade in March 2024.

As someone who built L2 infrastructure at Polygon Labs and the Optimism Foundation before joining my current stealth startup, I’ve watched this transformation up close. The numbers are genuinely impressive, but I’m increasingly concerned we’ve traded one problem (throughput) for another (composability).

What EIP-4844 Actually Did

Proto-danksharding introduced blob transactions: temporary data blobs that L2s can use to post transaction data to Ethereum L1. Each blob is 128 KiB, and the key innovation is that this data lives in a separate fee market from regular transactions.

The impact on fees was immediate and dramatic:

  • L2 transaction costs dropped 81-90% for optimistic rollups
  • Base saw a 224% transaction volume increase post-Dencun
  • Blob transactions created a dedicated, cheaper data availability layer

From a pure throughput perspective, this worked brilliantly. We went from expensive L2 transactions ($1-5) to negligible costs ($0.10-$0.50). That’s what enabled the explosion in L2 activity we’re seeing today.
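The "separate fee market" mentioned above is worth making concrete: blob gas has its own EIP-1559-style base fee that responds exponentially to demand. A minimal sketch, using the `fake_exponential` helper and constants from the EIP-4844 specification:

```python
# Blob base fee sketch, using constants from the EIP-4844 spec.
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei (the fee floor)
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 2**17                     # 131,072 blob gas = one 128 KiB blob
TARGET_BLOB_GAS_PER_BLOCK = 393216       # 3 blobs per block at Dencun launch

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e**(numerator / denominator), per the EIP."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Base fee per blob gas (wei), driven by accumulated excess blob gas."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# At or below target usage, excess blob gas stays near zero: fee sits at the floor.
print(blob_base_fee(0))  # 1
# Sustained demand above target accumulates excess blob gas; the fee climbs fast.
print(blob_base_fee(50 * TARGET_BLOB_GAS_PER_BLOCK))
```

Because this market is independent of execution gas, blob prices can sit at the 1-wei floor even when L1 execution is congested, which is exactly why L2 fees collapsed post-Dencun.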

The Fragmentation Problem Nobody Talks About Enough

Here’s what I’m worried about: liquidity is now scattered across Arbitrum, Optimism, Base, zkSync, and a dozen other L2s instead of being unified on L1.

When I first started in this space, Ethereum L1 was slow (15 TPS), but it had something magical: atomic composability. A single transaction could interact with Uniswap, Aave, and Compound all at once. DeFi protocols composed like Lego blocks.

Now? If your tokens are on Arbitrum and the best yield is on Optimism, you need to:

  1. Bridge your assets (canonical optimistic-rollup withdrawals take ~7 days due to the fraud-proof challenge window, or use a fast bridge with additional trust assumptions)
  2. Pay bridge fees
  3. Monitor bridge transaction state
  4. Hope nothing goes wrong mid-transfer
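The steps above can be sketched as a tiny state machine. This is illustrative only: real bridges (Arbitrum's, Optimism's) have more stages and different APIs, but the shape of the problem is the same — you initiate, then poll for days:

```python
# Illustrative model of a canonical optimistic-rollup withdrawal.
# States and timings are simplified; names are placeholders, not a real API.
from dataclasses import dataclass
from enum import Enum, auto

CHALLENGE_WINDOW_SECONDS = 7 * 24 * 3600  # the 7-day fraud-proof window

class BridgeState(Enum):
    INITIATED = auto()      # burn/lock submitted on the source L2
    IN_CHALLENGE = auto()   # state root posted to L1, challenge window open
    CLAIMABLE = auto()      # window elapsed; funds claimable on the destination

@dataclass
class Withdrawal:
    amount_wei: int
    initiated_at: int       # unix timestamp
    state: BridgeState = BridgeState.INITIATED

    def post_state_root(self) -> None:
        self.state = BridgeState.IN_CHALLENGE

    def poll(self, now: int) -> BridgeState:
        """Step 3 from the list above: monitor bridge transaction state."""
        if (self.state is BridgeState.IN_CHALLENGE
                and now - self.initiated_at >= CHALLENGE_WINDOW_SECONDS):
            self.state = BridgeState.CLAIMABLE
        return self.state

w = Withdrawal(amount_wei=10**18, initiated_at=0)
w.post_state_root()
print(w.poll(now=3600))                      # still in challenge after 1 hour
print(w.poll(now=CHALLENGE_WINDOW_SECONDS))  # claimable only after 7 days
```

Note what the model makes obvious: between `INITIATED` and `CLAIMABLE` your capital is in limbo, which is precisely the "state uncertainty" problem discussed later in this thread.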

This isn’t just theoretical. I’m seeing:

  • Institutions bridging assets to L2s for custody but not deploying into DeFi because cross-L2 composability is too complex
  • Retail users confused by which L2 their tokens are on
  • Developers building separate L2-specific versions of their dApps rather than one unified experience

Is This Temporary or Fundamental?

The Ethereum Foundation’s recent blog post about L1/L2 roles explicitly acknowledges that fragmentation is the primary downside of a multichain ecosystem. They’re encouraging work on shared sequencers and synchronous composability.

In theory, shared sequencing could enable atomic actions across multiple L2s—swap on Base, add liquidity on Optimism, open a position on Mode, all in one transaction. The OP Superchain is working toward this with shared architecture and unified liquidity.

But here’s my concern: the Ethereum Foundation explicitly adopted a “neutral steward” mandate that avoids picking winners or coordinating L2 interoperability too directly. If the EF doesn’t lead the coordination effort, who does?

The Data: Scaling Success, Composability Question Mark

Let’s look at what actually happened:

  • EIP-4844 enabled roughly 1,000 TPS of aggregate L2 capacity at the launch blob target (0.375 MB/slot)
  • Full danksharding roadmap targets 100,000 TPS through increased blob capacity and data availability sampling
  • Over 950,000 blobs posted to Ethereum since March 2024 launch
  • But: TVL is split across 10+ major L2s with different bridges, different fee tokens, different developer experiences
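The ~1,000 TPS figure above is easy to sanity-check from protocol constants. The only assumption here is the average compressed transaction size (commonly estimated in the tens of bytes); everything else is fixed by the protocol:

```python
# Back-of-the-envelope check on the DA-limited throughput figure above.
BLOB_BYTES = 128 * 1024      # 128 KiB per blob
BLOBS_PER_SLOT = 3           # target at Dencun launch (~0.375 MB/slot)
SLOT_SECONDS = 12
ASSUMED_TX_BYTES = 32        # assumption: average compressed L2 tx size

da_bytes_per_second = BLOB_BYTES * BLOBS_PER_SLOT / SLOT_SECONDS
tps_ceiling = da_bytes_per_second / ASSUMED_TX_BYTES
print(f"{da_bytes_per_second:.0f} B/s of DA -> ~{tps_ceiling:.0f} TPS")
# -> 32768 B/s of DA -> ~1024 TPS, consistent with the ~1,000 TPS figure
```

The same arithmetic shows why the 100,000 TPS target requires the full danksharding blob-count increase: throughput scales linearly with data availability.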

Compare this to Ethereum L1 pre-L2s:

  • Unified liquidity pool where all protocols could compose
  • Single state machine where developers could reason about atomic operations
  • Slower and more expensive, but predictable and composable

What I’m Watching in 2026

PeerDAS (Peer Data Availability Sampling): This is Ethereum’s next DA upgrade. Instead of validators downloading entire blobs, they’ll sample small portions to verify availability. This should allow the network to safely increase blob capacity without overwhelming validators.
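The statistics behind sampling are what make this safe. A toy model (illustrative math only; PeerDAS actually samples columns of 2D erasure-coded data, but the compounding logic is the same): if a node takes independent random samples and an adversary withholds some fraction of the data, detection probability grows exponentially in the number of samples.

```python
# Toy model of why data availability sampling works. Each independent random
# sample hits withheld data with probability equal to the withheld fraction,
# so failure-to-detect shrinks geometrically with sample count.
def detection_probability(withheld_fraction: float, samples: int) -> float:
    """Chance that at least one of `samples` random queries hits withheld data."""
    return 1 - (1 - withheld_fraction) ** samples

# With erasure coding, an attacker must withhold a large fraction (>50% of the
# extended data) to prevent reconstruction -- and that much withholding is
# nearly impossible to hide from even a handful of samples.
for k in (1, 8, 16):
    print(k, detection_probability(0.5, k))
```

This is why validators can verify availability without downloading whole blobs, and why blob capacity can grow without a matching growth in per-validator bandwidth.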

Shared sequencers: Projects like Espresso Systems are working on cross-chain synchronous composability. If this works, we could get the best of both worlds—L2 speed with near-L1 composability.

Superchain momentum: Optimism’s vision of a unified L2 network with shared standards and liquidity is gaining traction. Base’s success is proof that the model can work.

My Question for the Community

Did we make the right trade-off?

Should Ethereum have prioritized L1 scaling (full sharding) over the rollup-centric roadmap? Or is L2 fragmentation a temporary growing pain that shared sequencers and better bridging will solve?

As someone building in this space every day, I see both the incredible throughput gains AND the composability losses. I’m optimistic about solutions like shared sequencing, but I’m also pragmatic about how long it takes for these coordination mechanisms to mature.

What’s your experience? Are you building cross-L2? Frustrated by fragmentation? Or do you think this is just the natural evolution toward a multi-chain future where each L2 finds its niche?

TL;DR: EIP-4844 increased L2 throughput from 200 to 4,800 TPS (success!), but fractured liquidity across Arbitrum, Optimism, Base, and others. Atomic composability—DeFi’s superpower—is now broken unless you stay on one L2. Shared sequencers might fix this, but fragmentation could be the price we pay for scale.

This is spot on, and I want to dig deeper into the composability loss because it’s more fundamental than most people realize.

The Magic We Lost: Atomic Composability

Pre-L2 Ethereum had a property that made DeFi possible: synchronous composability. When you executed a transaction on L1, you could interact with multiple protocols in a single atomic operation. If any step failed, the entire transaction reverted.

This enabled:

  • Flash loans - borrow millions, arbitrage across DEXs, repay in one transaction
  • Complex DeFi strategies - swap on Uniswap, supply to Aave, stake receipt token, all atomic
  • Protocol composability - protocols could trustlessly interact without worrying about state divergence
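The all-or-nothing property these bullets rely on can be shown in a few lines. A minimal sketch of single-chain atomicity (protocol names are illustrative placeholders, not real contract calls):

```python
# Minimal sketch of single-chain atomicity: a batch of state mutations either
# all apply or none do -- the property flash loans depend on.
import copy

def execute_atomically(state: dict, steps) -> dict:
    """Apply each step in order; if any raises, restore the pre-batch state."""
    snapshot = copy.deepcopy(state)
    try:
        for step in steps:
            step(state)
        return state
    except Exception:
        state.clear()
        state.update(snapshot)   # full revert, as on Ethereum L1
        return state

state = {"wallet": 100, "dex_position": 0}

def swap(s):            # e.g. a DEX trade (illustrative)
    s["wallet"] -= 100
    s["dex_position"] += 100

def failing_supply(s):  # e.g. a lending deposit that reverts
    raise RuntimeError("insufficient collateral")

execute_atomically(state, [swap, failing_supply])
print(state)  # {'wallet': 100, 'dex_position': 0} -- nothing happened
```

The revert is the whole point: the swap's effects vanish when the deposit fails. Across two L2s there is no shared snapshot to restore, so the first leg lands while the second fails, and the user eats the difference.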

Now? Each L2 is an isolated state machine. Cross-L2 interactions require asynchronous message passing through bridges, which means:

  1. No flash loans across L2s - you can’t atomically borrow on Arbitrum and arbitrage on Optimism
  2. Multi-block operations - what was 1 transaction becomes 3+ with bridge confirmations
  3. State uncertainty - between initiating a bridge and completion, markets can move against you
  4. Bridge risk - every cross-L2 operation introduces a new trust assumption

Shared Sequencing: A Real Solution or Coordination Nightmare?

The proposed fix is shared sequencing - a single sequencer (or decentralized sequencer network) that orders transactions across multiple L2s, enabling atomic cross-chain operations.
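The core mechanism is easier to see in a toy model than in prose: one operator assigns a single global ordering across multiple rollups' transactions, so both legs of a cross-chain bundle can be included together or not at all. This sketch is purely illustrative; real designs (Espresso, Astria) involve consensus, proofs, and settlement guarantees far beyond this:

```python
# Toy model of shared sequencing: one sequencer, one global ordering,
# contiguous inclusion of cross-chain bundles. Names are illustrative.
from dataclasses import dataclass
from itertools import count
from typing import Optional

@dataclass(frozen=True)
class Tx:
    chain: str
    payload: str
    bundle_id: Optional[int] = None   # legs of a cross-chain action share an id

class SharedSequencer:
    def __init__(self) -> None:
        self._seq = count()
        self.ordered: list = []       # (global sequence number, tx) pairs

    def submit_bundle(self, txs: list) -> None:
        """Assign consecutive global slots to every leg: atomic inclusion."""
        for tx in txs:
            self.ordered.append((next(self._seq), tx))

seq = SharedSequencer()
seq.submit_bundle([Tx("base", "swap ETH->USDC", bundle_id=1),
                   Tx("optimism", "add USDC liquidity", bundle_id=1)])
# Both legs got consecutive slots from one operator, which is what would let
# each rollup treat the bundle as all-or-nothing.
print([(n, tx.chain) for n, tx in seq.ordered])  # [(0, 'base'), (1, 'optimism')]
```

Everything hard lives outside this sketch — which is exactly the point of the coordination challenges listed below: who runs the sequencer, how it's decentralized, and how a failed leg on one chain forces a revert on the other.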

In theory, this is brilliant. Espresso Systems, Astria, and others are building this. But there are massive coordination challenges:

Technical:

  • How do you ensure safety if one L2 in the shared set gets compromised?
  • What happens when sequencer sets diverge? (L2s will want different security models)
  • Gas pricing becomes complex when one transaction spans multiple chains

Economic:

  • Who pays for cross-chain sequencing? L2s? Users?
  • What’s the incentive for established L2s to adopt shared sequencing vs. keeping users locked in?
  • Sequencer MEV gets more complex across multiple chains

Political:

  • The EF’s “neutral steward” stance means they won’t force standards
  • Each L2 has different priorities (Arbitrum’s fraud proofs vs. zkSync’s ZK proofs)
  • Superchain is OP Stack-specific - what about ZK rollups?

Why I’m Not Convinced L1 Sharding Was the Wrong Choice

Look, I contributed to the rollup-centric roadmap, but I’m increasingly wondering if we made a mistake. Full L1 sharding would have:

  • Kept unified liquidity on the consensus layer
  • Maintained atomic composability across shards (same consensus, same state transition)
  • Scaled L1 directly rather than pushing complexity to L2s

Yes, sharding is technically harder than rollups. But the coordination problem we’ve created with L2 fragmentation might be even harder to solve.

The counterargument: rollups let “a thousand flowers bloom” - each L2 can optimize for different use cases (gaming, DeFi, social). Sharding would have locked us into one design.

Fair point. But DeFi’s superpower was composability, and we’ve sacrificed that for throughput.

What Needs to Happen Next

If we’re committed to the L2 roadmap (and we are - there’s no going back), here’s what the ecosystem needs:

  1. Standardized bridge protocols - the current bridge landscape is a security nightmare
  2. ERC-7683 (Cross Chain Intents Standard) - let users express what they want, solvers figure out execution
  3. Native L2 interoperability in protocol design - new L2s MUST plan for cross-chain from day one
  4. Ethereum Foundation leadership - “neutral steward” can’t mean “hands off coordination”
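On item 2: the intent model inverts the bridging flow — users sign *what* they want, solvers compete on *how* to deliver it. A loose Python sketch of the order shape ERC-7683 standardizes (field names approximate the standard's Solidity structs; treat them as illustrative, not a faithful ABI):

```python
# Loose sketch of an ERC-7683-style cross-chain intent. Field names are
# approximations of the standard; values below are made up for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class CrossChainOrder:
    user: str             # address of the intent's signer (placeholder value)
    origin_chain_id: int
    open_deadline: int    # latest timestamp the order may be opened
    fill_deadline: int    # latest timestamp a solver may fill it
    order_data: bytes     # opaque, order-type-specific payload

def is_fillable(order: CrossChainOrder, now: int) -> bool:
    """A solver only attempts orders whose fill deadline hasn't passed."""
    return now <= order.fill_deadline

order = CrossChainOrder(user="0xabc...",            # illustrative address
                        origin_chain_id=42161,      # Arbitrum One
                        open_deadline=1_700_000_000,
                        fill_deadline=1_700_000_600,
                        order_data=b"swap 1 ETH on Arbitrum -> USDC on Base")
print(is_fillable(order, now=1_700_000_300))  # True
```

The user never touches a bridge directly; the solver fronts the destination-chain assets and gets repaid on the origin chain, turning the 7-day wait into the solver's inventory problem rather than the user's.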

The risk is that we end up with a few dominant L2s (Arbitrum, Base, Optimism) that capture most liquidity, while smaller L2s become ghost chains. That’s not the multi-chain future we promised.

TL;DR: Atomic composability made DeFi possible. L2s broke it. Shared sequencing could fix it, but coordination is hard and the EF isn’t leading. We might have been better off with L1 sharding, but it’s too late to go back now.