Two years ago, Ethereum Layer 2s were processing around 200 TPS combined. Today? We’re hitting nearly 4,800 TPS across the L2 ecosystem. That’s a 24x increase, and it’s largely thanks to EIP-4844 (proto-danksharding) which launched with the Dencun upgrade in March 2024.
Having built L2 infrastructure at Polygon Labs and the Optimism Foundation before joining my current stealth startup, I’ve watched this transformation up close. The numbers are genuinely impressive, but I’m increasingly concerned we’ve traded one problem (throughput) for another (composability).
What EIP-4844 Actually Did
Proto-danksharding introduced blob transactions: temporary data blobs that L2s can use to post transaction data to Ethereum L1. Each blob is 128 KiB, and the key innovation is that this data lives in a separate fee market from regular transactions.
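That separate fee market works like EIP-1559’s, but for "blob gas": the blob base fee rises or falls exponentially with how far recent blocks run above or below the blob target. Here is a sketch of the pricing function, using the integer-exponential helper and the Dencun launch constants from the EIP-4844 spec (3-blob target, pre-Pectra):

```python
# Blob base fee per EIP-4844, using Dencun launch parameters.
MIN_BASE_FEE_PER_BLOB_GAS = 1           # wei (the fee floor)
BLOB_BASE_FEE_UPDATE_FRACTION = 3338477
GAS_PER_BLOB = 131072                   # 128 KiB, one blob gas per byte
TARGET_BLOB_GAS_PER_BLOCK = 393216      # 3 blobs per block at launch

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer Taylor-series approximation of factor * e^(numerator/denominator),
    as defined in the EIP-4844 spec."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = (numerator_accum * numerator) // (denominator * i)
        i += 1
    return output // denominator

def blob_base_fee(excess_blob_gas: int) -> int:
    """Price per unit of blob gas, given the chain's running excess blob gas."""
    return fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS,
                            excess_blob_gas,
                            BLOB_BASE_FEE_UPDATE_FRACTION)

# When blocks run at the 3-blob target, excess blob gas stays near zero and
# the blob base fee sits at the 1-wei floor, i.e. near-free data availability.
print(blob_base_fee(0))  # 1
# Sustained demand above target makes the fee climb exponentially.
print(blob_base_fee(10 * TARGET_BLOB_GAS_PER_BLOCK))
```

The 1-wei floor is why blob space was effectively free in the months after Dencun: L2 demand simply hadn’t caught up to the 3-blob target yet.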
The impact on fees was immediate and dramatic:
- L2 transaction costs dropped 81-90% for optimistic rollups
- Base saw a 224% transaction volume increase post-Dencun
- Blob transactions created a dedicated, cheaper data availability layer
From a pure throughput perspective, this worked brilliantly. We went from expensive L2 transactions ($1-5) to much cheaper ones ($0.10-$0.50, often less). That’s what enabled the explosion in L2 activity we’re seeing today.
The Fragmentation Problem Nobody Talks About Enough
Here’s what I’m worried about: liquidity is now scattered across Arbitrum, Optimism, Base, zkSync, and a dozen other L2s instead of being unified on L1.
When I first started in this space, Ethereum L1 was slow (15 TPS), but it had something magical: atomic composability. A single transaction could interact with Uniswap, Aave, and Compound all at once. DeFi protocols composed like Lego blocks.
Now? If your tokens are on Arbitrum and the best yield is on Optimism, you need to:
- Bridge your assets (wait time: 7 days for optimistic rollups, or use a fast bridge with additional risk)
- Pay bridge fees
- Monitor bridge transaction state
- Hope nothing goes wrong mid-transfer
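The "7 days" in that first step isn’t a bridge quirk; it’s the fraud-proof challenge window baked into the canonical withdrawal path. A simplified state machine makes the friction concrete (the `Withdrawal` class and exact flow here are illustrative, loosely modeled on a prove-then-finalize withdrawal; real implementations differ):

```python
from dataclasses import dataclass
from enum import Enum, auto

CHALLENGE_WINDOW_SECONDS = 7 * 24 * 3600  # 7-day fraud-proof window

class State(Enum):
    INITIATED = auto()   # tokens burned/locked on the L2
    PROVEN = auto()      # withdrawal proof posted to L1
    FINALIZED = auto()   # window elapsed without a challenge; funds released

@dataclass
class Withdrawal:
    proven_at: float = 0.0
    state: State = State.INITIATED

    def prove(self, now: float) -> None:
        assert self.state is State.INITIATED
        self.proven_at = now
        self.state = State.PROVEN

    def finalize(self, now: float) -> None:
        assert self.state is State.PROVEN
        if now - self.proven_at < CHALLENGE_WINDOW_SECONDS:
            raise RuntimeError("challenge window still open")
        self.state = State.FINALIZED

w = Withdrawal()
w.prove(now=0)
try:
    w.finalize(now=3600)                  # one hour later: still locked
except RuntimeError as e:
    print(e)
w.finalize(now=CHALLENGE_WINDOW_SECONDS)  # day 7: funds finally released
print(w.state)                            # State.FINALIZED
```

Fast bridges exist precisely to paper over that week of limbo, but they do it by fronting you liquidity, which is where the "additional risk" comes in.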
This isn’t just theoretical. I’m seeing:
- Institutions bridging assets to L2s for custody but not deploying into DeFi because cross-L2 composability is too complex
- Retail users confused by which L2 their tokens are on
- Developers building separate L2-specific versions of their dApps rather than one unified experience
Is This Temporary or Fundamental?
The Ethereum Foundation’s recent blog post about L1/L2 roles explicitly acknowledges that fragmentation is the primary downside of a multichain ecosystem. They’re encouraging work on shared sequencers and synchronous composability.
In theory, shared sequencing could enable atomic actions across multiple L2s—swap on Base, add liquidity on Optimism, open a position on Mode, all in one transaction. The OP Superchain is working toward this with shared architecture and unified liquidity.
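Mechanically, the core idea is all-or-nothing inclusion: a shared sequencer accepts a bundle of transactions destined for different rollups and commits to ordering either every leg or none of them. A toy sketch of that invariant (the interfaces here are entirely hypothetical; real designs like Espresso’s are far more involved):

```python
from dataclasses import dataclass

@dataclass
class Leg:
    rollup: str        # e.g. "base", "optimism", "mode"
    calldata: bytes    # the transaction payload for that rollup

class SharedSequencer:
    """Toy model: a bundle lands on every participating rollup or on none."""

    def __init__(self, rollups: set[str]):
        self.rollups = rollups
        self.ordered: dict[str, list[bytes]] = {r: [] for r in rollups}

    def submit_bundle(self, legs: list[Leg]) -> bool:
        # Atomicity: refuse the whole bundle if any leg targets a rollup
        # this sequencer doesn't order; never include a partial bundle.
        if any(leg.rollup not in self.rollups for leg in legs):
            return False
        for leg in legs:
            self.ordered[leg.rollup].append(leg.calldata)
        return True

seq = SharedSequencer({"base", "optimism", "mode"})
ok = seq.submit_bundle([
    Leg("base", b"swap"),
    Leg("optimism", b"add_liquidity"),
    Leg("mode", b"open_position"),
])
print(ok)  # True: all three legs ordered in one atomic bundle

rejected = seq.submit_bundle([Leg("base", b"swap"), Leg("arbitrum", b"repay")])
print(rejected)  # False: no leg was ordered at all
```

The hard part in practice isn’t the ordering logic; it’s guaranteeing that atomic inclusion also means atomic *execution* across independently-run state machines, which is exactly what these projects are still working out.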
But here’s my concern: the Ethereum Foundation explicitly adopted a “neutral steward” mandate that avoids picking winners or coordinating L2 interoperability too directly. If the EF doesn’t lead the coordination effort, who does?
The Data: Scaling Success, Composability Question Mark
Let’s look at what actually happened:
- EIP-4844 enabled 1,000 TPS for L2s with current blob capacity (0.375 MB/slot)
- Full danksharding roadmap targets 100,000 TPS through increased blob capacity and data availability sampling
- Over 950,000 blobs posted to Ethereum since March 2024 launch
- But: TVL is split across 10+ major L2s with different bridges, different fee tokens, different developer experiences
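The 0.375 MB/slot and ~1,000 TPS figures above come straight out of back-of-the-envelope math (the ~30-byte average compressed transaction size is my assumption; real rollup compression varies by workload):

```python
# Blob data throughput at the EIP-4844 launch target.
BLOB_SIZE_BYTES = 131072        # 128 KiB per blob
TARGET_BLOBS_PER_BLOCK = 3      # launch target (Pectra later raised this)
SLOT_SECONDS = 12

bytes_per_slot = BLOB_SIZE_BYTES * TARGET_BLOBS_PER_BLOCK
print(bytes_per_slot / 2**20)   # 0.375 MiB/slot, matching the figure above

# Illustrative assumption: ~30 bytes per compressed rollup transaction.
COMPRESSED_TX_BYTES = 30
tps = bytes_per_slot / COMPRESSED_TX_BYTES / SLOT_SECONDS
print(round(tps))               # ~1092 TPS at full target blob usage
```

Note what this math *doesn’t* care about: which L2 the bytes belong to. Data availability scaled as one shared resource; liquidity and state did not.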
Compare this to Ethereum L1 pre-L2s:
- Unified liquidity pool where all protocols could compose
- Single state machine where developers could reason about atomic operations
- Slower and more expensive, but predictable and composable
What I’m Watching in 2026
PeerDAS (Peer Data Availability Sampling): This is Ethereum’s next DA upgrade. Instead of validators downloading entire blobs, they’ll sample small portions to verify availability. This should allow the network to safely increase blob capacity without overwhelming validators.
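The security argument behind sampling is simple probability: with erasure coding, an attacker must withhold at least half of the extended data to make a blob unrecoverable, so a node that checks k random chunks misses the attack with probability at most (1/2)^k (treating samples as independent; the chunk counts below are illustrative, not the actual PeerDAS parameters):

```python
# Chance a single sampling node fails to notice withheld data, assuming
# erasure coding forces an attacker to withhold >= 50% of the chunks.
def miss_probability(samples: int, withheld_fraction: float = 0.5) -> float:
    return (1 - withheld_fraction) ** samples

for k in (4, 8, 16):
    print(k, miss_probability(k))
# 4 samples  -> 6.25% chance of missing the attack
# 8 samples  -> ~0.39%
# 16 samples -> ~0.0015%
```

That exponential drop-off is why a handful of tiny samples per node can substitute for downloading whole blobs, and why blob capacity can grow without growing validator bandwidth at the same rate.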
Shared sequencers: Projects like Espresso Systems are working on cross-chain synchronous composability. If this works, we could get the best of both worlds—L2 speed with near-L1 composability.
Superchain momentum: Optimism’s vision of a unified L2 network with shared standards and liquidity is gaining traction. Base’s success is proof that the model can work.
My Question for the Community
Did we make the right trade-off?
Should Ethereum have prioritized L1 scaling (full sharding) over the rollup-centric roadmap? Or is L2 fragmentation a temporary growing pain that shared sequencers and better bridging will solve?
As someone building in this space every day, I see both the incredible throughput gains AND the composability losses. I’m optimistic about solutions like shared sequencing, but I’m also pragmatic about how long it takes for these coordination mechanisms to mature.
What’s your experience? Are you building cross-L2? Frustrated by fragmentation? Or do you think this is just the natural evolution toward a multi-chain future where each L2 finds its niche?
TL;DR: EIP-4844 increased L2 throughput from 200 to 4,800 TPS (success!), but fractured liquidity across Arbitrum, Optimism, Base, and others. Atomic composability—DeFi’s superpower—is now broken unless you stay on one L2. Shared sequencers might fix this, but fragmentation could be the price we pay for scale.