Okay, I need to share something that’s been bugging me as I dig deeper into L2 infrastructure. I spent the last few weeks optimizing our DeFi protocol’s settlement costs, and I realized: we solved the execution bottleneck, but now data availability is the constraint.
Full disclosure: I’m still learning the nuances of DA layers (so feel free to correct me if I get something wrong!), but here’s what I’m seeing from the frontend/protocol development trenches.
Proto-Danksharding Was Huge, But…
When EIP-4844 launched in March 2024 with the Dencun upgrade, it was a game-changer. Transaction costs on Layer 2s dropped dramatically because blob transactions are 10-100x cheaper than calldata for posting data back to L1.
This made L2s actually viable for everyday users (no more $50 swap fees!). Base, Optimism, and Arbitrum all saw their margins improve dramatically because the cost of posting data to L1 became manageable.
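To put the cost gap in perspective, here’s a back-of-the-envelope sketch. The gas constants (16 gas per nonzero calldata byte, 131,072 blob gas per blob) are protocol values, but the gas prices are assumptions I picked for illustration; the ratio you get depends entirely on those two prices:

```python
# Back-of-the-envelope: posting 128 KB of rollup data to L1 as calldata
# vs. as a single EIP-4844 blob. The gas *constants* are protocol values;
# the gas *prices* are assumed for illustration only.

BLOB_SIZE = 131_072             # bytes per blob (4096 field elements * 32 bytes)
CALLDATA_GAS_PER_NONZERO = 16   # EIP-2028: gas per nonzero calldata byte
GAS_PER_BLOB = 131_072          # blob gas consumed by one blob

exec_gas_price_gwei = 10.0      # assumed execution-layer gas price
blob_gas_price_gwei = 1.0       # assumed blob gas price (often far lower in practice)

calldata_cost_eth = BLOB_SIZE * CALLDATA_GAS_PER_NONZERO * exec_gas_price_gwei * 1e-9
blob_cost_eth = GAS_PER_BLOB * blob_gas_price_gwei * 1e-9

print(f"calldata: {calldata_cost_eth:.4f} ETH")
print(f"blob:     {blob_cost_eth:.6f} ETH")
print(f"ratio:    {calldata_cost_eth / blob_cost_eth:.0f}x")
```

With these assumed prices the blob comes out 160x cheaper; with the near-zero blob prices we’ve actually seen since Dencun, the gap is even larger.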
But here’s the thing: blocks are still capped at 6 blobs, with a target of 3. Right now, most blocks use only 2–3 blobs, so demand sits around target and there’s headroom. But what happens when the next bull market hits and every rollup is competing for the same blob space?
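For intuition on what that competition would look like: EIP-4844 prices blob gas with an exponential controller, using the spec’s `fake_exponential` helper. Here’s that helper with the mainnet constants; the “consecutive full blocks” scenario below is a hypothetical stress case, not live data:

```python
# How the blob fee market reacts to sustained demand above target.
# fake_exponential and the constants are from the EIP-4844 spec; the
# sustained-full-blocks scenario is hypothetical.

TARGET_BLOB_GAS_PER_BLOCK = 393_216      # 3 blobs * 131072 blob gas
BLOB_BASE_FEE_UPDATE_FRACTION = 3_338_477
MIN_BASE_FEE_PER_BLOB_GAS = 1            # wei

def fake_exponential(factor: int, numerator: int, denominator: int) -> int:
    """Integer approximation of factor * e^(numerator / denominator), per EIP-4844."""
    i = 1
    output = 0
    numerator_accum = factor * denominator
    while numerator_accum > 0:
        output += numerator_accum
        numerator_accum = numerator_accum * numerator // (denominator * i)
        i += 1
    return output // denominator

# Every block at the 6-blob max adds (max - target) = 393216 excess blob gas,
# so the blob base fee compounds by roughly 12.5% per over-target block.
for full_blocks in (0, 10, 50, 100):
    excess = full_blocks * TARGET_BLOB_GAS_PER_BLOCK
    fee = fake_exponential(MIN_BASE_FEE_PER_BLOB_GAS, excess, BLOB_BASE_FEE_UPDATE_FRACTION)
    print(f"{full_blocks:3d} consecutive full blocks -> blob base fee {fee} wei")
```

The takeaway: the fee starts at 1 wei (effectively free), but under sustained over-target demand it compounds exponentially, which is exactly the bull-market scenario I’m worried about.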
Are Alternative DA Layers the Answer?
I’ve been reading about Celestia, EigenDA, and Avail. They all offer cheaper, higher-throughput data availability than Ethereum mainnet:
- Celestia: Reportedly already holds around 50% of the alt-DA market. Uses Data Availability Sampling (DAS) and fraud proofs. Seems really interesting for sovereign rollups.
- EigenDA: Uses restaking to inherit Ethereum’s economic security. Claims throughput up to 100 MB/s, which is wild.
- Avail: Positioning itself as a universal DA layer for multichain apps.
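For anyone else wrapping their head around DAS, the math behind why sampling works is surprisingly simple. A minimal sketch, assuming the standard 2x erasure-coding extension (a block is recoverable iff at least 50% of the extended chunks are available):

```python
# Why Data Availability Sampling lets light nodes verify availability with
# tiny downloads. With a 2x Reed-Solomon extension, a block is recoverable
# iff >= 50% of extended chunks are available. So if a malicious producer
# withholds enough to make it unrecoverable, each uniformly random sample
# hits a missing chunk with probability >= 1/2.

def prob_fooled(num_samples: int) -> float:
    """Upper bound on P(all samples succeed | data is actually unrecoverable)."""
    return 0.5 ** num_samples

for k in (10, 20, 30):
    print(f"{k} samples -> fooled with probability <= {prob_fooled(k):.2e}")
```

Thirty random samples already push the failure probability below one in a billion, which is why DAS-based light nodes can get strong guarantees without downloading whole blocks.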
As a developer, I’m honestly tempted. If I can get cheaper DA with “good enough” security, why wouldn’t I use it? But then I wonder: are we fragmenting the L2 ecosystem? If some L2s use Ethereum DA and others use Celestia or EigenDA, does that hurt composability?
Full Danksharding Timeline Is… Unclear
The roadmap says full danksharding (expanding from 6 to 64 blobs per block) will support hundreds of rollups with aggregate throughput on the order of 100,000+ TPS. That sounds amazing! But the timeline is “several years away,” and the implementation complexity is high.
Meanwhile, PeerDAS (slated for the Fusaka fork) cuts the data each validator must download by 87.5%, to one-eighth of the full payload, which should help. But we’re still in a waiting game.
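Just to sanity-check that 87.5% number against the blob sizes above (128 KB per blob, 6 blobs max), here’s the arithmetic; this uses only the figures already in the post, as illustration:

```python
# Rough arithmetic on the quoted 87.5% bandwidth reduction: with PeerDAS,
# validators custody/sample a fraction of the erasure-coded data instead
# of downloading every blob in full. Figures are the post's own numbers
# (6 blobs/block, 128 KB each), used for illustration.

BLOB_SIZE_KB = 128
MAX_BLOBS = 6
REDUCTION = 0.875  # quoted PeerDAS bandwidth reduction

full_download_kb = MAX_BLOBS * BLOB_SIZE_KB       # what a node downloads today at max blobs
sampled_kb = full_download_kb * (1 - REDUCTION)   # per-validator download with PeerDAS

print(f"full: {full_download_kb} KB/block, with PeerDAS: {sampled_kb:.0f} KB/block")
```

Dropping from 768 KB to roughly 96 KB per block is what makes raising the blob count (toward full danksharding) plausible without pricing out home validators.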
My Question: Did We Just Move the Problem?
Here’s what I can’t figure out: If rollups solved execution scaling but created a data availability bottleneck, did we just move the constraint from one layer to another?
Is this a predictable scaling curve where proto-danksharding buys us 2-3 years, and full sharding solves everything long-term? Or is it whack-a-mole scaling where each solution creates a new constraint?
I’m genuinely curious what more experienced folks think. Are you building with Ethereum DA exclusively? Exploring alternative DA layers? Waiting to see how it plays out?
I’m trying to make smart architectural decisions for our protocol, but the uncertainty around DA makes it hard to plan. Any insights would be super helpful!