Data Availability Became the 'New Bottleneck' in 2026—Did Proto-Danksharding Just Delay the Problem?

Okay, I need to share something that’s been bugging me as I dig deeper into L2 infrastructure. I spent the last few weeks optimizing our DeFi protocol’s settlement costs, and I realized: we solved the execution bottleneck, but now data availability is the constraint.

Full disclosure: I’m still learning the nuances of DA layers (so feel free to correct me if I get something wrong!), but here’s what I’m seeing from the frontend/protocol development trenches.

Proto-Danksharding Was Huge, But…

When EIP-4844 launched in March 2024 with the Dencun upgrade, it was a game-changer. Transaction costs on Layer 2s dropped dramatically because blob transactions are 10-100x cheaper than calldata for posting data back to L1.

This made L2s actually viable for everyday users—no more $50 swap fees! Base, Optimism, and Arbitrum all became profitable because settlement costs became manageable.

But here’s the thing: we’re still capped at 6 blobs per block. Right now most blocks use only 2-3 blobs, so there’s plenty of headroom. But what happens when the next bull market hits and every rollup is competing for blob space?
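To make that “10-100x cheaper” claim concrete, here’s a back-of-the-envelope comparison. The gas constants are protocol values (16 gas per nonzero calldata byte, 131,072 blob gas per blob), but the fee levels are illustrative assumptions I picked, not live market numbers:

```python
# Rough cost of posting 128 KB of L2 data via calldata vs. one blob.
# Gas constants are protocol values (EIP-2028, EIP-4844); the fee levels
# are ILLUSTRATIVE assumptions, not live market data.

DATA_BYTES = 131_072            # one blob's worth of data (4096 * 32 bytes)
CALLDATA_GAS_PER_BYTE = 16      # nonzero-byte calldata cost (EIP-2028)
BLOB_GAS_PER_BLOB = 131_072     # fixed blob gas per blob (EIP-4844)

exec_gas_price_gwei = 10        # assumed execution-layer gas price
blob_gas_price_gwei = 1         # assumed blob base fee (separate fee market)

calldata_cost_gwei = DATA_BYTES * CALLDATA_GAS_PER_BYTE * exec_gas_price_gwei
blob_cost_gwei = BLOB_GAS_PER_BLOB * blob_gas_price_gwei

ratio = calldata_cost_gwei / blob_cost_gwei
print(f"calldata cost: {calldata_cost_gwei:,} gwei")
print(f"blob cost:     {blob_cost_gwei:,} gwei")
print(f"blobs ~{ratio:.0f}x cheaper under these assumed prices")
```

Because blobs have their own EIP-1559-style fee market, the real ratio swings with demand—that’s exactly why the congestion question below matters.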

Are Alternative DA Layers the Answer?

I’ve been reading about Celestia, EigenDA, and Avail. They all offer cheaper, higher-throughput data availability than Ethereum mainnet:

  • Celestia: Reportedly holds around 50% of the alternative-DA market. Uses Data Availability Sampling (DAS) plus fraud proofs. Seems really interesting for sovereign rollups.
  • EigenDA: Uses restaking to inherit Ethereum’s economic security. Hitting 100MB/s throughput, which is wild.
  • Avail: Positioning as a universal DA layer for multichain apps.
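For anyone else learning DAS, here’s the toy probability model that made it click for me. This ignores the 2D Reed-Solomon erasure coding Celestia actually layers on top, so treat it as intuition, not a security analysis:

```python
# Toy model of Data Availability Sampling (DAS).
# A light node downloads `samples` random shares of a block. If the producer
# withholds fraction `withheld` of shares, the chance at least one sample
# lands on missing data is 1 - (1 - withheld)^samples.
# Real DAS (e.g. Celestia) adds 2D erasure coding, which forces an attacker
# to withhold a large fraction of shares to hide anything -- omitted here.

def detection_probability(withheld: float, samples: int) -> float:
    return 1.0 - (1.0 - withheld) ** samples

# With erasure coding, hiding data requires withholding ~25%+ of shares;
# even a handful of random samples then catches withholding with near-certainty.
for s in (8, 16, 32):
    p = detection_probability(0.25, s)
    print(f"{s} samples -> {p:.6f} detection probability")
```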

As a developer, I’m honestly tempted. If I can get cheaper DA with “good enough” security, why wouldn’t I use it? But then I wonder: are we fragmenting the L2 ecosystem? If some L2s use Ethereum DA and others use Celestia or EigenDA, does that hurt composability?

Full Danksharding Timeline Is… Unclear

The roadmap says full danksharding (expanding from 6 to 64 blobs per block) will support hundreds of rollups and millions of TPS. That sounds amazing! But the timeline is “several years away,” and implementation complexity is high.

Meanwhile, PeerDAS (from the Fusaka fork) reduces validator bandwidth requirements by 87.5%, which should help. But we’re still in a waiting game.

My Question: Did We Just Move the Problem?

Here’s what I can’t figure out: If rollups solved execution scaling but created a data availability bottleneck, did we just move the constraint from one layer to another?

Is this a predictable scaling curve where proto-danksharding buys us 2-3 years, and full sharding solves everything long-term? Or is it whack-a-mole scaling where each solution creates a new constraint?

I’m genuinely curious what more experienced folks think. Are you building with Ethereum DA exclusively? Exploring alternative DA layers? Waiting to see how it plays out?

I’m trying to make smart architectural decisions for our protocol, but the uncertainty around DA makes it hard to plan. Any insights would be super helpful!

Emma, you’re asking exactly the right questions. Let me offer a different perspective from someone who’s been following Ethereum’s roadmap since the Beacon Chain days.

This Isn’t a Bottleneck—It’s a Scaling Curve

The “whack-a-mole” framing assumes we’re reacting to failures, but Ethereum’s roadmap has always been incremental by design:

  1. Beacon Chain (2020): Proof of Stake foundation
  2. The Merge (2022): Switch from PoW to PoS
  3. Proto-Danksharding (2024): Blob transactions for cheaper L2 settlement
  4. Full Danksharding (TBD): 64 blobs per block, millions of TPS capacity

Each step builds on the previous one. Proto-danksharding wasn’t meant to be the final solution—it’s explicitly a stepping stone to full danksharding. The fact that we’re now identifying DA as the next constraint means we successfully solved the previous one.

PeerDAS Makes Full Danksharding More Feasible

The Fusaka fork’s PeerDAS upgrade is more significant than people realize. By reducing validator bandwidth requirements by 87.5%, it makes the leap from 6 blobs to 64 blobs operationally viable.

Without PeerDAS, asking validators to handle 10x more data would have been a non-starter. With it, we’re on a clear path to full sharding.
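The arithmetic behind that is simple enough to sketch. Blob size is the protocol’s 128 KiB; the 1/8 custody fraction is just the complement of the 87.5% reduction figure, so this is an illustration of the claim rather than a spec reference:

```python
# Why PeerDAS matters for 64 blobs: back-of-the-envelope bandwidth math.
# 128 KiB is the protocol blob size; the 1/8 custody fraction corresponds
# to the quoted 87.5% reduction. Illustration only, not a spec reference.

BLOB_KIB = 128
SLOT_SECONDS = 12

def per_validator_kib_per_sec(blobs_per_block: int, custody_fraction: float) -> float:
    return blobs_per_block * BLOB_KIB * custody_fraction / SLOT_SECONDS

full_download = per_validator_kib_per_sec(64, 1.0)      # every validator pulls everything
with_peerdas = per_validator_kib_per_sec(64, 1.0 / 8)   # custody/sample 1/8 of the data

print(f"64 blobs, full download: {full_download:.1f} KiB/s per validator")
print(f"64 blobs with PeerDAS:   {with_peerdas:.1f} KiB/s per validator")
```

Sampling an eighth of 64 blobs costs each validator roughly what downloading all of today’s blocks does—that’s the sense in which the 6-to-64 jump becomes operationally viable.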

Alternative DA Layers Sacrifice Ethereum’s Security Guarantees

Celestia, EigenDA, and Avail are interesting projects, but they involve fundamental trade-offs:

  • Celestia: Fraud proofs are powerful, but they introduce different trust assumptions than Ethereum’s consensus. You’re no longer secured by Ethereum validators.
  • EigenDA: Restaking is clever, but the off-chain DAC means there’s no publicly verifiable DA guarantee. You’re trusting a committee.
  • Avail: Universal design is appealing for multichain apps, but again, you’re moving away from Ethereum’s security model.

If you’re building a rollup, using alternative DA means you’re running a “validium,” not a true rollup. That’s fine for some use cases (e.g., gaming, social apps), but for anything holding significant value, Ethereum DA is still the gold standard.

We Have More Runway Than You Think

Your point about current blob utilization (2-3 per block) is key. We’re not at capacity yet. Even during moderate bull market activity, we’ve seen blocks with all 6 blobs used, but not sustained congestion.

By the time we consistently hit 6 blobs per block, we’ll likely have:

  • More sophisticated blob fee markets (EIP-7706)
  • Better data compression techniques
  • Advances toward full danksharding

The question isn’t “will we hit a wall?” It’s “can we scale the next step before we hit the current ceiling?” History suggests yes.

Bottom Line: Trust the Roadmap

Ethereum’s scaling philosophy has always been: secure decentralization first, then scale within those constraints. It’s slower than “move fast and break things,” but it’s why Ethereum is still the most trusted settlement layer after 11 years.

Proto-danksharding gave us 2-3 years of runway. That’s by design. The goal is to reach full danksharding before that runway runs out. I’m optimistic we will.

I’ve been tracking blob utilization metrics for the past few months, and I think it’s worth grounding this discussion in actual data.

Current Blob Space Utilization Is Low

Here’s what the on-chain data shows:

  • Average blobs per block: 2.3 (as of March 2026)
  • Peak utilization: 6 blobs per block during high activity periods (rare)
  • Typical range: 1-4 blobs per block
  • Current capacity buffer: ~60% unused
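If you want to reproduce numbers like these, a post-Dencun block’s blob count falls straight out of its blobGasUsed field, since every blob consumes a fixed 131,072 blob gas. A minimal helper (the sample values are made up for illustration):

```python
# Derive blob counts from a block's blobGasUsed field.
# Each blob consumes exactly 131,072 (2**17) blob gas, so the count is an
# exact integer division. The sample values below are made up.

BLOB_GAS_PER_BLOB = 131_072

def blobs_in_block(blob_gas_used: int) -> int:
    return blob_gas_used // BLOB_GAS_PER_BLOB

# Feed this the `blobGasUsed` of blocks fetched via any JSON-RPC client.
sample_blocks = [0, 131_072, 393_216, 786_432]  # hypothetical blobGasUsed values
counts = [blobs_in_block(g) for g in sample_blocks]
avg = sum(counts) / len(counts)
print(f"blob counts: {counts}, average {avg:.2f}, unused capacity {1 - avg / 6:.0%}")
```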

Most L2s haven’t even fully migrated to blob transactions yet. Arbitrum and Optimism are using blobs extensively, but many smaller rollups are still relying on calldata or haven’t optimized their batching strategies.

The Real Bottleneck Isn’t Capacity—It’s Adoption

Brian’s right that we have runway. The current constraint isn’t technical capacity (6 blobs is plenty for now), it’s:

  1. L2s still optimizing for blobs: Not all rollups have updated their sequencers to efficiently pack blob data
  2. Fee market dynamics: Blob fees are still being price-discovered
  3. Developer tooling: Easier blob integration tools will increase adoption

But Emma’s Concern Is Valid: What About the Next Bull Market?

I ran some projections based on 2021 bull market transaction volumes:

  • If L2 usage grows 5x (conservative bull market assumption): We’d consistently hit 6 blobs per block
  • If L2 usage grows 10x (2021-level mania): Blob space becomes severely congested
  • Timeline: Could happen in 12-18 months if conditions align
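The projection logic is nothing fancy—here it is so others can plug in their own assumptions. The 2.3 baseline is the current average above; the growth multipliers are my guesses, not predictions:

```python
# Naive blob-demand projection: scale today's average blob usage by an
# assumed L2 growth multiplier and compare against the 6-blob cap.
# The 2.3 baseline and multipliers are assumptions, not predictions.

CURRENT_AVG_BLOBS = 2.3
MAX_BLOBS = 6

def projected_demand(growth_multiplier: float) -> float:
    return CURRENT_AVG_BLOBS * growth_multiplier

for label, mult in [("5x (conservative bull)", 5), ("10x (2021-level mania)", 10)]:
    demand = projected_demand(mult)
    status = "CONGESTED" if demand > MAX_BLOBS else "ok"
    print(f"{label}: {demand:.1f} blobs/block needed vs {MAX_BLOBS} cap -> {status}")
```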

The question is: Can Ethereum ship full danksharding before we hit that wall? Or do blob fees spike and push some L2s toward alternative DA?

Alternative DA Layer Metrics Are Interesting

I’ve also been tracking Celestia and EigenDA adoption:

  • Celestia: Processing 160+ GB of rollup data, 50% market share of alt-DA layers
  • EigenDA: V2 hitting 100MB/s throughput (vs Ethereum’s ~768KB per 12-second block at the 6-blob max, i.e. roughly 64KB/s)
  • Cost comparison: Alt-DA layers are 10-100x cheaper than Ethereum blobs

For L2s optimizing for cost over absolute security (gaming, social, NFTs), alt-DA makes economic sense.
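For scale, here’s that raw throughput gap expressed per second, using the protocol’s 128 KiB blob size and 12-second slots, and taking the 100MB/s EigenDA figure quoted above at face value:

```python
# Ethereum blob DA throughput vs. a 100 MB/s alt-DA layer, per second.
# Uses the protocol's 128 KiB blob size and 12-second slot time; the
# 100 MB/s figure is the EigenDA number quoted in this thread.

BLOB_KIB = 128
SLOT_SECONDS = 12

eth_max_kib_per_sec = 6 * BLOB_KIB / SLOT_SECONDS   # at the 6-blob max
ratio = (100 * 1024) / eth_max_kib_per_sec          # 100 MB/s in KiB/s vs Ethereum

print(f"Ethereum blob DA (6-blob max): {eth_max_kib_per_sec:.0f} KiB/s")
print(f"100 MB/s alt-DA is ~{ratio:.0f}x more raw DA bandwidth")
```

Raw bandwidth isn’t the whole story, of course—Ethereum is pricing in its consensus-level security guarantees—but the order-of-magnitude gap explains the cost numbers above.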

My Take: We Have Time, But Not Infinite Time

The data suggests:

  1. 2026: Plenty of blob capacity, no immediate crisis
  2. 2027: If bull market hits, blob congestion becomes real
  3. 2028+: Full danksharding needs to ship or alt-DA becomes dominant

The uncertainty Emma mentioned is real. As a data engineer, I want roadmaps with concrete timelines so I can model capacity planning. “Several years away” for full danksharding makes it hard to architect for the next 2-3 years.

Brian’s optimism is warranted based on Ethereum’s historical execution, but Emma’s concern about uncertainty is equally valid from a planning perspective.