PeerDAS Explained: How Ethereum Verifies Data Without Downloading Everything

· 9 min read
Dora Noda
Software Engineer

What if you could verify a 500-page book exists without reading a single page? That's essentially what Ethereum just learned to do with PeerDAS—and it's quietly reshaping how blockchains can scale without sacrificing decentralization.

On December 3, 2025, Ethereum activated its Fusaka upgrade, introducing PeerDAS (Peer Data Availability Sampling) as the headline feature. While most headlines focused on the 40-60% fee reductions for Layer 2 networks, the underlying mechanism represents something far more significant: a fundamental shift in how blockchain nodes prove data exists without actually storing all of it.

The Problem: Everyone Can't Download Everything Forever

Before diving into PeerDAS, let's understand the problem it solves.

Blockchains face an inherent tension: the more data they process, the harder it becomes for regular people to run nodes. If running a node requires expensive hardware and massive bandwidth, the network centralizes around well-funded operators. But if you limit data throughput to keep nodes accessible, you can't scale.

This is the data availability problem—ensuring transaction data exists somewhere on the network so anyone can verify the blockchain's state, without requiring everyone to download everything.

When Ethereum introduced "blobs" in March 2024's Dencun upgrade, Layer 2 fees plummeted from $0.50-$3.00 to around $0.01-$0.10 per transaction. Blobs provided dedicated space for rollup data that could be pruned after a few weeks. But there was a catch: every node still had to download every blob to verify availability.

With 6 blobs per block and growing demand, Ethereum was already hitting capacity ceilings. The old model—"everyone downloads everything"—couldn't scale further without pricing out home validators.

Enter PeerDAS: Sampling Instead of Downloading

PeerDAS flips the verification model on its head. Instead of downloading full blobs to prove they exist, nodes download small random samples and use clever mathematics to verify the complete data is available.

Here's the intuition: imagine you want to verify a warehouse full of boxes actually contains products. The old approach would require inspecting every single box. PeerDAS is like randomly selecting a few boxes and relying on a statistical guarantee: if your samples check out, the entire warehouse is almost certainly stocked.

But random sampling alone isn't enough. What if someone stored only the boxes they knew you'd sample? This is where erasure coding enters the picture.

Erasure Coding: The Math That Makes Sampling Work

Erasure coding is borrowed from satellite communications and CD storage—technologies that needed to recover data even when parts got corrupted. The technique adds structured redundancy to data in a way that allows reconstruction from partial pieces.

With PeerDAS, Ethereum takes each blob and encodes it into 128 "columns" of data. Here's the key insight: any 64 of those 128 columns can reconstruct the original blob. The data is spread so evenly that hiding any portion becomes statistically impractical.

Think of it like a hologram—you can cut a holographic image in half, and each half still contains the complete picture. Erasure coding creates similar redundancy properties for data.

When a node randomly samples 8 columns out of 128, the probability of missing hidden data drops exponentially. If a malicious actor tries to hide even small portions of a blob, the statistical chance of detection becomes overwhelming as the network grows.
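To make the "any 64 of 128" property concrete, here's a scaled-down sketch: four values are extended to eight by treating them as points on a polynomial, and any four surviving "columns" recover the originals. The field size and chunk counts are toy values for illustration, not the parameters Ethereum actually uses.

```python
# Toy Reed-Solomon-style erasure coding over a small prime field.
# Scaled down: 4 original chunks extended to 8, any 4 of which
# reconstruct the data (PeerDAS uses 64 -> 128).
P = 257  # small prime modulus, illustrative only

def interpolate(xs, ys, x):
    """Evaluate the unique degree-(len(xs)-1) polynomial through
    the points (xs[i], ys[i]) at position x, mod P (Lagrange)."""
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = 1, 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

def extend(data, n):
    """Treat data as evaluations at x = 0..k-1, then evaluate the
    same polynomial at x = 0..n-1 to produce the extended columns."""
    k = len(data)
    return [interpolate(list(range(k)), data, x) for x in range(n)]

data = [42, 7, 199, 13]       # 4 original "columns"
columns = extend(data, 8)     # extended to 8 columns
assert columns[:4] == data    # systematic code: originals come first

# Reconstruct from ANY 4 surviving columns, e.g. indices 1, 3, 5, 6:
survivors = [1, 3, 5, 6]
recovered = [interpolate(survivors, [columns[i] for i in survivors], x)
             for x in range(4)]
assert recovered == data
print("reconstructed:", recovered)
```

Because every column is a point on the same low-degree polynomial, any 4 of the 8 points pin that polynomial down uniquely, which is exactly why a producer can't selectively withhold data without losing more than half the columns.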

KZG Commitments: Compact Proofs of Consistency

The second mathematical ingredient is KZG polynomial commitments—a cryptographic technique that lets you create a small "fingerprint" of data that can verify individual pieces without revealing the whole.

KZG commitments treat data as coefficients of a mathematical polynomial. You can then prove any evaluation point on that polynomial is correct using a tiny proof. For PeerDAS, this means proving that sampled columns genuinely belong to the claimed blob without transmitting the entire blob.

The cryptographic parameters behind these commitments came from a massive trusted-setup ceremony in 2023 where over 141,000 participants contributed randomness. As long as a single participant honestly destroyed their contribution, the entire system remains secure, a "1-of-N" trust assumption.
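The core algebra is worth seeing. A KZG proof that p(z) = y works because p(X) - y is divisible by (X - z) exactly when the claim is true. Real KZG hides the polynomial behind an elliptic-curve commitment and checks divisibility with a pairing; this toy sketch checks it directly over a small prime field (the numbers and field are made up for illustration).

```python
# The algebraic identity behind KZG evaluation proofs: p(z) = y exactly
# when (p(X) - y) is divisible by (X - z). Real KZG checks this with an
# elliptic-curve pairing; here we check divisibility directly.
P = 101  # toy field modulus (real KZG uses the BLS12-381 scalar field)

def poly_eval(coeffs, x):
    """Evaluate p(x) = sum(coeffs[i] * x^i) mod P via Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def divide_by_linear(coeffs, z):
    """Synthetic division of p(X) by (X - z) -> (quotient, remainder).
    The remainder always equals p(z)."""
    d = len(coeffs) - 1
    q = [0] * d
    q[d - 1] = coeffs[d]
    for j in range(d - 2, -1, -1):
        q[j] = (coeffs[j + 1] + z * q[j + 1]) % P
    rem = (coeffs[0] + z * q[0]) % P
    return q, rem

blob_data = [17, 42, 9, 88]   # data interpreted as polynomial coefficients
z = 5                         # evaluation point
y = poly_eval(blob_data, z)   # claimed value

# Honest claim: (p(X) - y) divides cleanly by (X - z), so a valid
# quotient polynomial (the "proof") exists.
shifted = [(blob_data[0] - y) % P] + blob_data[1:]
_, rem = divide_by_linear(shifted, z)
assert rem == 0

# Dishonest claim y+1: no valid quotient exists, remainder is nonzero.
shifted_bad = [(blob_data[0] - y - 1) % P] + blob_data[1:]
_, rem_bad = divide_by_linear(shifted_bad, z)
assert rem_bad != 0
```

The quotient polynomial plays the role of the "tiny proof": a verifier who sees only the commitment and the quotient can confirm the sampled value is consistent with the whole blob.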

How PeerDAS Actually Works

Let's trace through the technical flow:

Step 1: Blob Extension

When a rollup submits blob data, it starts as 64 columns. Erasure coding extends this to 128 columns—doubling the data with structured redundancy.

Step 2: Column Distribution

The 128 columns are distributed across the network through gossip protocols. Nodes subscribe to specific "column subnets" deterministically derived from their node ID.

Step 3: Sampling

Regular nodes subscribe to 8 randomly chosen column subnets out of 128. This means each node downloads only 1/16th of the extended data—or equivalently, 1/8th of the original blob data.

Step 4: Supernode Coverage

Nodes controlling validators with combined stake above 4,096 ETH become "supernodes" that subscribe to all 128 column subnets. These supernodes provide network-wide coverage and can heal data gaps.

Step 5: Verification

Nodes verify their sampled columns against KZG commitments included in block headers. If samples verify correctly, the node can be statistically confident the full blob is available.

Step 6: Reconstruction (if needed)

If any node needs the full blob, it can request columns from peers until it has 64 or more verified columns, then reconstruct the original data.
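The six steps above can be sketched as a schematic simulation. Columns are modeled as bare indices (the erasure-coding and KZG math is elided), using the counts from the text: 64 original, 128 extended, 8 sampled, 64 needed to rebuild.

```python
import random

# Schematic walk through the PeerDAS flow, columns as indices only.
ORIGINAL, EXTENDED, SAMPLES, THRESHOLD = 64, 128, 8, 64

# Steps 1-2: the producer extends the blob and gossips the columns.
published = set(range(EXTENDED))       # honest producer: all 128

# Step 3: a regular node subscribes to 8 random column subnets.
rng = random.Random(7)
my_columns = set(rng.sample(range(EXTENDED), SAMPLES))

# Step 4: a supernode subscribes to every subnet instead.
supernode_columns = set(range(EXTENDED))

# Step 5: verification succeeds if every sampled column arrived
# (and, in the real protocol, matched its KZG commitment).
available = my_columns <= published
assert available

# Step 6: reconstruction needs any 64 verified columns from peers.
collected = set(rng.sample(sorted(published), THRESHOLD))
can_reconstruct = len(collected) >= THRESHOLD
assert can_reconstruct

# A withholding producer publishing only 63 columns leaves the data
# unreconstructable -- and nodes' samples start failing.
withheld_publish = set(range(63))
assert len(withheld_publish) < THRESHOLD
```

This is availability bookkeeping only; the previous sketches show the math that makes the "any 64 columns" threshold and the per-column verification actually hold.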

Security: Defending Against Data Withholding

The primary attack PeerDAS must resist is "data withholding"—where a block producer publishes a block claiming data is available while secretly hiding portions.

PeerDAS defeats this through probabilistic guarantees:

  • With 128 columns and a 50% reconstruction threshold, an attacker must hide at least 65 columns (50.8%) to prevent reconstruction
  • But hiding 65 columns means 50.8% of random samples will hit hidden data
  • With thousands of nodes independently sampling, the probability of all nodes missing the hidden portions becomes astronomically small

The math scales favorably: as the network grows, security improves while per-node costs remain constant. A network of 10,000 nodes sampling 8 columns each provides far stronger guarantees than 1,000 nodes, without any individual node working harder.
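These numbers can be checked with a few lines of arithmetic. The per-node "miss" probability is hypergeometric (sampling 8 distinct columns out of 128 when 65 are hidden), and the network-wide miss probability is that value raised to the number of nodes, assuming independent sampling.

```python
from math import comb, log10

# An attacker hides 65 of 128 columns; each node samples 8 at random.
COLUMNS, HIDDEN, SAMPLES = 128, 65, 8
VISIBLE = COLUMNS - HIDDEN  # 63 columns remain published

# Chance one node's 8 samples ALL land on published columns, i.e. the
# node fails to notice anything missing (hypergeometric probability).
p_node_misses = comb(VISIBLE, SAMPLES) / comb(COLUMNS, SAMPLES)
print(f"one node misses the attack: {p_node_misses:.5f}")  # ~0.003

# Chance EVERY node misses it, assuming independent sampling.
# Computed in log space since the float would underflow.
for n_nodes in (100, 1_000, 10_000):
    exponent = n_nodes * log10(p_node_misses)
    print(f"{n_nodes:>6} nodes all miss: ~10^{exponent:.0f}")
```

A single node already detects the attack with better than 99.7% probability; with thousands of independent samplers the failure probability has hundreds or thousands of zeros after the decimal point.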

Real-World Impact: L2 Fees and Throughput

The practical effects emerged immediately after Fusaka activated:

40-60% fee reduction on major Layer 2 networks including Arbitrum, Optimism, and Base within the first weeks.

Blob capacity scaling from 6 blobs per block to a planned 128+ over 2026, achieved through gradual increases: 10 blobs by December 9, 2025, and 14 by January 7, 2026.

80% bandwidth reduction for full nodes, making home validation more accessible.

100,000+ TPS theoretical capacity across the combined L2 ecosystem—exceeding Visa's oft-cited peak capacity of roughly 65,000 TPS.

The fee floor mechanism (EIP-7918) also addressed a quirk from Dencun: blob fees had collapsed to 1 wei (essentially zero), meaning rollups were using Ethereum's data space nearly for free. Fusaka ties the blob base fee to a fraction of L1 fees, creating a functional fee market.
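As a rough illustration of the idea (not the actual EIP-7918 update rule, which adjusts how the blob base fee responds to demand rather than applying a simple max), a floor tied to L1 fees might look like this; the function name and the 1/8192 ratio are made up for the sketch.

```python
# Simplified illustration of a blob fee floor tied to L1 fees.
# The ratio and names are hypothetical, NOT EIP-7918's constants;
# see the EIP for the real mechanism.
def effective_blob_base_fee(market_blob_fee: int, l1_base_fee: int) -> int:
    floor = l1_base_fee // 2**13  # hypothetical fraction of the L1 base fee
    return max(market_blob_fee, floor)

# Pre-Fusaka quirk: low demand drove the market fee to 1 wei.
# A floor keeps the effective fee meaningfully above zero.
assert effective_blob_base_fee(1, 20_000_000_000) > 1

# Under healthy demand, the market price applies unchanged.
assert effective_blob_base_fee(10_000_000_000, 20_000_000_000) == 10_000_000_000
```

The point of the design is simply that blob pricing can no longer decouple entirely from the value of L1 block space.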

PeerDAS vs. Full Danksharding

PeerDAS isn't Ethereum's final form—it's a stepping stone toward "full Danksharding," the complete data availability vision.

Feature           PeerDAS (Current)    Full Danksharding (Future)
Erasure coding    1D (per-blob)        2D (across entire matrix)
Blob capacity     8x current           32+ MB per block
Sampling model    Column-based         Cell-based
Timeline          Live (Dec 2025)      ~2027+

Full Danksharding will extend erasure coding across two dimensions—both within blobs and across the entire data matrix. This creates even stronger redundancy and enables more aggressive scaling.

Current research shows improved schemes could deliver 4.3x better node storage efficiency and 2x lower bandwidth compared to PeerDAS. But implementing these requires significant protocol changes, making PeerDAS the pragmatic near-term solution.

What This Means for Ethereum's Roadmap

PeerDAS validates a core thesis of Ethereum's scaling philosophy: you can dramatically increase throughput without centralizing the network.

The old assumption was that more data requires more powerful nodes. PeerDAS proves otherwise—through clever mathematics, you can scale data while actually reducing per-node requirements.

This unlocks the next phase of Ethereum's roadmap:

  • Glamsterdam (2026): EIP-7928 introduces block-level access lists, enabling parallel transaction execution on top of the data availability ceiling that PeerDAS raised
  • Block-level access lists (BALs): dynamic gas limits become feasible with better DA guarantees
  • Enshrined proposer-builder separation (ePBS): an in-protocol mechanism for separating block-building roles

Vitalik Buterin has projected "large non-ZK-EVM-dependent gas limit increases" by late 2026, building on PeerDAS as the foundation.

For Developers: What Changes?

For most developers, PeerDAS is invisible—it's an infrastructure improvement that makes existing patterns cheaper and faster.

But some implications are worth noting:

Lower L2 costs: Applications requiring high throughput become economically viable. Games, social platforms, and high-frequency trading all benefit.

More blob space: Rollups can post more data per block, reducing compression requirements and enabling richer state proofs.

Improved finality: With faster data availability verification, optimistic rollups may reduce their challenge periods.

Decentralized sequencing: Lower DA costs make decentralized sequencer networks more practical.

The Bigger Picture

PeerDAS represents blockchain technology maturing beyond naive solutions. Early blockchains required every participant to validate everything—a pattern that fundamentally limited scale.

Data availability sampling breaks that constraint. It's the difference between a village where everyone attends every meeting versus a city where statistical sampling and institutional trust create efficient governance.

Ethereum isn't alone in pursuing this approach—Celestia, Avail, and EigenDA have built entire protocols around DA sampling. But Ethereum implementing PeerDAS natively validates the approach and brings it to the largest smart contract ecosystem.

The mathematical elegance is striking: by downloading less, nodes actually provide stronger availability guarantees. It's a reminder that computer science breakthroughs often look like counterintuitive trade-offs that turn out not to be trade-offs at all.


PeerDAS activated on Ethereum mainnet December 3, 2025, as part of the Fusaka upgrade. This article explains the technical architecture for non-specialists—for implementation details, see EIP-7594 and the Ethereum.org PeerDAS documentation.