LA Tech Week 2025: EIP-4844 Blob Transactions - L2 Fees Down 95%, What's Next?

I just got back from LA Tech Week 2025 (October 13-19), and the consensus among L2 developers is clear: EIP-4844 blob transactions have fundamentally changed the Layer 2 landscape.

If you’re building on Ethereum L2s and haven’t dug into the post-Dencun world, you need to understand what changed.

The Pre-Dencun World (Before March 2024)

Before the Dencun upgrade, L2s posted transaction data to Ethereum mainnet as calldata. This was expensive:

Typical L2 transaction costs (Jan 2024):

  • Optimism: $0.50 - $2.00
  • Arbitrum: $0.30 - $1.50
  • Base: $0.40 - $1.80

Why so expensive? Calldata cost 16 gas per byte. For a batch of 1,000 transactions (~100 KB), that’s:

  • 100,000 bytes × 16 gas/byte = 1,600,000 gas
  • At 50 gwei gas price: 0.08 ETH (~$200)
  • Per transaction: $0.20 just for data availability
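As a sanity check, the same arithmetic in a few lines (same assumed prices as above: 16 gas per byte, 50 gwei gas):

```javascript
// Pre-Dencun DA cost for a 1,000-tx, ~100 KB batch posted as calldata
const GWEI = 10n ** 9n;

const batchBytes = 100000n;
const calldataGasPerByte = 16n;
const gasPrice = 50n * GWEI; // 50 gwei
const txsInBatch = 1000n;

const totalGas = batchBytes * calldataGasPerByte; // 1,600,000 gas
const totalWei = totalGas * gasPrice;             // 0.08 ETH
const perTxWei = totalWei / txsInBatch;           // 0.00008 ETH (~$0.20 at $2,500/ETH)

console.log(totalGas, totalWei, perTxWei);
```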

The Post-Dencun World (March 2024 - Now)

EIP-4844 introduced blob transactions:

  • Blob space: Separate data availability layer
  • Blob pricing: Independent from normal gas
  • Blob size: 128 KB per blob (6 blobs max per block)
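Those parameters imply a hard cap on blob throughput, worth keeping in mind when sizing batches. A quick back-of-the-envelope:

```javascript
// Max blob throughput per block under the Dencun parameters above
const BLOB_SIZE_BYTES = 131072;   // 128 KB = 4096 field elements × 32 bytes
const MAX_BLOBS_PER_BLOCK = 6;
const BLOCK_TIME_SECONDS = 12;

const maxBytesPerBlock = BLOB_SIZE_BYTES * MAX_BLOBS_PER_BLOCK; // 786,432 bytes
const bytesPerSecond = maxBytesPerBlock / BLOCK_TIME_SECONDS;   // 65,536 bytes/s

console.log(maxBytesPerBlock, bytesPerSecond);
```

At the target of 3 blobs per block, sustained throughput is half that, roughly 32 KB/s shared across all L2s.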

Result: L2 costs dropped 90-95%

Current L2 transaction costs (October 2025):

  • Optimism: $0.01 - $0.05
  • Arbitrum: $0.008 - $0.03
  • Base: $0.01 - $0.04

Why so cheap? Blob gas is separate and much cheaper:

  • Blob gas target: 3 blobs per block
  • Blob excess mechanism: Pricing adjusts based on usage
  • Typical blob gas price: 1-10 wei (vs 30-50 gwei normal gas)

The Numbers from LA Tech Week Sessions

At the Fenwick “Future of L2 Scaling” panel, we got some incredible data:

Base (Coinbase L2) metrics:

  • Pre-Dencun (Feb 2024): 500K daily transactions, $0.50 avg fee
  • Post-Dencun (Oct 2025): 2.1M daily transactions, $0.02 avg fee
  • 25x cost reduction enabled 4.2x transaction growth


Optimism Superchain:

  • Combined transactions across OP Stack chains: 8M+ daily
  • Aggregate blob usage: 15-20 blobs per minute of Ethereum blocks
  • Data availability cost per transaction: $0.0003

Arbitrum:

  • Using Ethereum blob space + Celestia for additional DA
  • Hybrid approach: Critical data on Ethereum, bulk data on Celestia
  • Further 30% cost reduction vs blob-only approach

How Blob Transactions Actually Work

For developers building L2 infrastructure, here’s what you need to know:

Blob transaction structure (EIP-4844):

Type 3 Transaction:
- Chain ID
- Nonce
- Max priority fee per gas
- Max fee per gas
- Gas limit
- To (recipient)
- Value
- Data
- Access list
- Max fee per blob gas  ← NEW
- Blob versioned hashes  ← NEW (commitment to blob data)
- Signature

The blob itself:

  • Size: 4096 field elements × 32 bytes = 131,072 bytes (128 KB)
  • Format: BLS12-381 field elements
  • Commitment: KZG commitment for data availability proof
  • Lifespan: Pruned after ~18 days (not stored permanently)
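A small sketch of how the 32-byte versioned hash carried in the transaction is derived from a 48-byte KZG commitment (per EIP-4844: a 0x01 version byte followed by the tail of the commitment's SHA-256). The dummy commitment here is a placeholder, not real KZG output:

```javascript
import { createHash } from 'node:crypto';

// EIP-4844: versioned_hash = 0x01 || sha256(kzg_commitment)[1:32]
const VERSIONED_HASH_VERSION_KZG = 0x01;

function kzgToVersionedHash(commitment /* 48-byte Uint8Array */) {
  const digest = createHash('sha256').update(commitment).digest();
  digest[0] = VERSIONED_HASH_VERSION_KZG; // replace first byte with the version
  return digest; // 32 bytes
}

// A blob is 4096 field elements × 32 bytes
const BLOB_SIZE = 4096 * 32;
console.log(BLOB_SIZE); // 131072

// Dummy 48-byte "commitment" just to show the output shape
const dummyCommitment = new Uint8Array(48).fill(7);
const vh = kzgToVersionedHash(dummyCommitment);
console.log(vh.length, vh[0]); // 32 1
```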

Critical limitation: Smart contracts CANNOT access blob data directly.

Blobs are only for L2 sequencers to post transaction data. The EVM never sees blob contents.

Blob Gas Pricing Mechanism

Blob gas uses separate pricing from normal gas:

Pricing formula:

blob_base_fee = MIN_BLOB_BASE_FEE × e^(excess_blob_gas / BLOB_BASE_FEE_UPDATE_FRACTION)

Where:
- MIN_BLOB_BASE_FEE = 1 wei
- BLOB_BASE_FEE_UPDATE_FRACTION = 3,338,477
- Target: 3 blobs per block (each blob = 131,072 blob gas)
- Max: 6 blobs per block
- excess_blob_gas = running total of blob gas consumed above the target

What this means:

  • Low usage → blob gas price = 1 wei
  • At target (3 blobs/block) → price stable around 1-10 wei
  • Spike to 6 blobs/block → exponential price increase
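On-chain, that exponential is evaluated in integer math via the fake_exponential helper specified in EIP-4844. A JavaScript sketch using the Dencun constants:

```javascript
// Integer approximation of factor * e^(numerator / denominator),
// per EIP-4844's fake_exponential (Taylor series with integer division)
const MIN_BLOB_BASE_FEE = 1n;
const BLOB_BASE_FEE_UPDATE_FRACTION = 3338477n;
const GAS_PER_BLOB = 131072n;

function fakeExponential(factor, numerator, denominator) {
  let i = 1n;
  let output = 0n;
  let accum = factor * denominator;
  while (accum > 0n) {
    output += accum;
    accum = (accum * numerator) / (i * denominator);
    i += 1n;
  }
  return output / denominator;
}

function blobBaseFee(excessBlobGas) {
  return fakeExponential(MIN_BLOB_BASE_FEE, excessBlobGas, BLOB_BASE_FEE_UPDATE_FRACTION);
}

console.log(blobBaseFee(0n));                      // 1n (the 1 wei floor)
console.log(blobBaseFee(3n * GAS_PER_BLOB * 10n)); // 3n (after 10 max-full blocks)
```

Note that the excess is tracked in blob gas (131,072 per blob), not in blob count.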

Current state (Oct 2025):

  • Average blob usage: ~3 blobs/block (hovering around the target)
  • Average blob gas price: 8 wei
  • Occasional spikes to 100-1000 wei during high L2 activity

Developer Implications

If you’re building L2 applications:

  1. Gas estimation is different

    • Pre-Dencun: L1 data cost dominated (80% of L2 fee)
    • Post-Dencun: L2 execution cost dominates (70% of L2 fee)
    • Gas optimization now MORE important
  2. Withdrawal times unchanged

    • Optimistic rollups: Still 7-day challenge period
    • ZK rollups: Still instant finality
    • Blob data availability doesn’t affect settlement
  3. Sequencer economics changed

    • L2s are now VERY profitable (95% cost reduction)
    • Sequencer revenue: Transaction fees - blob costs
    • Expect more aggressive fee competition
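The fee-mix shift in point 1 can be sketched with illustrative numbers (the dollar figures below are assumptions for demonstration, not measurements):

```javascript
// Illustrative L2 fee mix, pre vs post Dencun
function feeBreakdown(l1DataFee, l2ExecFee) {
  const total = l1DataFee + l2ExecFee;
  return { total, l1Share: l1DataFee / total, l2Share: l2ExecFee / total };
}

// Pre-Dencun: L1 data cost dominates (~80% of a $0.50 fee)
console.log(feeBreakdown(0.40, 0.10));  // l1Share ≈ 0.8
// Post-Dencun: L2 execution dominates (~70% of a $0.02 fee)
console.log(feeBreakdown(0.006, 0.014)); // l2Share ≈ 0.7
```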

What’s Next: Beyond EIP-4844

At LA Tech Week, discussions focused on next-gen scaling:

EIP-7623: Increase calldata cost

  • Proposal: Raise the effective calldata cost from 16 to a floor of 40 gas per non-zero byte for data-heavy transactions
  • Why: Cap worst-case block size, incentivize blob usage
  • Impact: Pushes remaining calldata-heavy users to blobs

EIP-7691: Increase blob throughput

  • Proposal: Raise the blob target from 3 to 6 and the max from 6 to 9 per block
  • Impact: Another 30-50% L2 cost reduction

Full Danksharding (long-term)

  • Target: 64 blobs per block (vs current 6)
  • Data availability sampling (DAS)
  • Impact: 10x more blob space, even cheaper L2s

PeerDAS (EIP-7594)

  • Scheduled: Late 2025 / Early 2026
  • Peer-to-peer data availability sampling
  • Impact: Enable more blobs without validator hardware increases

Which L2 Should You Deploy On?

Based on LA Tech Week discussions with L2 teams:

Optimism / OP Stack chains:

  • Pros: Superchain vision, shared sequencing coming, EVM-equivalent
  • Cons: Centralized sequencer (for now)
  • Best for: Applications needing EVM compatibility

Arbitrum:

  • Pros: Largest TVL, Stylus (WASM support), hybrid DA strategy
  • Cons: More complex tech stack
  • Best for: High-performance applications, custom VM needs

Base:

  • Pros: Coinbase backing, lowest fees, massive user growth
  • Cons: Centralized (Coinbase controls sequencer)
  • Best for: Consumer applications, onboarding new users

zkSync Era / Starknet:

  • Pros: ZK proofs (instant finality), future-proof cryptography
  • Cons: Different VM (not EVM), smaller ecosystem
  • Best for: Applications needing instant finality

My Questions for the Community

  1. Which L2 are you building on? Why did you choose it?

  2. Have you noticed the fee reduction in your application metrics? What’s the user behavior change?

  3. Blob gas spikes: Have you experienced blob gas price spikes affecting your application? How did you handle it?

  4. Sequencer decentralization: Does a centralized sequencer concern you, or is it an acceptable tradeoff for performance?

  5. Alternative DA: Anyone experimenting with Celestia, EigenDA, or other non-Ethereum DA layers?

The post-EIP-4844 world is incredible for L2 developers. Fees are down 95%, and we’re just getting started.

Brian Zhang
L2 Protocol Architect @ LayerZero



Brian, excellent writeup on blob transactions! Let me add the ZK vs Optimistic rollup perspective, because the blob fee reduction affected them VERY differently.

ZK-Rollups: The Bigger Winners

While all L2s benefited from EIP-4844, ZK-rollups saw disproportionate gains because their economics work differently.

Pre-Dencun ZK-Rollup Economics

Cost breakdown for zkSync Era (Jan 2024):

  • L1 data availability (calldata): 75% of cost
  • ZK proof generation: 20% of cost
  • L2 execution: 5% of cost

Problem: Even though ZK proofs guarantee validity, you STILL needed to post the data (state diffs or transactions) to L1 for data availability. This was expensive.

Result: ZK-rollups were MORE expensive than Optimistic rollups despite being more secure.

Post-Dencun ZK-Rollup Economics

Cost breakdown for zkSync Era (Oct 2025):

  • L1 data availability (blobs): 15% of cost (down from 75%)
  • ZK proof generation: 70% of cost (same absolute cost, higher %)
  • L2 execution: 15% of cost

Impact: ZK-rollups now have LOWEST fees among all L2s.

Fee comparison (Oct 2025):

  • zkSync Era: $0.005 - $0.02
  • Starknet: $0.008 - $0.03
  • Polygon zkEVM: $0.01 - $0.04
  • Optimism: $0.01 - $0.05

ZK-rollups are now the cheapest L2s.

Technical Comparison: Optimistic vs ZK

At LA Tech Week, I attended a workshop comparing the two approaches. Here’s the technical breakdown:

Optimistic Rollups (Optimism, Arbitrum, Base)

How they work:

  1. Sequencer batches transactions
  2. Posts transaction data to Ethereum (as blobs)
  3. Posts state root commitment
  4. Assumes validity (optimistic)
  5. 7-day challenge period for fraud proofs

Pros:

  • EVM-equivalent (100% Solidity compatibility)
  • Simple mental model
  • No proof generation overhead
  • Easier to develop and audit

Cons:

  • 7-day withdrawal time (bad UX)
  • Relies on fraud proof system (1-of-N honesty assumption)
  • Must post full transaction data (even with blobs, still more data than ZK)

Security model:
“If even ONE honest validator exists, fraud will be detected and reverted.”

Developer experience:
Excellent - deploy Solidity contracts as-is, no changes needed.

ZK-Rollups (zkSync, Starknet, Polygon zkEVM)

How they work:

  1. Sequencer batches transactions
  2. Generates validity proof (SNARK or STARK)
  3. Posts proof + state diff to Ethereum
  4. Cryptographically proven valid
  5. Instant finality (no challenge period)

Pros:

  • Instant finality (no 7-day wait)
  • Better security (cryptographic proof, not game theory)
  • Less data posted to L1 (state diffs vs full transactions)
  • Future-proof (quantum-resistant with STARKs)

Cons:

  • Proof generation is expensive (time and compute)
  • Different VMs (Cairo for Starknet, custom for zkSync)
  • More complex to develop and audit
  • Proving system complexity (bugs in prover = catastrophic)

Security model:
“If the proof verifies, the state transition is correct. Guaranteed by math.”

Developer experience:
Harder - learn Cairo (Starknet) or adapted Solidity (zkSync), deal with proof constraints.

Proof Generation Performance

At LA Tech Week, StarkWare presented their latest benchmarking:

Starknet proof generation (Oct 2025):

  • Batch size: 500,000 transactions
  • Proving time: 8 minutes (down from 30 minutes in 2023)
  • Prover cost: $50 in cloud compute
  • Cost per transaction: $0.0001

zkSync Era proof generation:

  • Batch size: 200,000 transactions
  • Proving time: 3 minutes
  • Prover cost: $30
  • Cost per transaction: $0.00015

Polygon zkEVM (EVM-equivalent ZK):

  • Batch size: 100,000 transactions
  • Proving time: 12 minutes
  • Prover cost: $80 (more complex due to EVM constraints)
  • Cost per transaction: $0.0008
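The per-transaction figures above are just the proof cost amortized over the batch:

```javascript
// Amortized proving cost per transaction, from the benchmark figures above
function proverCostPerTx(proofCostUsd, batchSize) {
  return proofCostUsd / batchSize;
}

console.log(proverCostPerTx(50, 500000)); // Starknet: ≈ $0.0001
console.log(proverCostPerTx(30, 200000)); // zkSync Era: ≈ $0.00015
console.log(proverCostPerTx(80, 100000)); // Polygon zkEVM: ≈ $0.0008
```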

Key insight: Proof generation is getting MUCH faster and cheaper. 2-3 years ago, proving took hours. Now it’s minutes.

The Proving System Landscape

Different ZK-rollups use different proving systems:

PLONK (Polygon zkEVM)

Type: SNARK (Succinct Non-interactive Argument of Knowledge)
Pros: Small proofs (~200 bytes), fast verification on L1
Cons: Trusted setup required, not quantum-resistant
Use case: EVM-equivalent ZK-rollups

STARKs (Starknet)

Type: Transparent (no trusted setup)
Pros: No trusted setup, quantum-resistant, scalable
Cons: Larger proofs (~100 KB), slower L1 verification
Use case: Custom VMs optimized for proof generation

FFLONK (zkSync Era)

Type: Modified PLONK
Pros: Cheaper L1 verification than standard PLONK, small proofs
Cons: Still requires a universal (reusable) trusted setup; newer, less battle-tested
Use case: Final wrapper proof for cheap L1 verification

From LA Tech Week panel consensus:
“STARKs are the future for quantum resistance. PLONKs are the present for EVM compatibility.”

When to Choose ZK vs Optimistic

Based on LA Tech Week discussions with L2 teams:

Choose Optimistic if:

  • You need EVM-equivalence (deploy existing Solidity without changes)
  • You can tolerate 7-day withdrawals (or use fast bridges)
  • You want simpler stack (easier to audit and maintain)
  • You’re building consumer app (Optimism/Base have best fiat onramps)

Choose ZK if:

  • You need instant finality (withdrawals under 1 hour)
  • You want lowest fees ($0.005 vs $0.01)
  • You can invest in learning Cairo or adapted Solidity
  • You’re building financial primitives (instant settlement critical)

Hybrid approach (what I recommend):

  • Start on Optimistic (faster development)
  • Plan migration to ZK (future-proof)
  • Use both (deploy on multiple L2s via multi-chain tooling)

The Future: Convergence

Interesting prediction from Vitalik’s talk at LA Tech Week:

“In 5 years, all rollups will be ZK-rollups. The question is just which VM they’ll use.”

Why?

  • Proof generation getting exponentially cheaper (hardware acceleration)
  • Security model strictly better (math > game theory)
  • User experience better (instant finality)

The transition:

  • Optimism exploring ZK proofs (OP Stack ZK research)
  • Arbitrum researching BOLD (faster fraud proofs) and eventual ZK transition
  • Base will follow Optimism’s path

Timeline prediction:

  • 2025-2026: Optimistic rollups add optional ZK proofs (hybrid)
  • 2027-2028: Full transition to ZK for major L2s
  • 2030: Optimistic rollups become legacy tech (like Plasma)

My Take

Post-EIP-4844, both Optimistic and ZK rollups are incredibly cheap. The fee difference ($0.01 vs $0.005) is negligible for most applications.

The real differentiator is:

  1. Developer experience: Optimistic (easier) vs ZK (harder)
  2. Withdrawal time: Optimistic (7 days) vs ZK (instant)

For new projects, I recommend:

  • Optimistic (OP Stack or Arbitrum) if you’re prototyping or need fast development
  • ZK (zkSync or Starknet) if you’re building production DeFi with serious volume

Both are great. Choose based on your team’s expertise and user requirements.

Chris Anderson
Full-Stack Crypto Developer



Great overview Brian, and excellent ZK comparison Chris! Let me add the implementation details for developers who want to understand how blob transactions actually work.

How to Submit a Blob Transaction (Code Example)

If you’re building L2 sequencer infrastructure, here’s how to submit blob transactions:

Using ethers.js v6 (the helpers encodeBatch, blobify, and the KZG functions are illustrative placeholders; in practice you'd use a KZG library such as c-kzg):

import { ethers } from 'ethers';

// Prepare transaction data (your L2 batch)
const batchData = encodeBatch(transactions); // your L2 transactions, encoded

// Convert to blobs (4096 BLS12-381 field elements per blob)
const blobs = blobify(batchData); // pad/split into 128 KB blobs

// Compute KZG commitments, proofs, and versioned hashes
const kzgCommitments = blobs.map(blob => computeKZGCommitment(blob));
const kzgProofs = blobs.map((blob, i) => computeKZGProof(blob, kzgCommitments[i]));
const blobVersionedHashes = kzgCommitments.map(c => kzgToVersionedHash(c));

// Create blob transaction (Type 3)
const tx = {
  type: 3, // EIP-4844 transaction type
  chainId: 1, // Ethereum mainnet
  nonce: await sequencer.getNonce(),
  maxPriorityFeePerGas: ethers.parseUnits('2', 'gwei'),
  maxFeePerGas: ethers.parseUnits('50', 'gwei'),
  maxFeePerBlobGas: 10n, // blob gas price cap, in wei
  gasLimit: 21000,
  to: L2_INBOX_CONTRACT, // your L2's inbox contract on L1
  value: 0,
  data: '0x', // optional calldata
  blobVersionedHashes, // commitments to blob data (these ARE in the tx body)
  blobs, // actual blob data (sidecar: gossiped alongside the tx, not in the block body)
  kzgCommitments,
  kzgProofs,
};

// Sign and broadcast
const signedTx = await sequencer.signTransaction(tx);
const txResponse = await provider.broadcastTransaction(signedTx);

console.log(`Blob transaction submitted: ${txResponse.hash}`);

Key points:

  1. Blobs are NOT in the transaction body

    • Transaction only contains blobVersionedHashes (commitments)
    • Blobs sent separately via network gossip
    • This keeps block size small
  2. KZG commitments prove data availability

    • Prover: Sequencer commits to blob data
    • Verifier: Nodes can verify commitment without seeing full blob
    • Math: Polynomial commitments using BLS12-381 curve
  3. Blob gas pricing is separate

    • maxFeePerGas: Normal execution gas (30-50 gwei)
    • maxFeePerBlobGas: Blob-specific gas (1-10 wei typically)
    • Total cost: (gas × gasPrice) + (blobGas × blobGasPrice)
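Putting point 3 into numbers, a minimal sketch with assumed prices (40 gwei execution gas, 10 wei blob gas, a 3-blob post):

```javascript
// Total cost of a 3-blob batch post: execution gas + blob gas (assumed prices)
const GWEI = 10n ** 9n;

const execGas = 21000n;           // simple inbox call
const execGasPrice = 40n * GWEI;  // 40 gwei
const blobGasPerBlob = 131072n;
const blobGasPrice = 10n;         // 10 wei
const blobCount = 3n;

const execWei = execGas * execGasPrice;
const blobWei = blobCount * blobGasPerBlob * blobGasPrice;

console.log(execWei); // 840,000,000,000,000 wei (0.00084 ETH)
console.log(blobWei); // 3,932,160 wei, negligible next to execution
```

Note the asymmetry: at typical blob prices, the L1 execution gas of the posting transaction dwarfs the blob cost itself.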

Blob Data Lifecycle

What happens to blob data over time:

Block N: Blob transaction included
│
├─ Consensus layer: Full nodes download blobs
├─ Beacon chain: Blobs gossiped to all validators
├─ Commitments: Stored permanently in execution layer
│
Day 1-18: Blobs available from network
│
├─ Full nodes: Serve blobs to peers requesting data
├─ Archive nodes: Store blobs longer (optional)
├─ L2 nodes: Download blobs, reconstruct L2 state
│
Day 18+: Blobs pruned
│
├─ Full nodes: Delete blob data (only keep commitments)
├─ Archive nodes: MAY keep blobs (not required)
├─ L2 nodes: Must have downloaded blobs before pruning
│
Forever: Commitments remain on-chain

Critical implication: L2 nodes MUST sync blobs within 18 days, or they can’t reconstruct full state.

What if L2 node syncs after 18 days?

  • Option 1: Trust centralized data provider (Optimism, Arbitrum run their own)
  • Option 2: Data availability committees (e.g., EigenDA)
  • Option 3: Alternative DA layer (Celestia, Avail)

Why Smart Contracts Can’t Access Blobs

Common misconception: “Blobs are like CALLDATA but cheaper, so I can use them in my dapp.”

Wrong. Blobs are ONLY for L2 sequencers.

Why blobs aren’t accessible to EVM:

  1. Blobs aren’t in the block

    • EVM executes transactions in blocks
    • Blobs are gossiped separately (not in block body)
    • No way for opcode to reference blob data
  2. No blob access opcodes

    • CALLDATA: Use CALLDATALOAD opcode
    • Blobs: No equivalent opcode exists
    • EVM can only see blobVersionedHashes (commitment, not data)
  3. Blob data is transient

    • Pruned after 18 days
    • Smart contracts need permanent state
    • Can’t build logic on data that disappears

Use case for blobs:
L2 sequencer posts L2 transaction data to Ethereum for data availability. L2 nodes reconstruct state by downloading blobs. That’s it.

Not a use case:
On-chain applications reading blob data. Use CALLDATA for that (and pay the higher fee).

Blob Gas Pricing Deep Dive

Brian walked through the pricing formula above. Let me show how it actually works:

Current Ethereum state (Oct 2025, hypothetical block):

Block N:
- Blobs in this block: 4
- excess_blob_gas carried over: 393,216 (three blobs' worth; the excess is tracked in blob GAS, 131,072 per blob, not in blob count)
- Blob base fee calculation:

blob_base_fee = e^(excess_blob_gas / BLOB_BASE_FEE_UPDATE_FRACTION)
              = e^(393,216 / 3,338,477)
              = e^0.1178
              ≈ 1.12 wei

Actual blob base fee: 1 wei (integer math floors at the 1 wei minimum)

What happens if we spike to 6 blobs/block for 10 blocks:

Each full block adds (6 - 3) × 131,072 = 393,216 blob gas to the excess.

excess_blob_gas = 393,216 × 10 = 3,932,160

blob_base_fee = e^(3,932,160 / 3,338,477)
              = e^1.178
              ≈ 3.2 wei

Roughly 3x in about two minutes

What if we sustain 6 blobs/block for 100 blocks (~20 minutes):

excess_blob_gas = 393,216 × 100 = 39,321,600

blob_base_fee = e^(39,321,600 / 3,338,477)
              = e^11.78
              ≈ 130,000 wei

Key insight: Blob gas pricing adjusts FAST under sustained congestion. At maximum usage the blob base fee doubles roughly every six blocks (~70 seconds), so a demand shock reprices blob space within minutes, and the fee decays just as quickly once usage falls back below target.

In practice (Oct 2025):

  • Most of the time: blob base fee = 1 wei (usage hovers around the target)
  • Occasional spikes: 10-100 wei
  • Extreme spikes (major airdrop): 1000-10,000 wei

Even at 10,000 wei (extreme), blob costs stay tiny:

  • 1 blob (131,072 blob gas): 10,000 wei × 131,072 ≈ 1.3 gwei, a few millionths of a dollar
  • Still orders of magnitude cheaper than pre-Dencun calldata
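A one-line helper to plug in other blob base fees (131,072 blob gas per blob):

```javascript
// Cost of one blob at a given blob base fee (wei per unit of blob gas)
const BLOB_GAS_PER_BLOB = 131072n;

function blobCostWei(blobBaseFeeWei) {
  return BLOB_GAS_PER_BLOB * blobBaseFeeWei;
}

console.log(blobCostWei(1n));    // 131072n wei per blob at the 1 wei floor
console.log(blobCostWei(1000n)); // 131072000n wei, about 0.13 gwei per blob
```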

Blob Monitoring Tools

For L2 developers, monitor blob usage:

Blobscan.com:

  • Dedicated blob explorer: browse blob transactions, per-blob size, commitments, and blob fees

L2Beat.com:

  • L2 activity metrics
  • Blob usage by L2
  • Cost comparison

Etherscan blob explorer:

  • Blob transactions and blob gas prices alongside regular transaction data

Custom monitoring (pseudocode):

// Monitor the blob base fee via each new block's excess blob gas
const provider = new ethers.JsonRpcProvider('https://eth-mainnet.g.alchemy.com/v2/...');

const UPDATE_FRACTION = 3338477;

provider.on('block', async (blockNumber) => {
  const block = await provider.getBlock(blockNumber);

  // Approximate blob base fee = e^(excess / fraction); clients use integer math
  const excess = Number(block.excessBlobGas ?? 0n);
  const blobBaseFee = Math.floor(Math.exp(excess / UPDATE_FRACTION));

  console.log(`Block ${blockNumber}: blob base fee = ${blobBaseFee} wei`);

  if (blobBaseFee > 1000) {
    console.warn('Blob gas price spike! Consider waiting to post batch.');
  }
});

Future: EIP-7623 and Beyond

Brian mentioned EIP-7623. Here’s why it matters:

Current state:

  • Calldata: 16 gas per byte
  • Blob data: ~0.01 gas per byte equivalent
  • Problem: Some users still use calldata (legacy contracts, inscriptions)
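A rough sketch of that gap for a 100 KB batch, with assumed prices (30 gwei execution gas vs 10 wei blob gas):

```javascript
// DA cost for a 100 KB batch: calldata vs blob path (assumed prices)
const GWEI = 10n ** 9n;
const batchBytes = 100000n;

// Calldata path: 16 gas per byte, paid at 30 gwei execution gas
const calldataWei = batchBytes * 16n * 30n * GWEI;

// Blob path: 1 blob gas per byte, paid at 10 wei blob gas
const blobWei = batchBytes * 1n * 10n;

console.log(calldataWei / blobWei); // 48000000000n: tens of billions of times cheaper
```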

EIP-7623 proposal:

  • Raise the effective calldata cost to a floor of 40 gas per non-zero byte for data-heavy transactions
  • Impact: Force migration to blobs for any data-heavy use case

Why this helps Ethereum:

  • Caps worst-case block size (calldata is carried in chain history forever)
  • Incentivize proper use of blobs (transient data)
  • Free up block space for execution

Expected timeline:

  • Proposal: Q4 2025
  • Testnet: Q1 2026
  • Mainnet: Q2-Q3 2026 (Pectra upgrade or later)

My Questions

  1. For L2 developers: How are you handling blob data availability after the 18-day pruning? Centralized provider, alternative DA, or something else?

  2. Blob monitoring: Are you dynamically adjusting batch posting based on blob gas price, or just posting on fixed schedule?

  3. Multi-blob batches: Are you posting multiple blobs per transaction (up to 6), or single blob per transaction? What’s the tradeoff?

Blob transactions are a game-changer for L2 scaling. If you’re not using them yet, now’s the time to upgrade your sequencer.

Diana Martinez
DeFi Protocol Engineer @ Uniswap Labs



Brian, Chris, Diana - fantastic technical breakdown! Let me add the MEV and sequencer economics perspective, because blob transactions fundamentally changed L2 sequencer profitability.

The L2 Sequencer Profit Explosion

Post-EIP-4844, L2 sequencers became EXTREMELY profitable:

Optimism sequencer revenue (estimated, Oct 2025):

  • Daily transactions: 1.2M
  • Average fee per transaction: $0.02
  • Daily revenue: $24,000
  • Daily L1 blob costs: $500 (blob fees to Ethereum)
  • Daily profit: $23,500
  • Annual profit: $8.5M

Base sequencer revenue (Coinbase):

  • Daily transactions: 2.1M
  • Average fee: $0.02
  • Daily revenue: $42,000
  • Daily L1 blob costs: $800
  • Daily profit: $41,200
  • Annual profit: $15M

Arbitrum sequencer revenue (Offchain Labs):

  • Daily transactions: 1.8M
  • Average fee: $0.015
  • Daily revenue: $27,000
  • Daily L1 costs: $600 (blobs + Celestia)
  • Daily profit: $26,400
  • Annual profit: $9.6M
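The estimates above all follow the same simple P&L, which is easy to reproduce:

```javascript
// Sequencer P&L sketch, using the estimated figures above
function annualProfitUsd(dailyTxs, avgFeeUsd, dailyL1CostUsd) {
  return (dailyTxs * avgFeeUsd - dailyL1CostUsd) * 365;
}

console.log(annualProfitUsd(1200000, 0.02, 500)); // ≈ $8.58M (Optimism)
console.log(annualProfitUsd(2100000, 0.02, 800)); // ≈ $15.0M (Base)
```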

Key insight: L2 sequencers are now MORE profitable than many DeFi protocols.

The Centralization Problem

All major L2s have centralized sequencers:

Optimism:

  • Single sequencer operated by OP Labs
  • No permissionless participation
  • Censorship risk: OP Labs can exclude transactions

Arbitrum:

  • Single sequencer operated by Offchain Labs
  • Same censorship risk

Base:

  • Single sequencer operated by Coinbase
  • Regulatory risk: Coinbase subject to U.S. government
  • Could be forced to censor addresses (OFAC compliance)

Why centralized?

  • Easier to coordinate (no consensus overhead)
  • Better latency (no waiting for multiple sequencers)
  • Simpler to implement and maintain

Why this is a problem:

  • Censorship: Sequencer can exclude transactions
  • Liveness: If sequencer goes down, L2 halts
  • MEV extraction: Sequencer captures 100% of MEV

Sequencer Extractable Value (SEV)

On L2s, the sequencer has TOTAL control over transaction ordering. This creates MEV opportunities:

Types of L2 MEV:

  1. Sandwich attacks

    • Sequencer front-runs user swap
    • User gets worse price
    • Sequencer extracts value
  2. Just-in-time (JIT) liquidity

    • Sequencer sees large swap coming
    • Adds liquidity just before swap
    • Removes liquidity immediately after
    • Earns LP fees without IL risk
  3. Liquidation priority

    • Sequencer sees liquidatable position
    • Places own liquidation transaction first
    • Earns liquidation bonus
  4. Arbitrage priority

    • Cross-DEX arbitrage opportunities
    • Sequencer executes before anyone else

Estimated L2 MEV (Oct 2025):

  • Optimism: ~$200K/month
  • Arbitrum: ~$300K/month
  • Base: ~$150K/month

Much less than L1 MEV ($50M+/month), but still significant.

Sequencer Decentralization Approaches

At LA Tech Week, there were three main proposals for decentralizing L2 sequencers:

Approach 1: Shared Sequencer (Espresso, Astria)

How it works:

  • Multiple L2s share one decentralized sequencer network
  • Sequencer set rotates (like Ethereum validators)
  • Cross-L2 atomic composability

Pros:

  • Decentralized (no single point of control)
  • Shared security (more validators = more expensive to attack)
  • Atomic cross-L2 swaps (arbitrage between L2s in single tx)

Cons:

  • More complex
  • Higher latency (need consensus among sequencers)
  • Not yet production-ready

Status: Espresso testnet, Astria developing

Approach 2: Based Rollups (Taiko, others)

How it works:

  • No separate sequencer
  • Ethereum L1 validators sequence L2 transactions
  • L2 blocks proposed by Ethereum block proposers

Pros:

  • Inherits Ethereum’s decentralization
  • No additional trust assumptions
  • MEV goes to Ethereum validators (not separate sequencer)

Cons:

  • Higher latency (L1 block time = 12 seconds)
  • More expensive (must pay L1 proposers)
  • Less control over transaction ordering

Status: Taiko launched Q2 2024, growing adoption

Approach 3: Sequencer Auction (Metis, others)

How it works:

  • Sequencer rights auctioned periodically (e.g., daily)
  • Highest bidder becomes sequencer for that period
  • Auction revenue distributed to L2 token holders

Pros:

  • Decentralized (anyone can bid)
  • MEV value captured by L2 community (not centralized operator)
  • Simple to implement

Cons:

  • Winner-take-all (still centralized during each period)
  • High capital requirements (need to bid)
  • Gaming risk (cartels colluding on bids)

Status: Metis exploring, not yet widely adopted

The MEV Supply Chain on L2s

Unlike Ethereum, L2s don’t have mature MEV infrastructure:

Ethereum L1 MEV stack:

  • Searchers: Find MEV opportunities
  • Builders: Package transactions into blocks
  • Relays: Connect builders to proposers (MEV-Boost)
  • Proposers: Include blocks with highest bid

L2 MEV stack (current):

  • Searchers: Limited (most MEV captured by sequencer)
  • Builders: Don’t exist (sequencer does everything)
  • Relays: Don’t exist
  • Proposers: Centralized sequencer

Why no MEV-Boost for L2s?

  • Sequencers are centralized (no competition)
  • No incentive to share MEV (sequencer keeps 100%)
  • Block production is fast (sub-second), harder to run auction

Future: As sequencers decentralize, we’ll see L2-specific MEV infrastructure emerge.

Private Mempools on L2s

Some L2s are experimenting with private transaction submission:

Why users want private mempools:

  • Avoid front-running on DEX swaps
  • Prevent sandwich attacks
  • Hide trading strategies

Options:

  1. Flashbots Protect for L2s

    • Send transactions privately to sequencer
    • Sequencer includes without revealing to public mempool
    • Status: Not yet available for L2s (Flashbots focusing on L1)
  2. Direct sequencer RPC

    • Some L2s offer private RPC endpoints
    • Transactions submitted directly to sequencer
    • Example: Arbitrum’s “private relay”
  3. Encrypted mempools

    • Transactions encrypted until included in block
    • Threshold decryption after inclusion
    • Example: Shutter Network (not yet deployed on L2s)

Current state: Most L2s have NO private mempool solution. This is a major UX problem for sophisticated traders.

The Forced Inclusion Mechanism

Even with centralized sequencers, users have an escape hatch:

Forced inclusion (on all major L2s):

  • User submits transaction to L1 inbox contract
  • L2 sequencer MUST include it within the sequencing window
  • If the sequencer censors it, the transaction can be force-included into the canonical L2 chain without the sequencer's cooperation (there is no slashing today; the guarantee is inclusion, not punishment)

Example (Optimism):

// On Ethereum L1
contract OptimismPortal {
  function depositTransaction(
    address _to,
    uint256 _value,
    uint64 _gasLimit,
    bool _isCreation,
    bytes memory _data
  ) public payable {
    // User deposits transaction to L1
    // Sequencer must include in L2 within sequencer_window
  }
}

Guaranteed inclusion time:

  • Optimism: Within 24 hours
  • Arbitrum: Within 24 hours
  • Base: Within 24 hours

Cost: Higher than normal L2 transaction (must pay L1 gas)

Use case: Censorship resistance, emergency withdrawals

My Take on Sequencer Decentralization

Short term (2025-2026):
Centralized sequencers are acceptable tradeoff for performance and simplicity. Users get sub-second confirmations and low fees.

Medium term (2026-2027):
Shared sequencers (Espresso) and based rollups (Taiko) will gain adoption. L2s will offer decentralization as competitive advantage.

Long term (2028+):
All major L2s will have decentralized sequencers. Centralized sequencing becomes liability (regulatory risk, censorship concerns).

What I’m watching:

  • Espresso mainnet launch (expected Q1 2026)
  • Taiko adoption growth
  • Ethereum’s enshrined PBS (could enable better based rollups)

Questions for the Community

  1. Does a centralized sequencer concern you? Or is 24-hour forced inclusion sufficient censorship resistance?

  2. MEV on L2s: Should sequencers share MEV revenue with users (like Flashbots), or keep it (fund development)?

  3. Private mempools: Would you pay extra (e.g., +20% fee) for private transaction submission to avoid sandwich attacks?

  4. Based rollups: Would you accept 12-second block times (L1 speed) for fully decentralized sequencing?

The post-blob world is great for L2 fees, but sequencer centralization is the next major challenge to solve.

Mike Johnson
Data Engineer & MEV Researcher

