MEV Now "Enshrined" at Protocol Level via ePBS—But Does Legitimizing Extraction Make It Worse?

I’ve spent the last 6 months working on sequencer design for our stealth L2 project, and the more I dig into MEV (Maximal Extractable Value), the more I wonder if we’ve solved the problem or just made it worse.

The Evolution: From Black Market to Protocol Feature

ePBS (Enshrined Proposer-Builder Separation) is now standard on Ethereum. What used to be a shadowy practice—bots front-running your transactions, sandwich attacks draining value—is now built directly into the protocol. Inclusion Lists (EIP-7547) force builders to include censored transactions. MEV-Share and other programmable order flow systems kick some profits back to users. On paper, this sounds like progress.

But here’s what keeps me up at night: by “enshrining” MEV at the protocol level, have we legitimized extraction instead of preventing it?

The Data Tells a Complex Story

When MEV moves from black market to protocol feature, extraction becomes more efficient. That’s the whole point—transparent auctions, fair competition among builders, and rebates for users. But efficiency cuts both ways:

  • More efficient extraction = more value leaked overall. When it’s easier to extract MEV, more sophisticated actors participate. The pie gets bigger, even if distribution is fairer.
  • User rebates require opt-in and technical sophistication. Most users still pay the invisible tax. We’ve created a two-tier system: those who understand MEV-Share get rebates, everyone else subsidizes them.
  • L2 sequencers capture MEV opaquely. While Ethereum mainnet has transparent ePBS auctions, Layer 2s still use centralized sequencers. No auction, no transparency, no user rebates. Just pure extraction by the sequencer operator.
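
To make the two-tier dynamic concrete, here's a toy back-of-envelope model. Every number in it (total MEV, user count, rebate rate) is an illustrative assumption, not measured data:

```python
# Toy model of the two-tier rebate system: illustrative numbers only.
def per_user_mev_cost(total_mev, n_users, rebate_rate):
    """Average MEV paid per user, with and without a rebate."""
    per_user = total_mev / n_users
    opted_in = per_user - per_user * rebate_rate  # rebated share comes back
    opted_out = per_user                          # full invisible tax
    return opted_in, opted_out

# Hypothetical: $10M of MEV spread over 1M users, 80% rebate for opt-ins.
in_cost, out_cost = per_user_mev_cost(10_000_000, 1_000_000, 0.80)
print(in_cost, out_cost)  # 2.0 10.0 -- opted-in users pay 5x less
```

The point of the toy: even a generous rebate rate only helps the opt-in minority; everyone else pays the full per-user cost.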

I’ve been tracking MEV metrics across different chains, and the pattern is clear: wherever MEV becomes easier to extract (through better tooling, transparent auctions, protocol support), total extraction increases even if distribution improves.

The Philosophy Question That’s Actually Practical

Here’s the analogy I keep coming back to: when you legalize and regulate something like gambling or marijuana, participation increases compared to prohibition. The black market shrinks, but the total market grows.

If MEV is enshrined in the protocol—officially endorsed, optimized, distributed—does total extraction grow even if it’s distributed more fairly? Did we just industrialize rent-seeking?

The Alternative: Prevention Over Optimization

From my work on L2 architecture, I know there ARE alternatives:

  1. Encrypted execution: Threshold cryptography, SGX enclaves, time-locked puzzles—if transactions are encrypted until after ordering, MEV becomes much harder
  2. Decentralized sequencing: Shared sequencer networks for L2s, eliminating the single-operator extraction point
  3. Protocol-level privacy: ZK-encrypted mempools where even validators can’t see transactions until commitment
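
To make option 1 concrete, here's a minimal commit-then-reveal sketch. The XOR "cipher" is a stand-in for real threshold encryption (not actual cryptography), and the shared key stands in for one reconstructed by a validator committee:

```python
import hashlib

def toy_encrypt(data: bytes, key: bytes) -> bytes:
    """Toy XOR stream cipher -- a stand-in for threshold encryption,
    NOT real cryptography. XOR is its own inverse, so it also decrypts."""
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % 32] for i, b in enumerate(data))

# 1. Users submit ciphertexts; the sequencer never sees plaintext.
key = b"committee-key"   # in reality, reconstructed by a validator committee
txs = [b"swap 100 ETH -> USDC", b"swap 5 ETH -> DAI"]
ciphertexts = [toy_encrypt(tx, key) for tx in txs]

# 2. The sequencer publishes a binding commitment to an ordering.
order_commitment = hashlib.sha256(b"".join(ciphertexts)).hexdigest()

# 3. Only then is the key released and the batch decrypted -- too late
#    to reorder, front-run, or sandwich anything.
decrypted = [toy_encrypt(ct, key) for ct in ciphertexts]
assert decrypted == txs
```

The essential ordering-then-decryption property is what matters here, not the cipher.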

But these are all harder to build, slower to execute, and less mature than ePBS. We chose the pragmatic path over the principled one.

Where I’m Genuinely Conflicted

Part of me thinks ePBS is brilliant engineering: acknowledge that MEV exists, bring it into the light, distribute benefits more fairly, prevent censorship via Inclusion Lists. Transparency and fairness are wins.

Another part of me worries we’ve permanently embedded a tax on every transaction. By optimizing extraction instead of preventing it, we’ve accepted that users will pay this tax forever. We’re just arguing about who collects it and how much users get back.

My question for this community: Should Ethereum’s roadmap prioritize encrypted execution to prevent MEV entirely? Or is ePBS the right pragmatic solution, and we should focus on better user rebate mechanisms and decentralized sequencing for L2s?

I’m genuinely curious where others stand on the pragmatism vs principles spectrum here. Because the choices we make now about MEV architecture will be very hard to reverse later.

Diana, I appreciate you sharing this from the L2 perspective (even though I know you’re more DeFi-focused!). Your point about tracking MEV metrics across chains resonates with what I’m seeing in my work.

Since I’m actually the L2 engineer here, let me add some specific data points from my research:

L2 Sequencer Centralization is the Elephant in the Room

While mainnet Ethereum now has ePBS with transparent auctions, every major L2 still uses a centralized sequencer:

  • Base (Coinbase-operated sequencer)
  • Arbitrum (Offchain Labs sequencer)
  • Optimism (OP Labs sequencer)
  • Scroll, Linea, zkSync—all centralized

These sequencers capture 100% of MEV with zero accountability. No auctions, no user rebates, no Inclusion Lists. They can:

  • Reorder transactions at will
  • Front-run users without detection
  • Censor transactions with no recourse
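
For anyone who hasn't seen the mechanics, here's a toy constant-product AMM sandwich showing how reordering alone extracts value. Pool sizes and trade amounts are made up:

```python
def swap(reserve_in, reserve_out, amount_in):
    """Constant-product AMM swap (x*y=k, fees ignored for simplicity)."""
    k = reserve_in * reserve_out
    amount_out = reserve_out - k / (reserve_in + amount_in)
    return reserve_in + amount_in, reserve_out - amount_out, amount_out

# Pool: 2,000,000 USDC / 1,000 ETH. All numbers are illustrative.
usdc, eth = 2_000_000, 1_000

# Honest ordering: victim buys ETH with 100,000 USDC.
_, _, honest_eth = swap(usdc, eth, 100_000)

# Sandwiched ordering: the sequencer (or a bot it favors) buys first,
# lets the victim trade at the worse price, then sells back.
u1, e1, attacker_eth = swap(usdc, eth, 50_000)       # front-run buy
u2, e2, victim_eth = swap(u1, e1, 100_000)           # victim's trade
e3, u3, attacker_usdc = swap(e2, u2, attacker_eth)   # back-run sell

assert victim_eth < honest_eth    # victim receives less ETH
assert attacker_usdc > 50_000     # attacker exits with a profit
```

No privileged information is needed beyond seeing the victim's pending transaction and controlling order, which is exactly what a centralized sequencer has.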

The irony: we “solved” MEV on L1 while most activity moved to L2s where MEV extraction is completely opaque.

Shared Sequencing as the Path Forward?

The promising development is shared sequencer networks (Espresso, Astria, Radius). The idea:

  • Multiple L2s share a decentralized sequencer set
  • Cross-L2 atomic composability (huge for DeFi)
  • Transparent MEV auctions like mainnet ePBS
  • No single operator extraction point
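
A minimal sketch of the "no single operator" property: leader rotation over a shared sequencer set, so no one operator controls ordering for consecutive slots. The round-robin rule and sequencer names are simplifications of what networks like Espresso actually do:

```python
# Sketch of rotating block-proposal rights across a shared sequencer set.
# Real shared sequencers use stake-weighted or randomized election; plain
# round-robin is used here only to illustrate the single-operator fix.
SEQUENCERS = ["seq-1", "seq-2", "seq-3", "seq-4"]  # hypothetical names

def leader_for_slot(slot: int) -> str:
    """Deterministic rotation: each slot has a different leader."""
    return SEQUENCERS[slot % len(SEQUENCERS)]

leaders = [leader_for_slot(s) for s in range(8)]
print(leaders)
# Every operator leads exactly 1/4 of slots; none controls ordering alone.
assert all(leaders.count(s) == 2 for s in SEQUENCERS)
```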

But adoption is slow. Why? Because current L2 operators make significant revenue from MEV. Base probably extracts tens of millions monthly from their sequencer. Hard to give that up voluntarily.

Question for the group: Should Ethereum Foundation require L2s to adopt decentralized sequencing to be considered “aligned rollups”? Or is centralized sequencing acceptable as long as users understand the trade-off?

I’m genuinely torn between pragmatism (centralized sequencers work well operationally) and principles (this undermines the entire decentralization thesis).

Lisa, Diana—both excellent points. As someone who’s contributed to Ethereum core development and worked on consensus mechanisms, let me offer the decentralization maximalist perspective here.

ePBS is a Pragmatic Compromise, Not a Solution

You’re both right that ePBS legitimizes extraction. But here’s the context that matters: MEV existed as a black market long before Flashbots or ePBS. The question was never “MEV or no MEV”—it was “opaque extraction by a few insiders vs transparent extraction with fair access.”

From that framing, ePBS is progress:

  1. Censorship resistance via Inclusion Lists (EIP-7547): Builders can no longer silently drop transactions; validators can force listed transactions into the block
  2. Transparency: MEV auctions are visible, measurable, auditable
  3. Fair access: Anyone can participate in PBS auctions (vs closed networks of bot operators)
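
The first two properties can be sketched together as a proposer-side selection rule: among builder blocks that satisfy the inclusion list, take the highest bid. Builder names and bids are made up, and the set-based check is a simplification (the real EIP-7547 rule permits exclusion only when a block is full):

```python
def select_block(candidates, inclusion_list):
    """Proposer-side sketch: highest bid among inclusion-list-valid blocks."""
    def satisfies(block):
        # Simplified EIP-7547 rule: every listed tx must appear.
        return set(inclusion_list) <= set(block["txs"])
    eligible = [b for b in candidates if satisfies(b)]
    return max(eligible, key=lambda b: b["bid"], default=None)

candidates = [
    {"builder": "builder-a", "bid": 0.42, "txs": {"tx1", "tx2"}},         # omits tx3
    {"builder": "builder-b", "bid": 0.31, "txs": {"tx1", "tx2", "tx3"}},
]
# builder-a bids more but censors a forced tx, so builder-b's block wins.
winner = select_block(candidates, inclusion_list={"tx3"})
print(winner["builder"])  # builder-b
```

The auction is the transparency; the eligibility filter is the censorship resistance.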

But—and this is critical—legitimization does matter. When the protocol endorses MEV extraction, it signals that this is an acceptable permanent tax on users. We’ve optimized rent-seeking instead of preventing it.

The Alternative Approaches We Should Be Researching

Lisa mentioned encrypted execution. Let me be specific about what’s technically feasible:

1. Threshold Encrypted Mempools

Transactions encrypted with threshold cryptography, only decryptable after ordering commitment. Validators can’t see tx contents until after sequencing, so the pre-execution front-running window disappears.

Trade-off: Performance overhead, key management complexity

2. Time-Locked Puzzles

Computational puzzles that take fixed time to solve, preventing reordering within time windows.

Trade-off: Hardware variance means timing isn’t perfectly predictable
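
For intuition, here's a toy Rivest-Shamir-Wagner-style puzzle: recovering the masked value takes t sequential modular squarings that can't be parallelized, but solve time still scales with the solver's hardware speed, which is exactly the variance trade-off above. Parameters are toy-sized; real puzzles use a large RSA modulus with secret factorization:

```python
# Toy Rivest-Shamir-Wagner time-lock puzzle (toy-sized primes; real
# deployments use a ~2048-bit RSA modulus).
p, q = 1_000_003, 1_000_033
n = p * q
t = 10_000            # required sequential squarings = the "time lock"
secret = 123_456_789

# The puzzle creator knows phi(n), so it can shortcut the exponentiation:
phi = (p - 1) * (q - 1)
mask = pow(7, pow(2, t, phi), n)   # 7^(2^t) mod n, computed cheaply
puzzle = (secret * mask) % n

# A solver without p, q must do t squarings one after another; wall-clock
# time depends on how fast their hardware squares mod n.
x = 7
for _ in range(t):
    x = x * x % n
recovered = (puzzle * pow(x, -1, n)) % n
assert recovered == secret
```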

3. ZK-Encrypted Mempools (Penumbra, Anoma)

Full privacy—validators don’t see transaction details at all, only validity proofs.

Trade-off: Significant performance penalty, novel cryptographic assumptions

All of these are harder to build, slower to execute, less mature than ePBS. But they prevent MEV at the source instead of optimizing extraction.

The Divergence Between L1 and L2

Lisa’s point about L2 sequencers is where I get genuinely concerned. Ethereum mainnet has:

  • ePBS (transparent auctions)
  • Inclusion Lists (censorship resistance)
  • Distributed proposer set (thousands of validators)

L2s have:

  • Single centralized sequencer per chain
  • Opaque MEV extraction (no auctions, no transparency)
  • Zero censorship resistance (sequencer can exclude anything)
  • No user rebates (100% sequencer profit)

This is not decentralization. This is “trust Coinbase/Offchain Labs/Matter Labs with your transactions.”

The concerning part: if Base captures 60% of L2 activity (as current trends suggest), then most Ethereum users are actually using a Coinbase-operated chain with Coinbase-controlled MEV extraction. Did we scale Ethereum or did we scale Coinbase?

Shared Sequencing is Critical But Incentives are Misaligned

Shared sequencer networks (Espresso, Astria, Radius) are the right technical direction:

  • Decentralized sequencer set (like Ethereum validators)
  • Cross-L2 atomic composability (game-changer for DeFi)
  • Transparent MEV auctions
  • Censorship resistance

But adoption faces a huge incentive problem: current L2 operators make millions monthly from centralized sequencer MEV. Base, Arbitrum, Optimism—they’re not giving up that revenue voluntarily.

My proposal: Ethereum Foundation should tie “aligned rollup” status to decentralized sequencing. If you want to be considered part of the Ethereum ecosystem (vs just a sidechain using EVM), you must commit to:

  1. Decentralized sequencer network by specific timeline
  2. Transparent MEV auctions
  3. Fraud/validity proof verification

Otherwise you’re a centralized chain that happens to post data to Ethereum.

The Bigger Question: Can We Have Transparent Ordering Without MEV?

Here’s what keeps me up at night: MEV fundamentally comes from information asymmetry. As long as someone (validators, sequencers, relayers) can see pending transactions before execution, there’s an opportunity to extract value.

True prevention requires either:

  1. Perfect privacy (no one sees txs until execution)—but then how do you validate state transitions?
  2. Encrypted execution (threshold decryption after ordering)—but performance penalties are significant
  3. Fair transaction ordering (FIFO, randomized, time-weighted)—but these are all gameable
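
A quick sketch of why "randomized" ordering in option 3 is only as good as its randomness: a hash-based shuffle is fine if the seed is unpredictable until submissions close, but with a predictable seed a searcher can grind a payload into the front slot. Everything here is illustrative:

```python
import hashlib

def randomized_order(txs, seed: bytes):
    """Sort by H(seed || tx): unbiased only if the seed is unpredictable
    to submitters until after the submission window closes."""
    return sorted(txs, key=lambda tx: hashlib.sha256(seed + tx).digest())

txs = [b"tx-a", b"tx-b", b"tx-c"]

# With a predictable seed, a searcher grinds a payload that sorts first,
# making "randomized" ordering gameable in practice.
def grind_front_position(victim_txs, seed: bytes, tries=10_000):
    for i in range(tries):
        candidate = b"mev-tx-" + str(i).encode()
        if randomized_order(victim_txs + [candidate], seed)[0] == candidate:
            return candidate
    return None

print(grind_front_position(txs, seed=b"predictable-seed"))  # succeeds fast
```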

I don’t have a perfect answer. But I know that optimizing extraction instead of researching prevention means we’ve accepted permanent rent-seeking as a feature of Ethereum.

Question: Should we prioritize EF research grants for encrypted mempool designs, even if they’re 2-3 years from production readiness? Or continue optimizing ePBS and hope L2s eventually adopt shared sequencing?

I vote for the former. We chose pragmatism over principles in 2021-2024 (Flashbots, PBS, ePBS). Maybe 2026 is the year we invest in prevention over optimization.

This is such a fascinating discussion! I have to admit, as someone who focuses more on frontend and user-facing DeFi interfaces, MEV always felt like this scary backend thing I didn’t fully understand. But reading this thread is really eye-opening.

The User Experience Problem No One Talks About

Diana, Lisa, Brian—you’re all discussing the technical and philosophical aspects of MEV, which is super important. But here’s what I see every day building DeFi UIs:

Most users have absolutely no idea they’re being extracted from.

When I build interfaces for DEX trading, users just see:

  • “Expected price: $100”
  • “Actual price: $97”
  • “Slippage: 3%”

They don’t know that maybe $2 of that slippage is MEV extraction (sandwich attack, front-running, whatever). They just think “DeFi is expensive” and maybe go back to Coinbase.
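
Concretely, this is about all the information a frontend has to work with: one total slippage number, with no way to attribute any slice of it to MEV. A minimal sketch:

```python
def slippage_report(expected_price, actual_price):
    """All a DEX UI can honestly display: total slippage. It cannot tell
    sandwich extraction apart from ordinary price impact."""
    slippage = (expected_price - actual_price) / expected_price
    return (f"Expected ${expected_price:.2f}, got ${actual_price:.2f} "
            f"({slippage:.1%} slippage)")

print(slippage_report(100.0, 97.0))
# Expected $100.00, got $97.00 (3.0% slippage)
```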

MEV-Share Sounds Great But…

Brian mentioned MEV-Share as a way to kick profits back to users. In theory, amazing! In practice:

  1. Users need to opt-in: Most wallets don’t support it by default
  2. Requires technical understanding: “Send your transaction through a special RPC endpoint that participates in auction mechanisms”—try explaining that to a normie
  3. Not all dApps support it: Frontend devs need to integrate it, document it, support it

So we’ve created a two-tier system:

  • Sophisticated users who understand MEV-Share → get rebates
  • Everyone else → pays the full invisible tax

Doesn’t that feel backwards? The people who need protection most (newcomers, small traders) are the ones least likely to benefit.

What Would User-Friendly MEV Protection Actually Look Like?

I keep thinking about this from a UX perspective. What if:

  1. Wallets protected by default: MetaMask, Rainbow, Coinbase Wallet automatically route transactions through MEV-protection
  2. Clear disclosure: “MEV protection saved you $3 on this trade” (make it visible!)
  3. Opt-out instead of opt-in: Protection is default, advanced users can disable if they want
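
Point 3 really is just a default flip in wallet code. A sketch, with hypothetical endpoint names:

```python
# Sketch of opt-out MEV protection: the protected route is the default,
# and only an explicit user setting disables it. URLs are placeholders.
PROTECTED_RPC = "https://rpc.example-mev-protect.invalid"  # hypothetical
PUBLIC_RPC = "https://rpc.example-public.invalid"          # hypothetical

def pick_rpc(settings: dict) -> str:
    """Protection by default; advanced users can explicitly opt out."""
    return PUBLIC_RPC if settings.get("mev_protection_disabled") else PROTECTED_RPC

assert pick_rpc({}) == PROTECTED_RPC                            # default user
assert pick_rpc({"mev_protection_disabled": True}) == PUBLIC_RPC  # opt-out
```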

But here’s my question: If MEV extraction is efficient and profitable, won’t sophisticated actors always find ways around consumer protections?

Like, if wallets automatically use MEV-Share, won’t MEV bots just get better at extracting from the transactions that don’t use it? Or finding new attack vectors?

I’m Still Learning Here

Brian, your explanation of encrypted mempools and time-locked puzzles was really helpful—I hadn’t understood how those work before. But I also wonder: if those solutions have “significant performance penalty,” will users accept slower transactions to prevent MEV?

My experience building UIs: users want fast AND cheap AND secure. They don’t want trade-offs. If encrypted execution makes transactions 2x slower or 50% more expensive, most users won’t opt in, even if it prevents extraction.

So maybe Lisa’s original question is right: did we choose pragmatic optimization (ePBS) over prevention precisely because prevention is too costly (performance, complexity) for most users to accept?

I don’t have answers here—just questions and observations from the frontend trenches. But I really appreciate this discussion helping me understand the deeper technical and philosophical issues!

My question for the group: Are there any UX patterns or wallet features you’ve seen that actually help regular users avoid MEV without requiring technical understanding?

Emma’s user experience concerns are absolutely critical, and I want to expand on the security implications of what everyone’s discussing here.

MEV is an Attack Surface, Not Just an Economic Problem

From a security research perspective, every MEV opportunity represents a potential attack vector. Let me be precise about what we’re actually talking about:

ePBS reduces some attack vectors:

  • Censorship attacks (Inclusion Lists force transaction inclusion)
  • Validator-proposer collusion (separation of roles)
  • Monopolistic extraction (transparent auction → fair access)

But ePBS introduces new complexity:

  • More protocol components = more failure modes
  • Builder market concentration (few builders dominate)
  • Relay trust assumptions (off-protocol infrastructure)
  • Cross-domain MEV (MEV opportunities across multiple blocks)

Every additional protocol component is a potential vulnerability. We’ve traded simple (but exploitable) transaction ordering for complex (but “fairer”) auction mechanisms.

L2 Sequencer Centralization is a Critical Security Failure

Lisa and Brian are correct: L2 centralized sequencers represent one of the most significant security risks in Ethereum’s scaling roadmap.

Consider what a centralized sequencer can do:

  1. Reorder transactions arbitrarily → front-running, sandwich attacks, liquidation manipulation
  2. Censor transactions selectively → exclude competitors, block withdrawals, regulatory capture
  3. Extract MEV with zero accountability → opaque profit, no auctions, no transparency
  4. Fail with no recourse → single point of failure, downtime = chain halt

This is not theoretical. We’ve seen:

  • Solana network halts from validator coordination failures
  • Centralized bridges exploited for $2B+ cumulatively
  • MEV bots causing network congestion and failed transactions

If Base processes 60% of Ethereum L2 activity, then Coinbase controls ordering for the majority of Ethereum users. This is a single point of failure and single point of control that contradicts everything blockchain is supposed to solve.

Encrypted Execution: Research is Active But Challenges Remain

Brian mentioned several encrypted mempool designs. Let me add academic context:

1. Threshold Cryptography (e.g., Shutter Network)

Validators collectively decrypt transactions after ordering commitment. No individual validator can see contents before execution.

Security properties: Prevents front-running, requires honest majority assumption
Performance cost: 1-2 second additional latency for threshold decryption
Research status: Production-ready, limited adoption
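
For readers unfamiliar with the threshold part, here's toy Shamir secret sharing, standing in for the committee's key management: no coalition below the threshold learns anything useful, and any t shares reconstruct the decryption key once ordering is committed:

```python
import random

random.seed(7)        # deterministic toy run
P = 2**127 - 1        # prime field (the Mersenne prime M127)

def split(secret, n, t):
    """Shamir: encode secret as f(0) of a random degree-(t-1) polynomial."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation of the shares at x = 0."""
    total = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, -1, P)) % P
    return total

key = 0xC0FFEE
shares = split(key, n=7, t=4)          # 7-member committee, threshold 4
assert reconstruct(shares[:4]) == key  # threshold met: key reconstructed
assert reconstruct(shares[:3]) != key  # below threshold: no key
```

The honest-majority assumption in the text corresponds to trusting that no t-sized coalition colludes to decrypt early.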

2. Time-Locked Puzzles (VDF-based)

Computational puzzles that take predictable time to solve, creating “time locks” on transaction data.

Security properties: Deterministic timing, prevents reordering within windows
Performance cost: Prover overhead, verifier complexity
Research status: Active research (Chia, Ethereum VDF initiatives)

3. Trusted Execution Environments (SGX, Nitro Enclaves)

Hardware-enforced privacy for transaction processing.

Security properties: Confidential computation, attestation
Performance cost: Minimal (native hardware)
Research status: Controversial due to hardware trust assumptions, side-channel risks

4. Fully Encrypted Mempools (ZK-based, e.g., Penumbra)

Complete transaction privacy using zero-knowledge proofs.

Security properties: Full privacy, no information leakage
Performance cost: 10-100x slowdown for proof generation
Research status: Production on dedicated chains, not viable for general EVM

The common pattern: privacy comes at performance cost. Emma’s point is exactly right—users won’t accept 2x slower transactions even if it prevents MEV extraction.

The Optimization vs Prevention Trade-off

Here’s the uncomfortable truth Brian touched on:

Optimizing extraction (ePBS, MEV-Share, transparent auctions) is pragmatic because prevention is expensive (performance, complexity, novel cryptography).

But by choosing optimization, we’ve encoded a permanent tax into the protocol. As long as transaction ordering is visible before execution, MEV opportunities exist.

The question is: what’s the provable lower bound on MEV?

Academic research (Daian et al., “Flash Boys 2.0”; others in consensus research) suggests that some MEV is fundamental to distributed systems:

  • Arbitrage MEV → actually beneficial (price discovery, market efficiency)
  • Liquidation MEV → necessary for DeFi solvency
  • Sandwich MEV → pure extraction, no social benefit
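
The "arbitrage is beneficial" point is easy to see with a toy example: an arb between two mispriced constant-product pools is profitable precisely because it pushes the two quotes back together. All numbers are made up:

```python
def swap(reserve_in, reserve_out, amount_in):
    """Constant-product AMM swap (x*y=k, fees ignored)."""
    k = reserve_in * reserve_out
    amount_out = reserve_out - k / (reserve_in + amount_in)
    return reserve_in + amount_in, reserve_out - amount_out, amount_out

price = lambda usdc, eth: usdc / eth   # USDC per ETH

pool_a = (2_000_000, 1_000)   # quotes $2,000/ETH
pool_b = (2_100_000, 1_000)   # quotes $2,100/ETH

# Buy ETH where it's cheap (A), sell where it's dear (B).
ua, ea, eth_bought = swap(*pool_a, 24_000)
eb, ub, usdc_back = swap(pool_b[1], pool_b[0], eth_bought)

assert usdc_back > 24_000                        # the arb is profitable
assert abs(price(ua, ea) - price(ub, eb)) < 100  # quotes converged ($100 gap before)
print(f"profit: ${usdc_back - 24_000:.2f}")
```

The extractor profits, but the side effect (price convergence) is a public good; a sandwich has no such offsetting benefit.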

Maybe we can’t eliminate ALL MEV (some is economically useful). But we could prevent predatory MEV via encrypted execution.

My Recommendation: Prioritize Research Over Deployment Speed

Brian asked whether EF should prioritize encrypted mempool research even if it’s 2-3 years from production readiness.

My answer: Yes, absolutely.

We chose speed over security in 2021-2024 (Flashbots → PBS → ePBS). That was pragmatic given DeFi’s explosive growth. But now that we’ve “solved” the worst censorship and opaque extraction problems, we should invest in prevention.

Specifically:

  1. EF research grants for threshold encrypted mempools (Shutter, others)
  2. Incentivize L2s to adopt shared sequencing (Espresso, Astria) via “aligned rollup” status
  3. Academic collaboration on formally verified MEV-resistant ordering mechanisms
  4. Wallet UX research (Emma’s point)—how do we make MEV protection default without opt-in complexity?

The Long-Term Question: Protocol Ossification

One more concerning angle: if we enshrine MEV extraction now, it becomes very hard to remove later.

Protocol changes are difficult. Once ePBS is standard, builders depend on MEV revenue, wallets integrate MEV-Share, and the entire ecosystem optimizes around extraction—how do you transition to prevention 5 years later?

We’re encoding assumptions into Ethereum’s social contract: “MEV extraction is acceptable, just needs to be fair.”

I worry this closes off future paths toward prevention. Pragmatism today might lock in rent-seeking forever.

My question: Has anyone seen analysis of how we could transition from ePBS → encrypted execution in the future? Or is this a one-way door?


Trust but verify, then verify again. Every protocol change is a potential vulnerability. We should be very careful about what we enshrine as “acceptable” at the protocol level.