After seven years in ZK cryptography research—from Zcash to StarkWare to my current role—I’ve watched 2026 become the year zkEVMs finally went from “someday” to “production-ready.” Proving times collapsed from 16 minutes to 16 seconds. Multiple implementations reached mainnet. The execution gap between zkEVMs and native EVM chains has nearly closed.
But here’s what keeps me up at night: we’ve created a fragmented landscape of incompatible security assumptions.
The Fragmentation We’ve Built
Let me break down what’s actually happening under the hood:
Polygon zkEVM uses the Hermez prover with FFLONK—no trusted setup required, with cheaper on-chain verification than Groth16 and roughly 30% cheaper than PLONK. Their approach: ZK-verify their own internal zkASM language, then interpret EVM bytecode through zkASM. Prover-friendly and efficient.
zkSync Era runs Boojum, a FRI-based PLONKish prover descended from RedShift. But here’s the key difference: they compile Solidity → Yul → custom circuit-compatible bytecode. They operate at the IR level (via LLVM), not at the level of raw EVM bytecode.
Scroll takes the opposite approach: bytecode-level compatibility, built in collaboration with the Ethereum Foundation’s PSE group. The goal is to fully reuse Geth’s security model. You can literally copy-paste Ethereum code with zero changes.
Starknet breaks from the EVM entirely—Cairo VM with STARKs. Different language, different execution model, different everything.
Why This Matters for Security
Each proving system uses fundamentally different mathematics. That means:
- Different security audits required. A formal verification of Polygon’s FFLONK system tells you nothing about zkSync’s Boojum implementation. The attack surfaces are entirely distinct.
- Cross-chain bridges must trust multiple proof systems simultaneously. When you bridge assets between zkEVMs, you’re not just trusting one cryptographic construction—you’re trusting all of them.
- Bug isolation is good; inconsistent security is bad. A bug in Polygon’s prover won’t affect Scroll (good!). But now developers building cross-chain contracts need to understand multiple proving systems to properly audit their code (bad!).
- We’re still learning what’s secure. The Ethereum Foundation recently shifted focus from speed to security because researchers disproved the “proximity gap” conjectures that many hash-based SNARKs and STARKs were relying on. These mathematical assumptions we thought were solid? Turns out they don’t hold in all parameter ranges where teams were using them.
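To make the bridge point concrete: if a bridge’s safety depends on several proof systems at once, its failure probability is the complement of *all* of them holding simultaneously. Here is a minimal sketch of that arithmetic—the per-system probabilities are made-up placeholders for illustration, not real risk estimates:

```python
# Illustrative only: a bridge spanning N proving systems is broken if ANY
# one of them is broken. The probabilities below are invented placeholders.
from math import prod

def combined_failure_prob(per_system_probs):
    """P(at least one system fails) = 1 - P(every system holds)."""
    return 1 - prod(1 - p for p in per_system_probs)

# Hypothetical per-system soundness-failure probabilities (not real data).
systems = {"fflonk": 0.001, "boojum": 0.001, "halo2": 0.001, "stark": 0.001}

single = combined_failure_prob([systems["fflonk"]])   # one L2 in isolation
bridge = combined_failure_prob(systems.values())      # bridge across all four

print(f"single system: {single:.4%}")
print(f"bridge across four systems: {bridge:.4%}")    # strictly larger
```

Whatever the true numbers are, the inequality always points the same way: a bridge’s trust assumption is the union of its underlying systems’ assumptions, never the intersection.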
A soundness failure in a ZK system isn’t like other bugs. A forged proof could let an attacker create assets, rewrite state, or steal funds—all while the verification layer thinks everything is valid.
The Ethereum Foundation’s Response
The good news: the EF is taking this seriously. They’re targeting 100-bit provable security by May 2026 and full 128-bit security by year-end. 128-bit security is the gold standard—what financial systems and government communications use.
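For intuition on what those security levels mean, here is a back-of-envelope brute-force estimate. The attacker throughput of 10^18 operations per second is an assumption chosen to be generous; the point is the relative gap between 80, 100, and 128 bits, not the absolute figures:

```python
# Back-of-envelope: time for a brute-force attacker to exhaust a 2^bits
# search space, assuming 10^18 operations/second (a deliberately generous
# figure for illustration).
SECONDS_PER_YEAR = 365 * 24 * 3600
OPS_PER_SECOND = 10**18

def years_to_break(bits):
    return (2**bits) / OPS_PER_SECOND / SECONDS_PER_YEAR

for bits in (80, 100, 128):
    print(f"{bits}-bit: ~{years_to_break(bits):.3g} years of brute force")
```

At this assumed throughput, 80-bit security falls in weeks, 100-bit takes tens of thousands of years, and 128-bit takes longer than the age of the universe—which is why the jump from the EF’s interim 100-bit target to full 128-bit matters.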
New constructions like WHIR (a Reed-Solomon proximity test) offer transparent, post-quantum security without trusted setup ceremonies. At 128-bit security, WHIR produces proofs roughly 1.95x smaller than older methods, with verification several times faster.
But standardizing on new cryptography takes time. And it doesn’t solve the diversity problem.
The Core Dilemma
So here’s the tension: should we converge on one proving standard for consistency, or does diversity create resilience?
Arguments for convergence:
- Easier to audit (one math model to verify)
- Shared security improvements benefit everyone
- Simpler for developers building cross-chain applications
- Clear security benchmarks and standards
Arguments for diversity:
- Competition drove those 60x performance improvements
- Single cryptographic flaw wouldn’t break entire L2 ecosystem
- Different proving systems optimized for different use cases
- Innovation happens at the edges, not through committee standardization
My Questions for This Community
I don’t have answers here—just concerns worth discussing:
- How do everyday users evaluate security when every L2 uses different cryptography? Most users don’t know what proving system their L2 runs on. They just want fast, cheap transactions.
- What should we reasonably expect of developers? Should Solidity devs deeply understand ZK mathematics, or is this infrastructure-level complexity they should be able to abstract away?
- Are we optimizing for the wrong metrics? If gains in proving time and cost come at the expense of auditability and security transparency, have we actually made progress?
- What happens when (not if) we find a critical flaw in one of these proving systems? Do we have coordinated disclosure processes? Security standards? Or will it be chaos?
The math is beautiful. The engineering is impressive. But I worry we’re building incompatible security models that will be painful to reconcile later.
What do you all think? Am I being too paranoid, or is this fragmentation a real problem we need to address before these systems handle billions in TVL?