I’ve been working in L2 infrastructure for 6 years now, and 2026 feels like the year modular blockchain architecture went from niche concept to mainstream reality. But as I watch the ecosystem evolve, I can’t shake this nagging feeling: did we just recreate microservices hell?
What Changed in 2026
The numbers are striking. Ethereum L2s are collectively processing over 100,000 TPS. Celestia’s data availability layer is gaining serious traction, especially for gaming L3s. And rollup-as-a-service platforms are making launching an L2 nearly as easy as deploying a smart contract.
The modularity thesis is playing out exactly as designed:
- Execution layer: Optimism, Arbitrum, Base, zkSync
- Data availability: Celestia, EigenDA, Ethereum blobs (EIP-4844)
- Settlement: Ethereum L1
- Interoperability: Hyperbridge, cross-chain messaging protocols
This separation of concerns unlocked massive innovation. Different layers can optimize independently. Gaming chains can prioritize throughput, DeFi chains can prioritize security, social chains can prioritize low costs.
The Microservices Parallel
But here’s what keeps me up at night. Software engineering went through this exact evolution 15 years ago:
Monoliths → Microservices
Gained: Flexibility, independent scaling, team autonomy
Lost: Simplicity, native composability, easy debugging
Monolithic chains → Modular stacks
Gained: Specialization, throughput, cost efficiency
Lost: Unified state, native composability, simple mental model
The patterns are eerily similar. And we’re already seeing the same problems that plagued early microservices architectures:
1. Orchestration Complexity
Building on Ethereum now means coordinating multiple layers. Which L2 for execution? Which DA layer for data? How do you handle cross-L2 transactions? It’s like distributed systems debugging—except your users lose money if you get it wrong.
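To make the combinatorics concrete, here is a toy sketch of the decision surface a team faces just picking a stack. The layer options are the ones listed earlier in this post; the counts are illustrative, not an exhaustive market survey.

```python
from itertools import product

# Hypothetical menu of layers a team must choose between.
# Options are the examples named above; the list is not exhaustive.
execution = ["Optimism", "Arbitrum", "Base", "zkSync"]
data_availability = ["Celestia", "EigenDA", "Ethereum blobs"]
settlement = ["Ethereum L1"]
interop = ["Hyperbridge", "generic cross-chain messaging"]

stacks = list(product(execution, data_availability, settlement, interop))
print(f"{len(stacks)} possible stack configurations")  # 4 * 3 * 1 * 2 = 24
```

Two dozen configurations from a handful of options, and every pair of choices interacts: your DA layer affects costs, your interop choice affects which liquidity you can reach. Each added layer multiplies, not adds, the design space.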
2. Liquidity Fragmentation
The same asset exists on 10+ L2s with different prices. Users need bridges to move value. Each bridge is a new trust assumption, a new attack vector, a new UX hurdle.
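Here is what that fragmentation looks like in miniature: the "same" asset quoted at different prices on different L2s. The chain names are from this post; the prices are invented numbers purely for illustration.

```python
# Toy ETH/USDC quotes for the same asset across L2s (invented numbers).
quotes = {
    "Arbitrum": 3012.40,
    "Optimism": 3008.15,
    "Base": 3021.90,
    "zkSync": 2998.60,
}

highest = max(quotes.values())
lowest = min(quotes.values())
dispersion_bps = (highest - lowest) / lowest * 10_000
print(f"cross-L2 price dispersion: {dispersion_bps:.0f} bps")
```

On a unified chain, arbitrage closes a gap like this within a block. Across L2s, closing it requires a bridge hop, which means latency, fees, and a trust assumption, so the dispersion persists and users eat it.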
3. State Synchronization
Maintaining consistency across modular layers is hard. Really hard. Cross-chain MEV is a nightmare. Atomic transactions across rollups? Still unsolved elegantly.
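The atomicity problem is easy to see in a minimal sketch of the common lock-and-mint bridge pattern: the lock on the source rollup and the mint on the destination are two independent state transitions with no shared transaction, so a failure between them strands funds. All names, balances, and the relayer failure mode here are invented for illustration.

```python
# Two rollups with independent state; no transaction spans both.
rollup_a = {"alice": 100}    # source rollup balances
rollup_b = {"alice": 0}      # destination rollup balances
bridge_escrow = {"locked": 0}

def lock_on_a(user, amount):
    """Step 1: debit the user on the source rollup, lock in escrow."""
    rollup_a[user] -= amount
    bridge_escrow["locked"] += amount

def mint_on_b(user, amount, relayer_online=True):
    """Step 2: credit the user on the destination rollup."""
    if not relayer_online:
        raise RuntimeError("relayer down: mint never happens")
    rollup_b[user] += amount
    bridge_escrow["locked"] -= amount

lock_on_a("alice", 40)
try:
    mint_on_b("alice", 40, relayer_online=False)  # step 2 fails independently
except RuntimeError:
    pass

# Neither rollup reflects the transfer; 40 units sit in escrow limbo.
print(rollup_a["alice"], rollup_b["alice"], bridge_escrow["locked"])  # 60 0 40
```

Real bridges add timeouts, fraud proofs, and retry paths around exactly this gap, but the gap itself is structural: without shared state, "atomic" cross-rollup operations have to be simulated, never guaranteed.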
Solana’s Counterfactual
Meanwhile, Solana is over there processing 800-900 TPS sustained (with peaks at 5,200 TPS) on a single unified layer. No bridges. No cross-L2 headaches. All state in one place.
Yes, their theoretical max is 65K TPS, and they’re not hitting it. But here’s the thing: simplicity has value.
For payments, trading, consumer apps—monolithic might just be better. Users don’t care about architectural purity. They care about speed, cost, and not losing funds in a bridge hack.
The Real Question
I’m not anti-modular. I literally build this stuff for a living. But I think we need an honest conversation:
Are we building modular because it’s fundamentally superior? Or because Ethereum couldn’t scale L1 fast enough, so we retrofitted a solution?
If Ethereum had solved scaling at the base layer (like Solana attempted), would we still choose modularity? Or did we make modularity work because we had to?
Data Points to Consider
According to recent analysis, modular ecosystems lead in TVL growth and developer activity. But monolithic chains remain competitive on throughput, costs, and user experience. Research from financial institutions suggests both models will coexist—and honestly, that might be the right answer. Different use cases need different architectures.
My Concern: The Ghost Chain Apocalypse
Here’s my real fear. If “rollup-as-a-service” makes launching L2s trivial, we might get:
- Thousands of abandoned L2s (easy launch = low commitment)
- Liquidity fragmented beyond usability
- Users confused about which chain to use
- Projects launching their own L2 for ego/marketing, not technical reasons
We’ve already seen this with app-specific chains. How many Cosmos zones are actually used vs abandoned?
Questions for the Community
- For developers: Are you building on modular stacks because they’re better, or because that’s where the funding/hype is?
- For founders: Would you choose a monolithic chain for your next project if it had comparable security/decentralization?
- For infrastructure builders: Can we solve interoperability elegantly, or is bridging inherently complex and risky?
- For users: Do you actually care whether your dApp runs on an L2 or L1, as long as it’s fast and cheap?
I want to believe modularity is the future. But I also want us to be honest about the trade-offs. Complexity is real. Fragmentation is real. And “we’ll solve it with better tooling” is what the microservices people said too.
What am I missing here? Change my mind—or validate my concerns.