Modular Blockchains: Scaling Solution or Developer Nightmare?

I’ve been building on Layer 2s for the past six years, and I have to say—2026 feels like a turning point. We’ve gone from the ‘everything on Ethereum mainnet’ era to a world where choosing your stack means picking separate layers for execution, settlement, consensus, AND data availability. It’s powerful, but is it sustainable?

The Modular Promise

Let me start with the wins, because they’re real. I recently worked with a team that migrated their rollup’s data availability from posting calldata on Ethereum L1 to using Celestia. The cost savings were dramatic—we’re talking a 50%+ reduction in DA costs, similar to what Optimism achieved when it switched to blob-based posting.

And that’s not even the most impressive example. Starknet reported a 95-100× reduction in L1 posting costs after adopting blobs with EIP-4844. When you’re a protocol handling thousands of transactions per second, those savings compound fast.

The architecture makes sense on paper:

  • Execution layer: Where users interact with dApps and smart contracts
  • Settlement layer: Where proofs get verified and disputes resolved
  • Data availability layer: Where transaction data lives and can be verified
  • Consensus layer: Where nodes agree on transaction ordering

Specialization means each layer can optimize for its specific function. Celestia can focus purely on making data availability cheap and reliable (they’re targeting 1GB blocks this year, with Fibre aiming for terabyte-scale throughput). Rollups can focus on execution. Everyone benefits.
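To make the separation concrete, here’s a toy sketch of a stack as a per-layer choice (all identifiers below are illustrative, not real framework names):

```python
from dataclasses import dataclass

# Illustrative only: a modular deployment boils down to one choice
# per layer. The string values here are made-up example identifiers.

@dataclass(frozen=True)
class ModularStack:
    execution: str          # where dApps and smart contracts run
    settlement: str         # where proofs are verified, disputes resolved
    data_availability: str  # where transaction data is published
    consensus: str          # where ordering is agreed (often bundled with DA)

stack = ModularStack(
    execution="op-stack-rollup",
    settlement="ethereum-l1",
    data_availability="celestia",
    consensus="celestia",
)
```

The point of the sketch: each field can be swapped independently, which is exactly the flexibility (and the decision burden) the rest of this thread argues about.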

The Developer Reality

But here’s where I start to worry. Every new project I consult on now faces this question: which DA layer do we use? Celestia? EigenDA? Avail? Ethereum blobs? What happens if we choose wrong and our DA layer becomes expensive or unreliable in two years?

Then there’s the technical complexity. Our team used to have expertise in Solidity, EVM, and Ethereum infrastructure. Now we need to understand:

  • Rollup framework nuances (Arbitrum Orbit vs OP Stack vs Polygon CDK)
  • DA layer sampling mechanisms and availability guarantees
  • Cross-layer communication protocols
  • Multiple different economic models and fee markets

I love deep technical work, but I wonder: are we making Web3 more accessible or less accessible to new developers?

The Data Tells a Mixed Story

According to recent reports, 56+ rollups are now using Celestia (37 on mainnet, 19 on testnet), and all three major rollup frameworks support it as a DA option. That’s real adoption.

Infrastructure cost savings can reach 80% over the long term when you architect for modularity instead of building on a monolithic chain. For projects with tight budgets, that’s the difference between sustainable and unsustainable.

But here’s the thing: I also see fragmentation. Every DA layer has different APIs, different availability guarantees, different trust assumptions. We don’t have standard metrics to compare them. Monitoring and debugging across multiple layers is genuinely harder than it was with a monolithic chain.

Where Do We Go From Here?

I’m genuinely torn on this. The cost savings and scalability improvements are undeniable. EIP-4844 proved that blob-based posting can reduce costs by orders of magnitude. Modular architecture enables innovation to happen independently on each layer—that’s powerful.

But I also see teams struggling with the complexity. Choosing a stack used to be ‘Ethereum or Solana.’ Now it’s ‘which execution framework, which DA layer, which settlement mechanism, how do they integrate, what are the failure modes?’

Maybe this is just growing pains. Cloud computing went through the same evolution—from bare metal to VMs to containers to serverless. Each step added abstraction layers that initially felt complex but eventually became standard.

Or maybe we’re overengineering. Maybe we should let L1s scale directly instead of fragmenting into dozens of incompatible layers.

What do you all think?

  • Are you building on modular stacks? What’s your experience been?
  • Do you think better tooling will solve the complexity problem, or is this fundamentally too complicated?
  • How do you choose between different DA layers? What metrics matter most?

I’d especially love to hear from founders and developers who are making these decisions right now. Are the cost savings worth the added complexity for your team?

Lisa, this is exactly the conversation we need to be having right now.

As someone who’s been contributing to Ethereum core development and building Layer 2 infrastructure, I have to respectfully push back on the ‘developer nightmare’ framing. What we’re seeing isn’t a bug—it’s a feature. And here’s why.

Separation of Concerns Is Fundamental Engineering

Think about every successful distributed system you’ve used: microservices, cloud infrastructure, the internet itself. They all succeeded precisely because they separated concerns into specialized layers. TCP/IP doesn’t try to also be HTTP. HTTP doesn’t try to also be TLS. Each layer does one thing well.

Monolithic blockchains force impossible trade-offs. You can’t simultaneously optimize for:

  • Security (decentralization, node requirements)
  • Speed (transaction throughput, finality time)
  • Cost (transaction fees, infrastructure expenses)

Pick two, sacrifice one. That’s the monolithic reality.

Modular architecture breaks this trilemma by letting each layer optimize for its specific function. Celestia can focus entirely on data availability and push toward 1GB blocks because that’s all it does. Execution layers can focus on VM optimization and throughput. Settlement layers can focus on proof verification and security.

The Numbers Don’t Lie

You mentioned Starknet’s 95-100× cost reduction—that’s not incremental improvement, that’s a paradigm shift. When EIP-4844 introduced blob-based data posting in March 2024, we saw costs drop by orders of magnitude across the entire L2 ecosystem.

This wasn’t possible with monolithic chains. Ethereum couldn’t just ‘optimize’ its way to 100× cheaper data availability while maintaining its security properties. It required architectural separation.

And it’s not just about cost. Celestia’s data availability sampling (DAS) lets light nodes verify data availability without downloading entire blocks. More light nodes sampling = safely larger block sizes = more throughput. That’s only possible because DA is separated from execution and state management.
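For readers unfamiliar with DAS, here’s a back-of-envelope sketch of why sampling works. This is a simplified model, not Celestia’s exact 2D erasure-coding scheme:

```python
# Simplified DAS model: if a block producer withholds a fraction `f`
# of chunks, a light node drawing `k` independent random samples
# misses the withheld portion with probability (1 - f)^k.

def detection_probability(f: float, k: int) -> float:
    """Chance that at least one of k samples hits withheld data."""
    return 1.0 - (1.0 - f) ** k

# Erasure coding forces an attacker to withhold a large fraction
# (roughly a quarter or more) to block reconstruction, so detection
# climbs toward certainty after only a few dozen samples:
p = detection_probability(0.25, 30)  # > 0.999
```

This is why adding light nodes lets block sizes grow safely: each node does a constant amount of sampling, but collectively they cover the block.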

Complexity Is Temporary, Architecture Is Permanent

I hear your concern about developer complexity, and it’s valid. Right now, choosing a stack is harder than it was in 2020. But remember what cloud computing looked like in 2008?

  • “Too complicated to deploy compared to bare metal”
  • “Too many choices—AWS vs Azure vs GCP”
  • “What happens if our cloud provider fails?”
  • “We need experts in networking, storage, compute, security…”

Sound familiar?

Fast forward to 2026: cloud abstraction layers (Kubernetes, serverless, infrastructure-as-code) made the complexity manageable. Developers don’t think about physical servers anymore. They think about services and abstractions.

The same will happen here. We’re already seeing it with rollup-as-a-service platforms. Developers will soon deploy a rollup by specifying high-level requirements (“I need EVM compatibility, Celestia for DA, optimistic fraud proofs”) and the tooling handles the integration.

We’re Building the Abstraction Layer Right Now

You asked: “are we making Web3 more accessible or less accessible to new developers?”

Less accessible right now. More accessible in 12-18 months.

Every major rollup framework (Arbitrum Orbit, OP Stack, Polygon CDK) already supports multiple DA layers as plug-and-play options. That’s the abstraction layer forming. Soon you’ll choose a DA layer the same way you choose a cloud region today—based on cost/latency trade-offs, not deep protocol expertise.

The fact that 56+ rollups are already running on Celestia (37 mainnet, 19 testnet) shows this is working. These teams didn’t all become DA experts. They used frameworks that abstracted the complexity.

Fragmentation Is Market Discovery

You’re right that every DA layer has different APIs and trust assumptions. That’s not a failure—that’s a market finding optimal solutions.

Remember when there were dozens of competing L1s, all claiming to be the ‘Ethereum killer’? The market consolidated around a few patterns: EVM compatibility, WASM VMs, UTXO models. The same will happen with DA layers.

We don’t need standards before we build. We build, we learn, we standardize. That’s how the internet developed. That’s how cloud computing developed. That’s how modular blockchains will develop.

The Alternative Is Worse

The real question isn’t “is modular architecture complex?” The question is: “compared to what?”

Option A: Monolithic chains forever, with hard trade-offs between security/speed/cost
Option B: Modular architecture with temporary complexity that gets abstracted away

I’ll take Option B every time.

Yes, it’s harder to build right now. Yes, teams need to learn new patterns. But the alternative is accepting fundamental limitations that can’t be solved within a monolithic architecture.

Addressing Your Specific Concerns

> What happens if we choose wrong and our DA layer becomes expensive or unreliable in two years?

The same thing that happens if your cloud provider raises prices: you migrate. Major frameworks are already building DA layer abstraction so switching is possible. And competition between DA layers will keep pricing competitive.

> Monitoring and debugging across multiple layers is genuinely harder.

Absolutely true. This is where we need to invest in tooling. Block explorers, monitoring dashboards, and debugging tools that understand modular stacks. This is the infrastructure we’re building right now.

My Take

Modular blockchain architecture is the most significant structural innovation in crypto since smart contracts. The cost savings alone (50%+, sometimes 100×) justify the learning curve.

But more importantly, it’s the only path to blockchain scalability that doesn’t sacrifice security or decentralization. Every attempt to scale a monolithic chain eventually hits the same wall: you can’t optimize for everything at once.

The developer complexity you’re experiencing is real, but it’s temporary. In 18 months, we’ll have robust abstraction layers. In 36 months, choosing a modular stack will be as straightforward as deploying to AWS.

The architecture is sound. Now we build the tooling that makes it accessible.

Brian, I appreciate the optimism, but as someone who’s currently trying to ship a product with a small team and limited runway, I have to push back on the “complexity is temporary” argument.

Maybe I’m missing something, but from where I’m sitting as a founder, modular blockchain architecture feels like we’re asking startups to make expert-level infrastructure decisions before we even know if our product has market fit.

The Founder’s Dilemma

Here’s my reality check: I have 18 months of runway, a team of 4 developers (only one with deep blockchain experience), and we’re trying to build a Web3 app that solves a real problem for real users.

Our original plan was simple: build on Ethereum L2 (probably Base or Optimism), launch MVP, iterate based on user feedback.

Now we’re facing this:

  • Which rollup framework? (Arbitrum Orbit vs OP Stack vs Polygon CDK)
  • Which DA layer? (Celestia vs EigenDA vs Avail vs ETH blobs)
  • How do we evaluate trust assumptions across these layers?
  • What’s our migration strategy if we pick wrong?
  • How do we monitor performance and costs across multiple layers?

That’s weeks of research and technical due diligence before we write a single line of application code.

Developer Scarcity Is Real

You mentioned that 56+ rollups are running on Celestia. You know what I see? 56+ teams that could afford to hire blockchain infrastructure experts.

My reality:

  • Posted a job for a Web3 developer last month
  • Got 12 applicants
  • 3 had any L2 experience
  • Zero had experience with modular DA layers

The talent pool is already thin. Now we’re subdividing it into even more specialized niches. “Full-stack Web3 developer” used to mean Solidity + React. Now it means Solidity + React + L2 rollup expertise + DA layer knowledge + cross-chain bridge security + ???

We can’t compete with well-funded projects for specialized talent. So we’re stuck either:

  1. Spending months training our team (burning runway)
  2. Making uninformed infrastructure decisions (technical debt)
  3. Waiting for better tooling (losing first-mover advantage)

None of those options are great for a startup.

The Abstraction Isn’t Here Yet

Brian, you said: “In 12-18 months we’ll have robust abstraction layers.”

Cool. What do we do in the meantime?

I can’t tell our investors: “We’ll start building when the infrastructure matures in 18 months.” By then, the market opportunity might be gone. Someone with more resources will have shipped.

And honestly? I’ve heard the “tooling is coming” promise in Web3 for years now. Remember when wallets were going to “just work” by 2023? We’re in 2026 and onboarding is still a nightmare for normal users.

The Cloud Computing Comparison Doesn’t Quite Fit

You compared this to cloud computing in 2008, but there’s a key difference: cloud providers abstracted the complexity BEFORE mass adoption.

AWS didn’t tell early startups: “Here, you pick your own data center hardware, networking stack, storage architecture, and security layer. Don’t worry, we’ll build abstraction tools later.”

They launched with EC2 as a simple abstraction. Pick an instance type, deploy your code, done. The complexity was hidden from day one. That’s why startups adopted it.

With modular blockchains, we’re being asked to understand and choose the underlying architecture layers before abstraction exists. That’s backwards.

Vendor Lock-In Concerns

You said: “If your DA layer becomes expensive, you migrate.”

Have you ever migrated a production application between cloud providers? It’s not trivial. Now imagine doing it with:

  • Live user funds in smart contracts
  • Data spread across multiple layers
  • Different security assumptions
  • Potential downtime or state inconsistencies

That’s not “just migrate.” That’s a multi-month project with existential risk.

And what happens if Celestia (or any DA layer) decides to 10× their prices once you’re locked in? Or if they get regulated out of existence? Or if a critical vulnerability gets discovered?

For a startup, picking a DA layer is a multi-year commitment with significant switching costs. That’s a huge bet to make on day one.

We’re Solving Infrastructure Problems, Not User Problems

Here’s what bothers me most: my team is spending time on infrastructure decisions instead of user problems.

Our users don’t care about data availability layers. They don’t care if we use Celestia or EigenDA. They care whether our app is:

  • Fast
  • Cheap
  • Reliable
  • Easy to use

Every hour we spend researching DA layer economics is an hour we’re not spending on user research, product iteration, or building features people actually want.

The 80% cost savings you mentioned? That’s great for infrastructure companies. For application developers, it’s a distraction.

What I Actually Need

You know what would help startups like mine?

1. Opinionated defaults. Tell me: “If you’re building X type of app for Y type of users, use this stack.” I don’t need infinite flexibility—I need a good starting point.

2. Truly managed services. I want “blockchain-as-a-service” where I specify high-level requirements and someone else handles DA layers, sequencing, bridge security, etc. Vercel for Web3.

3. Proof of production stability. Show me 10 apps with real users running on modular stacks for 12+ months without major issues. Right now it feels like we’d be guinea pigs.

4. Clear migration paths. If I start on a simple stack and need to upgrade later, what does that look like? Can I start with ETH blobs and migrate to Celestia later without rewriting everything?

I’m Not Against Modular Architecture

To be clear: I believe the technical benefits are real. The cost savings are real. The scalability improvements are real.

But there’s a massive gap between “architecturally superior” and “practical for startups to adopt today.”

We need more than just good architecture. We need:

  • Stable, battle-tested implementations
  • Clear best practices and patterns
  • Abstraction layers that hide complexity
  • Economic certainty (not experimental pricing models)
  • Migration paths between different DA layers

Until we have those, asking startups to bet their company on modular architecture feels premature.

My Bet

Honestly? We’re probably just going to launch on Base (Optimism Superchain) and use whatever DA layer they use by default. We don’t have the resources to become infrastructure experts.

If that means we’re paying 2× more in the long run… fine. That’s still cheaper than burning 3 months of runway doing technical due diligence on DA layers.

Maybe in 2-3 years when abstraction layers exist, we’ll migrate to a more optimized stack. But right now? Simplicity > optimization.

Lisa asked if the cost savings are worth the added complexity. For well-funded infrastructure projects? Probably yes. For lean startups trying to achieve product-market fit? I’m not convinced yet.

Both of you make compelling points, so I did what I always do when there’s a technical debate: I looked at the actual data.

Spent the last few weeks building dashboards to track modular DA layer performance across different rollups. The numbers tell an interesting story—Lisa and Brian are both right, and Steve’s concerns are valid too.

The Cost Savings Are Real and Massive

Let me start with the hard numbers, because they’re striking:

Optimism’s Migration to Blobs:

  • Before (calldata): ~$50,000/day in L1 DA costs during peak usage
  • After (blobs): ~$20,000/day for same transaction volume
  • Reduction: 60% cost savings

Starknet’s Blob Adoption:

  • Before: ~$80,000/day posting validity proofs + state diffs to L1
  • After: ~$800-1,200/day using blob space
  • Reduction: 98-99% (essentially the 100× reduction Lisa mentioned)

Celestia-based Rollups (sample of 12 I’m tracking):

  • Average DA cost: $0.002 per 1MB of transaction data
  • Comparable Ethereum blob cost: $0.008 per 1MB (4× more expensive)
  • Traditional L1 calldata: $2.50 per 1MB (1,250× more expensive)
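Plugging the per-MB figures above into a quick calculator shows how fast the gap compounds. This is a sketch using the post’s snapshot prices, which vary in practice:

```python
# Per-MB DA prices quoted above (snapshot estimates, not live rates).
COST_PER_MB = {
    "celestia_avg": 0.002,  # dedicated DA layer
    "eth_blobs": 0.008,     # Ethereum blob space
    "l1_calldata": 2.50,    # legacy calldata posting
}

def monthly_da_cost(mb_per_day: float, layer: str) -> float:
    """30-day DA bill for a rollup posting mb_per_day of data."""
    return mb_per_day * 30 * COST_PER_MB[layer]

# A mid-sized rollup posting 500 MB/day:
calldata = monthly_da_cost(500, "l1_calldata")   # $37,500/month
blobs = monthly_da_cost(500, "eth_blobs")        # ~$120/month
celestia = monthly_da_cost(500, "celestia_avg")  # ~$30/month
```

At that volume the blob-vs-Celestia difference is pocket change, which is exactly why the "use the defaults" advice later in this post holds until you hit serious scale.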

So Brian’s right—these aren’t incremental improvements. We’re talking order-of-magnitude cost reductions.

But the Operational Complexity Is Also Real

Steve, your concerns about operational overhead resonate with me as someone who builds data infrastructure. Here’s what I’ve observed:

Monitoring Challenges:

  • Monolithic L1: 1 RPC endpoint, 1 block explorer, 1 set of metrics
  • Modular L2: 3-5 different services to monitor (execution, DA layer, settlement, bridge)
  • Our monitoring stack went from 200 lines of config to 800+ lines
  • Alert fatigue: now getting notifications from multiple layers

Debugging Distributed Issues:
Last week I tracked down a transaction that succeeded on the execution layer, but its data wasn’t available on Celestia for 45 minutes because of a DA batching delay. The user saw it confirmed but couldn’t verify it independently. It took me three hours to trace through multiple layers.

With a monolithic chain, that’s a 15-minute investigation: the transaction is in block X, and here’s the data.

Data Pipeline Complexity:
I index blockchain data for analytics. My old Ethereum indexer:

  • Connect to RPC
  • Subscribe to new blocks
  • Parse events, update database
  • ~500 lines of code

My new modular L2 indexer:

  • Connect to execution RPC
  • Subscribe to sequencer for pending txs
  • Poll DA layer for data availability
  • Watch settlement layer for fraud proof windows
  • Handle reorgs across multiple layers
  • ~2,100 lines of code + 3 external dependencies

That’s 4× more complex for essentially the same end result.

The Fragmentation Problem Is Getting Worse

Here’s a comparison I put together of DA layer standardization:

Different APIs for the same operation (posting 1MB of data):

  • Celestia: blob.Submit with namespace + commitment
  • EigenDA: disperseBlob with quorum parameters
  • Avail: submit_data with app_id
  • ETH blobs: blobTransaction with versioned hashes

Each one has different:

  • Data formatting requirements
  • Availability guarantees (finality times range from 12s to 2min)
  • Cost structures (per-byte vs per-blob vs per-KB)
  • Query interfaces for data retrieval
  • Proof mechanisms for availability verification

Building abstraction over this is like trying to create a universal database driver that works across SQL, NoSQL, graph DBs, and time-series stores. Technically possible, but you lose the specific optimizations that make each one valuable.
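To illustrate the universal-driver problem, here’s a minimal sketch with two hypothetical stub clients. The native method names are loosely modeled on the real APIs but simplified; nothing here is a real SDK:

```python
from typing import Protocol

class DALayer(Protocol):
    """Lowest-common-denominator interface an abstraction can offer."""
    def post(self, data: bytes) -> str: ...  # returns a retrieval handle

class CelestiaStub:
    # Native API takes a namespace; the adapter has to hardcode one.
    def submit_blob(self, namespace: str, blob: bytes) -> str:
        return f"celestia:{namespace}:{len(blob)}"

    def post(self, data: bytes) -> str:
        return self.submit_blob("app-ns", data)

class EigenDAStub:
    # Native API takes quorum parameters; the adapter has to pick one.
    def disperse_blob(self, blob: bytes, quorum: int) -> str:
        return f"eigenda:q{quorum}:{len(blob)}"

    def post(self, data: bytes) -> str:
        return self.disperse_blob(data, quorum=1)

def publish(layer: DALayer, data: bytes) -> str:
    # Application code only sees `post`, losing layer-specific knobs
    # like namespaces or quorum tuning. That's the trade-off.
    return layer.post(data)
```

The common interface works, but notice what the adapters had to bury: exactly the layer-specific optimizations that make each DA layer valuable.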

Where I Think This Is Headed (Data-Driven Prediction)

I’ve been tracking adoption metrics across 85 different rollups and L2s. Here’s what the data suggests:

Current State (March 2026):

  • 22% using dedicated DA layers (Celestia, Avail, EigenDA)
  • 68% using Ethereum blobs
  • 10% still using L1 calldata (mostly older deployments)

Growth Trends:

  • Celestia adoption growing 15% month-over-month
  • EigenDA slowly gaining traction (8% MoM)
  • ETH blob usage stable (it’s the safe default)

Cost Efficiency by Type:

  • Dedicated DA layers: $0.002-0.005 per MB
  • ETH blobs: $0.008-0.015 per MB (varies with blob congestion)
  • L1 calldata: $2.00-3.50 per MB

But here’s the interesting part: 89% of new rollups in Q1 2026 launched using whatever the framework defaults to.

Steve, your plan to just use Base’s defaults? That’s what nearly 9 out of 10 teams are doing. The data supports your decision.

Three Different Realities

After looking at this data, I think there are three distinct use cases with different optimal strategies:

1. Infrastructure Projects (Brian’s World):

  • Building the rollup framework itself or DA layer
  • Need to optimize every basis point of cost
  • Have specialized teams and long timelines
  • Verdict: Modular architecture is essential
  • Cost savings justify complexity: Yes, 100%

2. Well-Funded dApp Projects:

  • Raised Series A+, have 5+ blockchain engineers
  • Processing high transaction volumes (1M+ txs/month)
  • Can afford to optimize infrastructure
  • Verdict: Modular architecture worth evaluating
  • Cost savings justify complexity: Probably yes if at scale

3. Early-Stage Startups (Steve’s World):

  • Pre-seed to seed stage, small team
  • Focused on product-market fit
  • Transaction volume uncertain
  • Verdict: Use framework defaults (Base, Optimism, Arbitrum)
  • Cost savings justify complexity: Not yet

The Abstraction Gap (And When It Closes)

I built a complexity index tracking how many distinct systems you need to understand for a modular deployment:

Q4 2024: Average complexity score: 8.2/10
Q1 2025: 7.8/10
Q4 2025: 6.9/10
Q1 2026: 6.1/10

It’s improving, but slowly. At current rate:

  • Mid-2027: Complexity score ~5/10 (manageable with good docs)
  • Mid-2028: Complexity score ~4/10 (comparable to deploying to cloud)

Brian’s 12-18 month timeline for robust abstraction might be optimistic. My data suggests more like 18-24 months.

What the Data Says About Migration

Steve asked about migration paths. I analyzed 15 rollups that changed DA layers:

Average migration timeline: 4-6 months
Required downtime: 0-8 hours (varies by approach)
Engineering cost: 3-5 engineer-months
Biggest risk: Data availability gaps during transition

Only 3 out of 15 migrations went smoothly. The other 12 had at least one significant issue (data loss, extended downtime, cost overruns, user confusion).

So yeah, Steve’s right—migration isn’t trivial. It’s a major project with real risks.

My Take: Different Answers for Different Teams

If you’re building infrastructure or high-throughput dApps:
The cost savings are real and massive. At scale (10M+ transactions/month), modular DA layers will save you $100K-500K annually compared to Ethereum blobs, and $5M+ compared to L1 calldata.

Complexity is real but manageable if you have specialized DevOps/infrastructure engineers.

If you’re a startup finding product-market fit:
Steve’s right—use the defaults. Base, Optimism, or Arbitrum with whatever DA they use.

The complexity isn’t worth it until you hit scale. Your first 100K users won’t care if DA costs are $5K/month vs $2K/month. They care if your app works.

The Missing Piece:
What we really need is an open-source monitoring/debugging toolkit for modular stacks. Something that:

  • Aggregates metrics across all layers
  • Traces transactions through the full stack
  • Provides unified alerting
  • Offers cost analytics across DA options

I’m actually thinking about building this. If anyone wants to collaborate, DM me.

Bottom Line (Based on Data, Not Ideology)

  • Cost savings: Real, massive, undeniable (50-100× in many cases)
  • Operational complexity: Also real, measurable, currently 4-6× more complex
  • Adoption trajectory: Growing steadily, but framework defaults dominate
  • Abstraction timeline: 18-24 months for easy mode tooling
  • Migration risk: Significant—not a trivial decision

Lisa asked if cost savings are worth the complexity. The data says: it depends on your scale, team size, and timeline.

For infrastructure projects: absolutely yes.
For startups: probably not yet.
For mid-stage projects: crunch the numbers on your specific transaction volume.

The architecture is sound. The cost benefits are real. But the tooling isn’t quite there yet for teams without specialized infrastructure expertise.

We’re in the awkward adolescent phase. It works, but it’s not elegant yet.

This is a fascinating discussion, and I want to add a perspective from the cryptography/ZK side that might reframe how we think about modular complexity.

As someone who works on zero-knowledge proof systems, I see modular blockchain architecture not just as an engineering trade-off but as a natural fit for how ZK-rollups fundamentally work.

Why ZK and Modular Architecture Are Natural Allies

Here’s what people often miss: ZK-rollups were already modular before “modular blockchains” became a buzzword.

Think about what a ZK-rollup does:

  1. Execution happens off-chain (the rollup applies transactions to produce state transitions)
  2. Proof generation happens separately (computationally intensive, often on specialized hardware)
  3. Verification happens on the settlement layer (L1 checks a compact proof)
  4. Data needs to be available somewhere for reconstruction

We’ve been separating these concerns since day one. The shift to dedicated DA layers like Celestia is just making explicit what was already implicit in the architecture.

The Performance Unlock You’re Not Seeing Yet

Mike’s data on cost savings is spot-on, but there’s another dimension: proof generation performance.

With modular DA layers, ZK-rollups can:

1. Post proof data and transaction data to different layers:

  • Proofs (small, ~200KB) go to L1 for settlement
  • Transaction data (large, MBs) goes to Celestia/DA layer
  • This was impossible with monolithic L1-only designs

2. Batch more aggressively:
Celestia’s planned 1GB blocks mean ZK-rollups can accumulate larger batches before posting, which dramatically improves proof efficiency. The math works like this:

  • Generating 1 proof for 10,000 transactions: ~60 seconds
  • Generating 10 proofs for 1,000 transactions each: ~600 seconds

Bigger batches = amortized fixed costs across more transactions = cheaper per-transaction costs.

3. Experiment with different DA guarantees:
Some ZK applications need immediate data availability (DeFi, payments). Others can tolerate 5-10 minute delays (social, gaming). Modular stacks let us choose DA layers with different trade-offs for different use cases.
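The amortization behind the batching numbers in point 2 is simple, assuming (as in the example above) a roughly fixed per-proof cost:

```python
# Proving time is dominated by a fixed per-proof cost, so bigger
# batches mean less proving time per transaction. The 60s figure is
# the example from the post, not a measured benchmark.

FIXED_PROOF_SECONDS = 60.0

def proving_seconds_per_tx(batch_size: int) -> float:
    """Amortized proving time per transaction in a batch."""
    return FIXED_PROOF_SECONDS / batch_size

one_big_batch = proving_seconds_per_tx(10_000)  # 0.006 s/tx
ten_small_batches = proving_seconds_per_tx(1_000)  # 0.06 s/tx
# Same 10,000 transactions either way, but the small-batch approach
# spends 10x the total proving time: 10 proofs x 60s vs 1 proof x 60s.
```

Cheaper DA (bigger affordable batches) therefore feeds directly into cheaper proving, which is the compounding effect the post is describing.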

The Privacy Dimension

Steve, you asked what users care about. Here’s one thing they’re starting to care about: privacy.

Modular architecture enables privacy in ways monolithic chains can’t:

Different layers can have different privacy guarantees:

  • Execution layer: fully encrypted state (like Aztec or Midnight)
  • Proof layer: ZK proofs reveal nothing about transactions
  • DA layer: only encrypted blobs are posted
  • Settlement layer: verifies proofs without seeing transaction details

This is only possible because the layers are separated. On a monolithic transparent chain, you can’t have selective privacy—everything is public or everything is private.

Addressing Complexity from a ZK Perspective

Brian is right that complexity gets abstracted. Let me give you a concrete example from my work:

2021: Building a simple ZK payment system required understanding:

  • Circuit design (R1CS, Plonk, or Halo2)
  • Proof generation optimization
  • Solidity verifier contracts
  • Gas optimization for on-chain verification
  • Merkle tree management for state

2026: Using modern ZK frameworks, you specify high-level logic and the framework handles:

  • Circuit compilation (auto-generated from high-level language)
  • Proof system selection (framework chooses based on requirements)
  • Verifier deployment (one-click deployment to any L2)
  • DA layer selection (configurable, abstracted)

This happened in 5 years. The same will happen for modular DA layer complexity.

The Tooling Gap (And Who’s Building It)

Mike mentioned wanting to build monitoring tools. Zoe here—I’m interested in collaborating.

What we need for ZK-specific modular stacks:

  • Proof status tracking across layers: “proof generated → posted to L1 → verified → data available on Celestia”
  • Performance analytics: proof generation time, verification gas costs, DA posting costs
  • State reconstruction tools: given DA layer data, reconstruct rollup state for verification
  • Cost modeling: predict total costs (proving + verification + DA) for different batch sizes

This is the infrastructure layer that will make Steve’s “Vercel for Web3” vision real.
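As a sketch of the proof-status-tracking idea, here’s a toy state machine over the stages in that flow. The stage names mirror the bullet list; everything network-related is omitted and all names are illustrative:

```python
# Stages from the tracking flow above, in order.
STAGES = ("generated", "posted_to_l1", "verified", "data_available")

class ProofTracker:
    """Tracks each batch's progress through the proof lifecycle,
    rejecting out-of-order transitions (a common cross-layer bug)."""

    def __init__(self) -> None:
        self.stage: dict[str, int] = {}  # batch_id -> index into STAGES

    def advance(self, batch_id: str, stage: str) -> None:
        idx = STAGES.index(stage)
        prev = self.stage.get(batch_id, -1)
        if idx != prev + 1:
            raise ValueError(f"{batch_id}: {stage!r} out of order")
        self.stage[batch_id] = idx

    def is_final(self, batch_id: str) -> bool:
        return self.stage.get(batch_id) == len(STAGES) - 1
```

A real tool would feed these transitions from L1 events, DA layer queries, and prover logs; the value is having one place that knows which stage each batch is stuck in.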

Why I’m Optimistic (Despite Current Pain)

Lisa, you asked if this is sustainable. From a ZK research perspective, absolutely yes.

Here’s what’s coming in the next 12-24 months:

1. Proof aggregation across rollups:
Multiple ZK-rollups can aggregate their proofs into a single proof for L1 verification. This only works with modular DA—each rollup posts its data to Celestia, generates a local proof, and recursive aggregation combines them.

2. zkVM maturity:
zkVMs (RiscZero, SP1, Valida) are making ZK proving as easy as writing Rust. You won’t need to understand circuits—just write normal code, compile to ZK. This abstracts away the hardest part.

3. Hardware acceleration:
Proof generation moving to GPUs and dedicated hardware (FPGAs, ASICs). This turns a 60-second proving time into 5 seconds. Suddenly batch sizes can be smaller (better UX) while keeping costs low.

4. Standardized interfaces:
The community is converging on shared interfaces (Ethereum’s EIP-4844 blob format is just the start). We’ll have similar standards for proof systems and DA layer APIs.

The Migration Path for ZK Projects

Steve worried about vendor lock-in. For ZK-rollups, migration is actually easier than for optimistic rollups:

Why:

  • ZK proofs are self-contained (don’t depend on fraud proof windows)
  • State reconstruction only needs DA layer data (not execution layer history)
  • Can run dual posting (new DA + old DA) during transition for safety
  • Users don’t need to do anything (migration is purely infrastructure-side)

I helped a ZK-rollup migrate from ETH calldata to Celestia last quarter:

  • Timeline: 6 weeks (not 4-6 months)
  • Downtime: 0 (ran dual posting for 2 weeks)
  • Issues: 1 minor (batching configuration needed tuning)
  • Cost savings: 94% reduction in DA costs

For ZK systems, modular migration is surprisingly tractable.
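The dual-posting pattern from that migration is simple to sketch. Here’s a toy version with dicts standing in for the two DA clients (illustrative only; a real migration deals with commitments, retries, and proofs of availability):

```python
class DualPoster:
    """During a DA migration, write every batch to both the old and
    new layers, and read from the new layer with fallback."""

    def __init__(self, old_da: dict, new_da: dict) -> None:
        self.old_da = old_da  # handle -> data (stub for old DA client)
        self.new_da = new_da  # handle -> data (stub for new DA client)

    def post(self, handle: str, data: bytes) -> None:
        # Write to both layers so either can serve state reconstruction.
        self.old_da[handle] = data
        self.new_da[handle] = data

    def fetch(self, handle: str) -> bytes:
        # Prefer the new layer; fall back for batches posted pre-cutover.
        if handle in self.new_da:
            return self.new_da[handle]
        return self.old_da[handle]
```

Because ZK state reconstruction only needs the DA data, running this overlap for a couple of weeks gives a zero-downtime cutover, which matches the migration timeline described above.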

Different Philosophies for Different Layers

One thing Brian touched on: we don’t need one-size-fits-all.

From a cryptographic perspective, I see the ecosystem evolving like this:

Privacy-first applications: Use dedicated DA layers with encryption (Celestia + private namespaces, or Avail with encrypted data)

High-value settlements: Use Ethereum blobs or L1 calldata (maximum security, worth the cost)

High-throughput gaming/social: Use the cheapest DA layer available (Celestia, eventually Fibre for terabyte-scale)

Hybrid applications: Post critical data to L1, bulk data to Celestia

The modularity lets us optimize for different security/cost/performance profiles. That’s a feature, not a bug.

What Founders Should Actually Do (ZK Edition)

Steve, if you’re building a ZK application, here’s my practical advice:

Phase 1 (MVP): Use a ZK framework (Starknet, zkSync, Polygon zkEVM, Scroll) with default DA settings. Focus on product.

Phase 2 (Scale): When you hit 100K+ transactions/month, evaluate DA costs. If they’re under $5K/month, don’t optimize yet.

Phase 3 (Optimize): At 1M+ transactions/month, the cost savings of custom DA layer selection pay for the engineering time. Hire or consult with a ZK infrastructure expert.

Don’t prematurely optimize. But know that the optimization path exists when you need it.

The Endgame: Privacy + Scalability + Modularity

Here’s why I’m bullish long-term:

Modular architecture is the only way to achieve:

  • Privacy (different layers for encrypted data, proofs, settlement)
  • Scalability (dedicated DA layers for throughput, ZK for compression)
  • Flexibility (choose layers based on application requirements)
  • Cost efficiency (optimize each layer independently)

Monolithic chains force you to pick two at most. Modular lets you have all four.

Yes, it’s more complex right now. Yes, the tooling is immature. But the fundamental architecture is sound, and the trajectory is clear.

Final Thought

Lisa asked: “Are we making Web3 more or less accessible?”

Short term (2026): Less accessible for small teams
Medium term (2027-2028): Comparable to current complexity (with better tools)
Long term (2029+): More accessible because applications can choose exactly the layers they need instead of compromising on a monolithic chain

The awkward adolescent phase Mike mentioned? That’s where every major architectural shift goes. We’re building the abstractions right now.

I’m happy to answer specific questions about ZK proof systems, privacy trade-offs, or DA layer cryptographic assumptions. And Mike, seriously, let’s talk about that monitoring toolkit—I have some ideas about proof verification dashboards.