96% of Blockchain RPC Calls Are Reads—Are We Building Databases or Decentralized Ledgers?

I’ve been analyzing RPC traffic patterns for the past few months, and I recently came across some data from Syndica that really made me pause and think about what we’re actually building here.

The stat that caught my attention: 96% of all calls made to Solana nodes are read operations. For every transaction submission (write), there are 25 read calls—balance checks, transaction lookups, account queries, program state reads.

This isn’t unique to Solana. When I looked at Ethereum and other chains, the pattern holds. The vast majority of blockchain infrastructure usage is reading data, not writing it.

So what does this mean?

On one hand, you could argue we’ve built the world’s most expensive, slowest database. Traditional databases are optimized for read-heavy workloads with techniques like caching, indexing, and read replicas, and a well-tuned cluster serves millions of queries per second at millisecond latency.

Blockchains? We’re celebrating thousands of TPS and sub-second finality.

But here’s where it gets interesting: maybe that’s exactly the point.

The Infrastructure Response

What I find fascinating is how the industry is responding to this reality. Syndica built Sig, a read-optimized Solana validator client written in Zig specifically because they recognized this 96% read pattern. Early benchmarks show 50-70% performance improvements, with some operations running 1.5-4x faster than existing implementations.

RPC providers are splitting into specialized services:

  • Read-optimized nodes: Archive nodes, fast-path routing for balance checks and state queries
  • Write-optimized infrastructure: Focused on transaction submission, mempool optimization, MEV protection
  • Hybrid approaches: Trying to balance both
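This split can be sketched as a thin router in front of your RPC calls: classify each JSON-RPC method as a read or a write, and send it to the matching tier. The method names below are real Solana/Ethereum methods, but the routing policy itself is a hypothetical illustration, not any provider’s actual API.

```typescript
// Sketch: route JSON-RPC calls to a read-optimized or write-optimized
// endpoint based on method name. The classification policy here is an
// illustrative assumption, not any provider's documented behavior.

const WRITE_METHODS = new Set([
  "sendTransaction",        // Solana transaction submission
  "eth_sendRawTransaction", // Ethereum transaction submission
]);

type Route = "read" | "write";

function routeRpcCall(method: string): Route {
  // Everything that isn't a transaction submission is a state query.
  return WRITE_METHODS.has(method) ? "write" : "read";
}

// Balance checks go to the read tier, submissions to the write tier.
console.log(routeRpcCall("getBalance"));             // "read"
console.log(routeRpcCall("eth_sendRawTransaction")); // "write"
```

In practice the read tier could be a cache-fronted fleet of stateless query nodes, while the write tier stays pinned to full consensus-participating infrastructure.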

The Database Question

Here’s my question for the community: If blockchain usage is 96% reads, are we just building really slow, really expensive databases with consensus as a feature?

Or is the correct mental model something else entirely? Maybe blockchains are less like databases and more like:

  • Public ledgers where write scarcity is the feature, and reads are meant to be cheap/abundant
  • Shared state machines where consensus on writes is valuable, but reads are just local queries
  • Verification systems where anyone can independently verify writes, making reads inherently cheap

What I’m watching

As a data engineer, I’m tracking a few developments:

  1. Client specialization: Will we see more read-optimized clients like Sig? What about write-optimized consensus clients?

  2. Pricing models: Current RPC pricing often charges for read volume, but if reads are 96% of usage and should be cheap, does this pricing make sense?

  3. Infrastructure architecture: Should we separate read and write infrastructure entirely? What are the security implications?

  4. Developer experience: Most devs don’t think about read vs write optimization when building dApps—should they?

My take (tentative)

I’m increasingly thinking that the 96% read statistic isn’t a bug, it’s a feature. The value proposition of blockchain isn’t fast reads or writes—it’s verifiable, uncensorable writes with abundant, permissionless reads.

Traditional databases optimize for both read and write performance because both are expensive. Blockchains deliberately make writes expensive (consensus overhead) while making reads cheap (anyone can run a node).

The question is: are we building infrastructure that acknowledges this reality? Or are we still trying to make blockchains into better databases?

What do you think? Am I overthinking this? Is the 96% read pattern actually revealing something fundamental about blockchain architecture, or is it just a technical detail we need to optimize?


Data sources: Syndica’s Sig announcement, Solana Compass, personal analysis of RPC traffic patterns

Mike, this is actually one of my favorite topics to discuss because it reveals how many people fundamentally misunderstand what blockchains are designed to do.

The 96% read ratio isn’t a bug—it’s a feature, and it’s exactly what we should expect.

Blockchains Were Always Designed This Way

Think about the original Bitcoin whitepaper. Satoshi wasn’t trying to build a fast database—he was building a system where writes (transactions) are intentionally expensive because they require global consensus, but reads are cheap because anyone can verify the chain independently.

This is the entire point: write scarcity, read abundance.

The CQRS Pattern in Traditional Systems

In traditional distributed systems, we have a well-established pattern called CQRS (Command Query Responsibility Segregation). The idea is simple: separate your write path (commands) from your read path (queries) because they have different performance characteristics and optimization strategies.

Blockchains naturally implement CQRS at the protocol level:

  • Write path (commands): Consensus layer, validators, expensive, distributed, slow
  • Read path (queries): Local state queries, cheap, fast, no consensus needed

The 96% read ratio shows that this separation is working exactly as intended. Users submit transactions (writes) that go through consensus, then they query the results (reads) many times without needing validator agreement.

Why Consensus Makes Writes Expensive

Here’s the key insight: consensus is expensive by necessity, not by accident.

Every write to a blockchain requires:

  1. Broadcasting to the network
  2. Validation by multiple nodes
  3. Inclusion in a block
  4. Finalization through consensus
  5. Replication across thousands of nodes

This is computationally expensive, network-intensive, and time-consuming. But it’s the only way to achieve Byzantine Fault Tolerance in a decentralized system.

Reads, on the other hand, require none of this. You query your local node’s state, and you’re done. No coordination needed.

Blockchains vs Databases: Wrong Comparison

When people say “blockchains are slow databases,” they’re making a category error. Blockchains aren’t databases—they’re consensus mechanisms with database-like properties.

A better comparison:

  • Database: Optimized for read/write performance, assumes trusted coordinator
  • Blockchain: Optimized for trust minimization, performance is secondary

If you need fast reads and writes in a trusted environment, use PostgreSQL. If you need verifiable, uncensorable writes that anyone can read, use a blockchain.

The Sig Innovation Makes Perfect Sense

Syndica’s Sig client is brilliant precisely because it acknowledges this reality. If 96% of operations are reads and reads don’t require consensus, then optimize the hell out of read performance without touching the consensus layer.

This is the right architecture:

  • Consensus clients focus on write path: block production, validation, finalization
  • Read-optimized clients like Sig focus on query performance: account lookups, state reads, transaction history

In the Ethereum ecosystem, we’ve been moving in this direction with client diversity and specialized infrastructure.

Security Implications

One thing to watch: separating read and write infrastructure creates new attack vectors. If your read nodes are compromised, they could serve false data even though the chain itself is secure.

This is why light clients and verification are crucial. Users should be able to verify read responses cryptographically without trusting the RPC provider.
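The verification idea can be made concrete with a minimal Merkle proof check: given a state root you trust (e.g. from a block header), a read response can be validated without trusting the RPC provider. This sketch uses SHA-256 and a flat binary tree for illustration; real chains use their own hash functions and trie layouts (Ethereum, for example, uses Keccak over a Merkle-Patricia trie).

```typescript
// Sketch: verifying a read response against a trusted state root with a
// Merkle proof. SHA-256 and the binary-tree layout are simplifying
// assumptions for illustration only.
import { createHash } from "node:crypto";

const sha256 = (data: Buffer): Buffer =>
  createHash("sha256").update(data).digest();

// Each proof step supplies the sibling hash and which side it sits on.
interface ProofStep { sibling: Buffer; siblingOnLeft: boolean; }

function verifyMerkleProof(leaf: Buffer, proof: ProofStep[], root: Buffer): boolean {
  let hash = sha256(leaf);
  for (const step of proof) {
    hash = step.siblingOnLeft
      ? sha256(Buffer.concat([step.sibling, hash]))
      : sha256(Buffer.concat([hash, step.sibling]));
  }
  return hash.equals(root);
}

// Two-leaf tree: root = H(H(a) || H(b)). A correct leaf verifies; a
// tampered RPC response would fail the same check.
const a = Buffer.from("balance:42");
const b = Buffer.from("balance:7");
const root = sha256(Buffer.concat([sha256(a), sha256(b)]));
console.log(verifyMerkleProof(a, [{ sibling: sha256(b), siblingOnLeft: false }], root)); // true
```

This is essentially what light clients do: they track headers (and thus state roots) via consensus, then verify individual reads cryptographically.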

The takeaway: The 96% read statistic validates that blockchain architecture is working as designed. We should embrace specialization—consensus clients for writes, optimized clients for reads, and cryptographic verification to tie them together securely.

What we shouldn’t do is try to make blockchains into fast databases. That’s not the design goal, and it misses the entire value proposition.

Brian makes great technical points, but I want to bring this back to what actually matters for people building products: cost, performance, and user experience.

I’m running a Web3 startup, and here’s my reality check on the 96% read statistic.

The Infrastructure Cost Problem

Our current RPC provider charges us based on total request volume. Last month:

  • 2.3 million requests
  • ~2.2 million were reads (balance checks, NFT metadata, transaction history)
  • ~100k were writes (actual transactions)

We paid the same rate for reads and writes.

If reads are 96% of our traffic and should theoretically be cheap (no consensus needed), why are we paying the same price for them? That’s like charging the same for viewing a website as for posting content.
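To make the gap concrete, here is Steve’s own traffic run through two tariffs: the flat per-request model he’s on today versus a hypothetical split tariff where reads cost a tenth of writes. All prices are made-up illustration values, not any provider’s actual rates.

```typescript
// Sketch: monthly bill under flat per-request pricing vs. a hypothetical
// read/write split. Prices are illustrative assumptions only.

function monthlyCost(reads: number, writes: number,
                     readPrice: number, writePrice: number): number {
  return reads * readPrice + writes * writePrice;
}

const reads = 2_200_000;  // Steve's read volume
const writes = 100_000;   // Steve's write volume

// Flat pricing: every request billed the same ($0.00001 each, assumed).
const flat = monthlyCost(reads, writes, 0.00001, 0.00001);

// Split pricing: reads at one tenth of the write rate.
const split = monthlyCost(reads, writes, 0.000001, 0.00001);

console.log(flat.toFixed(2));  // "23.00"
console.log(split.toFixed(2)); // "3.20"
```

Even with these toy numbers, pricing reads at their actual marginal cost cuts the bill dramatically, because reads dominate the volume.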

What Users Actually Care About

Here’s what our users complain about:

  1. Slow wallet balance updates (read operation)
  2. NFT images not loading (read operation)
  3. Transaction history taking forever (read operation)
  4. Pending transactions feeling scary (write operation)

Three out of four pain points are read performance. When reads are slow, the entire app feels broken—even if our smart contracts are lightning fast.

The Pricing Model Is Backwards

Brian’s right that consensus is expensive and reads should be cheap. But the current RPC pricing models don’t reflect this reality:

  • Alchemy, Infura, QuickNode: Tiered pricing based on total compute units or requests
  • Dedicated nodes: Flat monthly fee regardless of read/write ratio
  • Some providers: Actually charge more for archive node reads than for transaction submissions

If 96% of operations are reads that don’t require consensus, shouldn’t there be a massive price difference between read and write operations?

What I Want as a Builder

Honestly, I don’t care whether we call blockchains “databases” or “consensus mechanisms.” What I care about:

  1. Can I build a product that feels as fast as Web2?
  2. Will my infrastructure costs scale linearly or exponentially?
  3. Can I cache aggressively without sacrificing correctness?

Right now, the answers are:

  1. No (reads are still slower than traditional APIs)
  2. It depends (some providers penalize high read volumes)
  3. Maybe? (not sure when cached data becomes stale)

The Opportunity

Mike, you mentioned Sig showing 50-70% performance improvements for reads. This is huge from a product perspective.

If RPC providers adopted read-optimized infrastructure like Sig:

  • Our wallet UI could load 2x faster
  • We could reduce infrastructure costs significantly
  • User experience would improve immediately

But here’s my question: Why isn’t this the default? If 96% of blockchain usage is reads, why did it take until 2026 for someone to build a read-optimized client?

What Would Actually Help

I’d love to see:

  1. Tiered pricing that reflects the read/write split: Charge more for writes (which require consensus) and less for reads (which are local queries)

  2. Read-optimized RPC endpoints: Let me send balance checks to a fast read-only node and transactions to a consensus node

  3. Clear caching guidelines: When can I safely cache data? How do I know when to invalidate the cache?

  4. Developer tools that make this obvious: Most devs don’t think about read vs write optimization—the tools should make it easy to do the right thing
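On the caching-guidelines point: one pattern that would answer "when can I safely cache?" is keying cache entries by the block height they were read at, so staleness becomes an explicit, checkable property rather than a guess. This is a minimal sketch of that idea, with hypothetical names throughout.

```typescript
// Sketch: a cache keyed by block height. An entry is served only if it
// was read at a height the caller still considers fresh, and any
// user-initiated write invalidates the affected keys. Illustrative only.

interface CacheEntry<T> { value: T; blockHeight: number; }

class BlockCache<T> {
  private store = new Map<string, CacheEntry<T>>();

  set(key: string, value: T, blockHeight: number): void {
    this.store.set(key, { value, blockHeight });
  }

  // Serve the entry only if it is at least as fresh as minHeight.
  get(key: string, minHeight: number): T | undefined {
    const entry = this.store.get(key);
    return entry && entry.blockHeight >= minHeight ? entry.value : undefined;
  }

  // Call after a write that touches this key (e.g. a transfer).
  invalidate(key: string): void {
    this.store.delete(key);
  }
}

const cache = new BlockCache<string>();
cache.set("balance:0xabc", "1.5 ETH", 18_234_567);
console.log(cache.get("balance:0xabc", 18_234_567)); // "1.5 ETH"
console.log(cache.get("balance:0xabc", 18_234_999)); // undefined (stale)
```

The design choice is that freshness is defined in chain terms (block height) instead of wall-clock TTLs, so the app can decide per-feature how stale is acceptable.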

The Bottom Line

Brian’s technical explanation makes sense, but the infrastructure and pricing haven’t caught up to the reality that 96% of operations are reads.

For builders, this creates a weird situation where we’re paying for expensive consensus on operations that don’t need consensus, and we can’t optimize our way out of it because the RPC providers treat all requests the same.

The 96% read statistic should be an opportunity to make blockchain infrastructure 10x cheaper and faster. Instead, it feels like we’re still paying database prices for consensus we’re not using.

Am I missing something? Is there a provider that’s already solving this with specialized read/write pricing?

This conversation is fascinating because the read/write dynamic looks completely different when you zoom out to Layer 2 infrastructure. Let me share what we’re seeing in the L2 world.

L2s Amplify the Read/Write Split

On Ethereum mainnet, you might see a 96% read ratio. On Layer 2s, it’s even more extreme—often 98-99% reads.

Here’s why: L2s batch hundreds or thousands of transactions into a single L1 write. So from the L2’s perspective:

  • Writes on L2: Cheap, fast, frequent (thousands per second)
  • Writes to L1: Expensive, batched, rare (every few minutes)
  • Reads on L2: Extremely common (users checking balances, querying state)

The architecture naturally creates even more read-heavy traffic because the consensus happens locally on the L2 sequencer, but users are constantly querying the state.

How L2s Handle This Differently

We’ve actually designed L2 infrastructure to optimize for this reality:

1. Separate Read and Write Paths

Most L2s already separate infrastructure:

  • Sequencer: Handles transaction ordering and batching (write path)
  • RPC nodes: Serve read queries from local state (read path)
  • Batch submitter: Posts batches to L1 (expensive write path)

This is exactly the specialization Brian mentioned, but built into the architecture from day one.
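The economics of the batch submitter are worth spelling out: one expensive L1 write is amortized across every transaction in the batch. The gas and price figures below are illustrative assumptions, not measured values from any rollup.

```typescript
// Sketch: amortizing one L1 batch posting across the L2 transactions it
// contains. Gas amounts and gas price are illustrative assumptions.

function perTxL1Cost(batchSize: number, l1BatchGas: number, gasPriceGwei: number): number {
  // Cost in gwei of the shared L1 write, divided across the batch.
  return (l1BatchGas * gasPriceGwei) / batchSize;
}

// A 200k-gas batch posting at 20 gwei, shared by 1,000 L2 transactions:
console.log(perTxL1Cost(1000, 200_000, 20)); // 4000 gwei per L2 tx
// Unbatched, each transaction would bear the full 4,000,000 gwei alone.
```

This is why the L2 write path stays cheap for users even though the L1 settlement write remains expensive: the cost per transaction falls linearly with batch size.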

2. Optimistic vs ZK Rollup Read Performance

There’s an interesting difference here:

Optimistic Rollups (Optimism, Arbitrum):

  • Reads are instant because state is assumed valid
  • No proof verification needed for queries
  • Extremely fast read performance

ZK Rollups (zkSync, StarkNet):

  • Reads still don’t require proof verification
  • State is cryptographically proven valid on L1
  • Similar read performance to Optimistic rollups

In both cases, reads are much faster than on L1 because the state is local to the L2 node.

3. The Data Availability Layer

Here’s where it gets really interesting for reads: Data Availability layers like Celestia or EigenDA could make historical reads even cheaper.

If L2s post their transaction data to a DA layer instead of Ethereum L1:

  • Lower costs for data storage
  • Potentially faster access to historical state
  • Cheaper archive node operations

This could make the read-optimized infrastructure Steve wants economically viable.

The Sequencer Centralization Trade-off

One thing to watch: most L2s currently have centralized sequencers. This makes the read/write split even more pronounced:

  • Writes: Centralized sequencer orders transactions (fast but centralized)
  • Reads: Anyone can run an RPC node and verify the state (decentralized)

As L2s move toward decentralized sequencers, the write path will become more expensive and slower. This might actually increase the read/write performance gap even further.

Performance Data from the Field

In our testnet for a new rollup implementation, we’re seeing:

  • Read latency: 10-50ms (local state query)
  • Write latency: 100-500ms (sequencer confirmation)
  • L1 settlement: 2-10 minutes (batch posting)
  • Read/write ratio: 98.7% reads

This matches the industry pattern. Users make tons of read queries between actual transaction submissions.

Does Specialized Read Infrastructure Make Sense for L2s?

Mike asked whether we’ll see more read-optimized clients. For L2s, I think the answer is yes, but differently:


  1. L2 RPC nodes don’t need consensus logic: They just serve state queries from the sequencer’s output
  2. Historical data can be pruned: Most queries are for recent state
  3. Caching is safer: L2 state updates in predictable batches, making cache invalidation easier

In other words, L2 RPC providers could be much simpler and cheaper than L1 providers—they’re basically read-only state servers.

The L2 Opportunity

Mike’s right that the 96% read statistic reveals something fundamental. For L2s, it suggests:

  1. RPC providers should charge almost nothing for reads: The infrastructure is cheap
  2. Transaction fees should fund the entire system: Writes (consensus) are where the cost is
  3. Anyone should be able to run a cheap read-only node: No consensus participation required

This is actually how we’re thinking about our infrastructure pricing: free or very cheap reads, revenue from transaction fees.

The Cross-Chain Complication

One thing that complicates this: cross-chain reads introduce consensus requirements.

If you want to trustlessly read Ethereum state from an L2 (or vice versa), you need:

  • Light client proofs
  • Merkle proof verification
  • State root verification

Suddenly, what looks like a simple read operation requires cryptographic verification and becomes more expensive.

My Take

The 96% read statistic is telling us that blockchain infrastructure should look more like:

  • Write path: Expensive, consensus-driven, decentralized, slow
  • Read path: Cheap, verification-driven, easily parallelized, fast

L2s are already built this way. The question is whether L1 infrastructure will adopt similar separation, or if we’ll keep treating all operations equally.

Steve’s frustration about paying the same price for reads and writes makes perfect sense from an L2 engineer’s perspective. The costs are fundamentally different, and the pricing should reflect that.

My prediction: In 2-3 years, RPC providers will offer completely separate pricing for reads (nearly free) and writes (expensive). The 96% statistic is too stark to ignore, and competitive pressure will force the market to acknowledge it.

I love this discussion because it highlights something we deal with constantly in product design: the user has no idea whether they’re doing a read or a write, and they shouldn’t have to care.

But the 96% read statistic explains so many UX problems we encounter. Let me share the design perspective.

Most User Actions Are Reads

When we map out typical user flows in a DeFi app, the breakdown looks something like this:

Wallet Dashboard:

  • Check ETH balance (read)
  • Check token balances (read × number of tokens)
  • View transaction history (read)
  • View NFT gallery (read × number of NFTs)
  • Send transaction → write (1 action out of 20+)

NFT Marketplace:

  • Browse listings (read × 50+)
  • View NFT details (read)
  • Check floor price (read)
  • View seller history (read)
  • View collection stats (read)
  • Make an offer → write (1 action out of 100+)

DeFi Protocol:

  • Check available pools (read × 10+)
  • View APY data (read)
  • Check wallet holdings (read)
  • Estimate gas (read)
  • Approve token → write
  • Execute swap/stake/lend → write (2 actions out of 30+)

The pattern is clear: users spend most of their time consuming data, not transacting.

Why Slow Reads Kill UX

Here’s the problem: even though writes (transactions) are the critical action, slow reads make the entire experience feel broken.

Steve mentioned this earlier—when balance updates are slow or NFT images don’t load, users assume the app is broken, even if transaction execution is fast.

The Perception Problem

Users have been trained by Web2 that:

  • Reading data = instant (Google search, Twitter feed, Amazon browsing)
  • Writing data = slightly delayed (posting a tweet, submitting a form)

In Web3, we flip this:

  • Reading data = sometimes slow (RPC latency, rate limits, caching issues)
  • Writing data = slow + expensive + requires approval

This breaks user expectations in two ways instead of one.

Design Patterns That Work Around Read Latency

Because read performance is inconsistent, we’ve developed workarounds:

1. Aggressive Optimistic UI

Show the expected result immediately, then verify with actual blockchain reads:

  • Update balance display after transaction (optimistic)
  • Fetch real balance in background (read)
  • Handle discrepancies gracefully if they occur

Problem: This only works if we trust the user’s wallet to give us correct data. If the RPC read contradicts the optimistic UI, users get confused.
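The optimistic pattern can be sketched in a few lines: show the expected result immediately, then reconcile when the real read lands, flagging any discrepancy instead of silently flipping the number. All names here are hypothetical illustrations.

```typescript
// Sketch: optimistic balance update with background reconciliation.
// The types and function names are hypothetical, for illustration only.

interface BalanceView { shown: number; confirmed: boolean; corrected?: boolean; }

function applyOptimistic(current: number, delta: number): BalanceView {
  // Show the expected post-transaction balance immediately.
  return { shown: current + delta, confirmed: false };
}

function reconcile(view: BalanceView, onChain: number): BalanceView {
  // When the real read arrives, adopt it; flag a correction when it
  // disagrees with the optimistic guess so the UI can surface it.
  return { shown: onChain, confirmed: true, corrected: view.shown !== onChain };
}

let view = applyOptimistic(10, -2);      // user sends 2 tokens
console.log(view.shown);                 // 8 (optimistic)
view = reconcile(view, 7.9);             // real read also deducted a fee
console.log(view.shown, view.corrected); // 7.9 true
```

The `corrected` flag is the design point: it lets the UI explain the change ("balance adjusted after confirmation") rather than leaving users confused by a number that moved on its own.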

2. Caching + Polling

Cache read-heavy data and poll for updates:

  • Store balance, transaction history, NFT metadata locally
  • Poll RPC every N seconds for updates
  • Invalidate cache on user-initiated writes

Problem: Stale data creates trust issues. How do users know the data is current?

3. Progressive Disclosure

Load critical data first, defer secondary reads:

  • Show wallet balance immediately (1 read)
  • Load transaction history lazily (deferred reads)
  • Load NFT metadata as user scrolls (paginated reads)

Problem: This helps perceived performance but doesn’t solve actual latency.

What 96% Reads Means for Design

If blockchain infrastructure optimized for the 96% read use case, we could design better experiences:

What We Could Do With Fast Reads:

  1. Real-time data everywhere: No more loading spinners for balance checks
  2. Better search and filtering: Users could query historical data without pagination hacks
  3. Instant transaction status: No more “your transaction is pending” anxiety
  4. Rich data visualization: We could build dashboards that query state without performance penalties

What We Could Stop Doing:

  1. Stop compromising on data freshness: No more stale cache strategies
  2. Stop building complex optimistic UI: Just show real data
  3. Stop hiding features behind lazy loading: Make all data accessible
  4. Stop managing multiple data sources: Use blockchain data as source of truth

The User Mental Model Problem

Here’s something designers wrestle with: users don’t understand the read/write distinction.

When we tell users:

  • “Viewing your balance is fast and free” (read)
  • “Sending tokens is slow and costs gas” (write)

They understand the transaction part. But they don’t understand why:

  • Viewing transaction history is sometimes slow (read)
  • Checking NFT metadata sometimes fails (read)
  • Balance updates lag after a transaction (read)

From the user’s perspective, they’re just “using the app.” The technical distinction between consensus-requiring writes and local reads is invisible.

What Good Read Infrastructure Enables

Steve asked about caching guidelines. From a design perspective, what would actually help:

Clear Data Freshness Contracts

“This data was read from block #18,234,567, confirmed 12 seconds ago”

Why this matters: Users can see data is current without trusting the UI
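Rendering that freshness contract is cheap once the read path reports which block it served from. A minimal sketch, assuming the UI receives the block height and its timestamp alongside the data:

```typescript
// Sketch: turning a block height and timestamp into the freshness label
// described above. Function name and inputs are hypothetical.

function freshnessLabel(blockHeight: number, blockTimeMs: number, nowMs: number): string {
  const ageSec = Math.round((nowMs - blockTimeMs) / 1000);
  return `Read from block #${blockHeight.toLocaleString("en-US")}, confirmed ${ageSec} seconds ago`;
}

console.log(freshnessLabel(18_234_567, 1_700_000_000_000, 1_700_000_012_000));
// "Read from block #18,234,567, confirmed 12 seconds ago"
```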

Instant Read Queries

Balance checks, transaction history, and NFT metadata should feel instant

Why this matters: Matches Web2 expectations, makes apps feel responsive

Predictable Read Latency

Even if reads aren’t instant, they should be consistent

Why this matters: We can design better loading states and skeleton UIs

Separate Read/Write Infrastructure

Dedicated read endpoints that are optimized for speed, not consensus

Why this matters: We can route queries to fast read nodes and transactions to consensus nodes

The Pricing Impact on UX

Steve and Lisa mentioned that pricing should reflect the 96% read reality. From a product design perspective, this would be huge:

If reads were nearly free:

  • We could build richer dashboards without worrying about API costs
  • We could enable complex filtering and search without rate limit anxiety
  • We could show real-time data instead of cached approximations

If writes remained expensive but reads were cheap:

  • Users would understand that transactions cost money, but using the app doesn’t
  • The mental model would match their Web2 experience (browsing is free, actions cost)

My Take

The 96% read statistic reveals a fundamental UX challenge: we’re building apps where 96% of user interactions are unnecessarily slow.

Brian’s right that blockchains are designed this way. Lisa’s right that L2s are already optimizing for it. But from a design perspective, we need:

  1. Read infrastructure that matches user expectations: Instant, reliable, cheap
  2. Write infrastructure that matches user understanding: Intentional, confirmable, worth the cost
  3. Clear mental models: Users should understand when they’re transacting (write) vs. browsing (read)

The 96% statistic shouldn’t be a technical curiosity—it should drive infrastructure investment. If almost all user interactions are reads, reads should be the best-optimized part of the stack.

My hope: In 2-3 years, designers can stop building elaborate caching strategies and optimistic UIs to work around slow reads, and just show users real, fast, current blockchain data.