I’ve been digging through infrastructure data lately, and I came across something from Syndica that made me question a lot of assumptions about blockchain architecture. According to their analysis of more than two years of production data across various dApps, 96.1% of all calls made to a Solana node are reads, not writes.
Let me repeat that: ninety-six percent reads.
The Database Question
As a data engineer, this immediately made me think: if blockchain usage is overwhelmingly read-heavy, are we essentially building really expensive, really slow distributed databases with consensus overhead?
Traditional databases have been optimized for decades around read/write patterns. We have read replicas, caching layers, materialized views—the whole works. But blockchain infrastructure has historically focused almost entirely on transaction throughput (writes) and consensus mechanisms. We measure TPS, finality time, and block production rates. But what about reads-per-second (RPS)?
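Out of curiosity, here’s a minimal sketch of how you might eyeball read performance from the client side using @solana/web3.js. The endpoint and iteration count are placeholders, and sequential round-trips measure client-observed latency more than node capacity, so treat this as a back-of-the-envelope probe, not a benchmark:

```typescript
import { Connection } from "@solana/web3.js";

// Placeholder endpoint; swap in your own RPC provider's URL.
const connection = new Connection("https://api.mainnet-beta.solana.com");

async function roughReadThroughput(iterations = 100): Promise<number> {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) {
    // getSlot is one of the cheapest read calls an RPC node serves.
    await connection.getSlot();
  }
  const elapsedSeconds = (Date.now() - start) / 1000;
  return iterations / elapsedSeconds; // sequential reads per second
}

roughReadThroughput().then((rps) =>
  console.log(`~${rps.toFixed(1)} sequential reads/sec from this client`)
);
```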
The Infrastructure Mismatch
Think about what most blockchain infrastructure optimizes for:
- Validators: Optimized for transaction processing and consensus
- Benchmarks: TPS, finality time, throughput
- Scaling solutions: Layer 2s, sharding, parallel execution—all focused on write capacity
Meanwhile, the actual usage pattern is:
- Price feed queries for DeFi protocols
- Account balance checks for wallets
- Transaction history lookups
- State queries for dApps
- Analytics and monitoring
96% of the time, we’re just reading data.
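To make that concrete, here’s roughly what a slice of that read traffic looks like as code. A minimal sketch with @solana/web3.js; the address is just the System Program ID standing in for a real wallet:

```typescript
import { Connection, PublicKey } from "@solana/web3.js";

const connection = new Connection("https://api.mainnet-beta.solana.com");
// Placeholder address for illustration (the System Program ID).
const wallet = new PublicKey("11111111111111111111111111111111");

async function typicalReadTraffic() {
  // Account balance check (what every wallet UI polls)
  const lamports = await connection.getBalance(wallet);

  // State query (what most dApps do on page load)
  const accountInfo = await connection.getAccountInfo(wallet);

  // Transaction history lookup (explorers, portfolio trackers)
  const signatures = await connection.getSignaturesForAddress(wallet, { limit: 10 });

  console.log({ lamports, owner: accountInfo?.owner.toBase58(), txCount: signatures.length });
}

typicalReadTraffic();
```

Not one of these calls touches consensus. They’re all pure reads served by an RPC node.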
Syndica’s Sig: Rethinking the Architecture
This is why Syndica’s approach with Sig caught my attention. They’re building a Solana validator client from scratch in Zig, specifically optimized for reads-per-second instead of transactions-per-second.
Early benchmarks reportedly show 50-70% performance improvements over existing clients. They’re focusing on optimizing getProgramAccounts and other read-heavy queries that constantly hammer RPC nodes.
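For context on why getProgramAccounts is the poster child here: unfiltered, it can force a node to scan and deserialize every account a program owns. Here’s a sketch of a typical filtered call, using the SPL Token program’s well-known account layout (165-byte accounts, owner field at offset 32):

```typescript
import { Connection, PublicKey } from "@solana/web3.js";

const connection = new Connection("https://api.mainnet-beta.solana.com");
// SPL Token program: one of the most-queried programs on Solana.
const TOKEN_PROGRAM_ID = new PublicKey("TokenkegQfeZyiNwAJbNbGKPFXCWuBvf9Ss623VQ5DA");

async function tokenAccountsForOwner(owner: PublicKey) {
  // Each filter narrows the server-side scan; without them, the node
  // would have to walk every token account in existence for one query.
  return connection.getProgramAccounts(TOKEN_PROGRAM_ID, {
    filters: [
      { dataSize: 165 },                                    // size of an SPL token account
      { memcmp: { offset: 32, bytes: owner.toBase58() } },  // match the owner field
    ],
  });
}
```

Even with filters, the node does all the scanning work server-side, which is exactly the kind of cost a read-optimized client can attack with better indexes.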
This feels like the kind of paradigm shift that’s obvious in retrospect: optimize for what people actually do, not what the whitepaper said they’d do.
So, Are We Just Building Databases?
Here’s where it gets philosophical. If 96% of blockchain operations are reads, and writes are the minority use case, what makes blockchains valuable?
I’d argue it’s the quality of those writes, not the quantity. That 4% of writes creates an immutable, verifiable history that makes the 96% of reads trustworthy. You’re not querying a database administrator—you’re querying cryptographic proof.
But from an infrastructure perspective, we’ve been overinvesting in write capacity and underinvesting in read optimization.
The Data Engineering Parallel
In traditional data engineering, we separate OLTP (transactional) from OLAP (analytical) workloads. We write to one system and read from another. We use data warehouses, read replicas, and caching layers.
Maybe blockchain infrastructure needs a similar split:
- Consensus layer: Optimized for secure, fast writes (the 4%)
- Data availability layer: Optimized for fast, scalable reads (the 96%)
Platforms like GetBlock and other RPC providers are already doing this—they’re essentially building read-optimized infrastructure on top of write-optimized blockchains.
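A toy sketch of what that read layer amounts to: a read-through cache in front of getAccountInfo. The TTL and in-memory map are illustrative choices; real providers use distributed caches and smarter invalidation, but the shape is the same:

```typescript
import { Connection, PublicKey, AccountInfo } from "@solana/web3.js";

const connection = new Connection("https://api.mainnet-beta.solana.com");

// In-memory read-through cache with a short TTL. The goal: absorb the
// 96% read traffic before it ever reaches the node.
const cache = new Map<string, { value: AccountInfo<Buffer> | null; expiresAt: number }>();
const TTL_MS = 2_000; // illustrative; tune to your staleness tolerance

async function cachedAccountInfo(address: PublicKey): Promise<AccountInfo<Buffer> | null> {
  const key = address.toBase58();
  const hit = cache.get(key);
  if (hit && hit.expiresAt > Date.now()) return hit.value; // served from cache

  const value = await connection.getAccountInfo(address); // fall through to RPC
  cache.set(key, { value, expiresAt: Date.now() + TTL_MS });
  return value;
}
```

The whole trade is accepting a couple of seconds of staleness in exchange for never touching consensus infrastructure, which is the OLTP/OLAP split all over again.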
Questions for the Community
- Should we rethink how we architect dApps around this 96/4 read/write split?
- Are read-optimized clients like Sig the future, or are we just patching over fundamental design issues?
- What does this mean for decentralization? If reads are centralized through RPC providers, does it matter that writes are decentralized?
- How should we price RPC services when read operations dominate usage?
I’m genuinely curious what others think. Are we building blockchains or just really expensive databases with really good audit logs?