Maximizing Performance — A Guide to Efficient RPC Requests with BlockEden.xyz
BlockEden.xyz gives you high‑throughput, multi‑chain RPC endpoints backed by a 99.9% SLA and a transparent Compute Unit (CU) pricing model. This guide distills proven techniques for lighter, faster, and cheaper calls so your dApp can scale without surprises.
1. Make the Right Call
Choosing the correct method and environment from the start is the most effective way to manage costs and performance. Before writing a single line of code, strategize your approach.
Know Each Method’s CU Cost
Every RPC method consumes a predetermined number of Compute Units (CUs). Heavy queries, such as historical log fetching with `eth_getLogs` or account scanning with Solana's `getProgramAccounts`, are significantly more expensive than simple state reads like `eth_getBalance`.
Before implementing a feature, consult the Pricing page to understand the cost of each call. This allows you to budget your CU usage accurately and architect your application to favor more efficient methods, preventing unexpected bills as you scale.
Pick the Smallest Tool for the Job
Not every task requires a full node's power. Using the most direct and specialized tool for your data needs can dramatically reduce overhead.
- REST or GraphQL First: When you only need indexed or aggregated data (e.g., a user's token balances or transaction history), use BlockEden’s REST or GraphQL APIs. These are optimized for querying and are far more efficient than making multiple raw JSON-RPC calls and processing the results client-side (see the sketch after this list).
- Use Chain-Specific SDKs: Prefer high-level Software Development Kits (SDKs) like `sui-ts-sdk` or `aptos-sdk` over raw JSON-RPC requests where possible. These libraries often bundle multiple RPC calls into a single, optimized function, handle state management, and abstract away much of the complexity, leading to cleaner code and fewer requests.
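To make the "REST or GraphQL first" point concrete, here is a minimal sketch of swapping many raw JSON-RPC calls for one indexed query. The endpoint path, auth header, and response shape are illustrative assumptions, not BlockEden.xyz's documented API; check the dashboard docs for the real interface.

```typescript
// Sketch only: URL shape, header, and response fields are assumptions for illustration.
const API_KEY = process.env.BLOCKEDEN_API_KEY; // hypothetical env variable name

// One indexed query for all of a wallet's token balances...
async function getTokenBalances(wallet: string) {
  const res = await fetch(
    `https://api.blockeden.xyz/example/v1/balances/${wallet}`, // hypothetical path
    { headers: { "x-api-key": API_KEY ?? "" } }                // hypothetical auth header
  );
  if (!res.ok) throw new Error(`Indexed API request failed: ${res.status}`);
  return res.json(); // assumed shape: [{ token, symbol, balance }]
}

// ...instead of one eth_call per token contract, each of which costs CUs
// and a separate network round trip.
```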
Match Network and Environment
Maintain separate API keys for your mainnet, testnet, and staging environments. This simple practice offers critical benefits:
- Clean Logs: It keeps your production logs free from development and testing noise.
- Isolated Quotas: Rate limits and CU quotas for one environment won't impact another. A test script gone rogue won't throttle your production dApp.
- Enhanced Security: You can apply stricter security policies (like IP whitelisting) to your production key.
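One lightweight way to enforce this separation is to resolve the key from the deployment environment at startup, so a test run can never borrow the production quota. The environment-variable names below are illustrative assumptions, not a BlockEden.xyz convention.

```typescript
// Minimal sketch: pick the API key (and therefore the quota and rate limits)
// per environment. Variable names are illustrative assumptions.
type Env = "production" | "staging" | "test";

const KEYS: Record<Env, string | undefined> = {
  production: process.env.BLOCKEDEN_KEY_MAINNET,
  staging: process.env.BLOCKEDEN_KEY_TESTNET,
  test: process.env.BLOCKEDEN_KEY_DEV,
};

const env = (process.env.APP_ENV as Env) ?? "test";
const apiKey = KEYS[env];
if (!apiKey) throw new Error(`Missing BlockEden API key for environment: ${env}`);
```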
2. Optimize Every Request
Once you've chosen the right method, fine-tune the request itself to minimize its footprint. A well-crafted request is smaller, faster, and cheaper.
Filter Early and Aggressively
Always provide the most specific filters possible in your request parameters. Instead of fetching a large dataset and filtering it on the client, let the node do the work. This reduces data transfer, client-side memory usage, and CU consumption.
- For EVM chains: When using `eth_getLogs`, always specify a tight block range using `fromBlock` and `toBlock` (see the sketch after this list).
- For Solana: Use `memcmp` or `dataSlice` filters with `getProgramAccounts` to narrow down results on the server.
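For the EVM bullet above, a minimal raw JSON-RPC sketch might look like the following; the endpoint URL is a placeholder, and the topic filter shown is the standard ERC-20 Transfer event signature.

```typescript
// Sketch: fetch logs for a narrow block window instead of an open-ended scan.
// RPC_URL is a placeholder; substitute your own BlockEden endpoint.
const RPC_URL = "https://api.blockeden.xyz/eth/<your-key>"; // hypothetical URL shape

async function getRecentTransferLogs(address: string, from: number, to: number) {
  const res = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getLogs",
      params: [{
        address,                              // only this contract
        fromBlock: "0x" + from.toString(16),  // tight, explicit range
        toBlock: "0x" + to.toString(16),
        topics: [
          // keccak256("Transfer(address,address,uint256)") -- ERC-20 Transfer event
          "0xddf252ad1be2c89b69c2b068fc378daa952ba7f163c4a11628f55a4df523b3ef",
        ],
      }],
    }),
  });
  const { result } = await res.json();
  return result; // array of log objects for just this window
}
```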
Paginate and Chunk Large Scans
Never attempt to fetch unbounded data in a single request. Operations like crawling all NFTs in a collection, fetching a complete transaction history, or querying large tables over Aptos GraphQL must be broken into smaller pieces. Use pagination parameters like `page`/`limit` or `cursor`/`offset` to retrieve data in manageable chunks. This makes memory usage predictable and prevents request timeouts.
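As one concrete case, Solana's `getSignaturesForAddress` accepts `limit` and `before` parameters that act as a cursor over a wallet's transaction history. The sketch below walks that history in fixed-size chunks; the RPC URL is passed in as a placeholder.

```typescript
// Sketch: page through a wallet's signature history 1,000 entries at a time.
async function fetchAllSignatures(rpcUrl: string, address: string) {
  const all: string[] = [];
  let before: string | undefined;

  while (true) {
    const res = await fetch(rpcUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        jsonrpc: "2.0",
        id: 1,
        method: "getSignaturesForAddress",
        params: [address, { limit: 1000, before }], // cursor = last signature seen
      }),
    });
    const { result } = await res.json();
    if (!result || result.length === 0) break;

    all.push(...result.map((r: { signature: string }) => r.signature));
    before = result[result.length - 1].signature; // advance the cursor
    if (result.length < 1000) break;              // short page means we hit the end
  }
  return all;
}
```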
Cache Where Freshness Isn’t Critical
Many types of on-chain data change infrequently. Caching this data can eliminate a huge number of redundant RPC calls. Good candidates for caching include:
- Token metadata (name, symbol, decimals)
- Contract ABIs
- Resolved ENS names
Use a caching layer like Redis, Memcached, or even a simple in-browser `localStorage` solution. Set a reasonable Time-to-Live (TTL) and consider invalidating the cache based on new block headers for a balance of freshness and performance.
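A minimal in-memory TTL cache is often enough to start with; the sketch below wraps any async loader and is not tied to a specific BlockEden API. For multi-instance deployments you would back it with Redis or Memcached instead of a Map.

```typescript
// Sketch: wrap any async loader with a simple in-memory TTL cache.
const cache = new Map<string, { value: unknown; expires: number }>();

async function cached<T>(key: string, ttlMs: number, load: () => Promise<T>): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value as T; // fresh hit -> no RPC call

  const value = await load();                                 // miss -> pay for one RPC call
  cache.set(key, { value, expires: Date.now() + ttlMs });
  return value;
}

// Usage: token metadata rarely changes, so a long TTL is safe.
// `fetchTokenMetadata` is a hypothetical loader you would implement yourself.
// const meta = await cached(`erc20:${addr}`, 24 * 60 * 60 * 1000, () => fetchTokenMetadata(addr));
```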
Batch with Care
While JSON-RPC batching can reduce network round-trip latency by bundling multiple requests into one HTTP call, it's not a silver bullet. All requests in a batch are processed together, and the entire batch is blocked until the slowest request completes. Furthermore, each sub-call still consumes CUs individually.
For most read-heavy EVM workloads, opening parallel, single-shot requests over a persistent keep-alive connection often yields better overall throughput than batching. Always measure the performance of both approaches for your specific use case before committing to one.
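As a rough sketch of the parallel alternative (assuming Node 18+, whose built-in fetch reuses keep-alive connections by default), each call below travels as its own single-shot request:

```typescript
// Sketch: N independent eth_getBalance calls issued concurrently.
// The node serves each request on its own; nothing waits inside a shared batch.
async function getBalances(rpcUrl: string, addresses: string[]): Promise<bigint[]> {
  const calls = addresses.map((address, i) =>
    fetch(rpcUrl, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        jsonrpc: "2.0",
        id: i,
        method: "eth_getBalance",
        params: [address, "latest"],
      }),
    }).then((r) => r.json())
  );
  // Promise.all only collects the results; each HTTP exchange completes independently.
  const results = await Promise.all(calls);
  return results.map((r) => BigInt(r.result)); // balances in wei
}
```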
3. Move from Polling to Push
Constantly asking "is it there yet?" is inefficient. A push-based model, where the server notifies you of changes, is superior for real-time applications.
WebSocket Subscriptions
Instead of polling a method like `getEvents` or `eth_getBlockByNumber` in a `setInterval` loop, use WebSocket (WSS) endpoints. BlockEden exposes WSS for chains like Sui, Ethereum, and more. You can subscribe to new blocks, pending transactions, or specific log events. The server will push updates to your client as they happen, resulting in lower latency, reduced CU usage, and a more responsive user experience.
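A minimal sketch of an Ethereum `newHeads` subscription over WSS, using the `ws` package in Node (browsers expose a native WebSocket instead); the endpoint URL is a placeholder:

```typescript
import WebSocket from "ws"; // `npm install ws` in Node; not needed in the browser

const WSS_URL = "wss://api.blockeden.xyz/eth/<your-key>"; // hypothetical URL shape
const ws = new WebSocket(WSS_URL);

ws.on("open", () => {
  // Subscribe once; the node pushes every new block header from here on.
  ws.send(JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "eth_subscribe",
    params: ["newHeads"],
  }));
});

ws.on("message", (raw) => {
  const msg = JSON.parse(raw.toString());
  if (msg.method === "eth_subscription") {
    const header = msg.params.result;            // pushed block header
    console.log("new block", parseInt(header.number, 16));
  }
});
```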