Rate Limits & Quotas
BlockEden.xyz implements intelligent rate limiting to ensure fair usage and optimal performance for all users. This guide explains how rate limits work, what limits apply to your tier, and best practices for staying within limits.
Overview
Our rate limiting system uses two types of limits:
- RPS (Requests Per Second): Controls how fast you can make API requests
- Daily Quota: Limits total compute units consumed per day
Both limits are enforced to ensure service quality and fair resource allocation across all users.
Understanding Rate Limits
How Rate Limiting Works
When you make an API request to BlockEden.xyz, the system checks:
- Authentication: Is a valid access key provided in the URL?
- IP-based limit: For FREE tier users, has the per-IP limit been exceeded?
- User-based limit: Has the RPS limit for your pricing plan been exceeded?
- Daily quota: Has today's compute unit quota been consumed?
If any limit is exceeded, you'll receive a 429 Too Many Requests response.
Rate Limit Architecture
Your Request → BlockEden.xyz API
1. Extract access key from URL path (e.g., /eth/{your-20-char-key}/v1/block)
2. Check pricing plan (FREE/BASIC/PRO/ENTERPRISE)
3. Apply rate limit based on tier
4. Check daily quota
5. Process request or return 429
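This flow can be sketched in code. The snippet below is purely illustrative: the type names, fields, and constants are assumptions chosen to mirror the checks above, not BlockEden.xyz's actual implementation.

// Illustrative model of the check order above (TypeScript); not the real implementation
type Plan = "FREE" | "BASIC" | "PRO" | "ENTERPRISE_50" | "ENTERPRISE_500";

interface RequestContext {
  accessKey: string | null; // 20-character key extracted from the URL path
  plan: Plan;
  userRps: number;          // requests this user has made in the current second
  ipRps: number;            // requests from this IP in the current second (FREE tier only)
  dailyUnitsUsed: number;   // compute units consumed today
  dailyQuota: number;       // plan-specific daily quota
}

const USER_RPS_LIMIT: Record<Plan, number> = {
  FREE: 5,
  BASIC: 10,
  PRO: 10,
  ENTERPRISE_50: 50,
  ENTERPRISE_500: 500,
};

function checkRequest(ctx: RequestContext): number {
  if (!ctx.accessKey || ctx.accessKey.length !== 20) return 401; // invalid or missing key
  if (ctx.userRps >= USER_RPS_LIMIT[ctx.plan]) return 429;       // user-level RPS limit
  if (ctx.plan === "FREE" && ctx.ipRps >= 12) return 429;        // IP-level limit (FREE only)
  if (ctx.dailyUnitsUsed >= ctx.dailyQuota) return 429;          // daily compute unit quota
  return 200; // process the request
}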
Pricing Tiers & Limits
FREE Tier
The FREE tier is designed for development, testing, and small-scale applications.
Metric | Limit | Notes |
---|---|---|
User RPS | 5 requests/second | Per user (shared across all access keys) |
IP RPS | 12 requests/second | Per IP address |
Enforcement | AND logic | Both limits must pass |
Daily Quota | Plan-specific | See compute units |
Important: FREE tier users must pass both the user RPS limit (5 req/s) and IP RPS limit (12 req/s). If either limit is exceeded, requests will be throttled.
Use case: Development, testing, hobby projects, small applications
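To make the AND semantics concrete, here is a minimal illustrative check (not the service's actual code): a FREE tier request is served only if both counters are under their limits.

// Illustrative only: FREE tier requests must satisfy BOTH limits
function freeTierAllowed(userReqsThisSecond: number, ipReqsThisSecond: number): boolean {
  const underUserLimit = userReqsThisSecond < 5; // 5 req/s per user
  const underIpLimit = ipReqsThisSecond < 12;    // 12 req/s per IP
  return underUserLimit && underIpLimit;         // exceeding either throttles the request
}

Note that the IP limit is shared: several FREE tier users behind one IP (for example, an office NAT) can collectively exceed 12 req/s even if each stays under 5 req/s.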
BASIC Tier
The BASIC tier removes IP-based restrictions and provides higher throughput.
Metric | Limit | Notes |
---|---|---|
User RPS | 10 requests/second | Per access key |
IP RPS | None | No IP restrictions |
Daily Quota | Plan-specific | See compute units |
Use case: Small to medium production applications
PRO Tier
The PRO tier offers the same RPS as BASIC with enhanced quota and support.
Metric | Limit | Notes |
---|---|---|
User RPS | 10 requests/second | Per access key |
IP RPS | None | No IP restrictions |
Daily Quota | Plan-specific | Higher quota than BASIC |
Support | Priority support | Faster response times |
Use case: Production applications, growing businesses
ENTERPRISE_50 Tier
High-performance tier for demanding applications.
Metric | Limit | Notes |
---|---|---|
User RPS | 50 requests/second | Per access key |
IP RPS | None | No IP restrictions |
Daily Quota | Plan-specific | Enterprise-grade quota |
Support | Dedicated support | SLA available |
Use case: Large-scale applications, high-traffic DApps
ENTERPRISE_500 Tier
Maximum performance tier for mission-critical applications.
Metric | Limit | Notes |
---|---|---|
User RPS | 500 requests/second | Per access key |
IP RPS | None | No IP restrictions |
Daily Quota | Custom | Tailored to your needs |
Support | White-glove support | Custom SLA, dedicated account manager |
Use case: Enterprise applications, DEX platforms, high-frequency trading, analytics platforms
Comparison Table
Feature | FREE | BASIC | PRO | ENTERPRISE_50 | ENTERPRISE_500 |
---|---|---|---|---|---|
User RPS | 5 | 10 | 10 | 50 | 500 |
IP Limit | ✅ 12 req/s | ❌ None | ❌ None | ❌ None | ❌ None |
Both Limits | ✅ AND | ❌ User only | ❌ User only | ❌ User only | ❌ User only |
Daily Quota | Basic | Standard | Enhanced | High | Custom |
Support | Community | Community | Priority | Dedicated | White-glove
SLA | None | None | Standard | Custom | Premium |
How Access Keys Work
BlockEden.xyz uses path-based authentication where your access key is embedded in the URL:
URL Format
https://api.blockeden.xyz/{chain}/{access-key}/{endpoint}
Examples
Ethereum:
https://api.blockeden.xyz/eth/abc123def456ghi789jk/v1/block
(abc123def456ghi789jk is your 20-character access key)
Solana:
https://api.blockeden.xyz/solana/abc123def456ghi789jk/
Sui:
https://api.blockeden.xyz/sui/abc123def456ghi789jk/
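For example, a minimal JSON-RPC call against the Ethereum endpoint could look like the sketch below. The key is the placeholder from the examples above, and the exact path for your chain may differ (see URL Format):

// Placeholder key for illustration; substitute your own 20-character access key
const accessKey = "abc123def456ghi789jk";
const url = `https://api.blockeden.xyz/eth/${accessKey}`;

const response = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
});

console.log(await response.json());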
Access Key Properties
- Length: Exactly 20 characters
- Location: Always after the chain prefix
- Required: All requests must include a valid key (except public endpoints)
- Rate Limit Tracking: All access keys for the same user share one rate limit bucket (user-level enforcement)
Public Endpoints (No Authentication Required)
These endpoints do not require an access key and are not rate limited:
- /health - Health check endpoint
- /healthz - Alternative health check
- /api/4489233/metrics - Prometheus metrics
All other endpoints require a valid access key.
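For example, a health check is a plain unauthenticated request (assuming the endpoint is served from the same host as the API):

// No access key required for public endpoints
const health = await fetch("https://api.blockeden.xyz/health");
console.log(health.status); // expect 200 when the service is healthy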
Understanding HTTP Status Codes
200 OK
Request succeeded. Your rate limits and quota were not exceeded.
401 Unauthorized
{
"error": "Unauthorized",
"message": "Please provide a valid access key"
}
Cause: No access key provided or invalid access key
Solution: Check your URL format includes the correct 20-character access key
429 Too Many Requests
{
"error": "Too Many Requests",
"message": "Rate limit exceeded"
}
Cause: You exceeded your RPS limit
Solution:
- Implement exponential backoff
- Reduce request frequency
- Upgrade to a higher tier
- For FREE tier: check if IP limit (12 req/s) is also being hit
403 Forbidden
{
"error": "Forbidden",
"message": "IP address blocked"
}
Cause: Your IP address is on the blocklist
Solution: Contact support if you believe this is an error
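A minimal sketch of client-side handling for these status codes (adapt the retry and alerting behavior to your own application):

// Map common BlockEden.xyz status codes to actions
async function callApi(url: string, body: unknown): Promise<unknown> {
  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });

  switch (response.status) {
    case 200:
      return response.json();
    case 401:
      throw new Error("Unauthorized: check the 20-character access key in the URL");
    case 429:
      throw new Error("Rate limited: back off and retry (see Best Practices below)");
    case 403:
      throw new Error("Forbidden: IP blocked, contact support@blockeden.xyz");
    default:
      throw new Error(`Unexpected status ${response.status}`);
  }
}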
Rate Limit Headers
BlockEden.xyz returns the following headers with each response:
X-RateLimit-Limit: 50
X-RateLimit-Remaining: 48
X-RateLimit-Reset: 1704067200
Header | Description | Example |
---|---|---|
X-RateLimit-Limit | Your RPS limit | 50 |
X-RateLimit-Remaining | Requests remaining in current second | 48 |
X-RateLimit-Reset | Unix timestamp when limit resets | 1704067200 |
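You can read these headers from any response to pace your client, for example:

// Inspect rate limit headers on a response
const response = await fetch(url); // url: your BlockEden.xyz endpoint (see URL Format above)
const limit = Number(response.headers.get("X-RateLimit-Limit"));
const remaining = Number(response.headers.get("X-RateLimit-Remaining"));
const reset = Number(response.headers.get("X-RateLimit-Reset")); // Unix timestamp (seconds)

if (remaining === 0) {
  const waitMs = Math.max(0, reset * 1000 - Date.now());
  console.log(`Limit of ${limit} reached; waiting ${waitMs}ms until reset`);
}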
Best Practices
1. Implement Exponential Backoff
When you receive a 429 response, wait before retrying:
async function makeRequestWithBackoff(url, maxRetries = 3) {
for (let attempt = 0; attempt < maxRetries; attempt++) {
const response = await fetch(url);
if (response.status === 429) {
const waitTime = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s
console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
await new Promise((resolve) => setTimeout(resolve, waitTime));
continue;
}
return response;
}
throw new Error("Max retries exceeded");
}
2. Client-Side Rate Limiting
Prevent hitting rate limits by implementing client-side throttling:
class RateLimiter {
private queue: number[] = [];
private limit: number;
private interval: number;
constructor(limit: number, interval: number = 1000) {
this.limit = limit;
this.interval = interval;
}
async throttle(): Promise<void> {
const now = Date.now();
// Remove timestamps older than interval
this.queue = this.queue.filter((time) => now - time < this.interval);
if (this.queue.length >= this.limit) {
const oldestRequest = this.queue[0];
const waitTime = this.interval - (now - oldestRequest);
await new Promise((resolve) => setTimeout(resolve, waitTime));
return this.throttle(); // Retry after waiting
}
this.queue.push(now);
}
}
// Usage
const limiter = new RateLimiter(5); // 5 requests per second
async function makeRequest(url: string) {
await limiter.throttle();
return fetch(url);
}
3. Batch Requests When Possible
Reduce request count by batching JSON-RPC calls:
// Instead of making 3 separate requests...
const headers = { "Content-Type": "application/json" };
const block1 = await fetch(url, {
  method: "POST",
  headers,
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "eth_getBlockByNumber",
    params: ["0x1", false],
  }),
});
const block2 = await fetch(url, {
  method: "POST",
  headers,
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 2,
    method: "eth_getBlockByNumber",
    params: ["0x2", false],
  }),
});
const block3 = await fetch(url, {
  method: "POST",
  headers,
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 3,
    method: "eth_getBlockByNumber",
    params: ["0x3", false],
  }),
});
// Make 1 batched request
const batched = await fetch(url, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify([
{
jsonrpc: "2.0",
id: 1,
method: "eth_getBlockByNumber",
params: ["0x1", false],
},
{
jsonrpc: "2.0",
id: 2,
method: "eth_getBlockByNumber",
params: ["0x2", false],
},
{
jsonrpc: "2.0",
id: 3,
method: "eth_getBlockByNumber",
params: ["0x3", false],
},
]),
});
4. Cache Responses
Cache frequently accessed data to reduce API calls:
const cache = new Map();
const CACHE_TTL = 60000; // 1 minute
async function getCachedData(key, fetchFn) {
const cached = cache.get(key);
if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
return cached.data;
}
const data = await fetchFn();
cache.set(key, { data, timestamp: Date.now() });
return data;
}
// Usage
const blockNumber = await getCachedData("latest-block", async () => {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  return response.json();
});
5. Monitor Your Usage
Track your request patterns to optimize usage:
class RequestMonitor {
private requests: { timestamp: number; status: number }[] = [];
logRequest(status: number) {
this.requests.push({ timestamp: Date.now(), status });
}
getStats(intervalMs: number = 60000) {
const now = Date.now();
const recent = this.requests.filter(r => now - r.timestamp < intervalMs);
return {
total: recent.length,
success: recent.filter(r => r.status === 200).length,
rateLimit: recent.filter(r => r.status === 429).length,
rps: recent.length / (intervalMs / 1000)
};
}
}
const monitor = new RequestMonitor();
// After each request
monitor.logRequest(response.status);
// Check stats
console.log(monitor.getStats()); // Stats for last 60 seconds
6. Distribute Load Across Multiple Keys
For high-volume applications, you can organize traffic across multiple access keys. Keep in mind that all keys under the same account share one rate limit bucket (see How Access Keys Work), so rotating your own keys does not raise your effective RPS; it is mainly useful for separating services or environments:
class KeyRotator {
private keys: string[];
private currentIndex: number = 0;
constructor(keys: string[]) {
this.keys = keys;
}
getNextKey(): string {
const key = this.keys[this.currentIndex];
this.currentIndex = (this.currentIndex + 1) % this.keys.length;
return key;
}
}
const rotator = new KeyRotator([
'abc123def456ghi789jk',
'xyz789uvw456rst123ab',
'mno456jkl123ghi789cd'
]);
// Each request uses a different key
async function makeRequest(endpoint: string) {
const key = rotator.getNextKey();
return fetch(`https://api.blockeden.xyz/eth/${key}${endpoint}`);
}
WebSocket Rate Limits
WebSocket connections have the same rate limits as HTTP requests:
- Connection establishment counts as 1 request
- Each subscription (e.g., eth_subscribe) counts as 1 request
- Incoming messages from subscriptions do not count toward your limit
- Outgoing messages (your requests over the WebSocket) count toward your RPS limit
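Because only outgoing messages count toward your RPS limit, you can reuse the client-side RateLimiter from the Best Practices section above to pace ws.send calls. A sketch:

// Pace outgoing WebSocket messages with the RateLimiter class shown earlier
const wsLimiter = new RateLimiter(10); // match your tier's RPS limit

async function sendThrottled(ws: WebSocket, payload: unknown): Promise<void> {
  await wsLimiter.throttle();       // outgoing requests count toward RPS
  ws.send(JSON.stringify(payload)); // incoming subscription data is not counted
}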
WebSocket Best Practices
// Node.js example using the "ws" package
import WebSocket from "ws";

const ws = new WebSocket(`wss://api.blockeden.xyz/eth/${accessKey}`);
// Reuse connections instead of creating new ones
let isConnected = false;
ws.on("open", () => {
isConnected = true;
// Subscribe once, receive many updates
ws.send(
JSON.stringify({
jsonrpc: "2.0",
id: 1,
method: "eth_subscribe",
params: ["newHeads"],
}),
);
});
ws.on("message", (data) => {
// Incoming messages don't count toward rate limit
console.log("New block:", data);
});
// Reuse the connection for additional subscriptions (call this after the "open" event)
function subscribeToLogs() {
  if (!isConnected) return;
  ws.send(
    JSON.stringify({
      jsonrpc: "2.0",
      id: 2,
      method: "eth_subscribe",
      params: ["logs", { topics: ["0x..."] }],
    }),
  );
}
Upgrading Your Tier
If you're frequently hitting rate limits, consider upgrading:
When to Upgrade
- ✅ Receiving frequent 429 responses
- ✅ FREE tier IP limit (12 req/s) is too restrictive
- ✅ Need higher throughput for production
- ✅ Require priority support
- ✅ Building high-frequency applications
How to Upgrade
- Visit BlockEden.xyz Dashboard
- Navigate to Billing & Plans
- Select your desired tier
- Complete payment
- Rate limits update immediately
Custom Rate Limits
For use cases requiring custom rate limits beyond ENTERPRISE_500:
- Dedicated infrastructure: Private nodes with custom configurations
- Custom SLA: Guaranteed uptime and response times
- Volume discounts: Reduced per-request costs at scale
- Multi-region deployment: Reduced latency globally
Contact our sales team: sales@blockeden.xyz
Troubleshooting
Problem: Getting 429 errors on FREE tier
Diagnosis: Check if you're hitting user limit (5 req/s) or IP limit (12 req/s)
Solutions:
- Implement client-side rate limiting (see examples above)
- Batch multiple requests into one
- Cache responses when possible
- Upgrade to BASIC tier to remove IP restrictions
Problem: Inconsistent 429 errors
Diagnosis: You might be bursting above your RPS limit
Solutions:
- Implement request queuing to smooth out bursts
- Use the sliding window rate limiter example above
- Monitor your request patterns with the RequestMonitor class
Problem: Need higher limits temporarily
Solutions:
- Upgrade to the next tier (instant activation)
- Contact support for temporary limit increase
- Distribute load across multiple access keys (note: keys on the same account share one rate limit bucket)
Problem: IP blocklist (403 errors)
Diagnosis: Your IP has been added to the blocklist (usually for abuse)
Solutions:
- Check if you're using a VPN or proxy
- Verify your application isn't making excessive requests
- Contact support: support@blockeden.xyz
FAQ
How is the rate limit calculated?
Rate limits use a token bucket algorithm:
- Your "bucket" holds tokens equal to your RPS limit
- Each request consumes 1 token
- Tokens refill smoothly at your RPS rate (not in bursts)
- The refill quantum is 1 token, so capacity is restored one token at a time rather than in chunks
Example (BASIC tier, 10 RPS):
- You can make 10 requests immediately (bucket full)
- After that, you get 1 token every 100ms
- No bursting allowed beyond your RPS limit
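A minimal, illustrative token bucket that mimics this behavior (a sketch, not the service's implementation):

// Token bucket with capacity = RPS and smooth, fractional refill
class TokenBucket {
  private tokens: number;
  private lastRefill: number = Date.now();

  constructor(private ratePerSecond: number) {
    this.tokens = ratePerSecond; // bucket starts full
  }

  tryConsume(): boolean {
    const now = Date.now();
    // Refill smoothly at the RPS rate, capped at the bucket capacity
    const refilled = ((now - this.lastRefill) / 1000) * this.ratePerSecond;
    this.tokens = Math.min(this.ratePerSecond, this.tokens + refilled);
    this.lastRefill = now;

    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // would be a 429
  }
}

// BASIC tier: 10 RPS → capacity of 10 tokens, one new token every 100ms
const bucket = new TokenBucket(10);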
Can I exceed my RPS limit briefly?
No. The system uses quantum=1, which means smooth rate limiting without bursting. If your RPS is 10, you cannot make 20 requests at once, even if you wait 2 seconds.
Do failed requests count toward my limit?
Yes. All requests count toward your RPS limit, regardless of the response:
- ✅ 200 OK → counts
- ✅ 400 Bad Request → counts
- ✅ 429 Too Many Requests → counts
- ✅ 500 Internal Error → counts
How do compute units differ from RPS?
- RPS: Controls request frequency (requests per second)
- Compute Units: Controls daily quota (total computational cost)
Both limits are independent. You could hit your RPS limit without hitting your daily quota, or vice versa.
See Understanding Compute Units for more details.
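For a rough illustration with hypothetical numbers: if a call cost 10 compute units and your plan allowed 10 RPS, sustaining the full 10 req/s for a day would consume 10 × 10 × 86,400 = 8,640,000 compute units. Whether that exhausts your daily quota depends on your plan, while a short burst at 50 req/s would hit the RPS limit long before the quota mattered.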
Can I have multiple access keys with different rate limits?
No. Rate limiting is applied at the user level, not per access key:
- All access keys for the same user share the same rate limit bucket
- Rate limit is determined by your pricing plan (FREE, BASIC, PRO, ENTERPRISE_50, ENTERPRISE_500)
- This prevents circumventing rate limits by creating multiple keys
- Upgrading your plan increases the rate limit for all your access keys
What happens when I upgrade mid-day?
Your rate limit updates immediately. However:
- Daily quota resets at midnight UTC
- Current usage carries over until reset
- New RPS limit takes effect for the next request
Do GraphQL queries count as 1 request?
GraphQL queries count based on complexity:
- Simple queries: 1 compute unit
- Complex queries with multiple fields: 2-10+ compute units
- Mutations: 5-20+ compute units
For RPS purposes, each GraphQL query = 1 request.
How are WebSocket subscriptions counted?
- Initial connection: 1 request
- Each subscribe call: 1 request
- Incoming subscription data: 0 requests (free)
- Outgoing requests over WS: 1 request each
Support
Need help with rate limiting?
- Documentation: https://blockeden.xyz/docs
- Community: Discord
- Email Support: support@blockeden.xyz
- Sales: sales@blockeden.xyz