
Rate Limits & Quotas

BlockEden.xyz implements intelligent rate limiting to ensure fair usage and optimal performance for all users. This guide explains how rate limits work, what limits apply to your tier, and best practices for staying within limits.

Overview

Our rate limiting system uses two types of limits:

  1. RPS (Requests Per Second): Controls how fast you can make API requests
  2. Daily Quota: Limits total compute units consumed per day

Both limits are enforced to ensure service quality and fair resource allocation across all users.

Understanding Rate Limits

How Rate Limiting Works

When you make an API request to BlockEden.xyz, the system checks:

  1. Authentication: Is a valid access key provided in the URL?
  2. IP-based limit: For FREE tier users, the originating IP address is also rate limited
  3. User-based limit: Based on your pricing plan
  4. Daily quota: Total compute units consumed today

If any limit is exceeded, you'll receive a 429 Too Many Requests response.

Rate Limit Architecture

Your Request → BlockEden.xyz API

1. Extract access key from URL path
Example: /eth/{your-20-char-key}/v1/block

2. Check pricing plan (FREE/BASIC/PRO/ENTERPRISE)

3. Apply rate limit based on tier

4. Check daily quota

5. Process request or return 429
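
This flow can be exercised from a client with a single request. Below is a minimal TypeScript sketch; the access key is a placeholder, and it assumes JSON-RPC calls are POSTed to the key-scoped base URL shown in the batching example later in this guide:

// Minimal request sketch. The key below is a placeholder, not a real access key.
const ACCESS_KEY = "abc123def456ghi789jk";
const RPC_URL = `https://api.blockeden.xyz/eth/${ACCESS_KEY}`;

async function getLatestBlockNumber(): Promise<string> {
  const response = await fetch(RPC_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });

  if (response.status === 401) throw new Error("Missing or invalid access key");
  if (response.status === 429) throw new Error("Rate limit or daily quota exceeded");
  if (response.status === 403) throw new Error("IP address is blocked");

  const { result } = await response.json();
  return result; // hex-encoded block number
}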

Pricing Tiers & Limits

FREE Tier

The FREE tier is designed for development, testing, and small-scale applications.

| Metric | Limit | Notes |
| --- | --- | --- |
| User RPS | 5 requests/second | Per user (shared across all access keys) |
| IP RPS | 12 requests/second | Per IP address |
| Enforcement | AND logic | Both limits must pass |
| Daily Quota | Plan-specific | See compute units |

Important: FREE tier requests must pass both the user RPS limit (5 req/s) and the IP RPS limit (12 req/s). If either limit is exceeded, the request is rejected with a 429 response. For example, three FREE tier users sharing one office IP can send at most 12 req/s combined, even though each user is individually allowed 5 req/s.

Use case: Development, testing, hobby projects, small applications


BASIC Tier

The BASIC tier removes IP-based restrictions and provides higher throughput.

| Metric | Limit | Notes |
| --- | --- | --- |
| User RPS | 10 requests/second | Per user (shared across all access keys) |
| IP RPS | None | No IP restrictions |
| Daily Quota | Plan-specific | See compute units |

Use case: Small to medium production applications


PRO Tier

The PRO tier offers the same RPS as BASIC with enhanced quota and support.

| Metric | Limit | Notes |
| --- | --- | --- |
| User RPS | 10 requests/second | Per user (shared across all access keys) |
| IP RPS | None | No IP restrictions |
| Daily Quota | Plan-specific | Higher quota than BASIC |
| Support | Priority support | Faster response times |

Use case: Production applications, growing businesses


ENTERPRISE_50 Tier

High-performance tier for demanding applications.

| Metric | Limit | Notes |
| --- | --- | --- |
| User RPS | 50 requests/second | Per user (shared across all access keys) |
| IP RPS | None | No IP restrictions |
| Daily Quota | Plan-specific | Enterprise-grade quota |
| Support | Dedicated support | SLA available |

Use case: Large-scale applications, high-traffic DApps


ENTERPRISE_500 Tier

Maximum performance tier for mission-critical applications.

| Metric | Limit | Notes |
| --- | --- | --- |
| User RPS | 500 requests/second | Per user (shared across all access keys) |
| IP RPS | None | No IP restrictions |
| Daily Quota | Custom | Tailored to your needs |
| Support | White-glove support | Custom SLA, dedicated account manager |

Use case: Enterprise applications, DEX platforms, high-frequency trading, analytics platforms


Comparison Table

| Feature | FREE | BASIC | PRO | ENTERPRISE_50 | ENTERPRISE_500 |
| --- | --- | --- | --- | --- | --- |
| User RPS | 5 | 10 | 10 | 50 | 500 |
| IP Limit | ✅ 12 req/s | ❌ None | ❌ None | ❌ None | ❌ None |
| Both Limits | ✅ AND | ❌ User only | ❌ User only | ❌ User only | ❌ User only |
| Daily Quota | Basic | Standard | Enhanced | High | Custom |
| Support | Community | Email | Priority | Dedicated | White-glove |
| SLA | None | None | Standard | Custom | Premium |
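
Because each tier maps to a fixed user RPS ceiling, a client can derive its local throttle from the plan name. A small sketch follows; the numbers mirror the comparison table, while the 10% headroom factor is only a suggestion, not part of the service:

// Per-tier user RPS ceilings, mirroring the comparison table above.
const TIER_RPS = {
  FREE: 5,
  BASIC: 10,
  PRO: 10,
  ENTERPRISE_50: 50,
  ENTERPRISE_500: 500,
} as const;

type Tier = keyof typeof TIER_RPS;

// Pick a client-side throttle slightly below the server limit to leave headroom.
function clientSideRps(tier: Tier, headroom = 0.9): number {
  return Math.max(1, Math.floor(TIER_RPS[tier] * headroom));
}

console.log(clientSideRps("FREE"));  // 4
console.log(clientSideRps("BASIC")); // 9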

How Access Keys Work

BlockEden.xyz uses path-based authentication where your access key is embedded in the URL:

URL Format

https://api.blockeden.xyz/{chain}/{access-key}/{endpoint}

Examples

Ethereum:

https://api.blockeden.xyz/eth/abc123def456ghi789jk/v1/block
^^^^^^^^^^^^^^^^^^^^
Your 20-character access key

Solana:

https://api.blockeden.xyz/solana/abc123def456ghi789jk/

Sui:

https://api.blockeden.xyz/sui/abc123def456ghi789jk/

Access Key Properties

  • Length: Exactly 20 characters
  • Location: Always after the chain prefix
  • Required: All requests must include a valid key (except public endpoints)
  • Rate Limit Tracking: All access keys for the same user share one rate limit bucket (user-level enforcement)
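
A small helper keeps the URL format consistent across your code base. This is only a convenience sketch; the chain prefixes follow the examples above and the key shown is a placeholder:

// Builds a key-scoped URL following the path-based format described above.
function buildUrl(chain: string, accessKey: string, endpoint = ""): string {
  if (accessKey.length !== 20) {
    throw new Error("BlockEden.xyz access keys are exactly 20 characters");
  }
  const base = `https://api.blockeden.xyz/${chain}/${accessKey}/`;
  return endpoint ? base + endpoint : base;
}

buildUrl("eth", "abc123def456ghi789jk", "v1/block");
// => "https://api.blockeden.xyz/eth/abc123def456ghi789jk/v1/block"
buildUrl("solana", "abc123def456ghi789jk");
// => "https://api.blockeden.xyz/solana/abc123def456ghi789jk/"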

Public Endpoints (No Authentication Required)

These endpoints do not require an access key and are not rate limited:

  • /health - Health check endpoint
  • /healthz - Alternative health check
  • /api/4489233/metrics - Prometheus metrics

All other endpoints require a valid access key.
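
Since /health needs no key and is not rate limited, it is a cheap connectivity check before you start debugging authentication or rate-limit issues. A sketch, assuming the health endpoint is served from the same api.blockeden.xyz host as the data endpoints:

// Hits the unauthenticated /health endpoint; does not consume rate limit or quota.
async function apiIsReachable(): Promise<boolean> {
  try {
    const response = await fetch("https://api.blockeden.xyz/health");
    return response.ok;
  } catch {
    return false;
  }
}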

Understanding HTTP Status Codes

200 OK

Request succeeded. Your rate limits and quota were not exceeded.

401 Unauthorized

{
  "error": "Unauthorized",
  "message": "Please provide a valid access key"
}

Cause: No access key provided or invalid access key

Solution: Check your URL format includes the correct 20-character access key


429 Too Many Requests

{
  "error": "Too Many Requests",
  "message": "Rate limit exceeded"
}

Cause: You exceeded your RPS limit

Solution:

  • Implement exponential backoff
  • Reduce request frequency
  • Upgrade to a higher tier
  • For FREE tier: check if IP limit (12 req/s) is also being hit

403 Forbidden

{
  "error": "Forbidden",
  "message": "IP address blocked"
}

Cause: Your IP address is on the blocklist

Solution: Contact support if you believe this is an error

Rate Limit Headers

BlockEden.xyz returns the following headers with each response:

X-RateLimit-Limit: 50
X-RateLimit-Remaining: 48
X-RateLimit-Reset: 1704067200

| Header | Description | Example |
| --- | --- | --- |
| X-RateLimit-Limit | Your RPS limit | 50 |
| X-RateLimit-Remaining | Requests remaining in the current second | 48 |
| X-RateLimit-Reset | Unix timestamp when the limit resets | 1704067200 |
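
These headers can drive adaptive pacing on the client side. The sketch below reads them from a fetch response and, when the remaining budget hits zero, sleeps until the reset timestamp; pausing until reset is one possible policy, not the only one:

// Reads the rate limit headers listed above and pauses until the reset time
// when no requests remain in the current window.
async function respectRateLimitHeaders(response: Response): Promise<void> {
  const remaining = response.headers.get("X-RateLimit-Remaining");
  const reset = response.headers.get("X-RateLimit-Reset");
  if (remaining === null || reset === null) return; // headers absent

  if (Number(remaining) <= 0) {
    // X-RateLimit-Reset is a Unix timestamp in seconds
    const waitMs = Math.max(0, Number(reset) * 1000 - Date.now());
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
}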

Best Practices

1. Implement Exponential Backoff

When you receive a 429 response, wait before retrying:

async function makeRequestWithBackoff(url, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url);

    if (response.status === 429) {
      const waitTime = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s
      console.log(`Rate limited. Waiting ${waitTime}ms before retry...`);
      await new Promise((resolve) => setTimeout(resolve, waitTime));
      continue;
    }

    return response;
  }

  throw new Error("Max retries exceeded");
}

2. Client-Side Rate Limiting

Prevent hitting rate limits by implementing client-side throttling:

class RateLimiter {
  private queue: number[] = [];
  private limit: number;
  private interval: number;

  constructor(limit: number, interval: number = 1000) {
    this.limit = limit;
    this.interval = interval;
  }

  async throttle(): Promise<void> {
    const now = Date.now();

    // Remove timestamps older than interval
    this.queue = this.queue.filter((time) => now - time < this.interval);

    if (this.queue.length >= this.limit) {
      const oldestRequest = this.queue[0];
      const waitTime = this.interval - (now - oldestRequest);
      await new Promise((resolve) => setTimeout(resolve, waitTime));
      return this.throttle(); // Retry after waiting
    }

    this.queue.push(now);
  }
}

// Usage
const limiter = new RateLimiter(5); // 5 requests per second

async function makeRequest(url: string) {
  await limiter.throttle();
  return fetch(url);
}

3. Batch Requests When Possible

Reduce request count by batching JSON-RPC calls:

// Instead of making 3 separate requests
const block1 = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "eth_getBlockByNumber",
    params: ["0x1", false],
  }),
});
const block2 = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 2,
    method: "eth_getBlockByNumber",
    params: ["0x2", false],
  }),
});
const block3 = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 3,
    method: "eth_getBlockByNumber",
    params: ["0x3", false],
  }),
});

// Make 1 batched request
const batched = await fetch(url, {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify([
    {
      jsonrpc: "2.0",
      id: 1,
      method: "eth_getBlockByNumber",
      params: ["0x1", false],
    },
    {
      jsonrpc: "2.0",
      id: 2,
      method: "eth_getBlockByNumber",
      params: ["0x2", false],
    },
    {
      jsonrpc: "2.0",
      id: 3,
      method: "eth_getBlockByNumber",
      params: ["0x3", false],
    },
  ]),
});

4. Cache Responses

Cache frequently accessed data to reduce API calls:

const cache = new Map();
const CACHE_TTL = 60000; // 1 minute

async function getCachedData(key, fetchFn) {
  const cached = cache.get(key);

  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }

  const data = await fetchFn();
  cache.set(key, { data, timestamp: Date.now() });
  return data;
}

// Usage
const blockNumber = await getCachedData("latest-block", async () => {
  const response = await fetch(rpcUrl, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method: "eth_blockNumber", params: [] }),
  });
  return response.json();
});

5. Monitor Your Usage

Track your request patterns to optimize usage:

class RequestMonitor {
  private requests: { timestamp: number; status: number }[] = [];

  logRequest(status: number) {
    this.requests.push({ timestamp: Date.now(), status });
  }

  getStats(intervalMs: number = 60000) {
    const now = Date.now();
    const recent = this.requests.filter((r) => now - r.timestamp < intervalMs);

    return {
      total: recent.length,
      success: recent.filter((r) => r.status === 200).length,
      rateLimit: recent.filter((r) => r.status === 429).length,
      rps: recent.length / (intervalMs / 1000),
    };
  }
}

const monitor = new RequestMonitor();

// After each request
monitor.logRequest(response.status);

// Check stats
console.log(monitor.getStats()); // Stats for last 60 seconds

6. Distribute Load Across Multiple Keys

For organizational reasons (per-service keys, simpler rotation, failover), high-volume applications may want to spread traffic across several access keys. Keep in mind that all keys under the same account share one rate limit bucket (see the FAQ below), so rotation alone does not raise your RPS ceiling:

class KeyRotator {
  private keys: string[];
  private currentIndex: number = 0;

  constructor(keys: string[]) {
    this.keys = keys;
  }

  getNextKey(): string {
    const key = this.keys[this.currentIndex];
    this.currentIndex = (this.currentIndex + 1) % this.keys.length;
    return key;
  }
}

const rotator = new KeyRotator([
  'abc123def456ghi789jk',
  'xyz789uvw456rst123ab',
  'mno456jkl123ghi789cd'
]);

// Each request uses a different key
async function makeRequest(endpoint: string) {
  const key = rotator.getNextKey();
  return fetch(`https://api.blockeden.xyz/eth/${key}${endpoint}`);
}

WebSocket Rate Limits

WebSocket connections have the same rate limits as HTTP requests:

  • Connection establishment counts as 1 request
  • Each subscription (e.g., eth_subscribe) counts as 1 request
  • Incoming messages from subscriptions do not count toward your limit
  • Outgoing messages (your requests over the WebSocket) count toward your RPS limit

WebSocket Best Practices

// Uses the Node "ws" package, which exposes an EventEmitter-style API (ws.on).
import WebSocket from "ws";

const ws = new WebSocket(`wss://api.blockeden.xyz/eth/${accessKey}`);

// Reuse connections instead of creating new ones
let isConnected = false;

ws.on("open", () => {
  isConnected = true;

  // Subscribe once, receive many updates
  ws.send(
    JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "eth_subscribe",
      params: ["newHeads"],
    }),
  );
});

ws.on("message", (data) => {
  // Incoming messages don't count toward rate limit
  console.log("New block:", data);
});

// Reuse the open connection for additional subscriptions instead of dialing again
function subscribeToLogs() {
  if (!isConnected) return;

  ws.send(
    JSON.stringify({
      jsonrpc: "2.0",
      id: 2,
      method: "eth_subscribe",
      params: ["logs", { topics: ["0x..."] }],
    }),
  );
}

Upgrading Your Tier

If you're frequently hitting rate limits, consider upgrading:

When to Upgrade

  • ✅ Receiving frequent 429 responses
  • ✅ FREE tier IP limit (12 req/s) is too restrictive
  • ✅ Need higher throughput for production
  • ✅ Require priority support
  • ✅ Building high-frequency applications

How to Upgrade

  1. Visit BlockEden.xyz Dashboard
  2. Navigate to Billing & Plans
  3. Select your desired tier
  4. Complete payment
  5. Rate limits update immediately

Custom Rate Limits

For use cases requiring custom rate limits beyond ENTERPRISE_500:

  • Dedicated infrastructure: Private nodes with custom configurations
  • Custom SLA: Guaranteed uptime and response times
  • Volume discounts: Reduced per-request costs at scale
  • Multi-region deployment: Reduced latency globally

Contact our sales team: sales@blockeden.xyz

Troubleshooting

Problem: Getting 429 errors on FREE tier

Diagnosis: Check whether you're hitting the user limit (5 req/s) or the IP limit (12 req/s)

Solutions:

  1. Implement client-side rate limiting (see examples above)
  2. Batch multiple requests into one
  3. Cache responses when possible
  4. Upgrade to BASIC tier to remove IP restrictions

Problem: Inconsistent 429 errors

Diagnosis: You might be bursting above your RPS limit

Solutions:

  1. Implement request queuing to smooth out bursts
  2. Use the sliding window rate limiter example above
  3. Monitor your request patterns with the RequestMonitor class

Problem: Need higher limits temporarily

Solutions:

  1. Upgrade to the next tier (instant activation)
  2. Contact support for temporary limit increase
  3. Note that all access keys under one account share a single rate limit bucket, so adding keys alone does not raise your limit

Problem: IP blocklist (403 errors)

Diagnosis: Your IP has been added to the blocklist (usually for abuse)

Solutions:

  1. Check if you're using a VPN or proxy
  2. Verify your application isn't making excessive requests
  3. Contact support: support@blockeden.xyz

FAQ

How is the rate limit calculated?

Rate limits use a token bucket algorithm:

  • Your "bucket" holds tokens equal to your RPS limit
  • Each request consumes 1 token
  • Tokens refill smoothly at your RPS rate (not in bursts)
  • Quantum is set to 1 for smooth refilling

Example (BASIC tier, 10 RPS):

  • You can make 10 requests immediately (bucket full)
  • After that, you get 1 token every 100ms
  • No bursting allowed beyond your RPS limit
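
The arithmetic above is easy to reproduce. Here is an illustrative token bucket in TypeScript; it mirrors the described behavior (full bucket at start, smooth refill at the RPS rate) but is not the server's actual implementation:

// Illustrative token bucket: capacity = RPS, refilled smoothly at the RPS rate.
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private rps: number) {
    this.tokens = rps; // the bucket starts full
  }

  tryConsume(): boolean {
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.rps, this.tokens + elapsedSeconds * this.rps);
    this.lastRefill = now;

    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request allowed
    }
    return false; // the server would answer 429 here
  }
}

// BASIC tier example: 10 RPS → 10 immediate requests succeed,
// then roughly one new token becomes available every 100 ms.
const bucket = new TokenBucket(10);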

Can I exceed my RPS limit briefly?

No. The system uses quantum=1, which means smooth rate limiting without bursting. If your RPS is 10, you cannot make 20 requests at once, even if you wait 2 seconds.


Do failed requests count toward my limit?

Yes. All requests count toward your RPS limit, regardless of the response:

  • ✅ 200 OK → counts
  • ✅ 400 Bad Request → counts
  • ✅ 429 Too Many Requests → counts
  • ✅ 500 Internal Error → counts

How do compute units differ from RPS?

  • RPS: Controls request frequency (requests per second)
  • Compute Units: Controls daily quota (total computational cost)

Both limits are independent. You could hit your RPS limit without hitting your daily quota, or vice versa.

See Understanding Compute Units for more details.
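
If you want both dimensions visible in your client, you can track them side by side. A sketch, where the compute unit cost passed in is a hypothetical estimate (actual costs depend on the method and are documented in Understanding Compute Units):

// Tracks requests-per-second and an estimated daily compute unit total side by side.
// The cost passed to record() is a hypothetical estimate, not an official CU price.
class UsageTracker {
  private timestamps: number[] = [];
  private dailyComputeUnits = 0;

  record(estimatedComputeUnits: number): void {
    const now = Date.now();
    this.timestamps.push(now);
    this.timestamps = this.timestamps.filter((t) => now - t < 1000);
    this.dailyComputeUnits += estimatedComputeUnits;
  }

  currentRps(): number {
    return this.timestamps.length;
  }

  computeUnitsToday(): number {
    return this.dailyComputeUnits; // reset this at midnight UTC alongside the quota
  }
}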


Can I have multiple access keys with different rate limits?

No. Rate limiting is applied at the user level, not per access key:

  • All access keys for the same user share the same rate limit bucket
  • Rate limit is determined by your pricing plan (FREE, BASIC, PRO, ENTERPRISE_50, ENTERPRISE_500)
  • This prevents circumventing rate limits by creating multiple keys
  • Upgrading your plan increases the rate limit for all your access keys

What happens when I upgrade mid-day?

Your rate limit updates immediately. However:

  • Daily quota resets at midnight UTC
  • Current usage carries over until reset
  • New RPS limit takes effect for the next request

Do GraphQL queries count as 1 request?

GraphQL queries count based on complexity:

  • Simple queries: 1 compute unit
  • Complex queries with multiple fields: 2-10+ compute units
  • Mutations: 5-20+ compute units

For RPS purposes, each GraphQL query = 1 request.


How are WebSocket subscriptions counted?

  • Initial connection: 1 request
  • Each subscribe call: 1 request
  • Incoming subscription data: 0 requests (free)
  • Outgoing requests over WS: 1 request each

Support

Need help with rate limiting? Contact support@blockeden.xyz.