
Rate Limits Quick Reference

Quick lookup tables and decision trees for choosing the right tier and staying within limits.

📊 Tier Comparison at a Glance

| Tier | RPS | IP Limit | Best For | Monthly Price* |
| --- | --- | --- | --- | --- |
| FREE | 5 | ✅ 12 req/s | Development, testing | Free |
| BASIC | 10 | ❌ None | Small apps | $ |
| PRO | 10 | ❌ None | Production apps | $$ |
| ENTERPRISE_50 | 50 | ❌ None | High-traffic apps | $$$ |
| ENTERPRISE_500 | 500 | ❌ None | Mission-critical | Contact us |

Note: RPS is enforced per user (shared across all access keys), not per access key.

*See pricing page for current rates

🎯 Which Tier Do I Need?

Decision Tree

```text
How many requests per second do you need?
│
├─ Less than 5 req/s
│  └─ Do you need to bypass IP restrictions?
│     ├─ No  → ✅ FREE tier
│     └─ Yes → ✅ BASIC tier
│
├─ Between 5-10 req/s
│  └─ Do you need priority support?
│     ├─ No  → ✅ BASIC tier
│     └─ Yes → ✅ PRO tier
│
├─ Between 10-50 req/s
│  └─ ✅ ENTERPRISE_50 tier
│
└─ More than 50 req/s
   └─ ✅ ENTERPRISE_500 tier or custom
```
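
If you want to encode this choice in a script (for example, a capacity-planning helper), here is a minimal function that mirrors the tree above; it is purely illustrative, and the tier names come from the comparison table:

```ts
// Mirrors the decision tree above; returns a suggested tier for the stated needs
type Tier = "FREE" | "BASIC" | "PRO" | "ENTERPRISE_50" | "ENTERPRISE_500";

function recommendTier(
  rps: number,
  needsIpBypass = false,
  needsPrioritySupport = false,
): Tier {
  if (rps < 5) return needsIpBypass ? "BASIC" : "FREE";
  if (rps <= 10) return needsPrioritySupport ? "PRO" : "BASIC";
  if (rps <= 50) return "ENTERPRISE_50";
  return "ENTERPRISE_500"; // above 50 req/s, consider a custom plan
}
```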

By Use Case

| Use Case | Recommended Tier | Why |
| --- | --- | --- |
| Local development | FREE | Cost-effective, sufficient for testing |
| Hobby project | FREE or BASIC | Low traffic, minimal cost |
| Startup MVP | BASIC or PRO | Room to grow, professional support |
| Production DApp | PRO or ENTERPRISE_50 | Reliable, scalable |
| DEX / High-frequency | ENTERPRISE_500 | Maximum throughput |
| Analytics platform | ENTERPRISE_50/500 | Sustained high RPS |
| Wallet application | PRO or ENTERPRISE_50 | Depends on user base |
| NFT marketplace | ENTERPRISE_50 | Burst traffic handling |

🚦 Rate Limit Behavior

FREE Tier: Dual Limits (AND Logic)

```text
Request arrives
        ↓
Check User RPS (5 req/s) ──── FAIL → 429 Too Many Requests
        ↓ PASS
Check IP RPS (12 req/s) ───── FAIL → 429 Too Many Requests
        ↓ PASS
Process Request → 200 OK
```

Key Point: Both limits must pass for FREE tier.

Paid Tiers: Single Limit (User RPS Only)

```text
Request arrives
        ↓
Check User RPS (tier-specific) ──── FAIL → 429 Too Many Requests
        ↓ PASS
Process Request → 200 OK
```

Key Point: No IP-based restrictions on paid tiers.
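
Conceptually, the FREE tier behaves like two independent token buckets that must both allow a request, while paid tiers use only the user bucket. A minimal client-side model of that AND logic (the `TokenBucket` class is illustrative, not the service's actual implementation; 5 and 12 req/s are the documented FREE-tier limits):

```ts
// Illustrative client-side model of the dual check; not the service's actual implementation
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(private ratePerSec: number) {
    this.tokens = ratePerSec;
  }

  tryConsume(): boolean {
    const now = Date.now();
    // Refill proportionally to elapsed time, capped at one second's worth of tokens
    this.tokens = Math.min(
      this.ratePerSec,
      this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSec,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}

const userBucket = new TokenBucket(5); // FREE tier: 5 req/s per user
const ipBucket = new TokenBucket(12);  // FREE tier: 12 req/s per IP

// Both checks must pass on FREE (AND logic); paid tiers would check only userBucket
function allowFreeTierRequest(): boolean {
  return userBucket.tryConsume() && ipBucket.tryConsume();
}
```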

📈 Scaling Strategies

Vertical Scaling (Upgrade Tier)

| Upgrade | RPS Increase | RPS Change | IP Restriction Removed |
| --- | --- | --- | --- |
| FREE → BASIC | +5 req/s | 5 → 10 | ✅ Yes |
| BASIC → PRO | 0 req/s | 10 → 10 | Already removed |
| PRO → ENTERPRISE_50 | +40 req/s | 10 → 50 | Already removed |
| ENTERPRISE_50 → ENTERPRISE_500 | +450 req/s | 50 → 500 | Already removed |

Horizontal Scaling (Multiple Keys)

| Strategy | Setup | Total RPS | Cost |
| --- | --- | --- | --- |
| 1 FREE key | Simple | 5 | Free |
| 3 FREE keys | Key rotation | 15 | Free |
| 1 BASIC key | Simple | 10 | $ |
| 3 BASIC keys | Key rotation | 30 | $$$ |
| 1 ENTERPRISE_50 | Simple | 50 | $$$ |

Note: Using multiple keys requires key rotation logic in your application.
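
A minimal round-robin sketch of that rotation logic; the keys are placeholders and the endpoint format follows the example URL used later on this page:

```ts
// Hypothetical keys; requests are spread across them in round-robin order
const keys = ["KEY_1", "KEY_2", "KEY_3"];
let next = 0;

function nextEndpoint(): string {
  const key = keys[next];
  next = (next + 1) % keys.length;
  return `https://api.blockeden.xyz/eth/${key}`;
}

async function rotatedRequest(body: unknown): Promise<Response> {
  return fetch(nextEndpoint(), {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
}
```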

⚡ Request Budget Calculator

FREE Tier

| Scenario | Math | Daily Requests |
| --- | --- | --- |
| Constant rate at limit | 5 req/s × 86,400 s | 432,000 |
| Constant rate at 50% | 2.5 req/s × 86,400 s | 216,000 |
| Bursts (10 req every 2 s) | Averages to 5 req/s | 432,000 |

BASIC/PRO Tier

| Scenario | Math | Daily Requests |
| --- | --- | --- |
| Constant rate at limit | 10 req/s × 86,400 s | 864,000 |
| Constant rate at 50% | 5 req/s × 86,400 s | 432,000 |

ENTERPRISE_50 Tier

| Scenario | Math | Daily Requests |
| --- | --- | --- |
| Constant rate at limit | 50 req/s × 86,400 s | 4,320,000 |
| Constant rate at 50% | 25 req/s × 86,400 s | 2,160,000 |

ENTERPRISE_500 Tier

| Scenario | Math | Daily Requests |
| --- | --- | --- |
| Constant rate at limit | 500 req/s × 86,400 s | 43,200,000 |
| Constant rate at 50% | 250 req/s × 86,400 s | 21,600,000 |

Important: These are RPS limits. Your daily quota (compute units) may be lower. Check both limits in your dashboard.
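
To script these estimates yourself, the arithmetic is simply RPS × 86,400 seconds × average utilization; for example:

```ts
// Daily request budget = RPS limit × seconds per day × average utilization (0..1)
const SECONDS_PER_DAY = 86_400;

function dailyBudget(rpsLimit: number, utilization = 1): number {
  return Math.floor(rpsLimit * SECONDS_PER_DAY * utilization);
}

dailyBudget(5);       // FREE at the limit    -> 432,000
dailyBudget(10, 0.5); // BASIC/PRO at 50%     -> 432,000
dailyBudget(500);     // ENTERPRISE_500 limit -> 43,200,000
```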

🔧 Common Optimization Scenarios

Scenario 1: Getting 429 on FREE Tier

Symptoms:

Response: 429 Too Many Requests
Current RPS: ~6 requests/second
Tier: FREE

Diagnosis: Exceeding 5 req/s user limit

Solutions (ranked by effort):

  1. ⚡ Add client-side throttling → Immediate
  2. ⚡ Implement request queuing → 1-2 hours
  3. 💰 Upgrade to BASIC tier → Instant (2x RPS)
  4. 🔧 Batch requests → 2-4 hours of refactoring

Scenario 2: Intermittent 429 on BASIC Tier

Symptoms:

Response: Mostly 200, occasional 429
Average RPS: ~8 requests/second
Peak RPS: ~15 requests/second
Tier: BASIC (10 req/s limit)

Diagnosis: Burst traffic exceeding 10 req/s

Solutions:

  1. ⚡ Implement request queue with max rate → 2 hours
  2. ⚡ Add caching for repeated requests → 4 hours (see the sketch after this list)
  3. 💰 Upgrade to ENTERPRISE_50 (5x RPS) → Instant
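
For solution 2, a minimal in-memory cache sketch; it assumes identical JSON-RPC calls can safely be reused for a short TTL, and the TTL and cache-key scheme are illustrative:

```ts
// Cache identical JSON-RPC calls for a short TTL to shave repeated requests off your RPS
const cache = new Map<string, { expires: number; value: unknown }>();

async function cachedRpc(
  url: string,
  method: string,
  params: unknown[],
  ttlMs = 5_000,
): Promise<unknown> {
  const key = `${method}:${JSON.stringify(params)}`;
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.value;

  const response = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ jsonrpc: "2.0", id: 1, method, params }),
  });
  const { result } = await response.json();

  cache.set(key, { expires: Date.now() + ttlMs, value: result });
  return result;
}
```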

Scenario 3: Need to Handle Spikes

Symptoms:

Normal traffic: 30 req/s
Peak traffic: 200 req/s (during NFT drops, events)
Current tier: ENTERPRISE_50 (50 req/s)

Solutions:

  1. 💰 Upgrade to ENTERPRISE_500 → Instant
  2. 🔧 Implement request queue + retry logic → 1 day (see the sketch after this list)
  3. 🔧 Use multiple access keys with load balancing → 2 days
  4. 💰 Contact sales for burst pricing → Custom
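
For solution 2, a retry-with-backoff sketch that lets short spikes wait instead of failing outright; the retry count and delays are illustrative:

```ts
// Retry on 429 with exponential backoff so short spikes queue up instead of failing
async function fetchWithRetry(
  url: string,
  init: RequestInit,
  maxRetries = 5,
): Promise<Response> {
  for (let attempt = 0; ; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429 || attempt >= maxRetries) return response;

    // Back off: 250ms, 500ms, 1s, 2s, 4s ...
    const delayMs = 250 * 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}
```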

Scenario 4: Multi-Region Application

Symptoms:

Users: Global (US, EU, Asia)
Requirements: Low latency, high availability
Current setup: Single key, single region

Solutions:

  1. 🔧 Use multiple regional keys → 1 day
  2. 💰 Enterprise plan with multi-region → Contact sales
  3. 🔧 Implement geo-based key selection → 2 days
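
A sketch of solution 3, geo-based key selection; the regions, keys, and the idea of one key per region are placeholders for whatever your plan actually provides:

```ts
// Pick an access key/endpoint by the caller's region (placeholder regions and keys)
const regionalEndpoints: Record<string, string> = {
  us: "https://api.blockeden.xyz/eth/US_KEY",
  eu: "https://api.blockeden.xyz/eth/EU_KEY",
  asia: "https://api.blockeden.xyz/eth/ASIA_KEY",
};

function endpointFor(region: string): string {
  return regionalEndpoints[region] ?? regionalEndpoints.us; // default to US
}
```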

📋 Error Response Cheat Sheet

| Status | Error | Cause | Immediate Fix | Long-term Fix |
| --- | --- | --- | --- | --- |
| 401 | Unauthorized | Missing/invalid key | Check URL format | Regenerate key in dashboard |
| 403 | Forbidden | IP blocked | Contact support | Review traffic patterns |
| 429 | Too Many Requests | RPS limit exceeded | Wait 1 second | Add rate limiting |
| 429 | Too Many Requests | Quota exceeded | Wait until tomorrow | Upgrade plan or optimize |

🎓 Implementation Cheat Sheet

Minimal Rate Limiter (JavaScript)

```js
// Simple delay-based limiter
class SimpleLimiter {
  constructor(rps) {
    this.delay = 1000 / rps;
    this.lastCall = 0;
  }

  async wait() {
    const now = Date.now();
    const timeSinceLastCall = now - this.lastCall;

    if (timeSinceLastCall < this.delay) {
      await new Promise((r) => setTimeout(r, this.delay - timeSinceLastCall));
    }

    this.lastCall = Date.now();
  }
}

// Usage (BASIC tier = 10 RPS)
const limiter = new SimpleLimiter(10);

async function makeRequest(url) {
  await limiter.wait();
  return fetch(url);
}
```

Production Rate Limiter (TypeScript)

```ts
// Sliding window limiter with queue
class ProductionLimiter {
  private granted: number[] = [];          // timestamps of requests already released
  private waiting: Array<() => void> = []; // resolvers for requests still queued
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private rps: number,
    private windowMs: number = 1000,
  ) {}

  async acquire(): Promise<void> {
    return new Promise((resolve) => {
      this.waiting.push(resolve);
      this.processQueue();
    });
  }

  private processQueue() {
    const now = Date.now();
    const windowStart = now - this.windowMs;

    // Drop grant timestamps that have slid out of the window
    this.granted = this.granted.filter((t) => t > windowStart);

    // Release queued requests while the window still has capacity
    while (this.waiting.length > 0 && this.granted.length < this.rps) {
      this.granted.push(Date.now());
      this.waiting.shift()!();
    }

    // If requests are still queued, try again when the oldest grant expires
    if (this.waiting.length > 0 && this.timer === null) {
      const delay = Math.max(this.granted[0] + this.windowMs - now, 10);
      this.timer = setTimeout(() => {
        this.timer = null;
        this.processQueue();
      }, delay);
    }
  }
}

// Usage
const limiter = new ProductionLimiter(50); // ENTERPRISE_50

async function makeRequest(url: string) {
  await limiter.acquire();
  return fetch(url);
}
```

Batch Request Helper

```js
// Automatically batches JSON-RPC requests
class BatchHelper {
  constructor(url, maxBatchSize = 10, flushIntervalMs = 100) {
    this.url = url;
    this.maxBatchSize = maxBatchSize;
    this.flushIntervalMs = flushIntervalMs;
    this.pendingRequests = [];
    this.timer = null;
  }

  request(method, params) {
    return new Promise((resolve, reject) => {
      this.pendingRequests.push({
        method,
        params,
        resolve,
        reject,
        id: Date.now() + Math.random(),
      });

      if (this.pendingRequests.length >= this.maxBatchSize) {
        this.flush();
      } else if (!this.timer) {
        this.timer = setTimeout(() => this.flush(), this.flushIntervalMs);
      }
    });
  }

  async flush() {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }

    if (this.pendingRequests.length === 0) return;

    const batch = this.pendingRequests.splice(0, this.maxBatchSize);

    const payload = batch.map((req) => ({
      jsonrpc: "2.0",
      id: req.id,
      method: req.method,
      params: req.params,
    }));

    try {
      const response = await fetch(this.url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(payload),
      });

      const results = await response.json();

      results.forEach((result) => {
        const req = batch.find((r) => r.id === result.id);
        if (req) {
          if (result.error) {
            req.reject(result.error);
          } else {
            req.resolve(result.result);
          }
        }
      });
    } catch (error) {
      batch.forEach((req) => req.reject(error));
    }
  }
}

// Usage
const batcher = new BatchHelper("https://api.blockeden.xyz/eth/YOUR_KEY");

// These 3 calls will be batched into 1 HTTP request
const [block1, block2, block3] = await Promise.all([
  batcher.request("eth_getBlockByNumber", ["0x1", false]),
  batcher.request("eth_getBlockByNumber", ["0x2", false]),
  batcher.request("eth_getBlockByNumber", ["0x3", false]),
]);
```

📞 Quick Support Matrix

| Issue | Severity | FREE | BASIC | PRO | ENTERPRISE_50 | ENTERPRISE_500 |
| --- | --- | --- | --- | --- | --- | --- |
| Rate limit questions | Low | Community/Docs | Email 48h | Email 24h | Email 12h | Phone/Slack |
| Account issues | Medium | Email 72h | Email 48h | Email 24h | Email 12h | Phone 2h |
| Service outage | High | Status page | Email 24h | Email 12h | Email 4h | Phone 1h |
| Custom integration | - | Not available | Email quote | Priority queue | Dedicated engineer | White-glove |

Contact:


See also: