Rate Limits
The WebPeek API implements rate limiting to ensure fair usage and maintain service quality for all users. This guide explains how rate limits work and how to handle them gracefully in your application.
Pricing & Rate Limit Tiers
WebPeek offers flexible pricing with both monthly and yearly billing options. Save ~17% with yearly plans (approximately 2 months free).
| Plan | Monthly | Yearly | Credits/Month | Rate Limit | Projects | Seats | Storage |
|---|---|---|---|---|---|---|---|
| Free | $0 | $0 | 100 | 5/min | 1 | 1 | 100 MB |
| Starter | $9/mo | $90/yr | 1,000 | 20/min | 3 | 3 | 1 GB |
| Pro | $29/mo | $290/yr | 10,000 | 100/min | 10 | 10 | 10 GB |
| Scale | $99/mo | $990/yr | 100,000 | 500/min | 50 | 25 | 100 GB |
| Enterprise | Custom | Custom | Custom | Custom | Unlimited | Unlimited | Custom |
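The "~17%" yearly discount follows directly from the prices in the table. As a quick sanity check (plan prices copied from the table above):

```python
# Verify the approximate yearly discount from the pricing table.
PLANS = {
    "Starter": {"monthly": 9, "yearly": 90},
    "Pro": {"monthly": 29, "yearly": 290},
    "Scale": {"monthly": 99, "yearly": 990},
}

for name, price in PLANS.items():
    annualized = price["monthly"] * 12          # cost if paid month-to-month
    savings = 1 - price["yearly"] / annualized  # fractional discount
    print(f"{name}: {savings:.1%} saved ({annualized - price['yearly']} USD)")
```

Each paid tier's yearly price works out to exactly 10 months of the monthly price, i.e. a 1/6 (~16.7%) discount, which is where "approximately 2 months free" comes from.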
What's Included in All Plans
- ✓ Access to all endpoints: /metadata, /seo-audit, /performance, /snapshot
- ✓ 80% warning threshold with notifications
- ✓ Hard cap enforcement to prevent overages
- ✓ Organization-based billing with Stripe
- ✓ Real-time usage tracking and analytics
- ✓ Standard support via email
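The 80% warning threshold maps directly onto each plan's monthly credit allowance. A minimal sketch (credit figures taken from the table above; the Pro value matches the `warning_threshold` field shown later in the `/usage` response):

```python
# Compute the 80% warning threshold for each plan's monthly credits.
CREDITS = {"Free": 100, "Starter": 1_000, "Pro": 10_000, "Scale": 100_000}
WARNING_RATIO = 0.8

def warning_threshold(plan: str) -> int:
    """Credits consumed at which the warning notification fires."""
    return int(CREDITS[plan] * WARNING_RATIO)

print(warning_threshold("Pro"))  # 8000
```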
Additional Features by Tier
Starter & Above:
- Analytics dashboard
- Team collaboration
- Multiple projects

Pro & Above:
- Webhook notifications
- Priority support
- Advanced analytics

Scale:
- Dedicated resources
- Custom integrations
- 99.9% uptime SLA

Enterprise:
- Custom SLA agreements
- Dedicated Slack channel
- On-premise deployment options
All Endpoints Included
All subscription tiers include access to all core endpoints. The monthly request limit applies across all endpoints combined:
Metadata API
Fast metadata extraction including Open Graph, Twitter Cards, and Schema.org data.
Endpoint: /metadata
Processing time: 200-500ms
All tiers: Included in base limit
SEO Audit API
Comprehensive SEO analysis with actionable recommendations.
Endpoint: /seo-audit
Processing time: 2-5 seconds
All tiers: Included in base limit
Snapshot API
High-quality screenshots and full-page snapshots with advanced options.
Endpoint: /snapshot
Processing time: 3-10 seconds
Note: May count as multiple requests for full-page captures
Performance API
Core Web Vitals and performance metrics powered by Lighthouse.
Endpoint: /performance
Processing time: 10-30 seconds
All tiers: Included in base limit
Rate Limit Headers
Every API response includes headers with rate limit information:
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1699564800
X-RateLimit-Reset-After: 3600
Retry-After: 3600

| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum number of requests allowed in the current time window |
| X-RateLimit-Remaining | Number of requests remaining in the current time window |
| X-RateLimit-Reset | Unix timestamp when the rate limit resets |
| X-RateLimit-Reset-After | Seconds until the rate limit resets |
| Retry-After | Seconds to wait before making another request (only on 429 errors) |
Rate Limit Exceeded
When you exceed the rate limit, the API returns a 429 Too Many Requests response:
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1699564800
Retry-After: 3600
Content-Type: application/json

{
  "success": false,
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "Rate limit exceeded. Try again in 3600 seconds.",
    "details": {
      "limit": 100,
      "remaining": 0,
      "reset": 1699564800,
      "reset_after": 3600
    }
  }
}

Handling Rate Limits
Best practices for handling rate limits in your application:
JavaScript / Node.js
async function fetchWithRateLimit(url, options = {}) {
  const maxRetries = 3;
  let retries = 0;

  while (retries < maxRetries) {
    try {
      const response = await fetch(url, options);

      // Check rate limit headers
      const remaining = parseInt(response.headers.get('X-RateLimit-Remaining') || '0');
      const resetAfter = parseInt(response.headers.get('X-RateLimit-Reset-After') || '0');

      // Log rate limit status
      console.log(`Rate limit remaining: ${remaining}`);

      // Handle rate limit exceeded
      if (response.status === 429) {
        const retryAfter = parseInt(response.headers.get('Retry-After') || '60');
        console.log(`Rate limit exceeded. Retrying after ${retryAfter} seconds...`);

        // Wait before retrying
        await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
        retries++;
        continue;
      }

      // Success - return response
      if (response.ok) {
        return await response.json();
      }

      // Other error
      throw new Error(`HTTP error! status: ${response.status}`);
    } catch (error) {
      if (retries >= maxRetries - 1) {
        throw error;
      }
      retries++;
      await new Promise(resolve => setTimeout(resolve, 1000));
    }
  }

  throw new Error('Max retries exceeded');
}

// Usage example
const data = await fetchWithRateLimit('https://api.webpeek.dev/metadata?url=https://github.com');
console.log(data);

Python
import requests
import time
from typing import Dict, Any
def fetch_with_rate_limit(url: str, max_retries: int = 3) -> Dict[str, Any]:
    """Fetch data from API with automatic rate limit handling."""
    retries = 0
    while retries < max_retries:
        try:
            response = requests.get(url)

            # Check rate limit headers
            remaining = int(response.headers.get('X-RateLimit-Remaining', 0))
            reset_after = int(response.headers.get('X-RateLimit-Reset-After', 0))
            print(f"Rate limit remaining: {remaining}")

            # Handle rate limit exceeded
            if response.status_code == 429:
                retry_after = int(response.headers.get('Retry-After', 60))
                print(f"Rate limit exceeded. Retrying after {retry_after} seconds...")
                time.sleep(retry_after)
                retries += 1
                continue

            # Raise for other HTTP errors
            response.raise_for_status()

            # Success
            return response.json()
        except requests.exceptions.RequestException:
            if retries >= max_retries - 1:
                raise
            retries += 1
            time.sleep(1)
    raise Exception('Max retries exceeded')

# Usage example
try:
    data = fetch_with_rate_limit('https://api.webpeek.dev/metadata?url=https://github.com')
    print(data)
except Exception as e:
    print(f"Error: {e}")

Rate Limiter with Queue
class RateLimiter {
  constructor(requestsPerMinute = 60) {
    this.requestsPerMinute = requestsPerMinute;
    this.queue = [];
    this.processing = false;
    this.requestTimes = [];
  }

  async add(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    while (this.queue.length > 0) {
      // Clean old request times
      const now = Date.now();
      this.requestTimes = this.requestTimes.filter(
        time => now - time < 60000
      );

      // Check if we can make a request
      if (this.requestTimes.length >= this.requestsPerMinute) {
        const oldestRequest = Math.min(...this.requestTimes);
        const waitTime = 60000 - (now - oldestRequest);
        console.log(`Rate limit reached. Waiting ${waitTime}ms...`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        continue;
      }

      // Process next request
      const { fn, resolve, reject } = this.queue.shift();
      this.requestTimes.push(Date.now());

      try {
        const result = await fn();
        resolve(result);
      } catch (error) {
        reject(error);
      }

      // Small delay between requests
      await new Promise(resolve => setTimeout(resolve, 100));
    }

    this.processing = false;
  }
}

// Usage example
const limiter = new RateLimiter(30); // 30 requests per minute

async function fetchMetadata(url) {
  return limiter.add(async () => {
    const response = await fetch(`https://api.webpeek.dev/metadata?url=${encodeURIComponent(url)}`);
    return response.json();
  });
}

// Batch process URLs
const urls = [
  'https://github.com',
  'https://stripe.com',
  'https://vercel.com',
  // ... more URLs
];

for (const url of urls) {
  const data = await fetchMetadata(url);
  console.log(`Processed: ${url}`);
}

Best Practices
Monitor Rate Limit Headers
Always check X-RateLimit-Remaining before making requests. If it's getting low, implement backoff strategies or queue requests.
Implement Exponential Backoff
When you receive a 429 response, wait the time specified in the Retry-After header. For subsequent failures, use exponential backoff (2x, 4x, 8x, etc.).
Use Request Queues
For batch processing, implement a queue system that respects rate limits. Process requests sequentially with appropriate delays between them.
Leverage Caching
Cache API responses on your end to reduce the number of requests. WebPeek also caches responses server-side, indicated by the cached field in responses.
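A client-side cache can be as simple as a dict keyed by URL with a time-to-live. A minimal sketch (the 15-minute TTL is an arbitrary choice; tune it to how fresh you need the data):

```python
import time

class TTLCache:
    """Minimal time-based cache for API responses."""

    def __init__(self, ttl_seconds: float = 900):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (stored_at, value)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        stored_at, value = entry
        if time.monotonic() - stored_at > self.ttl:
            del self._store[key]  # expired
            return None
        return value

    def set(self, key: str, value) -> None:
        self._store[key] = (time.monotonic(), value)

cache = TTLCache(ttl_seconds=900)
cache.set("https://api.webpeek.dev/metadata?url=https://github.com", {"title": "GitHub"})
```

Check the cache before calling the API and store the parsed response after each successful request; every cache hit is a credit saved.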
Distribute Load
If processing large batches, spread requests over time rather than sending them all at once. Consider processing during off-peak hours.
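Spreading a batch evenly amounts to dividing the window by your per-minute limit. A sketch (rate figures from the tier table above):

```python
import time

def paced(items, requests_per_minute: int):
    """Yield items no faster than the given per-minute rate."""
    interval = 60.0 / requests_per_minute  # seconds between requests
    for i, item in enumerate(items):
        if i:
            time.sleep(interval)
        yield item

# e.g. on the Starter tier (20/min), one request goes out every 3 seconds:
# for url in paced(urls, requests_per_minute=20):
#     process(url)
```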
Use Webhooks for Long Tasks
For resource-intensive endpoints like Performance and Snapshot, consider using webhooks (if available) to receive results asynchronously instead of polling.
Upgrade When Needed
If you consistently hit rate limits, consider upgrading to a higher tier. Enterprise plans offer custom limits tailored to your needs.
Quota Management
Track your API usage and remaining quota through the dashboard or API:
curl "https://api.webpeek.dev/usage" \
-H "X-API-Key: your_api_key_here"{
"success": true,
"data": {
"plan": "pro",
"billing_interval": "year",
"period": {
"start": "2025-11-01T00:00:00Z",
"end": "2025-12-01T00:00:00Z"
},
"usage": {
"credits_used": 4532,
"credits_limit": 10000,
"credits_remaining": 5468,
"percentage_used": 45.32,
"warning_threshold": 8000
},
"rate_limits": {
"requests_per_minute": 100
},
"quota": {
"projects": { "used": 3, "limit": 10 },
"seats": { "used": 5, "limit": 10 },
"storage_mb": { "used": 2847, "limit": 10240 }
}
}
}Upgrading Limits
Need higher limits? You can upgrade to a higher tier at any time from your dashboard; for limits beyond the Scale tier, contact us about an Enterprise plan with custom limits.
Common Questions
Do cached responses count toward my limit?
Yes, all API requests count toward your rate limit, even if served from cache. However, cached responses are much faster and don't consume processing resources.
What happens when I exceed my monthly quota?
When you reach 80% of your monthly credits, you'll receive a warning notification. At 100%, hard cap enforcement prevents additional requests until your quota resets at the start of your next billing cycle. The X-RateLimit-Reset header indicates exactly when your quota will refresh.
Can I purchase additional credits?
Currently, we offer tiered plans only. If you need more credits, upgrade to the next tier. Enterprise customers can negotiate custom pricing and credit packages.
How do yearly plans work with monthly credits?
Yearly plans provide the same monthly credit allocation as monthly plans (e.g., Pro gets 10,000 credits/month). Credits reset monthly, but your billing occurs annually at a discounted rate (~17% off monthly pricing).
Are credits shared across endpoints?
Yes, the monthly credit quota and per-minute rate limits apply to all endpoints combined. All API requests count toward your total credit limit regardless of which endpoint you use (/metadata, /seo-audit, /performance, or /snapshot).