Rate limiting

Traceable enforces rate limits on all API endpoints. Rate limits protect platform stability and ensure fair access across all integrations.

Current limits

| Endpoint type | Key | Limit |
|---|---|---|
| Public endpoints (GET /api/dpp/*, GET /api/health) | Per IP address | 60 requests / minute |
| Authenticated endpoints (POST /api/poli/access, GET /api/poli/verify) | Per API key | 120 requests / minute |

Limits are applied on a sliding window basis — the window rolls continuously, not on a fixed clock minute.
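The sliding-window behaviour can be sketched as a simple client-side model. This is illustrative only, not the server's implementation: each request is allowed only if fewer than the limit occurred in the trailing window.

```typescript
// Minimal sliding-window counter: a request is allowed only if fewer
// than `limit` requests occurred in the trailing `windowMs` milliseconds.
class SlidingWindowLimiter {
  private timestamps: number[] = [];

  constructor(private limit: number, private windowMs: number) {}

  tryRequest(now: number = Date.now()): boolean {
    // Drop timestamps that have rolled out of the trailing window
    this.timestamps = this.timestamps.filter(t => now - t < this.windowMs);
    if (this.timestamps.length >= this.limit) {
      return false; // over the limit for this window
    }
    this.timestamps.push(now);
    return true;
  }
}
```

Unlike a fixed clock-minute window, requests made at 12:00:59 still count against requests made at 12:01:30, because the window trails continuously behind the current moment.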

Rate limit response headers

Every API response includes the following headers to allow your integration to track its usage:

| Header | Type | Description |
|---|---|---|
| X-RateLimit-Limit | integer | The maximum number of requests allowed in the current window |
| X-RateLimit-Remaining | integer | The number of requests remaining in the current window |
| X-RateLimit-Reset | integer | Unix timestamp (seconds) when the current window resets |

Example headers on a healthy response:

HTTP/2 200
Content-Type: application/json
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 47
X-RateLimit-Reset: 1712501520

Monitor X-RateLimit-Remaining in your integration. When it approaches zero, slow your request rate proactively before hitting the limit.
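One way to act on these headers is to compute a pause from the values in the last response. The helper below is a sketch: the threshold of 5 remaining requests is an arbitrary safety margin, not part of the API.

```typescript
// Compute how long to pause before the next request, given the
// X-RateLimit-Remaining and X-RateLimit-Reset values from the previous
// response. The threshold of 5 is an arbitrary safety margin.
function throttleDelayMs(
  remaining: number,
  resetUnixSeconds: number,
  nowMs: number = Date.now(),
  threshold = 5
): number {
  if (remaining > threshold) {
    return 0; // plenty of headroom, no need to slow down
  }
  // Wait until the window resets (never return a negative delay)
  return Math.max(0, resetUnixSeconds * 1000 - nowMs);
}
```

After each response, read the headers with `response.headers.get('X-RateLimit-Remaining')` and `response.headers.get('X-RateLimit-Reset')`, then `await new Promise(r => setTimeout(r, throttleDelayMs(remaining, reset)))` before the next request.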

429 Too Many Requests

When a rate limit is exceeded, the API returns:

HTTP Status: 429 Too Many Requests

Response headers:

HTTP/2 429
Content-Type: application/json
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712501520
Retry-After: 23

Response body:

{
"error": "Rate limit exceeded",
"code": "RATE_LIMITED",
"retryAfter": 23
}

The retryAfter value in the body and the Retry-After header are identical — both indicate the number of seconds until the rate limit window resets and requests will be accepted again.

For integrations that may hit rate limits, implement exponential backoff with jitter:

async function fetchWithBackoff(
  url: string,
  options?: RequestInit,
  maxRetries = 4
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    if (response.status !== 429) {
      return response; // success or non-rate-limit error
    }

    if (attempt === maxRetries) {
      return response; // exhausted retries, return the 429
    }

    // Read retryAfter from the response body
    const body = await response.json().catch(() => ({}));
    const retryAfter = body.retryAfter ?? 60;

    // Exponential backoff: 1s, 2s, 4s, 8s...
    const baseDelay = Math.pow(2, attempt) * 1000;
    // Add jitter: random additional 0–1000ms to avoid thundering herd
    const jitter = Math.random() * 1000;
    // Respect the server's retryAfter if it's longer than our computed delay
    const delay = Math.max(baseDelay + jitter, retryAfter * 1000);

    console.warn(
      `Rate limited (attempt ${attempt + 1}/${maxRetries}). ` +
      `Retrying in ${Math.round(delay / 1000)}s...`
    );

    await new Promise(resolve => setTimeout(resolve, delay));
  }

  throw new Error('Should not reach here');
}

// Usage
const response = await fetchWithBackoff(
  'https://app.traceable.digital/api/dpp/swiftvolt-48v-100ah-ev-pack'
);
The same pattern in Python:

import time
import random
import requests


def fetch_with_backoff(url: str, max_retries: int = 4, **kwargs) -> requests.Response:
    """
    Fetch a URL with exponential backoff on 429 responses.

    Args:
        url: The URL to request.
        max_retries: Maximum number of retries on rate limit. Default 4.
        **kwargs: Additional arguments passed to requests.get().

    Returns:
        The final requests.Response object.
    """
    for attempt in range(max_retries + 1):
        response = requests.get(url, timeout=10, **kwargs)

        if response.status_code != 429:
            return response

        if attempt == max_retries:
            return response  # exhausted retries

        # Parse retryAfter from the response body
        try:
            body = response.json()
            retry_after = body.get("retryAfter", 60)
        except ValueError:
            retry_after = 60

        # Exponential backoff with jitter
        base_delay = 2 ** attempt  # seconds: 1, 2, 4, 8
        jitter = random.uniform(0, 1)
        delay = max(base_delay + jitter, retry_after)

        print(
            f"Rate limited (attempt {attempt + 1}/{max_retries}). "
            f"Retrying in {delay:.1f}s..."
        )
        time.sleep(delay)

    # unreachable
    raise RuntimeError("fetch_with_backoff: should not reach here")

Bulk operations

If you need to fetch a large number of DPPs (for example, populating a product registry or running a compliance audit), follow these guidelines:

  1. Space requests — stay at or below 50 requests/minute to give yourself headroom below the 60/minute limit
  2. Use caching — DPPs change infrequently. Cache responses with a TTL of at least 5 minutes and check updatedAt to invalidate when needed. This dramatically reduces the requests needed for repeat access patterns.
  3. Parallelise conservatively — use a concurrency limit of 3–5 simultaneous requests rather than firing all requests at once
  4. Contact support for higher limits — if your use case legitimately requires more than 60 requests/minute (for example, a national product registry that needs to bulk-sync all battery DPPs), contact support@traceable.digital with details of your use case and volume requirements. Higher limits are available for approved integrations.

Example of rate-limited bulk fetching in JavaScript:

import PQueue from 'p-queue'; // npm install p-queue

const queue = new PQueue({
  concurrency: 3,  // max 3 concurrent in-flight requests
  interval: 1000,  // per interval (ms)
  intervalCap: 1,  // 1 request/second = 60/minute (matches the public endpoint limit)
});

async function bulkFetchDpps(slugs: string[]): Promise<Map<string, unknown>> {
  const results = new Map<string, unknown>();

  await Promise.all(
    slugs.map(slug =>
      queue.add(async () => {
        try {
          const response = await fetchWithBackoff(
            `https://app.traceable.digital/api/dpp/${slug}`
          );
          if (response.ok) {
            results.set(slug, await response.json());
          } else {
            console.warn(`Failed to fetch DPP for ${slug}: ${response.status}`);
          }
        } catch (err) {
          console.error(`Error fetching DPP for ${slug}:`, err);
        }
      })
    )
  );

  return results;
}
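Guideline 2 (caching) can be sketched with a minimal in-memory TTL cache. TtlCache below is an illustrative helper, not part of any Traceable SDK; the 5-minute default matches the minimum TTL suggested above.

```typescript
// Minimal in-memory TTL cache for DPP responses. Entries are served
// from memory until the TTL elapses; after that the caller should
// re-fetch the DPP (and may compare updatedAt to decide what changed).
interface CacheEntry<T> {
  value: T;
  fetchedAt: number;
}

class TtlCache<T> {
  private entries = new Map<string, CacheEntry<T>>();

  constructor(private ttlMs: number = 5 * 60 * 1000) {}

  get(key: string, now: number = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (now - entry.fetchedAt >= this.ttlMs) {
      this.entries.delete(key); // expired, force a re-fetch
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: T, now: number = Date.now()): void {
    this.entries.set(key, { value, fetchedAt: now });
  }
}
```

Checking the cache before enqueueing a fetch in a bulk job avoids spending rate-limit budget on DPPs retrieved within the last few minutes.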

Rate limit key behaviour

  • Public endpoints are rate-limited per IP address. Changing your IP address does not reset an API key's rate limit for authenticated endpoints.
  • Authenticated endpoints are rate-limited per API key. Multiple integrations using different API keys have independent rate limit counters.
  • IPv6 addresses are normalised to their /64 prefix for rate limiting purposes, preventing circumvention via address cycling within a single /64 block.
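The /64 normalisation can be illustrated with a sketch of how such a key might be derived. rateLimitKeyForIpv6 is a hypothetical, simplified helper (it handles common textual forms including "::" compression but is not a full RFC 4291 parser), and the server-side implementation may differ.

```typescript
// Sketch: derive a rate-limit key from the first 64 bits of an IPv6
// address, so that all addresses within one /64 block share a key.
function rateLimitKeyForIpv6(address: string): string {
  // Expand "::" into the missing zero groups
  const [head, tail = ''] = address.split('::');
  const headGroups = head ? head.split(':') : [];
  const tailGroups = tail ? tail.split(':') : [];
  const missing = 8 - headGroups.length - tailGroups.length;
  const groups = [
    ...headGroups,
    ...Array(Math.max(0, missing)).fill('0'),
    ...tailGroups,
  ];
  // Keep the first four 16-bit groups (64 bits) as the key
  return groups
    .slice(0, 4)
    .map(g => g.padStart(4, '0').toLowerCase())
    .join(':') + '::/64';
}
```

Two addresses that differ only in their lower 64 bits map to the same key, so cycling through addresses within one /64 block does not yield a fresh rate-limit counter.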