Rate Limiting

The Okasie Partner API implements rate limiting to ensure fair usage and system stability.

Default Limits

Limit Type           Value
-------------------  --------------
Requests per minute  120
Window size          60 seconds
Burst allowance      Up to 5 req/s
Need higher limits? Contact [email protected] to discuss custom rate limits for your integration.
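One way to stay inside these limits is to throttle on the client before sending. Below is a minimal token-bucket sketch, not part of the API itself: the 2 req/s refill rate is simply 120 requests / 60 seconds, and the burst capacity of 5 mirrors the table above. The injectable clock parameter exists only to make the sketch testable.

```python
import time

class TokenBucket:
    """Client-side throttle for the documented limits:
    refill at 120 requests/minute (2/s), burst of up to 5."""

    def __init__(self, rate_per_sec=2.0, burst=5, clock=time.monotonic):
        self.rate = rate_per_sec
        self.capacity = burst
        self.tokens = float(burst)
        self.clock = clock  # injectable for testing
        self.last = clock()

    def try_acquire(self):
        """Consume one token if available; False means 'slow down'."""
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Call try_acquire() before each request and sleep briefly whenever it returns False.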

Rate Limit Headers

Every API response includes rate limit information:
Header                 Description
---------------------  -------------------------------------
X-RateLimit-Limit      Maximum requests per window
X-RateLimit-Remaining  Remaining requests in current window
X-RateLimit-Reset      Unix timestamp when the window resets

Example Response Headers

X-RateLimit-Limit: 120
X-RateLimit-Remaining: 115
X-RateLimit-Reset: 1699876543

Handling Rate Limits

When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
{
  "error": {
    "code": "RATE_LIMITED",
    "message": "Too many requests",
    "retryAt": "2024-10-05T10:15:30Z"
  }
}
The response also includes a Retry-After header indicating how many seconds to wait before retrying:
HTTP/1.1 429 Too Many Requests
Retry-After: 15
X-RateLimit-Limit: 120
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1699876543
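A 429 therefore carries two retry signals: the Retry-After header (seconds) and the retryAt body field (a timestamp). A hedged helper that converts either into a sleep duration, assuming retryAt is ISO 8601 as in the example body; the 60-second default is an arbitrary fallback, not an API guarantee:

```python
from datetime import datetime, timezone

def wait_seconds(retry_after_header=None, retry_at_iso=None, now=None):
    """Seconds to sleep after a 429: prefer the Retry-After header,
    fall back to the retryAt body field, else a conservative default."""
    if retry_after_header is not None:
        return max(int(retry_after_header), 0)
    if retry_at_iso is not None:
        retry_at = datetime.fromisoformat(retry_at_iso.replace("Z", "+00:00"))
        now = now or datetime.now(timezone.utc)
        return max((retry_at - now).total_seconds(), 0)
    return 60  # arbitrary fallback when neither signal is present
```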

Best Practices

When you receive a 429, wait and retry with exponentially increasing delays, honoring Retry-After when the server provides it:
async function fetchWithRetry(url, options, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    const response = await fetch(url, options);

    if (response.status === 429) {
      // Honor Retry-After when present; otherwise back off exponentially (1s, 2s, 4s, ...)
      const retryAfter = parseInt(response.headers.get('Retry-After'), 10) || Math.pow(2, i);
      await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
      continue;
    }

    return response;
  }
  throw new Error('Max retries exceeded');
}
Monitor headers to avoid hitting limits:
const response = await fetch(url, options);
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);

if (remaining < 10) {
  console.warn('Approaching rate limit, slowing down...');
  await new Promise(resolve => setTimeout(resolve, 1000));
}
For large data sets, use /listings/bulk-upsert to process up to 100 items per request instead of individual calls.
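The /listings/bulk-upsert path and the 100-item ceiling come from the text above; the payload key ("listings") is an assumption for illustration. Batching then reduces to chunking your items:

```python
def chunked(items, size=100):
    """Yield batches no larger than the bulk-upsert limit of 100 items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

# Hypothetical usage (payload shape assumed):
# for batch in chunked(listings, 100):
#     requests.post(f"{base_url}/bulk-upsert", json={"listings": batch}, headers=headers)
```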
Cache responses where appropriate to reduce API calls.
Use updatedSince parameter to only fetch changed data instead of full syncs.
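For incremental fetches, only the updatedSince parameter name is stated here; the ISO 8601 timestamp format is an assumption, and page/pageSize are carried over from the sync script below. A small sketch that builds the query:

```python
def incremental_params(last_sync_iso, page=1, page_size=200):
    """Query parameters for fetching only listings changed since the last sync."""
    return {"updatedSince": last_sync_iso, "page": page, "pageSize": page_size}

# Hypothetical usage against the listings endpoint:
# requests.get(
#     "https://www.okasie.be/api/external/v1/listings",
#     params=incremental_params("2024-10-05T00:00:00Z"),
#     headers={"Authorization": f"Bearer {os.getenv('PARTNER_SECRET')}"},
# )
```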

Sync Scheduling Recommendations

Sync Type          Recommended Interval  Notes
-----------------  --------------------  --------------------------------
Full sync          Daily (off-peak)      Use pagination with pageSize=200
Incremental sync   Every 5-15 minutes    Use updatedSince parameter
Real-time updates  As needed             Use individual PUT/DELETE calls

Example: Rate-Limited Sync Script

import os
import time
import requests

def sync_listings():
    base_url = "https://www.okasie.be/api/external/v1/listings"
    headers = {"Authorization": f"Bearer {os.getenv('PARTNER_SECRET')}"}

    page = 1
    while True:
        response = requests.get(
            base_url,
            params={"page": page, "pageSize": 200},
            headers=headers
        )

        # Handle 429 before anything else (retry the same page);
        # checking X-RateLimit-Remaining first would cause a double wait here
        if response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limited, waiting {retry_after}s...")
            time.sleep(retry_after)
            continue

        response.raise_for_status()

        # Proactively pause when the window is nearly exhausted
        remaining = int(response.headers.get('X-RateLimit-Remaining', 100))
        if remaining < 5:
            reset_time = int(response.headers.get('X-RateLimit-Reset', 0))
            wait_time = max(reset_time - time.time(), 1)
            print(f"Rate limit low, waiting {wait_time:.0f}s...")
            time.sleep(wait_time)

        data = response.json()
        process_listings(data['data'])

        if page >= data['pagination']['totalPages']:
            break

        page += 1
        time.sleep(0.5)  # Be nice to the API

def process_listings(listings):
    # Your processing logic here
    pass

Next Steps