Rate Limiting
To ensure fair usage and maintain the stability and availability of our services, all API requests are subject to rate limiting. This mechanism prevents abuse and ensures that all users can access the service reliably.
Understanding Rate Limits
Rate limits define the maximum number of requests a user or application can make to our API within a specified time window. When you exceed these limits, your requests will be temporarily denied.
Key Concepts
- Requests per Second (RPS): The maximum number of requests allowed per second.
- Requests per Minute (RPM): The maximum number of requests allowed per minute.
- Requests per Hour (RPH): The maximum number of requests allowed per hour.
- Time Window: The period over which the request count is measured.
Current Rate Limits
The following rate limits apply to all API consumers:
- General API: 100 requests per minute
- Bulk Endpoints: 10 requests per minute
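To illustrate how a time window works in practice, the sketch below keeps a sliding one-minute log of request timestamps on the client side and waits before sending once the 100 requests per minute limit has been reached. The sendRequest parameter and the in-memory log are assumptions made for this example, not part of our API or SDK.

// Minimal client-side sliding-window throttle for the 100 requests/minute limit.
// 'sendRequest' is a placeholder for whatever function actually calls the API.
const WINDOW_MS = 60 * 1000;
const MAX_REQUESTS = 100;
const requestLog = []; // timestamps (in ms) of requests sent in the current window

async function throttledRequest(sendRequest) {
  const now = Date.now();
  // Drop timestamps that have fallen out of the one-minute window.
  while (requestLog.length > 0 && now - requestLog[0] > WINDOW_MS) {
    requestLog.shift();
  }
  if (requestLog.length >= MAX_REQUESTS) {
    // Wait until the oldest request leaves the window before sending another.
    const waitMs = WINDOW_MS - (now - requestLog[0]);
    await new Promise((resolve) => setTimeout(resolve, waitMs));
  }
  requestLog.push(Date.now());
  return sendRequest();
}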
Handling Rate Limit Exceeded Errors
When you exceed the rate limits, the API will respond with an HTTP 429 Too Many Requests status code. The response headers will provide additional information about your current rate limit status:
Response Headers
- X-RateLimit-Limit: The total number of requests allowed in the current window.
- X-RateLimit-Remaining: The number of requests remaining in the current window.
- X-RateLimit-Reset: The time (in UTC epoch seconds) when the current window resets.
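For example, with the Fetch API you can read these headers from any response; the endpoint URL below is a placeholder.

// Reading the rate limit headers from a response (the URL is a placeholder).
const response = await fetch('https://api.example.com/v1/items');
const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
const resetAt = new Date(parseInt(response.headers.get('X-RateLimit-Reset'), 10) * 1000);
console.log(`Used ${limit - remaining} of ${limit} requests; window resets at ${resetAt.toISOString()}`);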
Example of a 429 Response
If you make too many requests, you might receive a response like this:
HTTP/1.1 429 Too Many Requests
Content-Type: application/json
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1678886400
{
"error": "You have exceeded your rate limit. Please try again later."
}
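One way to detect and report such a response with the Fetch API is sketched below; the endpoint URL is again a placeholder.

// Detecting a 429 response and reading its error body (the URL is a placeholder).
const response = await fetch('https://api.example.com/v1/items');
if (response.status === 429) {
  const body = await response.json(); // e.g. { "error": "You have exceeded your rate limit. ..." }
  const resetEpoch = parseInt(response.headers.get('X-RateLimit-Reset'), 10);
  const waitSeconds = Math.max(resetEpoch - Math.floor(Date.now() / 1000), 0);
  console.warn(`${body.error} Retry possible in about ${waitSeconds} seconds.`);
}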
Best Practices for Rate Limiting
To avoid hitting rate limits and ensure smooth operation of your applications, consider the following strategies:
- Implement Exponential Backoff: When you receive a 429 response, wait until the time indicated by the X-RateLimit-Reset header before retrying. If the problem persists, increase the wait time exponentially for subsequent retries.
- Batch Requests: If possible, group multiple operations into a single API call to reduce the total number of requests.
- Cache Data: Cache frequently accessed data locally to avoid unnecessary API calls (see the caching sketch after this list).
- Monitor Usage: Keep track of your API usage and set up alerts if you approach the rate limits.
- Request Limit Increases: If your application requires higher rate limits, please contact our support team to discuss potential options.
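As a sketch of the caching idea above, the helper below keeps responses in a simple in-memory map with a time-to-live. The fetchFn parameter and the 30-second TTL are assumptions chosen for illustration.

// Simple in-memory cache with a TTL so repeated reads don't consume rate limit budget.
// 'fetchFn' is a placeholder for whatever function actually calls the API.
const cache = new Map(); // key -> { value, expiresAt }

async function cachedFetch(key, fetchFn, ttlMs = 30 * 1000) {
  const entry = cache.get(key);
  if (entry && entry.expiresAt > Date.now()) {
    return entry.value; // Served from cache; no API call made.
  }
  const value = await fetchFn();
  cache.set(key, { value, expiresAt: Date.now() + ttlMs });
  return value;
}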
Example of Exponential Backoff Logic (Conceptual)
// Assume 'response' is the API response object and 'retryCount' counts previous retries.
const remainingRequests = parseInt(response.headers['x-ratelimit-remaining'], 10);
const resetTime = parseInt(response.headers['x-ratelimit-reset'], 10);

if (response.status === 429 || remainingRequests === 0) {
  const currentTime = Math.floor(Date.now() / 1000);
  // Wait until the window resets (plus a small buffer), doubling the wait on each retry.
  const baseDelaySeconds = Math.max(resetTime - currentTime, 1) + 5; // Add a buffer
  const delay = baseDelaySeconds * 2 ** retryCount * 1000;
  setTimeout(() => {
    // Retry the request
    makeApiRequest();
  }, delay);
} else {
  // Process the successful response
}
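A more complete version of the same idea, written as an async retry loop around the Fetch API, might look like the sketch below. The maximum retry count, base delay, and jitter are illustrative choices, not documented requirements.

// Async retry loop with exponential backoff and jitter around fetch.
// maxRetries and baseDelayMs are illustrative values, not documented limits.
async function fetchWithBackoff(url, options = {}, maxRetries = 5, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) {
      return response; // Success, or a non-rate-limit error for the caller to handle.
    }
    // Prefer the server's reset time when available; otherwise back off exponentially.
    const resetEpoch = parseInt(response.headers.get('X-RateLimit-Reset'), 10);
    const delayMs = Number.isNaN(resetEpoch)
      ? baseDelayMs * 2 ** attempt
      : Math.max(resetEpoch * 1000 - Date.now(), baseDelayMs);
    const jitterMs = Math.random() * 250; // Spread retries from concurrent clients apart.
    await new Promise((resolve) => setTimeout(resolve, delayMs + jitterMs));
  }
  throw new Error('Rate limit retries exhausted');
}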
By adhering to these guidelines, you can ensure a stable and efficient integration with our API.