# Rate Limiting
To ensure fair usage and maintain service stability, our API implements rate limiting. This mechanism restricts the number of requests a client can make within a specific time window. Understanding and adhering to these limits is crucial for a smooth integration.
## Understanding Rate Limits
Rate limits are typically defined by a combination of:
- Requests per second (RPS): The maximum number of requests allowed per second.
- Requests per minute (RPM): The maximum number of requests allowed per minute.
- Requests per hour (RPH): The maximum number of requests allowed per hour.
The most restrictive limit will apply.
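As a concrete illustration of how combined windows interact (using hypothetical limits, not this API's actual values), the sustained throughput you can achieve is capped by whichever window is tightest:

```python
# Hypothetical limits for illustration only.
rps_limit = 10        # requests per second
rpm_limit = 300       # requests per minute
rph_limit = 15_000    # requests per hour

# Convert each window to requests-per-second and take the minimum:
# the tightest window governs sustained throughput.
effective_rps = min(rps_limit, rpm_limit / 60, rph_limit / 3600)
```

Here the hourly window is the binding constraint: even though 10 RPS is allowed in short bursts, you can only sustain about 4.2 requests per second over a full hour.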
## Identifying Your Current Limits
When you make a request to our API, we include rate limit information in the response headers. These headers provide real-time data about your usage:
| Header Name | Description |
|---|---|
| `X-RateLimit-Limit` | The total number of requests allowed in the current window. |
| `X-RateLimit-Remaining` | The number of requests remaining in the current window. |
| `X-RateLimit-Reset` | The time (in UTC epoch seconds) when the current window resets. |
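A minimal sketch of reading these headers, assuming a Python client where `headers` is any mapping of header name to string value (such as `response.headers` from your HTTP library):

```python
import time

def rate_limit_status(headers):
    """Parse the rate limit headers described above.

    Returns (limit, remaining, seconds_until_reset).
    """
    limit = int(headers["X-RateLimit-Limit"])
    remaining = int(headers["X-RateLimit-Remaining"])
    reset_epoch = int(headers["X-RateLimit-Reset"])
    # Seconds until the current window resets (never negative).
    seconds_until_reset = max(0, reset_epoch - int(time.time()))
    return limit, remaining, seconds_until_reset
```

Checking `remaining` before each batch of work lets you slow down gracefully instead of running into a hard `429`.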
## Handling Rate Limit Exceeded Errors
If you exceed the rate limit, the API will respond with HTTP status code `429 Too Many Requests`. The response body will typically contain more details about the error.

When you receive a `429` response, pause your requests and wait until the `X-RateLimit-Reset` time indicated in the previous (non-429) response has passed. Aggressively retrying without backoff can lead to further throttling.
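The wait-and-retry behavior described above can be sketched as follows. This is a minimal illustration, not a production client: `make_request` is a caller-supplied function returning an object with `status_code` and `headers` attributes (adapt the shape to your HTTP library):

```python
import random
import time

def call_with_backoff(make_request, max_retries=5, base_delay=1.0):
    """Retry a request on 429, backing off exponentially between attempts."""
    for attempt in range(max_retries):
        response = make_request()
        if response.status_code != 429:
            return response
        retry_after = response.headers.get("Retry-After")
        if retry_after is not None:
            # Honor the server's suggested wait if it sent one.
            delay = float(retry_after)
        else:
            # Exponential backoff with a little jitter to avoid
            # synchronized retries across clients.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
        time.sleep(delay)
    raise RuntimeError("rate limit retries exhausted")
```

The jitter term matters when many clients hit the limit at the same moment: without it, they all retry in lockstep and collide again.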
## Best Practices for Rate Limiting
- Monitor Headers: Regularly check the `X-RateLimit-*` headers to understand your usage and avoid hitting the limits.
- Implement a Backoff Strategy: When a `429` error occurs, use an exponential backoff strategy for retries. Start with a short delay and gradually increase it for subsequent retries.
- Optimize Requests: Batch requests where possible, use efficient queries, and avoid unnecessary calls to reduce overall API consumption.
- Use Webhooks (if available): For real-time data updates, consider using webhooks instead of frequent polling, which can significantly reduce your request count.
- Distribute Load: If you have multiple applications or services consuming the API, consider how their requests are aggregated. You might need to manage rate limits on a per-application or per-user basis depending on your API access keys.
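One way to stay under the limit proactively, rather than reacting to `429`s, is a client-side token bucket. The sketch below is illustrative (the rate and capacity values are placeholders you would tune to your actual limits):

```python
import time

class TokenBucket:
    """Client-side throttle: allow bursts up to `capacity`, refilling
    at `rate` tokens per second."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_acquire(self):
        """Take one token if available; return False to signal 'wait'."""
        now = time.monotonic()
        elapsed = now - self.last
        self.last = now
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Gating every outgoing call through `try_acquire()` keeps short bursts within the allowed window and smooths sustained traffic, which is especially useful when several workers share one set of API credentials.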
## Example Rate Limit Response
A typical successful request might include these headers:

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 998
X-RateLimit-Reset: 1678886400
Content-Type: application/json

{
  "data": [...]
}
```
And a rate-limited response:

```http
HTTP/1.1 429 Too Many Requests
X-RateLimit-Limit: 1000
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1678885800
Retry-After: 60
Content-Type: application/json

{
  "error": {
    "code": "RATE_LIMIT_EXCEEDED",
    "message": "You have exceeded your allowed request rate. Please try again later."
  }
}
```
Note: The `Retry-After` header (if present) indicates how many seconds to wait before making another request.
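One subtlety worth handling: per RFC 9110, a `Retry-After` value may be either a number of seconds (as in the example above) or an HTTP-date. A small helper, assuming Python, normalizes both forms to seconds:

```python
import time
from email.utils import parsedate_to_datetime

def retry_after_seconds(value):
    """Normalize a Retry-After header value to seconds to wait.

    RFC 9110 allows either delay-seconds ("60") or an HTTP-date
    ("Wed, 15 Mar 2023 12:30:00 GMT").
    """
    try:
        return int(value)
    except ValueError:
        # HTTP-date form: compute the remaining delay, floored at zero.
        dt = parsedate_to_datetime(value)
        return max(0, int(dt.timestamp() - time.time()))
```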