# Rate limits
Rise enforces rate limits to keep the platform responsive for everyone. This page covers the current limits, the response headers you should read, and the backoff behaviour we recommend.
## Limits
Two limits apply to every request:
| Scope | Default | Applies to |
|---|---|---|
| Per `client_id` | 1,000 requests / 60 seconds | Every authenticated OAuth app |
| Per IP | 10,000 requests / 60 seconds | All traffic, including unauthenticated endpoints like `/oauth/token` |
Both use a sliding window: your effective count is a weighted blend of the previous window and the current one, so there's no "reset at the top of the minute" spike. Whichever limit is tighter drives the response headers.
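The weighted blend can be sketched as follows (illustrative only, not Rise's exact implementation):

```javascript
// Illustrative sliding-window estimate (not Rise's exact implementation).
// The effective count blends the previous window's total with the current
// one, weighted by how far into the current window we are: the deeper into
// the current window, the less the previous window contributes.
function effectiveCount(prevWindowCount, currWindowCount, secondsIntoWindow, windowSeconds = 60) {
  const prevWeight = (windowSeconds - secondsIntoWindow) / windowSeconds;
  return prevWindowCount * prevWeight + currWindowCount;
}

// 30 s into the window, half of the previous window still counts:
// effectiveCount(800, 400, 30) → 800 * 0.5 + 400 = 800
```

Because the previous window's weight decays continuously, there is never a moment where the full count drops to zero at once.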
A burst above either cap returns 429 Too Many Requests. Need more headroom for a certified integration or migration? Email developers@risepeople.com — we can raise the cap per client_id.
## Response headers
Every response — successful or not — includes:
| Header | Example | Meaning |
|---|---|---|
| `X-RateLimit-Limit` | 1000 | Your per-minute cap (reflects whichever of the two limits is tighter) |
| `X-RateLimit-Remaining` | 73 | Requests left in the current window |
| `X-RateLimit-Reset` | 1713820860 | Unix timestamp (seconds) when the window resets |
| `Retry-After` | 42 | (429 responses only) Seconds to wait before retrying |
Read `X-RateLimit-Remaining` on every response and throttle yourself before you hit zero. Waiting for a 429 is more expensive than pacing.
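One way to pace proactively is to compute a pause from the headers after each response. A minimal sketch (the 10% threshold is a suggestion, not a Rise requirement):

```javascript
// Decide how long to pause based on rate-limit headers (sketch).
// Returns 0 while there's comfortable headroom; otherwise the
// milliseconds until the current window resets.
function pauseMs(limit, remaining, resetEpochSeconds, nowMs = Date.now()) {
  if (remaining >= limit * 0.1) return 0; // plenty of headroom left
  return Math.max(0, resetEpochSeconds * 1000 - nowMs);
}

// Wrap each call: sleep before returning if headroom is low.
async function pacedRequest(url, options) {
  const res = await fetch(url, options);
  const wait = pauseMs(
    Number(res.headers.get("X-RateLimit-Limit")),
    Number(res.headers.get("X-RateLimit-Remaining")),
    Number(res.headers.get("X-RateLimit-Reset"))
  );
  if (wait > 0) await new Promise((r) => setTimeout(r, wait));
  return res;
}
```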
## 429 response
The 429 uses the RFC 7807 problem-details format (see Errors → Problem details):
```http
HTTP/1.1 429 Too Many Requests
Content-Type: application/problem+json
Retry-After: 42

{
  "type": "https://developer.risepeople.com/errors/rate-limit-exceeded",
  "title": "Too Many Requests",
  "status": 429,
  "detail": "Rate limit exceeded. Try again in 42 seconds.",
  "instance": "/v1/employees"
}
```
The `Retry-After` header tells you how many seconds to wait.
## Recommended backoff
For batch work and polling:
- Read `X-RateLimit-Remaining` on every response. If it drops below 10% of the limit, pause.
- Respect `Retry-After` on 429 exactly; don't retry sooner.
- Use exponential backoff with jitter on unexpected 5xx responses (start at 1s, cap at 60s, add ±25% jitter per retry, give up after 5 attempts).
- Never retry a 4xx other than 429; those indicate a bug in your request, and retrying won't fix it.
A reference implementation in Node:
```javascript
async function rateLimitedFetch(url, options, attempt = 0) {
  const res = await fetch(url, options);

  // 429: wait exactly as long as the server asks, then retry.
  // Cap attempts so a persistently throttled caller doesn't loop forever.
  if (res.status === 429 && attempt < 5) {
    const retryAfter = Number(res.headers.get("Retry-After")) || 1;
    await new Promise((r) => setTimeout(r, retryAfter * 1000));
    return rateLimitedFetch(url, options, attempt + 1);
  }

  // 5xx: exponential backoff (1s base, 60s cap) with ±25% jitter.
  if (res.status >= 500 && attempt < 5) {
    const backoff = Math.min(60_000, 1000 * 2 ** attempt);
    const jitter = backoff * (0.75 + Math.random() * 0.5);
    await new Promise((r) => setTimeout(r, jitter));
    return rateLimitedFetch(url, options, attempt + 1);
  }

  return res;
}
```
## Bulk operations
For very large reads (>10K records), prefer cursor pagination over parallelism. Fetching pages serially at 10 requests/second finishes before you hit any cap; parallelizing across many workers will burn through your quota in seconds.
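Serial pagination can be sketched as below. The `fetchPage` callback and its `next_cursor` field are placeholders; check the pagination docs for the real endpoint, query parameter, and field names:

```javascript
// Drain a cursor-paginated endpoint serially (sketch; the `next_cursor`
// field name is hypothetical). `fetchPage(cursor)` should perform one
// request and resolve to { data: [...], next_cursor: string | null }.
async function fetchAll(fetchPage, delayMs = 100) {
  const records = [];
  let cursor = null;
  do {
    const page = await fetchPage(cursor);
    records.push(...page.data);
    cursor = page.next_cursor ?? null;
    // ~10 requests/second keeps well under the per-client cap.
    await new Promise((r) => setTimeout(r, delayMs));
  } while (cursor);
  return records;
}
```

In practice `fetchPage` would wrap an authenticated `fetch` of the listing endpoint, passing the cursor through as a query parameter.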
For very large writes, batch into sub-1,000-record chunks and pace your requests using `X-RateLimit-Remaining`. If you need to move more data than that in a single sitting, email developers@risepeople.com — for certified partners, we can temporarily raise the cap or provide an async bulk export/import endpoint.
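Chunking is straightforward; a minimal sketch:

```javascript
// Split an array of records into sub-1,000-record chunks for batched
// writes (sketch; pair each chunk with the pacing logic above).
function chunk(records, size = 1000) {
  const chunks = [];
  for (let i = 0; i < records.length; i += size) {
    chunks.push(records.slice(i, i + size));
  }
  return chunks;
}

// chunk(records2500).length → 3 (1000 + 1000 + 500)
```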
## Monitoring
If you're hitting 429s consistently, check:
- Are you parallelizing unnecessarily? Serialize and see if the problem goes away.
- Is your app making duplicate calls? Cache what doesn't change.
- Do your integration volumes need a raised cap? Email developers@risepeople.com with your `client_id`, expected sustained RPS, and use case.
If the 429s come from a single burst (e.g. a scheduled job that kicks off every hour), staggering the schedule or adding a warm-up with `X-RateLimit-Remaining` checks is usually enough.
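A warm-up check can be as simple as probing once before the burst and sizing the job from the reported headroom. A sketch (the 20% safety margin is an assumption, not a Rise recommendation):

```javascript
// Size a scheduled burst from the X-RateLimit-Remaining value returned
// by a single warm-up probe (sketch). Leaves a safety margin so the
// burst doesn't consume every last request in the window.
function burstBudget(remaining, perRequestCost = 1, safetyMargin = 0.2) {
  return Math.max(0, Math.floor((remaining * (1 - safetyMargin)) / perRequestCost));
}
```

Run one cheap GET at the start of the job, read the header, and dispatch at most `burstBudget(remaining)` requests this cycle, deferring the rest to the next window.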