# Rate Limits

API rate limiting and best practices.
The Octopost API enforces rate limits to ensure fair usage and protect service stability. Rate limits are applied per API key.
## Rate Limit Headers
Every API response includes headers indicating your current rate limit status:
| Header | Description |
|---|---|
| `X-RateLimit-Limit` | Maximum number of requests allowed per window. |
| `X-RateLimit-Remaining` | Number of requests remaining in the current window. |
| `X-RateLimit-Reset` | Unix timestamp (in seconds) when the rate limit window resets. |
### Example Response Headers

```http
HTTP/1.1 200 OK
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 297
X-RateLimit-Reset: 1712150460
Content-Type: application/json
```
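These headers can be read programmatically after every request. Below is a minimal sketch using the standard Fetch API `Headers` type; the `RateLimitInfo` interface and `parseRateLimit` function are illustrative names, not part of any Octopost SDK.

```typescript
// Sketch: extract the rate limit headers documented above from a response.
interface RateLimitInfo {
  limit: number;      // X-RateLimit-Limit
  remaining: number;  // X-RateLimit-Remaining
  resetAt: Date;      // X-RateLimit-Reset (Unix seconds, converted to a Date)
}

function parseRateLimit(headers: Headers): RateLimitInfo | null {
  const limit = headers.get("X-RateLimit-Limit");
  const remaining = headers.get("X-RateLimit-Remaining");
  const reset = headers.get("X-RateLimit-Reset");
  // If any header is absent, report that no rate limit info was found.
  if (limit === null || remaining === null || reset === null) return null;
  return {
    limit: parseInt(limit, 10),
    remaining: parseInt(remaining, 10),
    resetAt: new Date(parseInt(reset, 10) * 1000), // seconds → milliseconds
  };
}
```

A client could call this on every response and log or throttle when `remaining` gets low.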
## Limits by Tier
Rate limits are based on your account tier:
| Tier | Requests per minute | Publish requests per hour |
|---|---|---|
| Free | 60 | 10 |
| Starter | 300 | 60 |
| Pro | 1,000 | 300 |
Publish endpoints (`POST /posts/:id/publish`) have a separate, lower limit because each publish request triggers external API calls to social media platforms.
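One way to stay under the hourly publish limit is to space publish calls evenly across the hour. A minimal sketch using the tier values from the table above (the constant and function names are illustrative, not part of any SDK):

```typescript
// Publish requests per hour, per tier, from the "Limits by Tier" table.
const PUBLISH_LIMITS_PER_HOUR: Record<string, number> = {
  free: 10,
  starter: 60,
  pro: 300,
};

// Minimum milliseconds between publish calls so the hourly limit is never hit.
function minPublishIntervalMs(tier: string): number {
  const perHour = PUBLISH_LIMITS_PER_HOUR[tier];
  if (perHour === undefined) throw new Error(`unknown tier: ${tier}`);
  return Math.ceil((60 * 60 * 1000) / perHour);
}
```

For example, a Free-tier client would wait at least 360 seconds between publishes, a Starter-tier client 60 seconds, and a Pro-tier client 12 seconds.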
## Limits by Endpoint
| Endpoint | Rate Limit |
|---|---|
| `GET /posts` | Per-tier limit (see above) |
| `POST /posts` | Per-tier limit |
| `PUT /posts/:id` | Per-tier limit |
| `DELETE /posts/:id` | Per-tier limit |
| `POST /posts/:id/publish` | Publish limit (see above) |
| `GET /accounts` | Per-tier limit |
| `GET /presets` | Per-tier limit |
| `POST /presets` | Per-tier limit |
| `GET /webhooks` | Per-tier limit |
| `POST /webhooks` | Per-tier limit |
## 429 Too Many Requests
When you exceed a rate limit, the API responds with HTTP 429:
```json
{
  "error": "Rate limit exceeded. Try again in 45 seconds.",
  "code": "rate_limited",
  "details": {
    "limit": 300,
    "remaining": 0,
    "reset_at": "2026-04-03T12:01:00Z",
    "retry_after": 45
  }
}
```

The response also includes a `Retry-After` header with the number of seconds to wait:
```http
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712150460
```
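Before reading `details.retry_after` from a 429 body, a client can narrow the unknown JSON to the error shape shown above. A sketch, assuming that structure (the `RateLimitError` type and `isRateLimitError` guard are illustrative names, not part of any SDK):

```typescript
// Shape of the 429 error body documented above.
interface RateLimitError {
  error: string;
  code: "rate_limited";
  details: {
    limit: number;
    remaining: number;
    reset_at: string;    // ISO 8601 timestamp
    retry_after: number; // seconds to wait
  };
}

// Type guard: returns true only when the payload matches the 429 body shape.
function isRateLimitError(body: unknown): body is RateLimitError {
  if (typeof body !== "object" || body === null) return false;
  const b = body as Record<string, unknown>;
  const details = b.details as Record<string, unknown> | undefined;
  return (
    b.code === "rate_limited" &&
    typeof details === "object" &&
    details !== null &&
    typeof details.retry_after === "number"
  );
}
```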
## Backoff Strategy
When you receive a 429 response, use this strategy:
- **Read the `Retry-After` header.** Wait at least that many seconds before retrying.
- **If no `Retry-After` header is present,** use exponential backoff: wait 1s, then 2s, then 4s, then 8s, up to a maximum of 60s.
- **Add jitter.** Add a random delay of 0-1 seconds to prevent all clients from retrying simultaneously.
### Example: TypeScript

```typescript
async function fetchWithBackoff(
  url: string,
  options: RequestInit,
  maxRetries = 5
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);
    if (response.status !== 429) {
      return response;
    }
    if (attempt === maxRetries) {
      throw new Error("Rate limit exceeded after maximum retries");
    }
    const retryAfter = response.headers.get("Retry-After");
    const delaySeconds = retryAfter
      ? parseInt(retryAfter, 10)
      : Math.min(Math.pow(2, attempt), 60);
    const jitter = Math.random();
    await new Promise((r) => setTimeout(r, (delaySeconds + jitter) * 1000));
  }
  throw new Error("Unreachable");
}
```

## Best Practices
- **Monitor rate limit headers.** Check `X-RateLimit-Remaining` proactively and slow down before hitting the limit.
- **Batch where possible.** Create multiple posts in sequence rather than in a burst.
- **Cache responses.** Cache `GET` responses to avoid unnecessary requests. Account and preset data changes infrequently.
- **Use webhooks instead of polling.** Subscribe to webhooks for events like `post.published` instead of repeatedly polling `GET /posts/:id`.
- **Respect `Retry-After`.** Always honor the `Retry-After` header. Clients that continue to send requests during a rate limit window may be temporarily blocked.
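The first practice, monitoring rate limit headers, can be sketched as a small helper that computes how long to pause once the remaining budget drops below a threshold. The function name and the threshold of 5 are illustrative example choices, not part of any SDK:

```typescript
// Sketch: proactive throttling based on the rate limit headers.
// Returns 0 when there is headroom, otherwise the milliseconds until the
// window resets (X-RateLimit-Reset is a Unix timestamp in seconds).
function proactiveDelayMs(
  remaining: number,        // parsed X-RateLimit-Remaining
  resetUnixSeconds: number, // parsed X-RateLimit-Reset
  nowMs: number = Date.now(),
  threshold = 5             // arbitrary example cutoff
): number {
  if (remaining > threshold) return 0; // plenty of budget left, no delay
  // Wait until the window resets; clamp at 0 if the reset is already past.
  return Math.max(0, resetUnixSeconds * 1000 - nowMs);
}
```

A client would call this before each request and `setTimeout` for the returned duration, slowing down before the API ever returns a 429.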