
Rate Limits

API rate limiting and best practices

The Octopost API enforces rate limits to ensure fair usage and protect service stability. Rate limits are applied per API key.

Rate Limit Headers

Every API response includes headers indicating your current rate limit status:

Header                  Description
X-RateLimit-Limit       Maximum number of requests allowed per window.
X-RateLimit-Remaining   Number of requests remaining in the current window.
X-RateLimit-Reset       Unix timestamp (in seconds) when the rate limit window resets.

Example Response Headers

HTTP/1.1 200 OK
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 297
X-RateLimit-Reset: 1712150460
Content-Type: application/json
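These headers can be read directly from any fetch Response; a minimal sketch (the helper name parseRateLimitHeaders is illustrative, not part of the API):

```typescript
// Parse the rate limit headers from a Response's Headers object.
// Works in any runtime with the WHATWG fetch API (browsers, Node 18+).
function parseRateLimitHeaders(headers: Headers): {
  limit: number;
  remaining: number;
  resetAt: Date;
} {
  return {
    limit: Number(headers.get("X-RateLimit-Limit")),
    remaining: Number(headers.get("X-RateLimit-Remaining")),
    // X-RateLimit-Reset is a Unix timestamp in seconds; Date expects milliseconds.
    resetAt: new Date(Number(headers.get("X-RateLimit-Reset")) * 1000),
  };
}
```

Header lookups via Headers.get are case-insensitive, so the helper works regardless of how the transport normalizes header names.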

Limits by Tier

Rate limits are based on your account tier:

Tier      Requests per minute   Publish requests per hour
Free      60                    10
Starter   300                   60
Pro       1,000                 300

Publish endpoints (POST /posts/:id/publish) have a separate, lower limit because each publish request triggers external API calls to social media platforms.
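Because publish requests are capped per hour, a client-side guard can avoid spending them on requests that would only return 429. A minimal sliding-window sketch, assuming the Free tier's 10 publishes per hour (the PublishLimiter class is illustrative, not part of any SDK):

```typescript
// Client-side guard for the separate publish limit (e.g. 10/hour on Free).
// Tracks timestamps of recent publishes and reports whether another is allowed.
class PublishLimiter {
  private timestamps: number[] = [];

  constructor(
    private maxPerWindow: number,
    private windowMs: number = 60 * 60 * 1000 // one hour
  ) {}

  // Returns true and records the request if it fits within the window.
  tryAcquire(now: number = Date.now()): boolean {
    // Drop timestamps that have fallen out of the sliding window.
    this.timestamps = this.timestamps.filter((t) => now - t < this.windowMs);
    if (this.timestamps.length >= this.maxPerWindow) {
      return false;
    }
    this.timestamps.push(now);
    return true;
  }
}
```

A caller would check tryAcquire() before each POST /posts/:id/publish and queue the request locally when it returns false. The server remains the source of truth; this only reduces avoidable 429s.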

Limits by Endpoint

Endpoint                   Rate Limit
GET /posts                 Per-tier limit (see above)
POST /posts                Per-tier limit
PUT /posts/:id             Per-tier limit
DELETE /posts/:id          Per-tier limit
POST /posts/:id/publish    Publish limit (see above)
GET /accounts              Per-tier limit
GET /presets               Per-tier limit
POST /presets              Per-tier limit
GET /webhooks              Per-tier limit
POST /webhooks             Per-tier limit

429 Too Many Requests

When you exceed a rate limit, the API responds with HTTP 429:

{
  "error": "Rate limit exceeded. Try again in 45 seconds.",
  "code": "rate_limited",
  "details": {
    "limit": 300,
    "remaining": 0,
    "reset_at": "2024-04-03T13:21:00Z",
    "retry_after": 45
  }
}

The response also includes a Retry-After header with the number of seconds to wait:

HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 300
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1712150460

Backoff Strategy

When you receive a 429 response, use this strategy:

  1. Read the Retry-After header. Wait at least that many seconds before retrying.
  2. If no Retry-After header is present, use exponential backoff: wait 1s, then 2s, then 4s, then 8s, up to a maximum of 60s.
  3. Add jitter. Add a random delay of 0-1 seconds to prevent all clients from retrying simultaneously.

Example: TypeScript

async function fetchWithBackoff(
  url: string,
  options: RequestInit,
  maxRetries = 5
): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, options);

    // Any non-429 response (success or other error) is returned to the caller.
    if (response.status !== 429) {
      return response;
    }

    if (attempt === maxRetries) {
      throw new Error("Rate limit exceeded after maximum retries");
    }

    // Prefer the server-provided Retry-After; otherwise fall back to
    // exponential backoff (1s, 2s, 4s, ...) capped at 60s.
    const retryAfter = response.headers.get("Retry-After");
    const delaySeconds = retryAfter
      ? parseInt(retryAfter, 10)
      : Math.min(Math.pow(2, attempt), 60);

    // Jitter (0-1s) spreads retries out across clients.
    const jitter = Math.random();
    await new Promise((r) => setTimeout(r, (delaySeconds + jitter) * 1000));
  }

  throw new Error("Unreachable");
}

Best Practices

  • Monitor rate limit headers. Check X-RateLimit-Remaining proactively and slow down before hitting the limit.
  • Pace bulk operations. When creating many posts, send the requests sequentially rather than in a parallel burst.
  • Cache responses. Cache GET responses to avoid unnecessary requests. Account and preset data changes infrequently.
  • Use webhooks instead of polling. Subscribe to webhooks for events like post.published instead of repeatedly polling GET /posts/:id.
  • Respect Retry-After. Always honor the Retry-After header. Clients that continue to send requests during a rate limit window may be temporarily blocked.
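The first practice above can be sketched as a small helper: given the latest X-RateLimit-Remaining and X-RateLimit-Reset values, it computes how long to pause before the next request (the function name and the threshold of 5 remaining requests are illustrative choices, not part of the API):

```typescript
// Proactive slowdown based on the rate limit headers of the last response.
// Returns a delay in milliseconds; 0 means there is plenty of headroom.
function pauseBeforeNextRequest(
  remaining: number,
  resetUnixSeconds: number,
  nowMs: number = Date.now(),
  threshold: number = 5
): number {
  if (remaining > threshold) return 0;
  const msUntilReset = resetUnixSeconds * 1000 - nowMs;
  if (msUntilReset <= 0) return 0; // the window has already reset
  if (remaining === 0) return msUntilReset; // wait out the full window
  // Spread the remaining requests evenly over the rest of the window.
  return Math.ceil(msUntilReset / remaining);
}
```

A caller would sleep for the returned number of milliseconds before its next request, slowing down smoothly instead of hitting the limit and handling a 429.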