Blazelock

Rate Limits

Understand request limits, throttling behavior, and how to avoid rejected requests.

The Blazelock API uses rate limits to protect platform stability and keep scanning performance predictable under load. The limits are intentionally generous and are designed not only for normal production traffic, but also for higher-throughput workloads.

If a request exceeds the currently allowed rate, the API responds with 429 Too Many Requests. This is a temporary response that tells your client to slow down and retry after the current window has reset.

Understanding Rate Limits

The current default rate limit is configured as follows:

Scope              | Rate limit
-------------------|------------------------
All API endpoints  | 160 requests per minute
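To stay under the default limit, a client can pace its own requests before the server ever rejects one. The sketch below is a hypothetical client-side pacer (not part of the Blazelock API) that spaces requests to at most 160 per minute; the injectable clock is only there to make the pacing logic easy to test.

```python
import time


class RequestPacer:
    """Spaces outgoing requests to stay under a per-minute limit.

    This is an illustrative helper, not an official Blazelock client.
    """

    def __init__(self, limit_per_minute=160, clock=time.monotonic):
        # Minimum spacing between requests: 60 s / 160 = 0.375 s by default.
        self.min_interval = 60.0 / limit_per_minute
        self.clock = clock
        self.last_sent = None

    def wait_time(self):
        """Seconds to wait before the next request may be sent."""
        if self.last_sent is None:
            return 0.0
        elapsed = self.clock() - self.last_sent
        return max(0.0, self.min_interval - elapsed)

    def record_send(self):
        """Call this immediately after sending a request."""
        self.last_sent = self.clock()
```

Before each API call, sleep for `wait_time()` seconds and then call `record_send()`. Evenly spacing requests is a simplification; if the server allows bursts within the window, a token-bucket variant may use the quota more efficiently.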

Requests are limited per sender, not per API key. In practice, this means the rate limit is tied to where the requests originate, rather than which API key is used to authorize them.

Rate limit exceeded

When you exceed the rate limit, you'll receive an HTTP 429 status code. Monitor the rate limit headers (see below) to avoid hitting this limit.
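A client can handle 429 responses by honoring the Retry-After header when present and falling back to exponential backoff otherwise. The helper below is an illustrative sketch (the function name and defaults are assumptions, not part of the Blazelock API):

```python
def retry_delay(status_code, headers, attempt, base=1.0, cap=60.0):
    """Return how many seconds to wait before retrying a request.

    Prefers the server-supplied Retry-After header; falls back to
    capped exponential backoff when the header is missing or unparsable.
    `attempt` is the zero-based retry count.
    """
    if status_code != 429:
        return 0.0  # not rate limited; no wait needed

    retry_after = headers.get("Retry-After")
    if retry_after is not None:
        try:
            return float(retry_after)
        except ValueError:
            pass  # malformed header; fall through to backoff

    # Exponential backoff: base * 2^attempt, capped at `cap` seconds.
    return min(cap, base * (2 ** attempt))
```

Call this after each response and sleep for the returned duration before retrying; adding a small random jitter on top helps avoid synchronized retries across many clients.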

Rate Limit Headers

Each API response includes these headers to help you monitor your usage:

Responses that stay within the limit include X-RateLimit-Limit and X-RateLimit-Remaining. When a request is rejected because the rate limit was exceeded, the response also includes X-RateLimit-Reset and Retry-After.

Header                 | Description
-----------------------|-----------------------------------------------------------------------------
X-RateLimit-Limit      | The maximum number of requests allowed in the current rate limit window.
X-RateLimit-Remaining  | The number of requests remaining in the current rate limit window.
X-RateLimit-Reset      | The Unix timestamp for when the current rate limit window resets.
Retry-After            | The number of seconds to wait before retrying after a 429 Too Many Requests response.
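The headers above can be read into a small structure after each response. This parser is a minimal sketch: the header names come from this page, but the return shape and the treatment of missing headers (returning None) are assumptions.

```python
def parse_rate_limit(headers):
    """Extract rate limit state from response headers.

    Headers absent from the response (e.g. X-RateLimit-Reset on a
    successful request) come back as None.
    """
    def to_int(name):
        value = headers.get(name)
        return int(value) if value is not None else None

    return {
        "limit": to_int("X-RateLimit-Limit"),
        "remaining": to_int("X-RateLimit-Remaining"),
        "reset": to_int("X-RateLimit-Reset"),
    }
```

Logging `remaining` after each call, and slowing down as it approaches zero, lets a client avoid 429 responses entirely rather than reacting to them.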

Higher Throughput Requirements

If you expect unusually high request volumes or you are hitting the limits in a legitimate production use case, contact Blazelock support. We review these cases individually and check whether a limit adjustment or a different integration pattern makes sense.

To evaluate your request, we need:

  • A short description of how you integrate the API.
  • Your expected request volume.
  • Whether the traffic is steady or burst-heavy.
  • Whether the workload runs in production, staging, or another environment.