What is Rate Limiting?
When you or your systems connect to the Scorebuddy API, there is a limit on how many requests can be made within a given time window. This is called rate limiting.
Think of it like a revolving door — only so many people can pass through per hour. Rate limiting exists to keep the platform stable and performant for everyone, ensuring no single integration can accidentally (or otherwise) overwhelm the system.
How Does It Work?
The Scorebuddy API enforces a per-hour request limit. Every request your system makes counts against this limit, regardless of whether it succeeds or fails.
Each API response includes three headers that tell you exactly where you stand:
| Header | What it means |
| --- | --- |
| `X-RateLimit-Limit` | The total number of requests allowed per hour |
| `X-RateLimit-Remaining` | How many requests you have left in the current window |
| `X-RateLimit-Reset` | The date and time (UTC) when your limit resets |
For developers: These headers are present on every response, so you can monitor your consumption in real time and back off proactively before hitting the limit.
For non-developers: In plain terms — your integration has a budget of requests per hour. These headers are like a running balance, telling you how much of that budget remains.
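As a minimal sketch of how a client might read these headers, the snippet below parses the three values from a response's header map and flags when the remaining budget runs low. The header names come from the table above; the numeric values and the 10% threshold are illustrative assumptions, not part of the API contract.

```python
def parse_rate_limit(headers):
    """Extract rate limit status from the three X-RateLimit-* response headers."""
    return {
        "limit": int(headers["X-RateLimit-Limit"]),
        "remaining": int(headers["X-RateLimit-Remaining"]),
        "reset": headers["X-RateLimit-Reset"],  # UTC date/time the window resets
    }

# Illustrative header values (not real API output):
sample = {
    "X-RateLimit-Limit": "1000",
    "X-RateLimit-Remaining": "742",
    "X-RateLimit-Reset": "2024-01-01T15:00:00Z",
}
status = parse_rate_limit(sample)
if status["remaining"] < status["limit"] * 0.1:
    print("Under 10% of quota left, slowing down")
```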
What Happens If You Hit the Limit?
If your integration exceeds the allowed number of requests, the API will respond with an HTTP 429 — Too Many Requests error. At this point, further requests will be rejected until the window resets.
The 429 response includes a Retry-After header, which tells you the exact UTC date and time at which you can safely resume making requests.
In short: stop, wait, and retry after the time given.
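One way to implement "stop, wait, and retry" is sketched below, assuming the `Retry-After` value is an ISO 8601 UTC timestamp as described above. The `handle_response` helper and its return values are hypothetical names for illustration.

```python
import time
from datetime import datetime, timezone

def seconds_until(retry_after: str) -> float:
    """Seconds to wait before the UTC time in a Retry-After header (0 if past)."""
    resume_at = datetime.fromisoformat(retry_after.replace("Z", "+00:00"))
    return max(0.0, (resume_at - datetime.now(timezone.utc)).total_seconds())

def handle_response(status_code, headers):
    """On a 429, pause until the window resets, then signal the caller to retry."""
    if status_code == 429:
        time.sleep(seconds_until(headers["Retry-After"]))
        return "retry"
    return "ok"
```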
Checking Your Rate Limit Status
You can check your current rate limit status at any time — without consuming a meaningful amount of your quota — by calling the /ping endpoint:
```
GET /{instance_id}/api/v1/ping
```

This endpoint requires no authentication and returns your current limit statistics in the response headers described above. It's a useful tool for monitoring or health-checking your integration.
Best Practices
Whether you're a developer building an integration or a team managing an existing one, keeping the following in mind will help you avoid hitting rate limits unexpectedly:
- **Poll efficiently.** For bulk data extraction (e.g. Scores), use date range filters like `from_last_edit_date` and `to_last_edit_date` to fetch only what has changed since your last request, rather than re-fetching everything.
- **Monitor your headers.** Don't wait for a 429: watch `X-RateLimit-Remaining` and slow down your requests as you approach zero.
- **Respect the `Retry-After` header.** If you do receive a 429, do not immediately retry. Wait until the time indicated in the header before resuming.
- **Implement a backoff strategy.** If the server returns a 500 error, the API documentation advises following a backoff algorithm, meaning you should wait progressively longer between retries rather than hammering the API repeatedly.
- **Avoid unnecessary calls.** Don't poll the API more frequently than your use case requires. Hourly or event-driven syncs are typically far more efficient than continuous polling.
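The backoff advice above can be sketched as a small retry wrapper. This is one common pattern (exponential backoff), not the specific algorithm the API documentation mandates; the function names and parameters are illustrative.

```python
import time

def with_backoff(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on 5xx responses, doubling the wait between attempts.

    `call` is any zero-argument function returning (status_code, body).
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status < 500:
            return status, body
        # Wait 1s, 2s, 4s, ... before the next attempt
        time.sleep(base_delay * (2 ** attempt))
    return status, body
```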
Common Mistakes
Repeatedly Looking Up the Same Reference Data
The /scores endpoint returns a number of IDs alongside each score — group_id, team_id, scorecard_id, staff_id, and others. A very common pattern we see is integrations that, for every score returned, fire off a follow-up request to resolve each of those IDs:
```
GET /scores      → returns score with group_id: 35
GET /groups/35   → what is group 35?
GET /scores      → next score, also has group_id: 35
GET /groups/35   → what is group 35? (again)
```
If you're pulling hundreds or thousands of scores, this multiplies your request count dramatically — and most of those calls are asking for information you've already received.
The fix is simple: cache reference data locally.
Groups, teams, scorecards, staff members, and users change infrequently. Once you've resolved a given ID, store the result in memory or in your own data store, and check there first before making another API call. A lookup table keyed by ID works well for this:
```
if group_id not in local_cache:
    fetch from /groups/{group_id} and store result
else:
    use cached value
```

This single habit can reduce your API call volume significantly, particularly for high-volume score exports. As a rule of thumb: if it has its own dedicated endpoint and doesn't change often, it's a good candidate for caching.
Data that is generally safe to cache for extended periods includes: Groups, Teams, Scorecards and their Versions, Events and Sub Events, Custom Objects, and Data Tags. Staff and User records change more frequently (due to onboarding, offboarding, team moves, etc.), so if accuracy matters for those, you may want to refresh your cache periodically rather than caching indefinitely.
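A concrete version of the lookup-table idea, with a time-to-live so that faster-changing records like Staff and Users refresh periodically, might look like this. The class name, TTL value, and `fetch` callback are illustrative assumptions, not part of the Scorebuddy API.

```python
import time

class ReferenceCache:
    """TTL cache for reference data (groups, teams, scorecards, etc.)."""

    def __init__(self, ttl_seconds=3600.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, time stored)

    def get(self, key, fetch):
        """Return the cached value for `key`, calling fetch() only on a miss
        or when the cached entry is older than the TTL."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is None or now - entry[1] > self.ttl:
            value = fetch()  # e.g. a real call to /groups/{key}
            self._store[key] = (value, now)
            return value
        return entry[0]
```

With a long TTL for stable data (groups, scorecards) and a shorter one for staff and user records, repeated ID lookups become in-memory reads instead of API calls.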
Summary
| Scenario | What to expect |
| --- | --- |
| Normal operation | `X-RateLimit-Remaining` decreases with each request |
| Approaching the limit | Monitor headers and reduce request frequency |
| Limit exceeded | HTTP 429 returned; check `Retry-After` before retrying |
| Server error | HTTP 500 returned; apply a backoff strategy |
| Checking your status | Call `GET /ping` to inspect limit headers without side effects |
If you have questions about your specific rate limit allocation or need a higher limit for your use case, please reach out to Scorebuddy Support.
