
Scrunch AI: Early Access Responses API Documentation

Basic documentation on authenticating and integrating Scrunch Responses data programmatically.


Last Updated: September 22nd, 2025

Overview

The Responses API lets customers ingest AI responses captured by Scrunch for further analysis, power aggregated data marts for BI purposes, or support custom interfaces built on Scrunch data where the full response text and citations are needed.

Suggested Use

For ETL scenarios, you can use the start_date and end_date parameters to implement date filtering or "high watermarks." Note that these parameters currently accept dates only, not full-granularity timestamps. You can use the created_at date of the most recently retrieved response as a high watermark, or simply load the previous day's data after midnight UTC. Each Response has a unique ID that can be used for deduplication when a data load overlaps previously captured data.
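The watermark-and-dedup pattern above can be sketched as a small helper. This is an illustrative sketch, not Scrunch-provided code; it assumes each page item is a dict carrying the documented id and created_at fields.

```python
from datetime import date

def dedupe_and_advance(items, seen_ids, watermark):
    """Keep only unseen responses and advance the high watermark.

    items: list of dicts with "id" and "created_at" (ISO timestamp) keys,
    as returned in a Responses API page. seen_ids is mutated in place.
    Returns (new_items, updated_watermark_date).
    """
    new_items = []
    for item in items:
        if item["id"] in seen_ids:
            continue  # overlap with a previous load; skip the duplicate
        seen_ids.add(item["id"])
        new_items.append(item)
        # Track the latest created_at date as the next start_date
        item_date = date.fromisoformat(item["created_at"][:10])
        if watermark is None or item_date > watermark:
            watermark = item_date
    return new_items, watermark
```

On the next run, the returned watermark date can be passed as start_date (inclusive), with the id set guarding against the overlap that an inclusive date boundary creates.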

The primary data in Responses – response text and citations – is immutable. The dimensional fields stage, branded, tags, and key_topics may change as a result of user action in the Scrunch UI or updates to prompt metadata via the API. The metric fields – brand_present, brand_sentiment, brand_position, and competitors_present – are normally immutable but may be re-evaluated if the brand's configuration changes and re-evaluation is requested.

Background

Please see the general API documentation, Scrunch AI: Early Access API Documentation, for basic information about authentication, API key scopes, obtaining and specifying brand IDs, and other available APIs.

API Details

Endpoint

Request Parameters

All parameters are optional.

| Parameter | Type | Description | Notes |
|---|---|---|---|
| platform | String (Enum) | AI platforms to retrieve responses for | Valid options are (subject to expansion): chatgpt, claude, google_ai_overviews, perplexity, meta, google_ai_mode, google_gemini |
| prompt_id | Number | Specific prompt ID to retrieve responses for | |
| start_date | Date (YYYY-MM-DD) | Start date of responses to retrieve (inclusive) | |
| end_date | Date (YYYY-MM-DD) | End date of responses to retrieve (exclusive) | |
| limit | Number | Limit of responses to return | Must be greater than 1, max 1000 |
| offset | Number | Offset to paginate into | Should be a multiple of limit |
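Since all parameters are optional, a request URL can be assembled by including only the filters you need. The sketch below is illustrative only: the base URL is a placeholder (see the Endpoint section and the general API documentation for the actual value), and authentication headers are omitted.

```python
from urllib.parse import urlencode

# Placeholder base URL for illustration; substitute the real endpoint.
BASE_URL = "https://api.example.com/responses"

def build_request_url(start_date=None, end_date=None, platform=None,
                      prompt_id=None, limit=100, offset=0):
    """Assemble a Responses API query string; all filters are optional."""
    params = {"limit": limit, "offset": offset}
    if start_date:
        params["start_date"] = start_date  # inclusive, YYYY-MM-DD
    if end_date:
        params["end_date"] = end_date      # exclusive, YYYY-MM-DD
    if platform:
        params["platform"] = platform      # e.g. "chatgpt"
    if prompt_id is not None:
        params["prompt_id"] = prompt_id
    return f"{BASE_URL}?{urlencode(params)}"
```

Because end_date is exclusive, a single day's data is the window [day, day + 1).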

Response Format

The Responses API returns Scrunch's standard format for paginated collections:

Collection<Response>

| Field | Type | Description | Notes |
|---|---|---|---|
| total | Number | Total number of responses available for current parameters | |
| offset | Number | Current offset retrieved | Add limit to this value to retrieve the next page |
| limit | Number | Limit of responses to retrieve for current request | |
| items | Response[] | List of Response objects | |
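The total, offset, and limit fields drive pagination: keep requesting pages, advancing offset by limit, until offset reaches total. A minimal sketch, with fetch_page standing in for whatever HTTP call your integration makes:

```python
def fetch_all(fetch_page, limit=1000):
    """Collect all items from a paginated Collection<Response> endpoint.

    fetch_page(offset, limit) must return a dict with "total", "offset",
    "limit", and "items" keys, mirroring the documented collection format.
    """
    items = []
    offset = 0
    while True:
        page = fetch_page(offset, limit)
        items.extend(page["items"])
        offset = page["offset"] + page["limit"]  # advance to the next page
        if offset >= page["total"]:
            break
    return items
```

Keeping offset a multiple of limit, as the parameter table recommends, falls out naturally from this loop.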

Response

| Field | Type | Description | Notes |
|---|---|---|---|
| id | Number | Unique ID for Response | In normal operations, Responses are immutable; you can safely deduplicate or upsert on id |
| created_at | Timestamp (UTC) | Granular timestamp the response was collected at | |
| prompt_id | Number | Unique ID for prompt | Can be retrieved from the Prompts API |
| prompt | String | Prompt text | |
| persona_id | Number (Nullable) | Persona ID attached to prompt | |
| persona_name | String (Nullable) | Persona name attached to prompt | |
| country | String (Nullable) | 2-character ISO country code the response was retrieved for | |
| stage | String (Enum) | Stage of the customer journey | Advice, Awareness, Comparison, Evaluation, Other |
| tags | String[] | Tags attached to prompt | |
| key_topics | String[] | Key topics attached to prompt | |
| platform | String (Enum) | AI platform the response was retrieved from | |
| brand_present | Boolean | Whether the brand is present in the response | |
| brand_sentiment | String (Enum) (Nullable) | Null when brand not present | Positive, Mixed, Negative, None |
| brand_position | String (Enum) (Nullable) | Null when brand not present | Top, Middle, Bottom |
| competitors_present | String[] | List of competitor names found in the response | |
| response_text | String | Full text of the response | Markdown format |
| citations | Citation[] | See definition below | |
| competitors | CompetitorEvaluation[] | See definition below | Presents a superset of the information in competitors_present |
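Because competitors carries a superset of the information in competitors_present, the latter can be derived by filtering CompetitorEvaluation records on their present flag. A sketch, assuming each record is a dict shaped like the CompetitorEvaluation table below:

```python
def present_competitor_names(competitors):
    """Derive the competitors_present list from CompetitorEvaluation records."""
    return [c["name"] for c in competitors if c["present"]]
```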

Citation

| Field | Type | Description | Notes |
|---|---|---|---|
| url | String | URL of citation | |
| title | String (Nullable) | Title tag of citation | Not exposed by all platforms |
| snippet | String (Nullable) | Search engine snippet (description) of citation – often but not always from the meta description tag | Not exposed by all platforms |

CompetitorEvaluation

| Field | Type | Description | Notes |
|---|---|---|---|
| name | String | Name of competitor | |
| id | Number | Unique ID for competitor | |
| present | Boolean | Whether competitor is present in response | |
| position | String (Enum) (Nullable) | Null when present is false | Top, Middle, Bottom |
| sentiment | String (Enum) (Nullable) | Null when present is false | Positive, Mixed, Negative, None |

ERD

The Responses API effectively denormalizes the Prompt, Tag, Topic, Persona, and Variant (via the "Platform" field) dimensions onto Response. Unlike the Query API, one-to-many or many-to-many relationships from Prompts to other dimensions are represented as lists (e.g. key_topics) rather than producing additional rows.

If customers wish to build dimension tables in a data mart loaded from the Responses API, they must create or "fan out" these values themselves.
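One way to fan out a list-valued field into bridge-table rows is sketched below. This is an illustrative pattern, not Scrunch-provided tooling; it assumes responses are dicts with an "id" key and a list-valued field such as key_topics or tags.

```python
def fan_out(responses, list_field):
    """Explode a list-valued field (e.g. "key_topics" or "tags") into
    (response_id, value) rows suitable for a bridge table in a data mart."""
    rows = []
    for r in responses:
        for value in r.get(list_field, []):
            rows.append((r["id"], value))
    return rows
```

The resulting rows can then be deduplicated on the value column to populate the dimension table itself.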
