You can use either Customer Journey Analytics (CJA) via Adobe Experience Platform (AEP) or the classic Adobe Analytics report suite via Bulk Data Insertion API (BDIA).
Scrunch does not yet offer a native Adobe connector, but our public APIs make it easy to automate with a lightweight ETL script.
Prerequisites
Before you begin:
| Requirement | Description |
|---|---|
| Scrunch API access | API key from your Scrunch account (contact your account admin if unsure). |
| Adobe access | Either (a) AEP + CJA access with rights to create Datasets and Data Views, or (b) a classic Adobe Analytics account with permission for BDIA uploads. |
| ETL environment | Any environment that can run Python, Node.js, or another scripting language to fetch and push data. |
| Schedule | We recommend daily or weekly pulls, depending on your prompt volume. |
Choose your Adobe destination
| Option | When to use | Data flow overview |
|---|---|---|
| CJA via AEP | Recommended for most users; supports any structured dataset | Scrunch API → transform to XDM → batch upload to AEP → connect in CJA |
| Classic Adobe Analytics | If your organization hasn’t migrated to CJA yet | Scrunch API → transform to CSV → upload via BDIA or FTP Data Sources |
Scrunch API Overview
Scrunch provides two key APIs that you can use for Adobe ingestion.
Important: All API calls require a Brand ID. To find your Brand ID, use the List Brands endpoint:
```
GET https://api.scrunchai.com/v1/brands
Authorization: Bearer YOUR_API_KEY
```
This returns all brands your API key has access to. Note the id field for the brand you want to integrate.
Data limitation: The Scrunch API retains the last 90 days of data. Plan your ETL schedule accordingly and store historical data in your own system if longer retention is needed.
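Because of the 90-day window, a backfill that reaches further into the past should be clamped before calling the API. A minimal sketch (the helper name and the injectable `today` parameter are ours, added for testability):

```python
import datetime
from typing import Optional

RETENTION_DAYS = 90  # mirrors the Scrunch API's 90-day retention window

def clamp_start_date(start_date: datetime.date,
                     today: Optional[datetime.date] = None) -> datetime.date:
    """Pull start_date forward if it falls outside the retention window."""
    today = today or datetime.date.today()
    earliest = today - datetime.timedelta(days=RETENTION_DAYS)
    return max(start_date, earliest)
```

Run this on every scheduled pull so a misconfigured date range degrades gracefully instead of returning an error or empty data.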
a. Query API
Returns aggregated metrics by date, prompt, and platform.
Endpoint: `GET https://api.scrunchai.com/v1/{brand_id}/query`
Example fields
| Field | Description |
|---|---|
| `date` | Date of the query (YYYY-MM-DD); date filters are limited to the last 90 days |
| `ai_platform` | ChatGPT, Gemini, Perplexity, etc. |
| `prompt_id` | Unique Scrunch prompt identifier |
| `prompt` | Prompt text |
| `persona_name` | Persona used |
| `country` | Geo context |
| `responses` | Count of collected AI responses |
| `brand_presence_percentage` | % of responses mentioning your brand |
| `brand_position_score` | Normalized ranking/visibility score |
| `brand_sentiment_score` | Sentiment derived from AI response content |
Note: The Query API auto-aggregates metrics based on dimensions specified. For example, requesting `date,ai_platform,brand_presence_percentage` returns average brand presence per platform.
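To illustrate, the request parameters for that platform-level example might be assembled like this (parameter names follow the examples in this guide; the helper function itself is ours):

```python
def build_query_params(start_date: str, end_date: str, fields: list) -> dict:
    """Assemble query-string parameters for the Scrunch Query API.

    The fields list mixes dimensions and metrics; the API aggregates
    metrics over whichever dimensions are present.
    """
    return {
        "start_date": start_date,
        "end_date": end_date,
        "fields": ",".join(fields),
    }

# Average brand presence per platform per day:
params = build_query_params(
    "2025-10-20", "2025-10-27",
    ["date", "ai_platform", "brand_presence_percentage"],
)
```

Dropping `date` from the list would aggregate over the whole date range instead of per day.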
b. Responses API
Returns row-level data for each individual AI response (useful for text analysis, citation details, or domain influence).
Endpoint: `GET https://api.scrunchai.com/v1/{brand_id}/responses`
Pagination: Maximum 1000 records per request. Use `limit` and `offset` parameters for larger datasets.
Example:

```
GET https://api.scrunchai.com/v1/{brand_id}/responses?limit=1000&offset=0&start_date=2025-10-20&end_date=2025-10-27
```
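The paging loop can be isolated from the HTTP call, which makes it easy to test; the sketch below injects the page-fetching function (the `items` array shape matches the pagination example later in this guide; the helper is ours):

```python
from typing import Callable

def fetch_all(fetch_page: Callable[[int, int], list], limit: int = 1000) -> list:
    """Collect every record by stepping offset until a short page comes back.

    fetch_page(limit, offset) should return one page of records, e.g. the
    `items` array from a Responses API reply.
    """
    records, offset = [], 0
    while True:
        page = fetch_page(limit, offset)
        records.extend(page)
        if len(page) < limit:  # a short (or empty) page means we're done
            return records
        offset += limit
```

In production, `fetch_page` would wrap the `GET .../responses` call with your auth header and date-range parameters.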
Key fields
| Field | Description |
|---|---|
| `id` | Unique response ID (for deduplication) |
| `created_at` | Timestamp when the response was collected (UTC) |
| `ai_platform` | AI platform name (e.g., ChatGPT, Perplexity, Google AI Overviews, Meta, Gemini) |
| `prompt_id` | ID of the related prompt |
| `prompt` | Full text of the prompt used to collect the response |
| `citations` | Array of citation objects containing url, title, snippet, source type, and domain |
| `brand_present` | Boolean indicating if the brand is mentioned in the response |
| `brand_position` | “Top,” “Middle,” or “Bottom” (null if brand not present) |
| `brand_sentiment` | “Positive,” “Mixed,” “Negative,” or “None” (null if brand not present) |
| `competitors_present` | Array of competitor names detected in the response |
| `country` | Country code associated with the persona (ISO format) |
| `response_text` | Full text of the AI response in markdown format |
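Since `id` is the deduplication key, re-running overlapping date windows is safe as long as you drop repeats before upload. A small sketch (assumes each record is a dict with an `id` key, as in the table above):

```python
def dedupe_responses(responses: list) -> list:
    """Drop duplicate responses, keeping the first occurrence of each id."""
    seen = set()
    unique = []
    for r in responses:
        if r["id"] not in seen:
            seen.add(r["id"])
            unique.append(r)
    return unique
```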
Step-by-Step: Load into Adobe Experience Platform (AEP + CJA)
Step 1: Define XDM schema
Create a schema in AEP based on ExperienceEvent.
Example fields:

```
timestamp : dateTime
_id : string (required for ExperienceEvent)
eventType : string (e.g., "scrunch.ai.response")
aiPlatform : string
promptId : string
promptText : string
personaName : string
country : string
brandPresencePct : double
brandPositionScore : double
brandSentimentScore : double
responses : long
```
**For Responses API integration, add these additional fields**:
```
responseId : string
createdAt : dateTime
brandPresent : boolean
brandPosition : string (enum: Top, Middle, Bottom)
brandSentiment : string (enum: Positive, Mixed, Negative, None)
competitorsPresent : string[] (array of competitor names)
responseText : string
citations : object[] (array of citation objects)
├─ url : string
├─ title : string
├─ snippet : string
├─ sourceType : string (enum: Brand, Competitor, Other)
└─ domain : string
```
Field notes:

`_id`: Required unique identifier for each ExperienceEvent record. Use a combination like `{brand_id}_{prompt_id}_{date}_{platform}` for Query API data, or `response_{id}` for Responses API data.

`eventType`: Helps distinguish Scrunch events from other data sources in AEP. Use values like `"scrunch.ai.query"` or `"scrunch.ai.response"`.

`timestamp`: Use the `date` field from the Query API or `created_at` from the Responses API.

Nullable fields: `brandPositionScore`, `brandSentimentScore`, `personaName`, and `brandPosition`/`brandSentiment` can be null when the brand is not present.

`citations`: If using the Responses API, structure as an array of objects with the subfields shown above.
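The `_id` conventions described in the field notes can be captured in two small helpers (function names are ours):

```python
def query_event_id(brand_id: str, prompt_id: str, date: str, platform: str) -> str:
    """Build a stable _id for a Query API row: {brand_id}_{prompt_id}_{date}_{platform}."""
    return f"{brand_id}_{prompt_id}_{date}_{platform}"

def response_event_id(response_id: str) -> str:
    """Build a stable _id for a Responses API row: response_{id}."""
    return f"response_{response_id}"
```

Deterministic IDs like these mean a re-run of the same window produces the same `_id` values, so replays don't create duplicate events in AEP.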
Step 2: Create dataset
In AEP:
Navigate to Datasets → Create Dataset from Schema.
Select the schema above and name it “Scrunch_AI_Metrics”.
Note the dataset ID for later use.
Step 3: Build a small ETL script
Example Python outline:

```python
import requests
import json
import sys
import datetime

# Configuration
SCRUNCH_API_KEY = "YOUR_SCRUNCH_API_KEY"
SCRUNCH_BRAND_ID = "YOUR_BRAND_ID"  # Get from GET /v1/brands endpoint
AEP_INGEST_URL = "https://platform.adobe.io/data/foundation/import/batches"
AEP_DATASET_ID = "YOUR_AEP_DATASET_ID"

# Set date range (last 7 days)
# Note: API only retains last 90 days of data
end_date = datetime.date.today()
start_date = end_date - datetime.timedelta(days=7)

# 1. Fetch data from Scrunch Query API
# Must specify the fields parameter for the dimensions and metrics you want
resp = requests.get(
    f"https://api.scrunchai.com/v1/{SCRUNCH_BRAND_ID}/query",
    headers={"Authorization": f"Bearer {SCRUNCH_API_KEY}"},
    params={
        "start_date": start_date.strftime("%Y-%m-%d"),
        "end_date": end_date.strftime("%Y-%m-%d"),
        "fields": "date,ai_platform,prompt_id,prompt,persona_name,country,responses,brand_presence_percentage,brand_position_score,brand_sentiment_score"
    }
)

if resp.status_code != 200:
    print(f"Error fetching data: {resp.status_code}")
    print(resp.text)
    sys.exit(1)

# Parse response - Query API returns an array directly
data = resp.json()

# Verify data structure
if not isinstance(data, list):
    print(f"Unexpected response format: {type(data)}")
    print("Response:", json.dumps(data, indent=2)[:500])
    sys.exit(1)

print(f"Retrieved {len(data)} records from Scrunch")

# 2. Transform to XDM ExperienceEvent format
events = []
for row in data:
    # Handle potential null values; _id and eventType are required by the schema
    events.append({
        "_id": f"{SCRUNCH_BRAND_ID}_{row.get('prompt_id')}_{row.get('date')}_{row.get('ai_platform')}",
        "eventType": "scrunch.ai.query",
        "timestamp": row.get("date"),
        "aiPlatform": row.get("ai_platform", "Unknown"),
        "promptId": str(row.get("prompt_id", "")),
        "promptText": row.get("prompt", ""),
        "personaName": row.get("persona_name"),
        "country": row.get("country"),
        "responses": int(row.get("responses", 0)),
        "brandPresencePct": float(row.get("brand_presence_percentage", 0)),
        "brandPositionScore": float(row["brand_position_score"]) if row.get("brand_position_score") is not None else None,
        "brandSentimentScore": float(row["brand_sentiment_score"]) if row.get("brand_sentiment_score") is not None else None
    })

# 3. Write to JSON file for batch upload
output_file = "scrunch_batch.json"
with open(output_file, "w") as f:
    json.dump(events, f, indent=2)

print(f"Successfully transformed {len(events)} records to {output_file}")

# 4. Upload to AEP batch ingestion endpoint
# (See Adobe's AEP Batch Ingestion API docs for authentication and upload syntax.)
# You'll need to:
# - Create a batch
# - Upload the file
# - Signal batch completion
```
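As a rough sketch of those three upload calls, the endpoint paths below follow Adobe's Batch Ingestion API, but verify them against the current Adobe documentation; the IMS access token, `x-api-key`, and org-ID headers required for the actual requests come from your Adobe developer project and are omitted here:

```python
# Base path for Adobe's Batch Ingestion API (verify against current Adobe docs)
AEP_BASE = "https://platform.adobe.io/data/foundation/import"

def batch_urls(batch_id: str, dataset_id: str, file_name: str) -> dict:
    """Build the three endpoints used in sequence:
    1. POST create   -> creates a batch, returning its batch_id
    2. PUT  upload   -> uploads the JSON file into the batch
    3. POST complete -> signals the batch is ready for ingestion
    """
    return {
        "create": f"{AEP_BASE}/batches",
        "upload": f"{AEP_BASE}/batches/{batch_id}/datasets/{dataset_id}/files/{file_name}",
        "complete": f"{AEP_BASE}/batches/{batch_id}?action=COMPLETE",
    }
```

Since the batch ID is only known after the create call, build the upload and complete URLs after parsing the create response.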
For high-volume Responses API ingestion (optional):

```python
# Fetch row-level responses with pagination
all_responses = []
offset = 0
limit = 1000

while True:
    resp = requests.get(
        f"https://api.scrunchai.com/v1/{SCRUNCH_BRAND_ID}/responses",
        headers={"Authorization": f"Bearer {SCRUNCH_API_KEY}"},
        params={
            "start_date": start_date.strftime("%Y-%m-%d"),
            "end_date": end_date.strftime("%Y-%m-%d"),
            "limit": limit,
            "offset": offset
        }
    )
    if resp.status_code != 200:
        print(f"Error: {resp.status_code}")
        break

    batch = resp.json()
    items = batch.get("items", [])
    if not items:
        break

    all_responses.extend(items)
    print(f"Fetched {len(items)} responses (total: {len(all_responses)})")

    # Check if we've retrieved all available data
    if len(items) < limit:
        break
    offset += limit

print(f"Total responses retrieved: {len(all_responses)}")
```
Step 4: Connect in CJA
Go to Customer Journey Analytics → Connections → Create Connection.
Choose your AEP dataset (Scrunch_AI_Metrics).
Build a Data View with relevant fields.
Create a new Workspace project to visualize Scrunch metrics over time.
Step-by-Step: Load into Classic Adobe Analytics (BDIA)
Step 1: Create custom variables
Reserve a few eVars and events in your report suite:
| Variable | Example mapping |
|---|---|
| eVar1 | AI Platform |
| eVar2 | Prompt ID |
| eVar3 | Country |
| eVar4 | Competitor |
| event1 | Responses |
| event2 | Brand Presence % |
| event3 | Brand Position Score |
Step 2: Generate BDIA file
Each line represents a “hit”:
```
Date,AI Platform,Prompt ID,Country,Responses,Brand Presence %,Brand Position Score
2025-10-26,ChatGPT,12345,US,200,65,82
```
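Generating that file from the transformed rows is straightforward with the standard library; this sketch assumes each row is a dict keyed by the column names in the sample above:

```python
import csv
import io

BDIA_COLUMNS = ["Date", "AI Platform", "Prompt ID", "Country",
                "Responses", "Brand Presence %", "Brand Position Score"]

def to_bdia_csv(rows: list) -> str:
    """Render rows (dicts keyed by the column names above) as a CSV string."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=BDIA_COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```

Write the returned string to disk with UTF-8 encoding, since encoding issues are a common cause of BDIA upload errors.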
Step 3: Upload
Use Adobe’s Bulk Data Insertion API or FTP Data Sources to upload the file daily.
See Adobe BDIA Documentation.
Recommended metrics in Adobe
| KPI | Description | Example Adobe setup |
|---|---|---|
| Brand Presence % | % of AI responses where brand was mentioned | Use event2 |
| Brand Position Score | Normalized average ranking per AI platform | Average of event3 |
| Influence Score | Weighting based on citations across domains | Derived field using citation data |
| Sentiment | Avg. sentiment per response | Average of brand_sentiment_score |
| Response Volume | Total AI responses collected | Sum of event1 |
Note: `brand_presence_percentage` comes pre-calculated from the API (0-100 scale). You don't need to compute `(brand_present / total_responses) × 100` manually.
Automation and Scheduling
Run ETL daily or weekly depending on prompt frequency.
Store results in a staging bucket (S3, GCS, or Azure Blob).
Use Airflow, GitHub Actions, or Cloud Scheduler for automation.
Keep historical data in your own store for trend analysis; the Scrunch API supports date-based filtering but retains only the last 90 days.
Troubleshooting
| Issue | Likely Cause | Resolution |
|---|---|---|
| Empty dataset | No prompts or incorrect date range | Confirm the API call includes correct start_date and end_date values within the last 90 days |
| "Brand ID required" error | Missing brand ID in API path | Use GET /v1/brands to look up your Brand ID and include it in the request path |
| Fields parameter error | Incorrect field names | Match field names exactly from API documentation (case-sensitive) |
| AEP batch rejected | Schema mismatch | Ensure field names and types match your XDM definition |
| BDIA error 405 | File format issue | Verify CSV headers and ensure UTF-8 encoding |
| Time zone mismatch | Scrunch returns UTC | Adjust timestamps in ETL before sending to Adobe |
| Pagination incomplete | Not handling offset correctly | For the Responses API, continue paginating until a page returns fewer than limit records |
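For the time-zone row above, converting Scrunch's UTC timestamps before upload can be done with the standard library; the target zone below is just an example:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

def utc_to_local(iso_utc: str, tz_name: str = "America/New_York") -> str:
    """Convert an ISO-8601 UTC timestamp (with Z or +00:00 suffix)
    to the given IANA time zone."""
    dt = datetime.fromisoformat(iso_utc.replace("Z", "+00:00"))
    return dt.astimezone(ZoneInfo(tz_name)).isoformat()
```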
Example project structure
```
scrunch_adobe_integration/
│
├── etl.py                # Fetch + transform + upload
├── config.json           # API keys, dataset IDs
├── schemas/
│   └── scrunch_xdm.json  # XDM field definitions
├── outputs/
│   ├── scrunch_batch.json
│   └── scrunch_bdia.csv
└── README.md
```
