Overview
API integrations allow you to connect PXM with external systems such as ERPs, ecommerce platforms, or custom applications. Instead of manually updating data, APIs enable systems to communicate automatically — saving time, reducing errors, and keeping your product content in sync.
The PXM API is a RESTful HTTP API using JSON for request and response bodies. It supports operations on collections (products), files (digital assets), categories (navigation folders), attributes (metadata fields), and more.
Base URL: https://entapi.amplifi.io
All versioned endpoints live under /v2.1/. For example, to list collections: GET https://entapi.amplifi.io/v2.1/collection
Before You Begin
To create or manage API integrations in PXM, you must have the Super Admin role.
If you do not see the settings or options described below, contact your PXM administrator to confirm your role.
Creating an API Authorization
Follow these steps to generate your API credentials:
In PXM, open the left-hand sidebar and click Settings.
Under System, select the Integrations tab.
In the bottom-right corner, click Register Application.
Enter a name for your integration (e.g., ERP Sync) and save.
PXM will generate a Client ID and Client Secret. Copy both immediately — the secret is only shown once.
⚠️ Keep your Client Secret secure. Do not commit it to source control or share it publicly. Store it in an environment variable or secrets manager.
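For example, the credentials can be read from environment variables at runtime (the variable names below are illustrative, not prescribed by PXM):

```python
import os

# Read PXM credentials from the environment; never hardcode them in scripts.
CLIENT_ID = os.environ.get("PXM_CLIENT_ID", "")
CLIENT_SECRET = os.environ.get("PXM_CLIENT_SECRET", "")

if not CLIENT_ID or not CLIENT_SECRET:
    print("Warning: PXM_CLIENT_ID / PXM_CLIENT_SECRET are not set")
```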
Authentication & Getting a Token
The PXM API uses OAuth 2.0 Client Credentials flow. You exchange your Client ID and Client Secret for a Bearer token, which is then included on every subsequent API request.
Step 1 — Request a Token
Send a POST request to the token endpoint. Your Client ID and Client Secret must be Base64-encoded and passed as a Basic authorization header.
Endpoint: POST /v2.1/oauth/authorize
Headers:
Authorization: Basic <base64(client_id:client_secret)>
Content-Type: application/json
Request Body:
{
  "grant_type": "client_credentials"
}
Response:
{
  "access_token": "eyJhbGciOiJSUzI1NiIs...",
  "token_type": "Bearer",
  "expires_in": "31536000",
  "user": {
    "email": "api@yourcompany.com",
    "hostname": "yourcompany.amplifi.io"
  }
}
Step 2 — Include the Token on All Requests
Pass the access_token value as a Bearer token on every subsequent API call:
Authorization: Bearer eyJhbGciOiJSUzI1NiIs...
Python Example — Full Auth Flow
import base64

import requests

CLIENT_ID = "your_client_id"
CLIENT_SECRET = "your_client_secret"
BASE_URL = "https://entapi.amplifi.io/v2.1"

def get_token(client_id, client_secret):
    # Base64-encode "client_id:client_secret" for the Basic auth header
    credentials = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    resp = requests.post(
        f"{BASE_URL}/oauth/authorize",
        headers={
            "Authorization": f"Basic {credentials}",
            "Content-Type": "application/json"
        },
        json={"grant_type": "client_credentials"}
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

token = get_token(CLIENT_ID, CLIENT_SECRET)
headers = {
    "Authorization": f"Bearer {token}",
    "Content-Type": "application/json"
}
💡 Tokens are long-lived (approximately 1 year). Cache your token and reuse it across requests. Only re-authenticate if you receive a 401 Unauthorized response.
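One way to implement that caching (a minimal sketch; fetch_token stands in for the get_token call above) is to hold the token and drop it only after a 401:

```python
class TokenCache:
    """Cache a long-lived bearer token; re-fetch only when missing or invalidated."""

    def __init__(self, fetch_token):
        self._fetch = fetch_token  # callable that returns a fresh access token
        self._token = None

    def get(self):
        # Reuse the cached token across requests.
        if self._token is None:
            self._token = self._fetch()
        return self._token

    def invalidate(self):
        # Call this after a 401 Unauthorized, then retry the request with get().
        self._token = None
```

On a 401 response, call invalidate() and retry the request once with a freshly fetched token.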
Accessing the PXM API Documentation
The interactive Swagger UI lets you explore every endpoint and test live requests directly in your browser:
Open the documentation URL above.
Click Authorize (top right) and enter your Client ID and Client Secret.
Expand any endpoint group to see available operations, try them live, and inspect the expected request/response shapes.
Key Concepts
Understanding these core terms will help you work with the API more effectively:
PXM Term | What It Means | API Resource
--- | --- | ---
Collection | A product record. Parent collections represent a product family; variants represent individual SKUs. | /collection
File | A digital asset (image, video, PDF, etc.) stored in the DAM. | /file
Category | A navigation folder used to organize files in the DAM. Separate from the product hierarchy. | /category
Attribute | A metadata field on a collection, file, or category (e.g., "Description", "UPC", "Color"). | /attribute
Region | A locale/market context (e.g., US, EU). Products and files can be scoped to specific regions. | /region
Label | A tag applied to a file for filtering and organization (analogous to a keyword). | Returned via /file responses
ℹ️ Important: In the API, products are referred to as collections. If you're used to thinking in terms of "products" in your ERP or ecommerce system, map that concept to /collection in the API.
Common Endpoints
Collections (Products)
- GET /v2.1/collection — List all collections. Supports filtering by parent_id, region_ids, date ranges, and more. Paginate with limit and offset.
- GET /v2.1/collection/{id} — Get a single collection by ID, including all its attributes.
- POST /v2.1/collection — Create a new collection (product).
- PUT /v2.1/collection/{id} — Update a collection. See the PUT behavior section below — this is a full replacement, not a partial update.
Files (Digital Assets)
- GET /v2.1/file — List files. Filter by collection_ids, date ranges, regions.
- GET /v2.1/file/{id} — Get a single file by ID, including metadata, attributes, and linked collections.
- PUT /v2.1/file/{id} — Update a file record — e.g., associate it with collections, update attributes, or apply labels.
- POST /v2.1/file/request-upload — Initiate a file upload. Returns a pre-signed S3 URL. Then upload the file directly to that URL, and call POST /file/confirm-upload to finalize.
Attributes
- GET /v2.1/attribute — List all attribute definitions in your PXM instance.
- GET /v2.1/collection/{id}/attribute — Get all attribute values for a specific collection.
Search
- GET /v3/file/search — Search files by keyword. Returns results with a total count and a hits array.
- GET /v2.1/file/search/{file_name} — Search for a specific file by its exact filename.
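Because the exact-filename variant takes the name as a path parameter, the filename must be URL-encoded. A small helper to build either search URL (a sketch; the endpoint paths come from the list above):

```python
from urllib.parse import quote

BASE = "https://entapi.amplifi.io"

def file_search_url(file_name=None):
    """Return the URL for a file search.

    Keyword search lives under /v3; exact-filename search under /v2.1,
    with the filename URL-encoded as a path parameter.
    """
    if file_name is not None:
        return f"{BASE}/v2.1/file/search/{quote(file_name, safe='')}"
    return f"{BASE}/v3/file/search"
```

For example, file_search_url(file_name="a b.jpg") yields a path ending in a%20b.jpg, which is safe to request directly.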
Making Your First Request
Get All Collections (Python)
import requests

BASE_URL = "https://entapi.amplifi.io/v2.1"
headers = {"Authorization": f"Bearer {token}"}

resp = requests.get(
    f"{BASE_URL}/collection",
    headers=headers,
    params={"limit": 100, "offset": 0}
)
resp.raise_for_status()
collections = resp.json()
print(f"Fetched {len(collections)} collections")
Update Attribute Values on a File
To write attribute values, use PUT /file/{id} with an attributes array. Each entry needs the attribute's id, a type of "update", and the new value:
payload = {
    "attributes": [
        {
            "id": "attribute-uuid-here",
            "type": "update",
            "value": "New value"
        }
    ]
}

resp = requests.put(
    f"{BASE_URL}/file/{file_id}",
    headers={**headers, "Content-Type": "application/json"},
    json=payload
)
resp.raise_for_status()
Associate a File with Collections
The collections field on a file expects an array of objects with an id property, not plain strings:
# ✅ Correct format
payload = {
    "collections": [
        {"id": "collection-uuid-1"},
        {"id": "collection-uuid-2"}
    ]
}

# ❌ Incorrect — plain strings will not work
# payload = {"collections": ["collection-uuid-1", "collection-uuid-2"]}

resp = requests.put(f"{BASE_URL}/file/{file_id}", headers=headers, json=payload)
resp.raise_for_status()
Upload a File (Three-Step Process)
File uploads involve three steps: two PXM API calls (request-upload and confirm-upload) plus a direct PUT of the file binary to S3.
Step 1 — Request a pre-signed upload URL:
resp = requests.post(
    f"{BASE_URL}/file/request-upload",
    headers=headers,
    json={
        "filename": "product-image.jpg",
        "mime_type": "image/jpeg",
        "size": 1024000
    }
)
resp.raise_for_status()
upload_data = resp.json()["upload_data"]
upload_url = upload_data["upload_url"]
file_path = upload_data["file_path"]
Step 2 — PUT the file binary to the pre-signed URL (no auth header needed here — this goes directly to S3):

with open("product-image.jpg", "rb") as f:
    requests.put(upload_url, data=f, headers={"Content-Type": "image/jpeg"})

Step 3 — Confirm the upload to move the file to permanent storage:

resp = requests.post(
    f"{BASE_URL}/file/confirm-upload",
    headers=headers,
    json={
        "file_path": file_path,
        "file_size": 1024000,
        "file_name": "product-image.jpg",
        "metadata": {"published": False}
    }
)
resp.raise_for_status()
Pagination
All list endpoints (/collection, /file, /category, etc.) support limit and offset query parameters. You must paginate to retrieve more records than the per-page limit. Keep fetching pages until the number of results returned is less than your limit:
def fetch_all(url, headers, params=None):
    results = []
    offset = 0
    limit = 100
    base_params = dict(params or {})
    while True:
        base_params.update({"limit": limit, "offset": offset})
        resp = requests.get(url, headers=headers, params=base_params)
        resp.raise_for_status()
        page = resp.json()
        # Some endpoints return a list, others wrap results in {"items": [...]}
        items = page.get("items", page) if isinstance(page, dict) else page
        results.extend(items)
        print(f"Fetched {len(results)} so far...")
        if len(items) < limit:
            break
        offset += limit
    print(f"Total: {len(results)}")
    return results

all_collections = fetch_all(f"{BASE_URL}/collection", headers)
ℹ️ The search endpoints (/v3/{entity}/search) use a different pagination model: from (offset) and size (page size), with a maximum from of 10,000.
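That from/size model can be paged through as follows (a sketch; fetch_page stands in for the actual HTTP call and should return the parsed JSON with its total and hits fields):

```python
def search_all(fetch_page, size=100, max_from=10000):
    """Collect all hits from a from/size-paginated search endpoint.

    fetch_page(from_, size) -> {"total": int, "hits": [...]}
    Stops at max_from, the API's stated ceiling for the from offset.
    """
    hits = []
    from_ = 0
    while from_ <= max_from:
        page = fetch_page(from_, size)
        batch = page.get("hits", [])
        hits.extend(batch)
        if len(batch) < size:
            break  # short page means we reached the last one
        from_ += size
    return hits
```

In practice fetch_page would wrap a GET to /v3/file/search with from and size as query parameters.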
Understanding PUT Behavior
⚠️ Critical: PUT operations on association fields (like collections on a file) are full replacements, not partial updates. If you PUT with only one collection ID, any other collections the file was already associated with will be removed.
The safe pattern is to always GET first, merge, then PUT:
# 1. Fetch the file's current state
resp = requests.get(f"{BASE_URL}/file/{file_id}", headers=headers)
current = resp.json()# 2. Collect existing collection IDs so we don't lose them
existing_ids = [c["id"] for c in current.get("collections", [])]# 3. Add the new collection IDs (de-duplicate)
new_ids = ["new-collection-uuid"]
merged_ids = list(set(existing_ids + new_ids))# 4. PUT the merged list back
requests.put(
f"{BASE_URL}/file/{file_id}",
headers=headers,
json={"collections": [{"id": i} for i in merged_ids]}
)
Error Codes
HTTP Status | Meaning | Common Cause
--- | --- | ---
400 | Bad Request | Malformed request body, missing required field, or invalid parameter value.
401 | Unauthorized | Missing, expired, or invalid Bearer token. Re-authenticate and retry.
404 | Not Found | The resource ID does not exist in this PXM instance.
429 | Too Many Requests | Rate limit exceeded. Back off and retry after the Retry-After header value.
500 | Internal Server Error | Server-side error. Log the request and contact support if persistent.
Error responses return a JSON body:
{
  "code": "string",
  "message": "string"
}
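For logging, the code and message fields can be folded into a single line (a minimal sketch based on the error shape above):

```python
def describe_error(status, body):
    """Format a PXM error body ({"code": ..., "message": ...}) for logs."""
    code = body.get("code", "unknown")
    message = body.get("message", "")
    return f"HTTP {status} [{code}]: {message}"
```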
Best Practices for API Usage
Rate Limits
The PXM API enforces a limit of 1,000 requests per 60 seconds. For batch integrations processing many records:
- Target ~950 requests/60s to stay safely under the limit.
- Use asynchronous HTTP calls (e.g., Python's aiohttp + asyncio) for throughput on bulk operations.
- Handle 429 responses gracefully — wait for the Retry-After header value before retrying.
- Avoid running more than ~10 concurrent workers — beyond that, rate limiting becomes more likely.
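A simple retry wrapper for the 429 case might look like this (a sketch; do_request stands in for any call that returns a requests-style response with status_code and headers):

```python
import time

def request_with_retry(do_request, max_retries=5):
    """Call do_request(), retrying on 429 and honoring the Retry-After header."""
    resp = do_request()
    for attempt in range(max_retries):
        if resp.status_code != 429:
            break
        # Fall back to exponential backoff if Retry-After is absent.
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
        resp = do_request()
    return resp
```

For example, wrap a list call as request_with_retry(lambda: requests.get(url, headers=headers)).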
Caching & Data Freshness
Batch updates only: PXM APIs are designed for daily or less frequent batch updates.
Write/update actions: Update data no more than once per day.
Read actions: Extract data no more than once per day.
Use caching: If PXM data is consumed by other applications, maintain a local data source (cache) that is updated via the API instead of repeatedly calling PXM.
Working with IDs
- All PXM entity IDs are UUIDs. Store these — not names — in your integration, as names can change.
- Discover attribute IDs by calling GET /attribute and caching the id-to-label mapping.
- Discover region IDs by calling GET /region.
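That cached mapping can be as simple as a dictionary (a sketch; the id and label field names on attribute definitions are assumptions here):

```python
def attribute_lookup(attributes):
    """Build a label -> id map from a GET /attribute response.

    Assumes each attribute definition carries "id" and "label" fields.
    """
    return {a["label"]: a["id"] for a in attributes}
```

With this in hand, update payloads can reference attributes by human-readable label while sending the stable UUID to the API.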
Attribute Types
When creating or updating attributes, the value_type field controls what values are valid:
Depending on the attribute's value_type, the accepted values are:

- Any string (free-text attributes)
- A numeric value
- An ISO 8601 date string (YYYY-MM-DD)
- An array of strings
- One of the predefined options (select-style attributes)
General Checklist
Always store credentials in environment variables, not hardcoded in scripts.
URL-encode filenames when passing them as path parameters in search requests.
Log request IDs and HTTP status codes for every API call to aid debugging.
Test integrations against a small batch of records before running bulk operations.
For bulk writes, collect errors separately rather than aborting the entire run on first failure.
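The last point can be sketched as a wrapper that records failures instead of raising (update_one stands in for whatever per-record API call you make):

```python
def bulk_update(items, update_one):
    """Apply update_one(item) to every item; collect errors instead of aborting."""
    errors = []
    for item in items:
        try:
            update_one(item)
        except Exception as exc:
            errors.append({"item": item, "error": str(exc)})
    return errors
```

After the run, only the records listed in the returned errors need inspection and a retry.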
Need Help?
Once your API credentials are set up and these best practices are in place, API integrations can greatly simplify how you manage and synchronize data in PXM.
For more details, refer to the API documentation or reach out to PXMsupport@pattern.com if you need assistance.
