
Why is my PDF.co output link not working or looking broken in my API response?


Issue:

You’re calling the PDF.co API and getting a link back, but:

  • The link looks broken or unusable

  • The link has strange characters like \u0026 instead of &

  • The link doesn’t work when copied into a browser

This usually happens when using API clients like cURL or other tools that don’t automatically parse JSON.

Example raw response from cURL:

{"url": "https://pdf-temp-files.s3.us-west-2.amazonaws.com/...sample.pdf?X-Amz-Expires=3600\u0026X-Amz-Security-Token=..."}

  • Notice the \u0026 → this is a valid JSON-escaped version of &.

  • JSON parsers automatically decode this → &

  • But tools like cURL will show it as-is unless you parse the JSON first.
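You can verify this decoding in JavaScript. Note the URL below is a made-up placeholder, not a real PDF.co link:

```javascript
// "\u0026" is the standard JSON Unicode escape for "&".
// The URL here is a hypothetical placeholder, not a real PDF.co link.
const raw = '{"url":"https://example.com/sample.pdf?a=1\\u0026b=2"}';
const decoded = JSON.parse(raw).url;
console.log(decoded); // https://example.com/sample.pdf?a=1&b=2
```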

Solution: Parse the JSON response before using the URL

Before using the output link in your app, browser, or follow-up API call, you need to:

  1. Decode the JSON response.
    → Use a JSON parser to process the output from PDF.co.

  2. Extract the url value after decoding.
    → The decoded \u0026 will turn into a usable & character.

Example:

If using JavaScript:

const response = '{"url":"https://...\\u0026..."}';
const parsed = JSON.parse(response);
console.log(parsed.url); // ✅ prints usable URL

If using cURL, pipe the response into a JSON parser like jq:

curl -X POST ... | jq -r '.url'

Why this happens

PDF.co’s API responses are valid JSON. The long pre-signed URL contains & characters, which the API escapes as \u0026, a valid JSON Unicode escape for &.

Without JSON parsing, you’ll see the raw escaped form → the browser won’t understand it → link fails.

After JSON parsing, the link works normally.
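You can check this directly: the escaped and unescaped forms are the same URL once parsed. A small sketch with placeholder URLs:

```javascript
// Both bodies decode to the identical URL; the \u0026 escaping exists
// only at the JSON layer and disappears after parsing.
const escaped = '{"url":"https://example.com/f.pdf?X-Amz-Expires=3600\\u0026X-Amz-Signature=abc"}';
const plain   = '{"url":"https://example.com/f.pdf?X-Amz-Expires=3600&X-Amz-Signature=abc"}';
const same = JSON.parse(escaped).url === JSON.parse(plain).url;
console.log(same); // true
```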

Helpful tips

  • Tools like Zapier, Make, and Postman already handle JSON parsing → no action needed.

  • Tools like cURL, plain HTTP requests, and hand-written code → need manual JSON parsing.

  • Don’t copy URLs out of raw JSON → always extract them from the parsed response.
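As a sketch of that last tip, extract the URL programmatically before any follow-up request. The response body here is a hypothetical stand-in, and the download step assumes Node 18+ with global fetch:

```javascript
// Hypothetical PDF.co response body; a real one would carry a long pre-signed S3 URL.
const body = '{"url":"https://example.com/result.pdf?token=abc\\u0026expires=3600"}';
const { url } = JSON.parse(body); // url now contains a real "&", safe to request
// In Node 18+ you could then download the file:
// const file = await fetch(url).then(res => res.arrayBuffer());
```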
