Every moderation run is saved under Results, giving you full visibility, traceability, and control over all past and in-progress moderation runs. This is where teams review outcomes, take action, and connect moderation decisions back to their workflows.
Moderation runs
Each row in the Results table represents a single moderation run and includes:
Run name
Assets moderated
Results summary
Applied rules
Status (Completed, Running, or Failed)
Click any run to drill into detailed, asset-level results.
Asset statuses
Within a run, assets are automatically grouped by outcome:
✅ Approved - Assets that passed all selected rules and are ready for use.
⚠️ Needs review - Borderline cases where AI confidence is lower or where human review is required. This enables a true human-in-the-loop workflow so teams focus only where attention is needed.
❌ Rejected - Assets that clearly failed one or more rules.
Filters allow you to focus on a specific status or even review results for a single rule at a time.
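The grouping and filtering above can be sketched in a few lines of Python. This is an illustrative model only: the `AssetResult` fields, status strings, and function names are assumptions for the example, not the product's actual schema or API.

```python
from dataclasses import dataclass, field

# Hypothetical per-asset result; field names are assumptions for illustration.
@dataclass
class AssetResult:
    asset_id: str
    status: str                      # "approved", "needs_review", or "rejected"
    failed_rules: list = field(default_factory=list)

def group_by_status(results):
    """Group asset results by moderation outcome, as the Results view does."""
    groups = {"approved": [], "needs_review": [], "rejected": []}
    for r in results:
        groups[r.status].append(r)
    return groups

def filter_by_rule(results, rule_name):
    """Narrow the view to assets that failed one specific rule."""
    return [r for r in results if rule_name in r.failed_rules]
```

For example, `filter_by_rule(results, "no_text_overlay")` would return only the assets rejected or flagged by that single rule, mirroring the per-rule filter in the UI.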
Asset details and actions
Selecting an asset opens a detailed side panel with:
Image preview
AI confidence score
Rule-by-rule results, each with a pass/fail outcome and its confidence score
Clear rejection reasons when applicable
Moderators always stay in control. From the Results view, you can:
Manually approve or reject assets, even when the AI decision was high-confidence
Override decisions in bulk when needed
Maintain a full audit trail, including manual overrides and status changes
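A manual override with an audit trail entry can be sketched as follows. The record structure, field names, and `manual_override` source label are assumptions chosen for the example, not the actual stored format.

```python
from datetime import datetime, timezone

def override_status(asset, new_status, moderator, audit_log):
    """Apply a manual decision and record it in the audit trail.

    `asset` is a plain dict with hypothetical "id" and "status" keys.
    """
    entry = {
        "asset_id": asset["id"],
        "previous_status": asset["status"],   # preserved so overrides are traceable
        "new_status": new_status,
        "moderator": moderator,
        "source": "manual_override",          # distinguishes human from AI decisions
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    asset["status"] = new_status
    audit_log.append(entry)
    return entry

def bulk_override(assets, new_status, moderator, audit_log):
    """Apply the same decision to many assets at once."""
    return [override_status(a, new_status, moderator, audit_log) for a in assets]
```

Keeping the previous status and a `source` field on every entry is what makes it possible to distinguish AI decisions from human overrides when reviewing the trail later.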
Metadata and workflow integration
Once results are reviewed, moderation outcomes can be saved as metadata on assets. You can choose to save metadata for:
All assets
Only Approved and Rejected assets
Only assets marked as Needs review
This allows moderation results to flow directly into your DAM and downstream workflows such as search, routing, notifications, or automated fixes using Cloudinary transformations.
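The three save options above amount to choosing a set of statuses and writing one metadata payload per matching asset. The sketch below shows that selection logic only; the scope names and metadata field names are illustrative assumptions, and the actual write to your DAM would happen through its own API.

```python
# Which statuses each save option covers (names are assumptions for this example).
SCOPES = {
    "all": {"approved", "rejected", "needs_review"},
    "approved_and_rejected": {"approved", "rejected"},
    "needs_review_only": {"needs_review"},
}

def build_metadata_updates(results, scope):
    """Return one metadata payload per asset covered by the chosen scope.

    `results` is a list of dicts with hypothetical "asset_id", "status",
    and "run_name" keys.
    """
    statuses = SCOPES[scope]
    return [
        {
            "asset_id": r["asset_id"],
            "metadata": {
                "moderation_status": r["status"],
                "moderation_run": r["run_name"],
            },
        }
        for r in results
        if r["status"] in statuses
    ]
```

Once the payloads exist as asset metadata, downstream systems can key off `moderation_status` for search filters, routing, or notifications without re-running moderation.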

