Every moderation run is saved under Results, giving you full visibility, traceability, and control over all past and ongoing validations. This is where teams review outcomes, take action, and connect moderation decisions back to their workflows.
Moderation runs
Each row in the Results table represents a single moderation run and includes:
Name
Assets moderated
Applied rules
Results summary
Date
Status (Processed, Running, or Failed)
Review button, shown when the run includes assets marked Needs review
Click any run to drill into detailed, asset-level results.
Asset statuses
Within a run, assets are automatically grouped by outcome:
✅ Approved - Assets that passed all selected rules and are ready for use.
❌ Rejected - Assets that clearly failed one or more rules.
⚠️ Needs review - Borderline cases where AI confidence is lower or where human review is required. This enables a true human-in-the-loop workflow so teams focus only where attention is needed.
Filters allow you to focus on a specific status or even review results for a single rule at a time.
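The grouping and filtering described above can be sketched as a simple pass over a run's asset-level results. This is an illustrative sketch only, not the product's actual data model; the asset fields, status values, and rule names below are assumptions based on the statuses listed in this section.

```python
from collections import defaultdict

# Hypothetical asset-level results for one moderation run; the field
# names (id, status, rule) are assumptions for illustration.
assets = [
    {"id": "img_001", "status": "approved", "rule": "no_logos"},
    {"id": "img_002", "status": "rejected", "rule": "min_resolution"},
    {"id": "img_003", "status": "needs_review", "rule": "no_logos"},
    {"id": "img_004", "status": "needs_review", "rule": "min_resolution"},
]

def group_by_status(assets):
    """Group assets by moderation outcome, as the Results view does."""
    groups = defaultdict(list)
    for asset in assets:
        groups[asset["status"]].append(asset["id"])
    return dict(groups)

def filter_results(assets, status=None, rule=None):
    """Narrow results to a specific status and/or a single rule at a time."""
    return [
        a for a in assets
        if (status is None or a["status"] == status)
        and (rule is None or a["rule"] == rule)
    ]

print(group_by_status(assets))
print(filter_results(assets, status="needs_review", rule="no_logos"))
```

Combining a status filter with a rule filter, as in the last call, mirrors reviewing results for one rule at a time within the Needs review group.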
Asset details and actions
Selecting an asset opens a detailed side panel with:
Image preview
AI confidence score
Rule-by-rule results showing pass/fail and per-rule confidence
Clear rejection reasons when applicable
Moderators always stay in control. From the Results view, you can:
Manually approve or reject assets, even if the AI decision was confident
Override decisions in bulk when needed
Maintain a full audit trail, including manual overrides and status changes
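A manual override with a full audit trail can be modeled roughly as follows. This is a minimal sketch; the record fields, function names, and moderator identifier are illustrative assumptions, not the product's internal schema.

```python
from datetime import datetime, timezone

def override_status(asset, new_status, moderator, audit_trail):
    """Manually override a decision and record the change in the audit trail."""
    audit_trail.append({
        "asset_id": asset["id"],
        "from": asset["status"],       # previous (possibly AI-assigned) status
        "to": new_status,              # moderator's decision
        "by": moderator,
        "at": datetime.now(timezone.utc).isoformat(),
        "manual_override": True,       # distinguishes this from AI decisions
    })
    asset["status"] = new_status
    return asset

def bulk_override(assets, new_status, moderator, audit_trail):
    """Apply the same decision to many assets at once."""
    return [override_status(a, new_status, moderator, audit_trail)
            for a in assets]

# Example: a moderator approves an asset the AI flagged for review.
audit_trail = []
asset = {"id": "img_003", "status": "needs_review"}
override_status(asset, "approved", "jane@example.com", audit_trail)
print(asset["status"])         # approved
print(audit_trail[0]["from"])  # needs_review
```

Keeping the previous status, the actor, and a timestamp in each record is what makes status changes traceable after the fact.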

Metadata and workflow integration
Once results are reviewed, moderation outcomes can be saved as metadata on assets. You can choose to save metadata for:
All assets
Only Approved assets
Only Rejected assets
Only assets marked as Needs review
This allows moderation results to flow directly into your DAM and downstream workflows such as search, routing, notifications, or automated fixes using Cloudinary transformations.
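The four save options above amount to a filter on which assets receive moderation metadata. The following is a minimal sketch under assumptions: the status values and the metadata field name (`moderation_result`) are hypothetical, not the product's actual identifiers.

```python
def select_assets_for_metadata(assets, policy):
    """Return the assets whose moderation outcome should be saved as metadata.

    policy is one of "all", "approved", "rejected", "needs_review",
    mirroring the four save options described above.
    """
    if policy == "all":
        return list(assets)
    return [a for a in assets if a["status"] == policy]

def build_metadata_updates(assets, policy, field="moderation_result"):
    """Map each selected asset to the metadata payload to store on it."""
    return {
        a["id"]: {field: a["status"]}
        for a in select_assets_for_metadata(assets, policy)
    }

assets = [
    {"id": "img_001", "status": "approved"},
    {"id": "img_002", "status": "rejected"},
    {"id": "img_003", "status": "needs_review"},
]

print(build_metadata_updates(assets, "approved"))
# {'img_001': {'moderation_result': 'approved'}}
```

Once outcomes are stored as metadata like this, downstream systems can key off the field, for example searching for rejected assets to route them to a fix-up workflow.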