Understanding your Results

Understand how Reality Defender combines multiple detection models to determine whether your audio, image, video, or text has been manipulated.

Written by Ben M
Updated yesterday


How to Read Your Scan Results

Each media type is evaluated using a different set of specialized detectors. Your final score is an aggregate of these models, weighted by reliability and designed to give you a clear, actionable assessment.

Every analysis includes:

  • A final probability score (0–100%) showing the likelihood the content is manipulated or AI-generated

  • A severity label (Low, Medium, High, Critical)

  • A breakdown of contributing models, including each model’s score and what it detects

  • A combined explanation showing how the models produced the final conclusion

Each detection model specializes in a different type of manipulation or synthesis method. Some models look at artifacts, others at context, content patterns, temporal dynamics, or linguistic signals.
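
For teams that track results programmatically, the sketch below shows one way the fields above could be organized. It is a minimal illustration only; the field names and types are assumptions and do not reflect Reality Defender's actual API or export schema.

```python
# Illustrative data structure for a scan result, based on the fields described
# above. Names and types are assumptions, not Reality Defender's actual schema.
from dataclasses import dataclass

@dataclass
class ModelResult:
    name: str        # e.g. "Diffusion" or "Context Aware"
    score: float     # this detector's probability, 0.0-1.0
    detects: str     # short description of what the model targets

@dataclass
class ScanResult:
    final_score: float         # 0-100% likelihood of manipulation or AI generation
    severity: str              # "Low", "Medium", "High", or "Critical"
    models: list[ModelResult]  # per-model breakdown
    explanation: str           # how the models produced the final conclusion
```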


Interpreting the Confidence Scale

RealScan simplifies all model outputs into a single, human-readable Confidence Scale, which reflects both the amount of generative content detected and the likelihood of manipulation.

| Level | Meaning | What To Do |
| --- | --- | --- |
| Minimal | No meaningful signs of AI generation or manipulation. Authenticity appears high. | Safe to consider authentic; continue standard verification. |
| Low | Weak or isolated AI indicators found, but not enough to conclude manipulation. | Review manually; check for compression or lighting issues that might cause false signals. |
| Moderate | Multiple AI signatures detected, or models disagree. | Proceed with caution; verify source and metadata. |
| High | Strong, repeated evidence of AI generation or manipulation. | Treat as likely synthetic. Recommend escalation or secondary validation. |

Tip: A higher confidence level means more evidence of AI involvement, not necessarily that the entire file is fake. Authentic media that’s been lightly enhanced (e.g., color corrected or denoised) may still trigger a Low confidence result.
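
To make the scale concrete, here is a minimal sketch of how a 0–100 score could be bucketed into these levels. The thresholds are hypothetical and are not the cutoffs Reality Defender uses in production.

```python
def confidence_level(score: float) -> str:
    """Map a 0-100 manipulation score to a Confidence Scale level.

    The thresholds below are illustrative only, not Reality Defender's
    actual cutoffs.
    """
    if score < 25:
        return "Minimal"
    if score < 50:
        return "Low"
    if score < 75:
        return "Moderate"
    return "High"
```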


🔈 Understanding Audio Results

When analyzing audio, Reality Defender evaluates indicators of voice synthesis, voice cloning, or audio splicing.

Models used

Advanced

Detects AI-generated audio using a large foundation model trained on highly diverse synthesized-speech datasets.
Looks for:

  • Neural vocoder artifacts

  • Frequency-domain anomalies

  • Model-specific generation fingerprints

Generalizable

Detects AI-synthesized audio using broad linguistic and stylistic cues found in real human speech.
Looks for:

  • Unnatural prosody

  • Overly consistent tone or pacing

  • Style mismatches common to audio LLMs

How the final score is determined

Each model produces an independent probability. The final result is a weighted combination of the two, with Advanced weighted more heavily when audio quality is high.
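
As a rough sketch of that behavior, the example below combines the two probabilities with a quality-dependent weight. The weights and the quality flag are assumptions, not the production algorithm.

```python
def combine_audio_scores(advanced: float, generalizable: float,
                         high_quality_audio: bool) -> float:
    """Weighted combination of the two audio detectors (illustrative only).

    The Advanced model carries more weight when input audio quality is high,
    mirroring the behavior described above; the exact weights are assumptions.
    """
    w_advanced = 0.7 if high_quality_audio else 0.5
    return w_advanced * advanced + (1 - w_advanced) * generalizable
```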


📸 Understanding Image Results

Image detection evaluates synthetic or manipulated visual signals across GAN, diffusion, and traditional image-editing workflows.

Models used

Context Aware

Detects deepfake manipulation by evaluating the full visual context of the image.
Looks for:

  • Lighting inconsistencies

  • Abnormal physical context

  • Semantic mismatches

Visual Noise Analysis

Detects fake images by analyzing the texture and distribution of visual noise.
Looks for:

  • Diffusion-grid artifacts

  • Upsampler inconsistencies

  • GAN-style frequency patterns

GANs

Detects faces generated or manipulated using GAN-based methods.
Looks for:

  • StyleGAN fingerprints

  • Resolution-frequency mismatches

  • Geometric distortions

Diffusion

Detects images created by diffusion models (e.g., Midjourney, SDXL).
Looks for:

  • Diffusion sampling artifacts

  • Uniform noise fields

  • Overly smooth or algorithmic textures

Faceswaps

Detects traditional and modern faceswap-based manipulations.
Looks for:

  • Boundary inconsistencies

  • Compositing artifacts

  • Identity mismatches

How the final score is determined

Each model contributes a score, weighted by relevance (e.g., Diffusion and Visual Noise Analysis are weighted more heavily for generative images, while Faceswaps is weighted more heavily for portrait manipulation).
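
The sketch below illustrates relevance-based weighting. The weights and content-type categories are assumptions for illustration, not Reality Defender's actual values.

```python
# Illustrative relevance-based weighting for image detectors. Weights and
# content-type categories are assumptions, not Reality Defender's actual values.
WEIGHTS_BY_CONTENT = {
    # fully generative images lean on Diffusion and Visual Noise Analysis
    "generative": {"Diffusion": 0.35, "Visual Noise Analysis": 0.30,
                   "GANs": 0.15, "Context Aware": 0.10, "Faceswaps": 0.10},
    # portrait manipulation leans on Faceswaps
    "portrait": {"Faceswaps": 0.35, "Context Aware": 0.25,
                 "GANs": 0.20, "Visual Noise Analysis": 0.10, "Diffusion": 0.10},
}

def combine_image_scores(model_scores: dict[str, float], content_type: str) -> float:
    """Combine per-model scores (0.0-1.0) using the weights for a content type."""
    weights = WEIGHTS_BY_CONTENT[content_type]
    return sum(weights[name] * model_scores[name] for name in weights)
```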


🎥 Understanding Video Results

Video analysis evaluates frame-level, temporal, and contextual indicators of deepfake generation.

Models used

Context Aware

Detects deepfake manipulation by evaluating the full visual and scene-level context.
Looks for:

  • Inconsistent lighting

  • Contextual anomalies

  • Scene-wide tampering indicators

Dynamics

Detects deepfake faces generated with various methods by analyzing temporal information.
Looks for:

  • Frame-to-frame inconsistencies

  • Motion artifacts

  • Lip-sync irregularities

Guided

Focuses on specific facial features known to differ between real and generated faces.
Looks for:

  • Eye-region anomalies

  • Facial microexpression inconsistencies

  • Local generation fingerprints

Universal

Detects deepfake faces generated across many methods (GANs, diffusion, hybrid approaches).
Looks for:

  • Global artifact patterns

  • Multimethod synthesis signals

How the final score is determined

The temporal and contextual models (Dynamics and Context Aware) carry the heaviest weight for video, with Guided and Universal refining the final probability.
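
A minimal sketch of that kind of aggregation, assuming per-frame scores from the temporal models and hypothetical weights:

```python
def combine_video_scores(dynamics_frames: list[float], context_frames: list[float],
                         guided: float, universal: float) -> float:
    """Illustrative video aggregation: temporal and contextual models dominate,
    with Guided and Universal refining the result.

    The per-frame averaging step and the weights are assumptions, not the
    production algorithm. Frame lists must be non-empty.
    """
    dynamics = sum(dynamics_frames) / len(dynamics_frames)
    context = sum(context_frames) / len(context_frames)
    temporal = 0.5 * dynamics + 0.5 * context
    return 0.7 * temporal + 0.15 * guided + 0.15 * universal
```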


🔡 Understanding Text Results

Text detection evaluates whether text has been generated or heavily edited by a large language model (LLM).

Models used

Text Detector – Generative

Detects linguistic patterns characteristic of LLM-generated text.
Looks for:

  • Over-optimized phrasing

  • Statistical smoothness

  • Predictable structural patterns

  • Low-variance word choice

How the final score is determined

Since text analysis uses a single detection model, the final score reflects that model’s probability directly.


Why Model Scores May Differ

Some models specialize in:

  • Certain generation methods (e.g., GANs, vocoders, diffusion)

  • Certain content types (faces, scenes, natural speech)

  • High- vs low-quality inputs

  • Different artifact families (noise, compression, temporal distortions, linguistic patterns)

It is normal for one model to score low while another scores extremely high; this is expected behavior and is incorporated into the weighted final score.


If Your Result Seems Unexpected

Authentic content may be flagged as manipulated due to:

  • Heavy compression or filtering

  • AI-assisted editing

  • Image upscaling or enhancement

  • Non-human voices (e.g., IVR, TTS, synthetic accents)

  • AI-generated copy mixed with human text

  • Blend of real and generated segments

If you have questions about a specific file, use the "Report an Issue" button at the top right corner of your Scan Results page, or contact support@realitydefender.com.


Downloading Reports

Every completed scan can be exported as a PDF or CSV report containing:

  • Upload timestamp

  • Detected modality and file metadata

  • Per-model results and triggered indicators

  • Final confidence rating and explanation

Reports are timestamped and can be shared internally for record-keeping or audits.
