Results, Scores, Explainability
Make confident calls with our risk scores. Learn per-model vs composite scoring, modality ranges, accuracy/precision/recall, and what “80%” really means.
How do you measure the accuracy of your models?
How do I interpret model scores (%) for image, video and audio?
What are false positives/negatives?
What are some factors affecting the accuracy of the detection on Reality Defender?
Why did I get a wrong result on Reality Defender, and how should I interpret that?
How should we interpret results? Does 80% mean that 80% of the content is generative?
What is the difference between accuracy, precision, and recall?
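For quick reference, the sketch below shows the standard textbook definitions of accuracy, precision, and recall computed from a confusion matrix. It is purely illustrative: the function name and the example counts are hypothetical, and this is not Reality Defender's internal evaluation code.

```python
# Illustrative only: standard definitions of accuracy, precision, and recall.
# The function name and the example numbers below are hypothetical.

def classification_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Return accuracy, precision, and recall for binary detection results.

    tp: manipulated items correctly flagged  (true positives)
    fp: authentic items wrongly flagged      (false positives)
    fn: manipulated items missed             (false negatives)
    tn: authentic items correctly cleared    (true negatives)
    """
    total = tp + fp + fn + tn
    return {
        "accuracy": (tp + tn) / total,   # share of all calls that were correct
        "precision": tp / (tp + fp),     # of items flagged, how many were truly manipulated
        "recall": tp / (tp + fn),        # of manipulated items, how many were caught
    }

# Hypothetical example: 90 true positives, 5 false positives,
# 10 false negatives, 95 true negatives.
print(classification_metrics(tp=90, fp=5, fn=10, tn=95))
# accuracy = 0.925, precision ≈ 0.947, recall = 0.9
```

In short, accuracy measures overall correctness, precision measures how trustworthy a "manipulated" flag is, and recall measures how much manipulated content is actually caught.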