Reality Defender — User Help and FAQs

What are some factors affecting the accuracy of the detection on Reality Defender?
Why do you believe the Reality Defender text model is superior to other solutions?
Do I need special access to implement the Reality Defender API?
How is the Reality Defender API different from the web application?
How do I implement the Reality Defender API onto my backend or service?
Have you run a stress test? How much volume can you handle?
Are there any limits/costs for calling the query endpoints for retrieving information?
How do you manage API versions? How do you prevent breaking changes?
Do you have a taxonomy for the response objects describing what each field means?
What visibility does the API enable? Does the account used need any special privileges?
Do users require an API token for every single account, or can users share 3-4 tokens?
How to Spot Deepfakes
How long does it take to onboard my team onto the Reality Defender platform?
Does Reality Defender require two-factor authentication (2FA) to use?
Is there some form of super user or admin account given to us to add or remove accounts?
Glossary
Should platforms or individuals be held responsible for detecting deepfakes?
How does Reality Defender prevent people from using our service to create advanced deepfakes?
Can Reality Defender detect media that is updated frequently, or flag media that people are constantly checking?
What formats are supported for Video?
How do bad actors create video-based deepfakes?
How Does Reality Defender Detect Video Deepfakes?
When using the YouTube link to upload media on the web interface, the analysis was conducted on a low-quality stream (360p) even though higher-quality video/audio streams (1080p) were available. Why is that?
Why did I get a wrong result on Reality Defender, and how should I interpret that?
What are some factors affecting the accuracy of the detection on Reality Defender?
If there is a video of a person speaking but there is a voiceover that is not meant to match the speaker's moving lips, will the video be registered as fake?
Does your product provide localization of which video segments were predicted as fake?
How does one separate video from audio on the API?
How do bad actors generate image deepfakes?
How does Reality Defender detect image-based deepfakes?
What formats are supported for Image Detection?
Why did I get a wrong result on Reality Defender, and how should I interpret that?
What are some factors affecting the accuracy of the detection on Reality Defender?
Does GAN detection work for all GAN-generated images, or only for certain types of GANs, e.g. StyleGAN?
How is AI-generated text created?
How does Reality Defender detect generative text?
What LLMs are supported by Reality Defender’s Text Detection?
Why did I get a wrong result on Reality Defender, and how should I interpret that?
Does Reality Defender analyze metadata?
Can you describe your Datasets and architecture team?
What are the best practices for collecting content to scan?
Do you provide separate scores for audio and video?
Do you provide a composite score, or do you provide individual scores for each model?
What is the difference between accuracy, precision, and recall?
What are the file size limits for each modality?
What do I do if my video or audio file is too long?
Can your models detect deepfakes after they have been compressed on a certain service?
How should we interpret results? Does 80% mean that 80% of the content is generative?
Does the platform have a special algorithm to combine the scores from multiple deepfake detection models within one modality, or to combine scores across multiple modalities?
What kind of AI-generated audio does Reality Defender look at?
How does Reality Defender detect deepfaked audio?
What languages are supported?
What formats are supported for Audio?
Why did I get a wrong result on Reality Defender, and how should I interpret that?
What are some factors affecting the accuracy of the detection on Reality Defender?
Can the audio deepfake detector handle audio clips with crosstalk, or only individual-speaker scenarios?
If the audio clip was classified as a fake, will your product be able to identify the type of fake, e.g. voice cloning or TTS?
Do you do Voice Prints or ID Verifications?
Is any uploaded information leveraged to train the AI models?
What information do you retain from the data passed to the AI models?
Do you have any data filtering capabilities which can be customized to filter out PII or other client information from the requests sent to the Generative AI engine?
What is your retention plan for all the information collected?
What are Reality Defender’s certifications?
Are you collecting feedback information from customer users?
Where do we track the number of scans so that we can manage against the scan limit for a pilot?
What is the response time for remediating issues faced during the pilot? For instance, is it 24 hours to get the system back to working condition?
How many files can I scan at any one time?
How long does it take to scan a single file on Reality Defender?
Is it possible to get the original file, including the flow of how the deepfake was made, inside the report?
Why does the platform not support larger files and AVI (along with other formats)?
Can you increase scans on a pro rata basis?
What data/algorithms do you use to train your detection tools?
How often is the system updated?
How do you measure the accuracy of your models?
Does your tech work separately on video and audio, or do you leverage a combination of signals from both?
How does Reality Defender use PII?
How to Use the Reality Defender Web Application
Is the Reality Defender platform always running, 24/7/365?
What are false positives/negatives?
Is Generative AI involved in the delivery of the service/product that you provide?
Is the Generative AI technology that you are leveraging proprietary, or is it provided by a third party?
Where are the Generative AI models stored?