Can we determine where the synthetic content came from? Can the report show which part of an image, audio, or video file is a deepfake, i.e., which artifacts in the file were used to detect it?
Reporting the point of origination and the generative AI method used is on our roadmap. The most significant security risk is determining whether the media is synthetic at all; the point of origination comes second. Soon, we will release a significant feature called explainable AI, which will allow clients to see a heat map of the areas where RD believes the content shows signs of deepfake manipulation, along with the signifiers and artifacts involved.
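The source does not describe how RD computes its heat map, so as a rough illustration of the general idea only, here is a minimal sketch (hypothetical `heatmap_overlay` helper, synthetic data, NumPy only) of how per-region manipulation scores from a detector could be rendered as a heat-map overlay on an image:

```python
import numpy as np

def heatmap_overlay(image, patch_scores, alpha=0.5):
    """Blend a per-patch manipulation-score grid over a grayscale image.

    image:        (H, W) float array in [0, 1]
    patch_scores: (h, w) float array in [0, 1]; H % h == 0 and W % w == 0
    Returns an (H, W, 3) RGB array where redder pixels mark regions a
    (hypothetical) detector scored as more likely manipulated.
    """
    H, W = image.shape
    h, w = patch_scores.shape
    # Upsample the coarse score grid to full image resolution by
    # repeating each score over its patch.
    scores = np.kron(patch_scores, np.ones((H // h, W // w)))
    rgb = np.stack([image, image, image], axis=-1)
    heat = np.zeros_like(rgb)
    heat[..., 0] = scores  # red channel encodes suspicion
    # Alpha-blend the heat layer over the grayscale image.
    return (1 - alpha) * rgb + alpha * heat

# Toy example: 8x8 gray image, 2x2 score grid with one flagged quadrant.
img = np.full((8, 8), 0.5)
scores = np.array([[0.0, 0.0],
                   [0.0, 1.0]])  # bottom-right quadrant flagged
overlay = heatmap_overlay(img, scores)
```

In the toy example above, the flagged bottom-right quadrant blends toward red while the rest of the image stays gray; a real explainable-AI report would derive the score grid from the detection model rather than hard-code it.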