Image-Deep-Fake-Detector
Classification report:

```
              precision    recall  f1-score   support

        Real     0.9933    0.9937    0.9935      4761
        Fake     0.9937    0.9933    0.9935      4760

    accuracy                         0.9935      9521
   macro avg     0.9935    0.9935    0.9935      9521
weighted avg     0.9935    0.9935    0.9935      9521
```
The precision score is a key metric to evaluate the performance of a deep fake detector. Precision is defined as:
$$\text{Precision} = \frac{\text{True Positives}}{\text{True Positives} + \text{False Positives}}$$
It indicates how well the model avoids false positives: in the context of a deep fake detector, it measures how often content predicted as "Fake" really is fake, without mistakenly classifying real content as fake.
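The formula can be illustrated with a short sketch. The counts below are hypothetical but consistent with the report's "Fake" row (roughly 4,728 true positives and 30 false positives reproduce the 0.9937 precision):

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Precision = TP / (TP + FP)."""
    return true_positives / (true_positives + false_positives)

# Hypothetical counts consistent with the "Fake" class in the report:
# ~4728 correctly flagged fakes, ~30 real images wrongly flagged as fake.
print(round(precision(4728, 30), 4))  # → 0.9937
```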
From the classification report, the precision values are:
- Real: 0.9933
- Fake: 0.9937
- Macro average: 0.9935
- Weighted average: 0.9935
Demo Inference:
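A minimal inference sketch using the `transformers` image-classification pipeline. The model id is a placeholder argument (substitute the actual Hub repository id), and `top_label` is a small helper introduced here for illustration:

```python
def top_label(results):
    """Pick the highest-scoring label from image-classification pipeline output."""
    return max(results, key=lambda r: r["score"])["label"]

def classify_image(image_path: str, model_id: str):
    """Run the detector on one image and return (label, full results).

    model_id is a placeholder; pass the actual Hub repository id.
    """
    from transformers import pipeline  # deferred import so top_label works standalone
    clf = pipeline("image-classification", model=model_id)
    results = clf(image_path)  # path or URL to an image
    return top_label(results), results
```

For example, `classify_image("sample.jpg", "<detector-repo-id>")` would return the predicted label ("Real" or "Fake") together with the per-class scores.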
Key Observations:
1. High precision (0.9933 for Real, 0.9937 for Fake): the model rarely misclassifies real content as fake, and vice versa. This is critical for deep fake detection, where false accusations (false positives) can have significant consequences.
2. Macro and weighted averages (0.9935): precision is uniformly high across both classes, showing that the model is well balanced at detecting both real and fake content.
3. Reliability of predictions: with precision near 1.0, when the model predicts an image as fake (or real), the prediction is highly likely to be correct. This is essential for reducing unnecessary manual verification in real-world applications such as social media content moderation or fraud detection.
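The macro and weighted averages quoted above follow directly from the per-class precision values and supports in the report; a short sketch of that arithmetic:

```python
# Per-class precision and support, taken from the classification report.
precisions = {"Real": 0.9933, "Fake": 0.9937}
supports = {"Real": 4761, "Fake": 4760}

# Macro average: unweighted mean over classes.
macro = sum(precisions.values()) / len(precisions)

# Weighted average: each class weighted by its support.
total = sum(supports.values())
weighted = sum(precisions[c] * supports[c] for c in precisions) / total

print(round(macro, 4), round(weighted, 4))  # → 0.9935 0.9935
```

Because the two class supports are nearly identical (4761 vs 4760), the macro and weighted averages coincide here.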
ONNX Exchange
The ONNX model is exported with the following Space, which writes the ONNX files directly to the repository using a Hugging Face write token.
🧪 : https://huggingface.co/spaces/prithivMLmods/convert-to-onnx-dir
Conclusion:
The deep fake detector model demonstrates excellent precision for both the "Real" and "Fake" classes, indicating a highly reliable detection system with minimal false positives. Combined with similarly high recall and F1-score, the overall accuracy (99.35%) reflects that this is a robust and trustworthy model for identifying deep fakes.