
AI-vs-Deepfake-vs-Real-9999

AI-vs-Deepfake-vs-Real-9999 is an image classification model fine-tuned from the vision-language encoder google/siglip2-base-patch16-224 for a single-label classification task. It uses the SiglipForImageClassification architecture to detect whether an image is AI-generated, a deepfake, or real.
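For a quick check, the checkpoint can also be loaded through the 🤗 image-classification pipeline. This is a minimal sketch, assuming the checkpoint works with the standard pipeline; "example.jpg" is a placeholder path.

from transformers import pipeline

# Minimal sketch: load the checkpoint via the image-classification pipeline.
pipe = pipeline("image-classification", model="prithivMLmods/AI-vs-Deepfake-vs-Real-9999")

# Returns a list of {"label": ..., "score": ...} dicts for the image.
print(pipe("example.jpg"))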

Classification Report:
              precision    recall  f1-score   support

  Artificial     0.9994    0.9979    0.9986      3333
    Deepfake     0.9979    0.9994    0.9987      3333
    Real one     0.9994    0.9994    0.9994      3333

    accuracy                         0.9989      9999
   macro avg     0.9989    0.9989    0.9989      9999
weighted avg     0.9989    0.9989    0.9989      9999
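For reference, a report in this format can be reproduced from held-out predictions with scikit-learn's classification_report; the y_true/y_pred values below are dummy placeholders, not the actual evaluation data.

from sklearn.metrics import classification_report

# Dummy labels purely to illustrate the call; swap in the real test-set
# labels (y_true) and model predictions (y_pred).
y_true = [0, 1, 2, 0, 1, 2]
y_pred = [0, 1, 2, 0, 2, 2]
print(classification_report(
    y_true, y_pred,
    target_names=["Artificial", "Deepfake", "Real one"],
    digits=4,
))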


The model categorizes images into three classes:

  • Class 0: "Artificial"
  • Class 1: "Deepfake"
  • Class 2: "Real one"
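This index-to-label mapping can be checked against the checkpoint itself, assuming the standard id2label field is populated in its config.

from transformers import SiglipForImageClassification

# Inspect the label mapping stored in the checkpoint's config.
model = SiglipForImageClassification.from_pretrained("prithivMLmods/AI-vs-Deepfake-vs-Real-9999")
print(model.config.id2label)  # expected: {0: 'Artificial', 1: 'Deepfake', 2: 'Real one'}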

Run with Transformers🤗

!pip install -q transformers torch pillow gradio
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/AI-vs-Deepfake-vs-Real-9999"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

def classify_image(image):
    """Predicts whether an image is Artificial, Deepfake, or Real."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    
    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()
    
    labels = {0: "Artificial", 1: "Deepfake", 2: "Real one"}
    predictions = {labels[i]: round(probs[i], 3) for i in range(len(probs))}
    
    return predictions

# Create Gradio interface
iface = gr.Interface(
    fn=classify_image,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="AI vs. Deepfake vs. Real Image Classification",
    description="Upload an image to determine if it's AI-generated, a Deepfake, or a Real one."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
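For programmatic use without the Gradio UI, the same classify_image function can be called directly on a loaded image; "sample.jpg" below is a placeholder path.

import numpy as np

# Run a single prediction outside Gradio ("sample.jpg" is a placeholder path).
img = np.array(Image.open("sample.jpg").convert("RGB"))
print(classify_image(img))  # e.g. {'Deepfake': 0.998, 'Artificial': 0.001, 'Real one': 0.001}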

Intended Use:

The AI-vs-Deepfake-vs-Real-9999 model is designed to classify images into three categories: AI-generated, deepfake, or real. Potential use cases include:

  • AI Content Detection: Distinguishing AI-generated images from real ones.
  • Deepfake Detection: Assisting cybersecurity experts and forensic teams in detecting synthetic media.
  • Media Verification: Helping journalists and fact-checkers verify the authenticity of images.
  • AI Ethics & Research: Contributing to studies on AI-generated content detection.
  • Social Media Moderation: Enhancing tools to prevent misinformation and digital deception.
Model size: 92.9M params · Tensor type: F32 (Safetensors)