---
license: apache-2.0
datasets:
- nebula/OpenSDI_test
- madebyollin/megalith-10m
language:
- en
base_model:
- google/siglip2-base-patch16-224
pipeline_tag: image-classification
library_name: transformers
tags:
- OpenSDI
- Spotting Diffusion-Generated Images in the Open World
- SD1.5
- AI-vs-Real
- SigLIP2
- Stable Diffusion v1-5
---

![5.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/WMJLfT8M3ajdAxtDhdskK.png)

# OpenSDI-SD1.5-SigLIP2

> OpenSDI-SD1.5-SigLIP2 is a vision-language encoder model fine-tuned from google/siglip2-base-patch16-224 for binary image classification. It is trained to detect whether an image is a real photograph or generated with Stable Diffusion 1.5 (SD1.5), using the SiglipForImageClassification architecture.

> [!note]
> *SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features*: https://arxiv.org/pdf/2502.14786

> [!note]
> *OpenSDI: Spotting Diffusion-Generated Images in the Open World*: https://arxiv.org/pdf/2503.19653. OpenSDI-SD1.5-SigLIP2 works best with crisp, high-quality images; noisy images are not recommended for validation.

> [!warning]
> For general image content moderation or broader AI-generated vs. real image classification, the OpenSDI-Flux.1-SigLIP2 model is recommended instead.

```
Classification Report:
                 precision    recall  f1-score   support

     Real_Image     0.9036    0.9323    0.9177     10000
SD1.5_Generated     0.9301    0.9005    0.9150     10000

       accuracy                         0.9164     20000
      macro avg     0.9168    0.9164    0.9164     20000
   weighted avg     0.9168    0.9164    0.9164     20000
```

![download (1).png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/bvT7lxG1deDnMPjVt7f5C.png)

---

## Label Space: 2 Classes

The model classifies an image as either:

```
Class 0: Real_Image
Class 1: SD1.5_Generated
```

---

## Install Dependencies

```bash
pip install -q transformers torch pillow gradio hf_xet
```

---

## Inference Code

```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/OpenSDI-SD1.5-SigLIP2"  # Replace with your model path
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    "0": "Real_Image",
    "1": "SD1.5_Generated"
}

def classify_image(image):
    # Convert the incoming NumPy array to an RGB PIL image
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    # Map class probabilities to their label names
    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }

    return prediction

# Gradio Interface
iface = gr.Interface(
    fn=classify_image,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=2, label="SD1.5 Image Detection"),
    title="OpenSDI-SD1.5-SigLIP2",
    description="Upload an image to determine whether it is a real photograph or generated by Stable Diffusion 1.5 (SD1.5)."
)

if __name__ == "__main__":
    iface.launch()
```

---

## Intended Use

OpenSDI-SD1.5-SigLIP2 is designed for the following use cases:

* Generative Model Evaluation – Detect SD1.5-generated images for analysis and benchmarking.
* Dataset Integrity – Filter AI-generated images out of real-world image datasets (see the batch-filtering sketch after this list).
* Digital Media Forensics – Support visual content verification and source validation.
* Trust & Safety – Detect synthetic media used in deceptive or misleading contexts.
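
---

## Batch Filtering Example (Sketch)

As referenced under Dataset Integrity above, the sketch below shows one possible way to scan a folder of images and flag those the model scores as likely SD1.5-generated. It is a minimal sketch rather than official OpenSDI tooling: the `flag_generated_images` helper, the `images_to_check` folder name, and the 0.5 probability threshold are illustrative assumptions; the model name and label order follow this card.

```python
import os

import torch
from PIL import Image
from transformers import AutoImageProcessor, SiglipForImageClassification

MODEL_NAME = "prithivMLmods/OpenSDI-SD1.5-SigLIP2"
THRESHOLD = 0.5  # assumption: flag images whose SD1.5_Generated probability exceeds this

# Load the fine-tuned classifier and its processor once
model = SiglipForImageClassification.from_pretrained(MODEL_NAME)
processor = AutoImageProcessor.from_pretrained(MODEL_NAME)
model.eval()

def sd15_probability(image_path: str) -> float:
    """Return the model's probability that the image is SD1.5-generated (class index 1)."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.nn.functional.softmax(logits, dim=1).squeeze()
    return probs[1].item()  # index 1 = SD1.5_Generated per the label space above

def flag_generated_images(folder: str, threshold: float = THRESHOLD) -> list[str]:
    """Hypothetical helper: collect paths in a folder flagged as likely SD1.5-generated."""
    flagged = []
    for name in sorted(os.listdir(folder)):
        if not name.lower().endswith((".jpg", ".jpeg", ".png", ".webp")):
            continue
        path = os.path.join(folder, name)
        score = sd15_probability(path)
        print(f"{name}: SD1.5_Generated probability = {score:.3f}")
        if score >= threshold:
            flagged.append(path)
    return flagged

if __name__ == "__main__":
    # "images_to_check" is a placeholder directory name
    suspects = flag_generated_images("images_to_check")
    print(f"Flagged {len(suspects)} image(s) as likely SD1.5-generated.")
```

The threshold can be tuned per use case: raising it trades recall for precision, which may be preferable when false positives on real photographs are costly.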
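
---

## Evaluation Example (Sketch)

For readers who want per-class precision and recall in the format of the report above, the following is a minimal sketch using scikit-learn's `classification_report` (scikit-learn is not in the install line above, so add it separately). The `real/` and `sd15_generated/` folder names and the `collect_predictions` helper are assumptions for illustration; the numbers reported earlier were obtained on the OpenSDI test data and are not reproduced by this sketch as-is.

```python
import os

import torch
from PIL import Image
from sklearn.metrics import classification_report
from transformers import AutoImageProcessor, SiglipForImageClassification

MODEL_NAME = "prithivMLmods/OpenSDI-SD1.5-SigLIP2"

model = SiglipForImageClassification.from_pretrained(MODEL_NAME)
processor = AutoImageProcessor.from_pretrained(MODEL_NAME)
model.eval()

def predict_class(image_path: str) -> int:
    """Return the predicted class index: 0 = Real_Image, 1 = SD1.5_Generated."""
    image = Image.open(image_path).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=1).item())

def collect_predictions(folder: str, label: int) -> tuple[list[int], list[int]]:
    """Run the model over every image in a folder that shares one ground-truth label."""
    y_true, y_pred = [], []
    for name in sorted(os.listdir(folder)):
        if name.lower().endswith((".jpg", ".jpeg", ".png", ".webp")):
            y_true.append(label)
            y_pred.append(predict_class(os.path.join(folder, name)))
    return y_true, y_pred

if __name__ == "__main__":
    # "real/" and "sd15_generated/" are placeholder folder names for a labeled evaluation set
    real_true, real_pred = collect_predictions("real", label=0)
    gen_true, gen_pred = collect_predictions("sd15_generated", label=1)
    print(classification_report(
        real_true + gen_true,
        real_pred + gen_pred,
        target_names=["Real_Image", "SD1.5_Generated"],
        digits=4,
    ))
```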