**tooth-agenesis-siglip2** is a vision-language encoder model fine-tuned from `google/siglip2-base-patch16-512` for multi-class image classification. It is trained to detect dental anomalies and conditions such as Calculus, Caries, Gingivitis, Mouth Ulcer, Tooth Discoloration, and Hypodontia. The model uses the `SiglipForImageClassification` architecture.
*SigLIP 2: Multilingual Vision-Language Encoders with Improved Semantic Understanding, Localization, and Dense Features* (https://arxiv.org/pdf/2502.14786)
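For a quick check, the checkpoint can be loaded through the `transformers` image-classification pipeline. The snippet below is a minimal sketch; the image path is a placeholder and not part of this repository.

```python
from transformers import pipeline

# Minimal sketch: load the checkpoint via the image-classification pipeline.
classifier = pipeline(
    "image-classification",
    model="prithivMLmods/tooth-agenesis-siglip2",
)

# "dental_scan.jpg" is a placeholder path; replace it with a real image.
# top_k=6 returns scores for all six classes.
results = classifier("dental_scan.jpg", top_k=6)
for r in results:
    print(f"{r['label']}: {r['score']:.3f}")
```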
Classification Report:

| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| Calculus | 0.6640 | 0.7623 | 0.7098 | 1296 |
| Caries | 0.9525 | 0.9558 | 0.9541 | 2601 |
| Gingivitis | 0.8496 | 0.7842 | 0.8156 | 2349 |
| Mouth Ulcer | 0.9939 | 0.9893 | 0.9916 | 2806 |
| Tooth Discoloration | 0.9314 | 0.9757 | 0.9530 | 2017 |
| hypodontia | 0.9983 | 0.9161 | 0.9554 | 1251 |
| **Accuracy** | | | 0.9096 | 12320 |
| **Macro avg** | 0.8983 | 0.8972 | 0.8966 | 12320 |
| **Weighted avg** | 0.9132 | 0.9096 | 0.9105 | 12320 |
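A report in this format is typically produced with scikit-learn's `classification_report`. The sketch below shows how such a table could be generated, assuming `y_true` and `y_pred` hold integer class ids collected from a labelled evaluation set; the placeholder lists here exist only to make the snippet runnable.

```python
from sklearn.metrics import classification_report

class_names = [
    "Calculus", "Caries", "Gingivitis",
    "Mouth Ulcer", "Tooth Discoloration", "hypodontia",
]

# Placeholder labels/predictions purely for illustration; in practice these
# come from running the model over a held-out, labelled test set.
y_true = [0, 1, 2, 3, 4, 5, 1, 3]
y_pred = [0, 1, 2, 3, 4, 5, 1, 2]

print(classification_report(y_true, y_pred, target_names=class_names, digits=4))
```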
Class 0: Calculus
Class 1: Caries
Class 2: Gingivitis
Class 3: Mouth Ulcer
Class 4: Tooth Discoloration
Class 5: hypodontia
```
pip install -q transformers torch pillow gradio hf_xet
```
```python
import gradio as gr
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

# Load model and processor
model_name = "prithivMLmods/tooth-agenesis-siglip2"  # Update with actual model name on Hugging Face
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping
id2label = {
    "0": "Calculus",
    "1": "Caries",
    "2": "Gingivitis",
    "3": "Mouth Ulcer",
    "4": "Tooth Discoloration",
    "5": "hypodontia"
}

def classify_image(image):
    """Classify a dental image and return per-class probabilities."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    prediction = {
        id2label[str(i)]: round(probs[i], 3) for i in range(len(probs))
    }

    return prediction

# Gradio Interface
iface = gr.Interface(
    fn=classify_image,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(num_top_classes=6, label="Dental Condition Classification"),
    title="Tooth Agenesis Detection",
    description="Upload a dental image to detect conditions such as Calculus, Caries, Gingivitis, Mouth Ulcer, Tooth Discoloration, or Hypodontia."
)

if __name__ == "__main__":
    iface.launch()
```
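Outside the Gradio demo, the same model and processor can be used directly for batch inference. The following is a minimal sketch; the `images/` directory is a placeholder and the label mapping is copied from the card above.

```python
from pathlib import Path
from transformers import AutoImageProcessor, SiglipForImageClassification
from PIL import Image
import torch

model_name = "prithivMLmods/tooth-agenesis-siglip2"
model = SiglipForImageClassification.from_pretrained(model_name)
processor = AutoImageProcessor.from_pretrained(model_name)

# Label mapping as listed on the card.
id2label = {
    0: "Calculus", 1: "Caries", 2: "Gingivitis",
    3: "Mouth Ulcer", 4: "Tooth Discoloration", 5: "hypodontia",
}

# "images/" is a placeholder directory; point it at a folder of dental images.
paths = sorted(Path("images").glob("*.jpg"))
images = [Image.open(p).convert("RGB") for p in paths]

# The processor accepts a list of PIL images and returns a batched tensor.
inputs = processor(images=images, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

for path, pred_id in zip(paths, logits.argmax(dim=-1).tolist()):
    print(f"{path.name}: {id2label[pred_id]}")
```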
`tooth-agenesis-siglip2` is designed for classifying dental images into the six conditions listed above: Calculus, Caries, Gingivitis, Mouth Ulcer, Tooth Discoloration, and Hypodontia.

Base model: `google/siglip2-base-patch16-512`