Content Safety Model

Model Summary

This model is designed to classify images as either "safe" or "unsafe." It helps in identifying potentially dangerous or sensitive content, making it useful for content moderation tasks. For example, it can flag images showing children in risky situations, like playing with fire, as "unsafe" while marking other benign images as "safe."

Source Model and Dataset

Base Model: This model is fine-tuned from OpenAI's pre-trained CLIP ViT-B/32, a model known for its zero-shot image-classification abilities.

Dataset: The model was trained on a custom dataset of labeled safe and unsafe images. The dataset includes varied examples of unsafe situations (e.g., fire, sharp objects, precarious activities) so the model can learn these contextual cues.
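Because the model keeps the standard CLIP architecture, it can be loaded with the Hugging Face `transformers` library and queried by comparing an image against the two candidate labels. The snippet below is a minimal sketch: the model ID comes from this card, while the exact prompt strings ("safe" / "unsafe") are an assumption about how the model was fine-tuned.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Model ID from this card; prompt wording is an assumption.
model_id = "quadranttechnologies/retail-content-safety-clip-finetuned"
model = CLIPModel.from_pretrained(model_id)
processor = CLIPProcessor.from_pretrained(model_id)

# A blank placeholder image; replace with Image.open("your_image.jpg").
image = Image.new("RGB", (224, 224), "white")
labels = ["safe", "unsafe"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Softmax over image-text similarity scores gives per-label probabilities.
probs = outputs.logits_per_image.softmax(dim=1)
prediction = labels[probs.argmax().item()]
print(prediction, probs.tolist())
```

The same pattern extends to batches of images by passing a list to `images=`.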

Sample model predictions

(The original card shows a table of sample input images alongside the model's "safe"/"unsafe" predictions; the images do not survive text extraction.)
Model Details

Format: Safetensors
Model size: 151M parameters
Tensor type: F32

Model ID: quadranttechnologies/retail-content-safety-clip-finetuned