# Vit-Mature-Content-Detection (ONNX)

This is an ONNX version of [prithivMLmods/Vit-Mature-Content-Detection](https://huggingface.co/prithivMLmods/Vit-Mature-Content-Detection). It was automatically converted and uploaded using [this space](https://huggingface.co/spaces/onnx-community/convert-to-onnx).

---

![uijytyyt.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/IDbH_a4KQpydEVQ4VtKiA.png)

# **Vit-Mature-Content-Detection**

> **Vit-Mature-Content-Detection** is an image classification model fine-tuned from **vit-base-patch16-224-in21k** for a single-label classification task. It classifies images into mature or neutral content categories using the **ViTForImageClassification** architecture.

> [!Note]
> Use this model to support positive, safe, and respectful digital spaces. Misuse is strongly discouraged and may violate platform or regional policies. The model itself does not generate unsafe content: it is a classifier, and therefore does not fall under the category of models not suitable for all audiences.

> [!Important]
> Neutral = Safe / Normal

```
Classification Report:
                     precision    recall  f1-score   support

      Anime Picture     0.9311    0.9455    0.9382      5600
             Hentai     0.9520    0.9244    0.9380      4180
            Neutral     0.9681    0.9529    0.9604      5503
        Pornography     0.9896    0.9832    0.9864      5600
Enticing or Sensual     0.9602    0.9870    0.9734      5600

           accuracy                         0.9605     26483
          macro avg     0.9602    0.9586    0.9593     26483
       weighted avg     0.9606    0.9605    0.9604     26483
```
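The macro averages in the report are unweighted means of the per-class scores; a quick sanity check of the f1-score column (values copied from the table above):

```python
# Per-class f1 scores from the report above, in class order
f1 = [0.9382, 0.9380, 0.9604, 0.9864, 0.9734]

# Macro average: unweighted mean over the five classes
macro_f1 = sum(f1) / len(f1)
print(round(macro_f1, 4))  # 0.9593
```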

![download.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/FvFTPm_JKwFIffb_LF4ft.png)

```py
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("YOUR-DATASET-HERE")

# Extract the label names from the training split
labels = dataset["train"].features["label"].names

# Create the id2label mapping
id2label = {str(i): label for i, label in enumerate(labels)}

# Print the mapping
print(id2label)
```

---

The model categorizes images into five classes:

- **Class 0:** Anime Picture
- **Class 1:** Hentai
- **Class 2:** Neutral
- **Class 3:** Pornography
- **Class 4:** Enticing or Sensual

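In raw ONNX inference the model returns a logits vector over these five classes in the same order; a minimal sketch of the usual post-processing (numerically stable softmax plus argmax), using a made-up logits vector purely for illustration:

```python
import numpy as np

# Class order matches the id2label mapping on this card
id2label = {
    0: "Anime Picture",
    1: "Hentai",
    2: "Neutral",
    3: "Pornography",
    4: "Enticing or Sensual",
}

def postprocess(logits):
    """Softmax over the class logits; return (label, probability) of the top class."""
    logits = np.asarray(logits, dtype=np.float64)
    exp = np.exp(logits - logits.max())  # subtract max for numerical stability
    probs = exp / exp.sum()
    top = int(probs.argmax())
    return id2label[top], float(probs[top])

# Example with a hypothetical logits vector
label, score = postprocess([0.1, -1.2, 3.4, 0.0, 0.5])
print(label, round(score, 3))  # Neutral 0.881
```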
# **Run with Transformers 🤗**

```python
!pip install -q transformers torch pillow gradio
```

```python
import gradio as gr
import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTForImageClassification

# Load model and processor
model_name = "prithivMLmods/Vit-Mature-Content-Detection"  # Replace with your actual model path
model = ViTForImageClassification.from_pretrained(model_name)
processor = ViTImageProcessor.from_pretrained(model_name)

# Label mapping
labels = {
    "0": "Anime Picture",
    "1": "Hentai",
    "2": "Neutral",
    "3": "Pornography",
    "4": "Enticing or Sensual"
}

def mature_content_detection(image):
    """Predicts the type of content in the image."""
    image = Image.fromarray(image).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")

    with torch.no_grad():
        outputs = model(**inputs)
        logits = outputs.logits
        probs = torch.nn.functional.softmax(logits, dim=1).squeeze().tolist()

    predictions = {labels[str(i)]: round(probs[i], 3) for i in range(len(probs))}
    return predictions

# Create the Gradio interface
iface = gr.Interface(
    fn=mature_content_detection,
    inputs=gr.Image(type="numpy"),
    outputs=gr.Label(label="Prediction Scores"),
    title="Vit-Mature-Content-Detection",
    description="Upload an image to classify whether it contains anime, hentai, neutral, pornographic, or enticing/sensual content."
)

# Launch the app
if __name__ == "__main__":
    iface.launch()
```
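To run the exported ONNX graph directly with `onnxruntime` instead of `transformers`, the `ViTImageProcessor` step has to be reproduced by hand. A minimal sketch, assuming the standard `vit-base-patch16-224-in21k` preprocessing defaults (bilinear resize to 224×224, rescale by 1/255, normalize with mean = std = 0.5 per channel):

```python
import numpy as np
from PIL import Image

def preprocess(image: Image.Image) -> np.ndarray:
    """Approximates ViTImageProcessor defaults: 224x224, rescale, normalize, NCHW."""
    image = image.convert("RGB").resize((224, 224), Image.BILINEAR)
    pixels = np.asarray(image, dtype=np.float32) / 255.0  # rescale to [0, 1]
    pixels = (pixels - 0.5) / 0.5                         # mean = std = 0.5
    return pixels.transpose(2, 0, 1)[np.newaxis, ...]     # HWC -> batched NCHW

batch = preprocess(Image.new("RGB", (640, 480), (128, 128, 128)))
print(batch.shape)  # (1, 3, 224, 224)
```

The resulting array can then be fed to an `onnxruntime.InferenceSession` over the ONNX file in this repository; the input name is typically `pixel_values` for Hugging Face ViT exports, but check `session.get_inputs()` to confirm.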

---

# **Recommended Use Cases**
- Content moderation systems
- Parental control filters
- Dataset preprocessing and filtering
- Digital well-being and user safety tools
- Search engine safe-filter enhancements

# **Discouraged / Prohibited Use**
- Harassment or shaming
- Unethical surveillance
- Illegal or deceptive applications
- Sole reliance without human oversight
- Misuse to mislead moderation decisions