---
library_name: transformers
tags:
- text-classification
- content-moderation
- comment-moderation
- text-moderation
license: openrail
language:
- en
base_model:
- distilbert/distilbert-base-uncased
---
# 🛡️ Comment Moderation Model
[Model on Hugging Face](https://huggingface.co/Vrandan/Comment-Moderation) · [Python 3.12](https://www.python.org/downloads/release/python-312/) · [License: OpenRAIL](https://huggingface.co/Vrandan/Comment-Moderation/blob/main/Comment%20Moderation-OpenRAIL.md)
A multi-label content moderation system built on the **DistilBERT** architecture, designed to detect and classify potentially harmful content in user-generated comments with high accuracy. Among models on Hugging Face trained on this moderation dataset, it combines strong performance with the smallest footprint, making it well suited to deployment on edge devices.
## 🎯 Key Features
- Multi-label classification
- Real-time content analysis
- 95.4% accuracy rate
- 9 distinct content categories
- Easy integration via API or local implementation
- Lightweight deployment footprint
- Suitable for **edge devices and mobile applications**
- Low latency inference
- Resource-efficient while maintaining high accuracy
- Can run on consumer-grade hardware
## 📋 Content Categories
The model identifies the following types of potentially harmful content:
| Category | Label | Definition |
|----------|--------|------------|
| Sexual | `S` | Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness). |
| Hate | `H` | Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. |
| Violence | `V` | Content that promotes or glorifies violence or celebrates the suffering or humiliation of others. |
| Harassment | `HR` | Content that may be used to torment or annoy individuals in real life, or make harassment more likely to occur. |
| Self-Harm | `SH` | Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders. |
| Sexual/Minors | `S3` | Sexual content that includes an individual who is under 18 years old. |
| Hate/Threat | `H2` | Hateful content that also includes violence or serious harm towards the targeted group. |
| Violence/Graphic | `V2` | Violent content that depicts death, violence, or serious physical injury in extreme graphic detail. |
| Safe Content | `OK` | Appropriate content that doesn't violate any guidelines. |
## 📊 Performance Metrics
```
Accuracy: 95.4%
Mean ROC AUC: 0.912
Macro F1 Score: 0.407
Micro F1 Score: 0.802
```
[View detailed performance metrics](#model-performance)
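The gap between the macro F1 (0.407) and micro F1 (0.802) scores is typical of imbalanced multi-label data: macro F1 averages per-class F1 equally, so rare categories with few positives drag it down, while micro F1 pools all predictions and is dominated by frequent classes. A minimal sketch with *hypothetical* per-class counts (not this model's actual confusion data) illustrates the effect:

```python
# Hypothetical (tp, fp, fn) counts: one frequent class, two rare ones
counts = {
    "OK": (900, 30, 20),
    "S3": (1, 2, 9),
    "H2": (2, 3, 8),
}

def f1(tp, fp, fn):
    # Equivalent to 2PR/(P+R); handles the zero-denominator edge case
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Macro: average of per-class F1 scores (rare classes weigh equally)
macro = sum(f1(*c) for c in counts.values()) / len(counts)

# Micro: F1 over the pooled counts (dominated by the frequent class)
tp = sum(c[0] for c in counts.values())
fp = sum(c[1] for c in counts.values())
fn = sum(c[2] for c in counts.values())
micro = f1(tp, fp, fn)

print(f"macro={macro:.3f} micro={micro:.3f}")  # macro=0.464 micro=0.962
```

Even with excellent performance on the frequent class, the two rare classes pull the macro average far below the micro score, mirroring the pattern in the metrics above.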
## 🖥️ Training Details
The model was trained on an **NVIDIA RTX 3080** GPU in a home setup, demonstrating that effective content moderation models can be developed with consumer-grade hardware. This makes the model development process more accessible to individual developers and smaller organizations.
Key Training Specifications:
- Hardware: NVIDIA RTX 3080
- Base Model: DistilBERT
- Model Size: 67M parameters (optimized for efficient deployment)
- Training Environment: Local workstation
- Training Type: Fine-tuning
Despite its relatively compact size **(67M parameters)**, this model achieves impressive performance metrics, making it suitable for deployment across various devices and environments. The model's efficiency-to-performance ratio demonstrates that effective content moderation is possible without requiring extensive computational resources.
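As a rough back-of-envelope estimate (weights only, ignoring activations, the tokenizer, and framework overhead), the 67M parameters translate into the following memory footprints at common precisions:

```python
params = 67_000_000  # approximate parameter count of this DistilBERT model

for dtype, bytes_per_param in [("fp32", 4), ("fp16", 2), ("int8", 1)]:
    mb = params * bytes_per_param / 1e6
    print(f"{dtype}: ~{mb:.0f} MB")
# fp32: ~268 MB
# fp16: ~134 MB
# int8: ~67 MB
```

Even at full fp32 precision the weights fit comfortably in the memory budget of most edge devices, which is what makes the model practical outside of server deployments.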
## 🚀 Quick Start
### Python Implementation (Local)
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Initialize model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("Vrandan/Comment-Moderation")
tokenizer = AutoTokenizer.from_pretrained("Vrandan/Comment-Moderation")

def analyze_text(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    probabilities = outputs.logits.softmax(dim=-1).squeeze()

    # Pair each label with its probability, sorted highest first
    labels = [model.config.id2label[i] for i in range(len(probabilities))]
    predictions = sorted(zip(labels, probabilities.tolist()), key=lambda x: x[1], reverse=True)
    return predictions

# Example usage
text = "Your text here"
results = analyze_text(text)
for label, prob in results:
    print(f"Label: {label} - Probability: {prob:.4f}")
```
#### Example Output:
```
Label: OK - Probability: 0.9840
Label: H - Probability: 0.0043
Label: SH - Probability: 0.0039
Label: V - Probability: 0.0019
Label: S - Probability: 0.0018
Label: HR - Probability: 0.0015
Label: V2 - Probability: 0.0011
Label: S3 - Probability: 0.0010
Label: H2 - Probability: 0.0006
```
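To turn the ranked (label, probability) pairs into an actual moderation decision, one simple policy (a sketch for illustration, not part of the model itself) is to flag a comment whenever the top-scoring label is anything other than `OK`:

```python
def moderate(predictions, threshold=0.5):
    """predictions: list of (label, probability) pairs, in any order.

    Returns the top label and whether the comment should be flagged.
    """
    top_label, top_prob = max(predictions, key=lambda p: p[1])
    # Flag only when a harmful category wins with sufficient confidence
    flagged = top_label != "OK" and top_prob >= threshold
    return top_label, flagged

# Using the example output above
preds = [("OK", 0.9840), ("H", 0.0043), ("SH", 0.0039)]
print(moderate(preds))  # ('OK', False)
```

The `threshold` here is an arbitrary starting point; in practice you would tune it per category against your own false-positive tolerance.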
### Python Implementation (Serverless)
```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Vrandan/Comment-Moderation"
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({"inputs": "Your text here"})
```
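The serverless endpoint returns an error payload (typically including an `estimated_time` field) while the model container is cold-starting, so it is worth wrapping `query` in a small retry loop. This is a sketch: `query_with_retry` and its parameters are illustrative helpers, not part of any API.

```python
import time

def query_with_retry(send, payload, max_retries=5, wait=2.0):
    """Call `send(payload)` until it returns a non-error result.

    `send` is any function like `query` above; cold-start responses
    from the Inference API look like {"error": "...", "estimated_time": ...}.
    """
    for attempt in range(max_retries):
        result = send(payload)
        if isinstance(result, dict) and "error" in result:
            time.sleep(wait)  # give the model container time to load
            continue
        return result
    raise RuntimeError(f"Model not ready after {max_retries} attempts")
```

Usage: `query_with_retry(query, {"inputs": "Your text here"})`.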
### JavaScript Implementation (Node.js)
```javascript
require('dotenv').config();
const { HfInference } = require('@huggingface/inference');
const readline = require('readline');

// Initialize the Hugging Face client.
// To use this, follow these steps:
// 1. Create a `.env` file in the root directory of your project.
// 2. Visit https://huggingface.co/settings/tokens to generate your access token
//    (you may need to create an account if you haven't already).
// 3. Add the token to your `.env` file like this:
//    HUGGING_FACE_ACCESS_TOKEN=your_token_here
// 4. Install the dotenv and @huggingface/inference packages:
//    npm install dotenv @huggingface/inference
const hf = new HfInference(process.env.HUGGING_FACE_ACCESS_TOKEN);

// Create readline interface for interactive input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

async function analyzeText(text) {
  try {
    const result = await hf.textClassification({
      model: 'Vrandan/Comment-Moderation',
      inputs: text
    });

    console.log('\nResults:');
    result.forEach(pred => {
      console.log(`Label: ${pred.label} - Probability: ${pred.score.toFixed(4)}`);
    });
  } catch (error) {
    console.error('Error analyzing text:', error.message);
  }
}

async function main() {
  while (true) {
    try {
      const text = await new Promise(resolve => {
        rl.question('\nEnter text to analyze (or "quit" to exit): ', resolve);
      });

      if (text.toLowerCase() === 'quit') break;
      if (text.trim()) await analyzeText(text);
    } catch (error) {
      console.error('Error:', error.message);
    }
  }
  rl.close();
}

main().catch(console.error);
```
### JavaScript Implementation (Serverless)
```javascript
async function query(data) {
  const response = await fetch(
    "https://api-inference.huggingface.co/models/Vrandan/Comment-Moderation",
    {
      headers: {
        Authorization: "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "Content-Type": "application/json",
      },
      method: "POST",
      body: JSON.stringify(data),
    }
  );
  const result = await response.json();
  return result;
}

query({ "inputs": "Your text here" }).then((response) => {
  console.log(JSON.stringify(response));
});
```
## 📈 Detailed Model Performance <a name="model-performance"></a>
The model has been extensively evaluated using standard classification metrics:
- **Loss:** 0.641
- **Accuracy:** 0.954 (95.4%)
- **Macro F1 Score:** 0.407
- **Micro F1 Score:** 0.802
- **Weighted F1 Score:** 0.763
- **Macro Precision:** 0.653
- **Micro Precision:** 0.875
- **Weighted Precision:** 0.838
- **Macro Recall:** 0.349
- **Micro Recall:** 0.740
- **Weighted Recall:** 0.740
- **Mean ROC AUC:** 0.912
## ⚠️ Important Considerations
### Ethical Usage
- Regular bias monitoring
- Context-aware implementation
- Privacy-first approach
### Limitations
- May miss contextual nuances
- Potential for false positives
- Cultural context variations
## 📚 Dataset Information
This model was trained on the dataset released by OpenAI, as described in their paper ["A Holistic Approach to Undesired Content Detection"](https://arxiv.org/abs/2208.03274).
### Dataset Source
- 📄 [Original Paper (PDF)](https://arxiv.org/pdf/2208.03274)
- 💾 [Dataset Repository](https://github.com/openai/moderation-api-release)
### Citation
If you use this model or dataset in your research, please cite:
```bibtex
@article{openai2022moderation,
title={A Holistic Approach to Undesired Content Detection},
author={Todor Markov and Chong Zhang and Sandhini Agarwal and Tyna Eloundou and Teddy Lee and Steven Adler and Angela Jiang and Lilian Weng},
journal={arXiv preprint arXiv:2208.03274},
year={2022}
}
```
## 📧 Contact
For support or queries, please message me on Slack.
---