Fix Formatting and Words
README.md
CHANGED
@@ -20,19 +20,6 @@ base_model:
A powerful, multi-label content moderation system built on the **DistilBERT** architecture, designed to detect and classify potentially harmful content in user-generated comments with high accuracy. Among the text-moderation models on Hugging Face trained on this dataset, it offers the strongest performance and the smallest footprint, making it well suited to deployment on edge devices.

-## 🖥️ Training Details
-
-The model was trained on an **NVIDIA RTX 3080** GPU in a home setup, demonstrating that effective content moderation models can be developed with consumer-grade hardware. This makes the model development process more accessible to individual developers and smaller organizations.
-
-Key Training Specifications:
-- Hardware: NVIDIA RTX 3080
-- Base Model: DistilBERT
-- Model Size: 67M parameters (optimized for efficient deployment)
-- Training Environment: Local workstation
-- Training Type: Fine-tuning
-
-Despite its relatively compact size **(67M parameters)**, this model achieves impressive performance metrics, making it suitable for deployment across various devices and environments. The model's efficiency-to-performance ratio demonstrates that effective content moderation is possible without requiring extensive computational resources.
-
36 |
## π― Key Features
|
37 |
|
38 |
- Multi-label classification
|
@@ -52,15 +39,15 @@ The model identifies the following types of potentially harmful content:
| Category | Label | Definition |
|----------|--------|------------|
-| Sexual | `S` |
-| Hate | `H` |
-| Violence | `V` | Content
-| Harassment | `HR` |
-| Self-Harm | `SH` | Content
-| Sexual/Minors | `S3` |
-| Hate/Threat | `H2` |
-| Violence/Graphic | `V2` |
-| Safe Content | `OK` | Appropriate content that doesn't violate any guidelines |
+| Sexual | `S` | Content meant to arouse sexual excitement, such as the description of sexual activity, or that promotes sexual services (excluding sex education and wellness). |
+| Hate | `H` | Content that expresses, incites, or promotes hate based on race, gender, ethnicity, religion, nationality, sexual orientation, disability status, or caste. |
+| Violence | `V` | Content that promotes or glorifies violence or celebrates the suffering or humiliation of others. |
+| Harassment | `HR` | Content that may be used to torment or annoy individuals in real life, or make harassment more likely to occur. |
+| Self-Harm | `SH` | Content that promotes, encourages, or depicts acts of self-harm, such as suicide, cutting, and eating disorders. |
+| Sexual/Minors | `S3` | Sexual content that includes an individual who is under 18 years old. |
+| Hate/Threat | `H2` | Hateful content that also includes violence or serious harm towards the targeted group. |
+| Violence/Graphic | `V2` | Violent content that depicts death, violence, or serious physical injury in extreme graphic detail. |
+| Safe Content | `OK` | Appropriate content that doesn't violate any guidelines. |

## 📊 Performance Metrics

@@ -73,6 +60,19 @@ Micro F1 Score: 0.802
[View detailed performance metrics](#model-performance)

+## 🖥️ Training Details
+
+The model was trained on an **NVIDIA RTX 3080** GPU in a home setup, demonstrating that effective content moderation models can be developed with consumer-grade hardware. This makes the model development process more accessible to individual developers and smaller organizations.
+
+Key Training Specifications:
+- Hardware: NVIDIA RTX 3080
+- Base Model: DistilBERT
+- Model Size: 67M parameters (optimized for efficient deployment)
+- Training Environment: Local workstation
+- Training Type: Fine-tuning
+
+Despite its relatively compact size **(67M parameters)**, this model achieves impressive performance metrics, making it suitable for deployment across various devices and environments. The model's efficiency-to-performance ratio demonstrates that effective content moderation is possible without requiring extensive computational resources.
+
## 🚀 Quick Start

### Python Implementation (Local)
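
The Quick Start code itself falls outside this diff. A minimal sketch of what local, multi-label inference with a DistilBERT moderation model using the labels above could look like — the repo id, the 0.5 threshold, and the sigmoid reading of the classification head are assumptions, not confirmed by this excerpt:

```python
# Minimal sketch, not the repository's actual example.
# MODEL_ID is a placeholder; substitute the model's real Hugging Face repo id.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "your-org/distilbert-content-moderation"  # hypothetical path

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID)
model.eval()

def moderate(text: str, threshold: float = 0.5) -> dict[str, float]:
    """Score a comment against the labels above (S, H, V, HR, SH, S3, H2, V2, OK).

    Multi-label classification: each label gets an independent sigmoid
    probability, and any label at or above `threshold` is flagged.
    """
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)  # independent per-label scores
    return {
        model.config.id2label[i]: round(p.item(), 3)
        for i, p in enumerate(probs)
        if p.item() >= threshold
    }

print(moderate("You people are disgusting and deserve to suffer."))
```

Reading the per-label names from `model.config.id2label` keeps the sketch independent of label ordering; the threshold would normally be tuned per label against the validation set.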
|