Vrandan committed (verified) · Commit 4ca5977 · 1 parent: 447cccc

Updated the README

Files changed (1): README.md (+265 -196)
---
library_name: transformers
tags:
- text-classification
- content-moderation
- comment-moderation
- text-moderation
license: openrail
language:
- en
base_model:
- distilbert/distilbert-base-uncased
---
# 🛡️ Comment Moderation Model

[![HuggingFace](https://img.shields.io/badge/🤗%20Hugging%20Face-Spaces-blue)](https://huggingface.co/Vrandan/Comment-Moderation)
[![Python 3.12+](https://img.shields.io/badge/python-3.12+-blue.svg)](https://www.python.org/downloads/)
[![License](https://img.shields.io/badge/License-OpenRAIL-green.svg)](https://huggingface.co/blog/open_rail)

A multi-label content moderation model built on the **DistilBERT** architecture, designed to detect and classify potentially harmful content in user-generated comments. It pairs strong performance on its moderation dataset with a small footprint (67M parameters), making it practical to deploy on edge devices as well as servers.

## 🖥️ Training Details

The model was fine-tuned on a single **NVIDIA RTX 3080** GPU in a home setup, demonstrating that effective content moderation models can be developed on consumer-grade hardware. This makes the development process accessible to individual developers and smaller organizations.

Key training specifications:

- Hardware: NVIDIA RTX 3080
- Base model: DistilBERT
- Model size: 67M parameters (optimized for efficient deployment)
- Training environment: local workstation
- Training type: fine-tuning

Despite its compact size **(67M parameters)**, the model achieves strong performance metrics, making it suitable for deployment across a wide range of devices and environments. Its efficiency-to-performance ratio shows that effective content moderation does not require extensive computational resources.

## 🎯 Key Features

- Multi-label classification across 9 distinct content categories
- Real-time, low-latency content analysis
- 95.4% accuracy
- Easy integration via the Inference API or local implementation
- Lightweight deployment footprint, suitable for **edge devices and mobile applications**
- Resource-efficient while maintaining high accuracy
- Runs on consumer-grade hardware

## 📊 Content Categories

The model identifies the following types of potentially harmful content:

| Category | Label | Definition |
|----------|-------|------------|
| Sexual | `S` | Adult content including explicit sexual references, nudity discussions, or suggestive material |
| Hate | `H` | Discriminatory or prejudiced content targeting individuals or groups based on protected characteristics |
| Violence | `V` | Content depicting or promoting physical harm, aggression, or cruel behavior |
| Harassment | `HR` | Bullying, stalking, or targeted hostile behavior towards individuals or groups |
| Self-Harm | `SH` | Content related to suicide, self-injury, or dangerous behavior that could lead to personal harm |
| Sexual/Minors | `S3` | Any sexual content involving or targeting minors - strictly prohibited |
| Hate/Threat | `H2` | Hate speech combined with explicit threats or incitement to violence |
| Violence/Graphic | `V2` | Extreme violence, gore, or disturbing graphic content |
| Safe Content | `OK` | Appropriate content that doesn't violate any guidelines |

## 📈 Performance Metrics

```
Accuracy:       95.4%
Mean ROC AUC:   0.912
Macro F1 Score: 0.407
Micro F1 Score: 0.802
```

[View detailed performance metrics](#model-performance)

## 🚀 Quick Start

### Python Implementation (Local)

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Initialize model and tokenizer
model = AutoModelForSequenceClassification.from_pretrained("Vrandan/Comment-Moderation")
tokenizer = AutoTokenizer.from_pretrained("Vrandan/Comment-Moderation")

def analyze_text(text):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    probabilities = outputs.logits.softmax(dim=-1).squeeze().tolist()

    # Pair each label with its probability, highest first
    labels = [model.config.id2label[i] for i in range(len(probabilities))]
    return sorted(zip(labels, probabilities), key=lambda x: x[1], reverse=True)

# Example usage
text = "Your text here"
for label, prob in analyze_text(text):
    print(f"Label: {label} - Probability: {prob:.4f}")
```

#### Example Output

```
Label: OK - Probability: 0.9840
Label: H - Probability: 0.0043
Label: SH - Probability: 0.0039
Label: V - Probability: 0.0019
Label: S - Probability: 0.0018
Label: HR - Probability: 0.0015
Label: V2 - Probability: 0.0011
Label: S3 - Probability: 0.0010
Label: H2 - Probability: 0.0006
```

### Python Implementation (Serverless)

```python
import requests

API_URL = "https://api-inference.huggingface.co/models/Vrandan/Comment-Moderation"
headers = {"Authorization": "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    response.raise_for_status()
    return response.json()

output = query({
    "inputs": "Your text here",
})
```

### JavaScript Implementation (Node.js)

```javascript
// Setup:
// 1. Create a `.env` file in the root directory of your project.
// 2. Generate an access token at https://huggingface.co/settings/tokens
//    (you may need to create an account first).
// 3. Add the token to your `.env` file:
//    HUGGING_FACE_ACCESS_TOKEN=your_token_here
// 4. Install the dependencies: `npm install dotenv @huggingface/inference`
require('dotenv').config();
const { HfInference } = require('@huggingface/inference');
const readline = require('readline');

// Initialize the Hugging Face client
const hf = new HfInference(process.env.HUGGING_FACE_ACCESS_TOKEN);

// Create readline interface for interactive input
const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
});

async function analyzeText(text) {
  try {
    const result = await hf.textClassification({
      model: 'Vrandan/Comment-Moderation',
      inputs: text
    });

    console.log('\nResults:');
    result.forEach(pred => {
      console.log(`Label: ${pred.label} - Probability: ${pred.score.toFixed(4)}`);
    });
  } catch (error) {
    console.error('Error analyzing text:', error.message);
  }
}

async function main() {
  while (true) {
    try {
      const text = await new Promise(resolve => {
        rl.question('\nEnter text to analyze (or "quit" to exit): ', resolve);
      });

      if (text.toLowerCase() === 'quit') break;
      if (text.trim()) await analyzeText(text);
    } catch (error) {
      console.error('Error:', error.message);
    }
  }
  rl.close();
}

main().catch(console.error);
```

### JavaScript Implementation (Serverless)

```javascript
async function query(data) {
  const response = await fetch(
    "https://api-inference.huggingface.co/models/Vrandan/Comment-Moderation",
    {
      headers: {
        Authorization: "Bearer hf_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
        "Content-Type": "application/json",
      },
      method: "POST",
      body: JSON.stringify(data),
    }
  );
  return await response.json();
}

query({ "inputs": "Your text here" }).then((response) => {
  console.log(JSON.stringify(response));
});
```

## 📊 Detailed Model Performance <a name="model-performance"></a>

The model has been evaluated using standard classification metrics:

- **Loss:** 0.641
- **Accuracy:** 0.954 (95.4%)
- **Macro F1 Score:** 0.407
- **Micro F1 Score:** 0.802
- **Weighted F1 Score:** 0.763
- **Macro Precision:** 0.653
- **Micro Precision:** 0.875
- **Weighted Precision:** 0.838
- **Macro Recall:** 0.349
- **Micro Recall:** 0.740
- **Weighted Recall:** 0.740
- **Mean ROC AUC:** 0.912

## ⚠️ Important Considerations

### Ethical Usage
- Monitor the model regularly for biased behavior
- Implement with awareness of conversational context
- Take a privacy-first approach to user data

### Limitations
- May miss contextual nuances such as sarcasm or quoted speech
- Potential for false positives on sensitive topics discussed in a non-harmful way
- Performance may vary across cultural contexts and dialects

## 📚 Dataset Information

This model was trained on the dataset released by OpenAI, described in their paper ["A Holistic Approach to Undesired Content Detection"](https://arxiv.org/abs/2208.03274).

### Dataset Source
- 📄 [Original Paper (PDF)](https://arxiv.org/pdf/2208.03274)
- 💾 [Dataset Repository](https://github.com/openai/moderation-api-release)

### Citation
If you use this model or dataset in your research, please cite:
```bibtex
@article{openai2022moderation,
  title={A Holistic Approach to Undesired Content Detection in the Real World},
  author={Todor Markov and Chong Zhang and Sandhini Agarwal and Tyna Eloundou and Teddy Lee and Steven Adler and Angela Jiang and Lilian Weng},
  journal={arXiv preprint arXiv:2208.03274},
  year={2022}
}
```

## 📧 Contact

For support or queries, please [open an issue](https://github.com/Vrandan/Comment-Moderation/issues).

---