# Facebook Post Classifier (RoBERTa Base, fine-tuned)
This model classifies short Facebook posts into **one** of the following **three mutually exclusive categories**:
- `Appreciation`
- `Complaint`
- `Feedback`
It was fine-tuned from the `cardiffnlp/twitter-roberta-base` model (pretrained on 58M tweets) using ~8k manually labeled posts from business pages (e.g., Target, Walmart).
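The exact training recipe is not documented in this card. As a rough, hypothetical sketch of how such a fine-tune is typically set up with the Transformers `Trainer` (the hyperparameters, the `train_ds`/`eval_ds` dataset objects, and the tokenization details below are illustrative assumptions, not the settings actually used; only the base checkpoint, the three labels, and the ~8k-post dataset size come from the description above):

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    Trainer,
    TrainingArguments,
)

base = "cardiffnlp/twitter-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(base)

# Three-way classification head; the label order matches the inference example below.
model = AutoModelForSequenceClassification.from_pretrained(
    base,
    num_labels=3,
    id2label={0: "Appreciation", 1: "Complaint", 2: "Feedback"},
    label2id={"Appreciation": 0, "Complaint": 1, "Feedback": 2},
)

def tokenize(batch):
    # Facebook posts are short, so a small max_length keeps training cheap.
    return tokenizer(batch["text"], truncation=True, max_length=128)

# `train_ds` / `eval_ds` stand in for the ~8k labeled posts, which are not public:
# train_ds = train_ds.map(tokenize, batched=True)
# eval_ds = eval_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="fb-post-classifier-roberta",
    num_train_epochs=3,               # illustrative values only
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```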
## 🧠 Intended Use
- Customer support automation
- Sentiment analysis on social media
- CRM pipelines or chatbot classification (see the pipeline sketch below)
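For routing posts inside a support or CRM pipeline, the high-level Transformers `pipeline` API is usually the simplest entry point. A minimal sketch, assuming the repo id used elsewhere in this card; note that the label strings it returns depend on the `id2label` mapping stored in this repo's config and may be generic (`LABEL_0`, `LABEL_1`, ...):

```python
from transformers import pipeline

# Bundles tokenizer + model + softmax into one callable.
classifier = pipeline(
    "text-classification",
    model="harshithan/fb-post-classifier-roberta",
)

posts = [
    "Thanks for the quick refund, great service!",
    "My order arrived damaged and nobody replies to my emails.",
]
for post, pred in zip(posts, classifier(posts)):
    # Each prediction is a dict like {"label": ..., "score": ...}.
    print(f'{pred["label"]} ({pred["score"]:.3f}): {post}')
```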
## 📊 Performance
| Class | Precision | Recall | F1 Score |
|--------------|-----------|--------|----------|
| Appreciation | 0.906 | 0.936 | 0.921 |
| Complaint | 0.931 | 0.902 | 0.916 |
| Feedback | 0.840 | 0.874 | 0.857 |
| **Macro average** | – | – | **0.898** |
> Evaluated on a held-out set of 2,039 posts. The overall score is the macro-averaged F1, i.e. the unweighted mean of the three per-class F1 scores; a sketch of how it can be recomputed is shown below.
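A small sketch of how the per-class scores and the macro F1 can be recomputed with scikit-learn; `y_true` and `y_pred` are placeholders, since the 2,039-post held-out set and the model's predictions on it are not included here:

```python
from sklearn.metrics import classification_report, f1_score

labels = ["Appreciation", "Complaint", "Feedback"]

# Placeholder arrays; in practice these would be the held-out labels
# and the classifier's predictions for the same posts.
y_true = ["Appreciation", "Complaint", "Feedback", "Complaint"]
y_pred = ["Appreciation", "Complaint", "Feedback", "Feedback"]

# Per-class precision / recall / F1, as in the table above.
print(classification_report(y_true, y_pred, labels=labels, digits=3))

# Macro F1 = unweighted mean of the per-class F1 scores.
print("Macro F1:", f1_score(y_true, y_pred, labels=labels, average="macro"))
```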
## 🛠️ How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn.functional import softmax
import torch

repo_id = "harshithan/fb-post-classifier-roberta"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

# Tokenize a single post and run it through the classifier without tracking gradients.
inputs = tokenizer("I love the fast delivery!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Turn logits into class probabilities and pick the most likely class.
probs = softmax(outputs.logits, dim=-1)
label = torch.argmax(probs, dim=-1).item()

classes = ["Appreciation", "Complaint", "Feedback"]
print("Predicted:", classes[label])
```
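For more than one post at a time, the tokenizer can pad the batch and the argmax is taken per row. A short continuation of the snippet above (it reuses `tokenizer`, `model`, `classes`, and `torch` defined there):

```python
posts = [
    "I love the fast delivery!",
    "The checkout page keeps crashing on my phone.",
]

# Pad/truncate so the whole batch fits into a single tensor.
batch = tokenizer(posts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits

# One predicted class index per post.
preds = logits.argmax(dim=-1).tolist()
for post, idx in zip(posts, preds):
    print(f"{classes[idx]}: {post}")
```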
## 📚 Academic Disclaimer
This model was developed as part of an academic experimentation project. It is intended solely for educational and research purposes.
The model has not been validated for production use and may not generalize to real-world Facebook or customer support data beyond the scope of the assignment.
---
license: mit
language:
- en
metrics:
- f1
- accuracy
base_model:
- cardiffnlp/twitter-roberta-base
datasets:
- custom
tags:
- facebook
- text-classification
- sentiment
- customer-support
- transformers
- roberta
- huggingface
- fine-tuned
model-index:
- name: fb-post-classifier-roberta
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: Facebook Posts (Appreciation / Complaint / Feedback)
      type: custom
    metrics:
    - name: F1
      type: f1
      value: 0.8979
library_name: transformers
---