# Facebook Post Classifier (RoBERTa Base, fine-tuned)
This model classifies short Facebook posts into **one** of the following **three mutually exclusive categories**:
- `Appreciation`
- `Complaint`
- `Feedback`
It is fine-tuned from `cardiffnlp/twitter-roberta-base` (pretrained on 58M tweets) on ~8k manually labeled posts from business pages (e.g., Target, Walmart).
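The exact training recipe is not documented here. As a rough sketch of how a comparable fine-tune could be set up with the Hugging Face `Trainer`, using a toy in-memory dataset and assumed hyperparameters in place of the real ~8k labeled posts:
```python
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "cardiffnlp/twitter-roberta-base"  # pretrained checkpoint named above
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

# Toy stand-in for the ~8k labeled posts (0=Appreciation, 1=Complaint, 2=Feedback)
train = Dataset.from_dict({
    "text": ["Love this store!", "My order never arrived.", "Please add more sizes online."],
    "label": [0, 1, 2],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train = train.map(tokenize, batched=True)

# Hyperparameters here are assumptions, not the values used for the released model
args = TrainingArguments(output_dir="fb-post-classifier-roberta",
                         per_device_train_batch_size=16,
                         learning_rate=2e-5,
                         num_train_epochs=3)
Trainer(model=model, args=args, train_dataset=train).train()
```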
## 🧠 Intended Use
- Customer support automation
- Sentiment analysis on social media
- CRM pipelines or chatbot classification
## 📊 Performance
| Class | Precision | Recall | F1 Score |
|--------------|-----------|--------|----------|
| Appreciation | 0.906 | 0.936 | 0.921 |
| Complaint | 0.931 | 0.902 | 0.916 |
| Feedback | 0.840 | 0.874 | 0.857 |
| **Macro average** | – | – | **0.898** |
> Evaluated on 2039 unseen posts with held-out labels using macro-averaged F1.
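The macro average is the unweighted mean of the three per-class F1 scores. A minimal sketch of how it can be recomputed with scikit-learn, using a few hypothetical gold/predicted labels in place of the real 2039-post evaluation set:
```python
from sklearn.metrics import classification_report, f1_score

classes = ["Appreciation", "Complaint", "Feedback"]

# Hypothetical labels for illustration only; the reported numbers come from 2039 held-out posts
y_true = ["Appreciation", "Complaint", "Feedback", "Complaint", "Feedback"]
y_pred = ["Appreciation", "Complaint", "Appreciation", "Complaint", "Feedback"]

print(classification_report(y_true, y_pred, labels=classes))
print("Macro F1:", f1_score(y_true, y_pred, labels=classes, average="macro"))
```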
## 🛠️ How to Use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn.functional import softmax
import torch

model = AutoModelForSequenceClassification.from_pretrained("your-username/fb-post-classifier-roberta")
tokenizer = AutoTokenizer.from_pretrained("your-username/fb-post-classifier-roberta")

classes = ["Appreciation", "Complaint", "Feedback"]

# Tokenize the post and run inference without tracking gradients
inputs = tokenizer("I love the fast delivery!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to class probabilities and take the most likely label
probs = softmax(outputs.logits, dim=1)
label = probs.argmax(dim=1).item()

print("Predicted:", classes[label])
```
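Alternatively, the `pipeline` API wraps the same steps in one call. A minimal sketch, assuming the placeholder repository id above and that the model's `id2label` config maps indices to the three class names:
```python
from transformers import pipeline

# Placeholder repository id from the snippet above; replace with the actual Hub name
classifier = pipeline("text-classification", model="your-username/fb-post-classifier-roberta")

posts = [
    "I love the fast delivery!",
    "My order arrived damaged and no one answered my email.",
]
for post, result in zip(posts, classifier(posts)):
    # `label` comes from the model's id2label mapping; `score` is the softmax probability
    print(f"{post} -> {result['label']} ({result['score']:.3f})")
```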