---
license: mit
language:
- en
metrics:
- f1
- accuracy
base_model:
- cardiffnlp/twitter-roberta-base
datasets:
- custom
tags:
- facebook
- text-classification
- sentiment
- customer-support
- transformers
- roberta
- huggingface
- fine-tuned
model-index:
- name: fb-post-classifier-roberta
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: Facebook Posts (Appreciation / Complaint / Feedback)
      type: custom
    metrics:
    - name: F1
      type: f1
      value: 0.8979
library_name: transformers
pipeline_tag: text-classification
---
# Facebook Post Classifier (RoBERTa Base, fine-tuned)

This model classifies short Facebook posts into **one** of the following **three mutually exclusive categories**:
- `Appreciation`
- `Complaint`
- `Feedback`

It was fine-tuned on ~8k manually labeled posts from business pages (e.g., Target, Walmart), starting from `cardiffnlp/twitter-roberta-base`, a RoBERTa model pretrained on 58M tweets.
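
For reference, a fine-tuning run in this setup might look like the minimal sketch below. The example posts, hyperparameters, and label order (0 = Appreciation, 1 = Complaint, 2 = Feedback) are illustrative assumptions, not the actual training recipe:

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import Dataset

# Placeholder training data; the released model used ~8k manually labeled posts
train_ds = Dataset.from_dict({
    "text": ["Thanks for the quick refund!", "My order never arrived.", "Please add more sizes."],
    "label": [0, 1, 2],  # 0 = Appreciation, 1 = Complaint, 2 = Feedback (assumed order)
})

base = "cardiffnlp/twitter-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="fb-post-classifier",
    num_train_epochs=3,              # illustrative hyperparameters only
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```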

## 🧠 Intended Use

- Customer support automation
- Sentiment analysis on social media
- CRM pipelines or chatbot classification

## πŸ“Š Performance

| Class        | Precision | Recall | F1 Score |
|--------------|-----------|--------|----------|
| Appreciation | 0.906     | 0.936  | 0.921    |
| Complaint    | 0.931     | 0.902  | 0.916    |
| Feedback     | 0.840     | 0.874  | 0.857    |
| **Macro average** | –     | –      | **0.898** |

> Evaluated on 2,039 held-out posts; the average reported above is the macro-averaged F1 across the three classes.
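
The reported average is the unweighted (macro) mean of the three per-class F1 scores. A minimal sketch of how such a report can be produced with scikit-learn (the label arrays below are placeholders, not the actual evaluation data):

```python
from sklearn.metrics import classification_report, f1_score

# Placeholder predictions over a held-out set; in practice y_true/y_pred
# would cover the 2,039 evaluation posts (0 = Appreciation, 1 = Complaint, 2 = Feedback)
y_true = [0, 1, 2, 1, 0]
y_pred = [0, 1, 2, 0, 0]

print(classification_report(y_true, y_pred,
                            target_names=["Appreciation", "Complaint", "Feedback"]))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
```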

## πŸ› οΈ How to Use

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.nn.functional import softmax
import torch

model_id = "harshithan/fb-post-classifier-roberta_v1"
model = AutoModelForSequenceClassification.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Tokenize a single post and run a forward pass without tracking gradients
inputs = tokenizer("I love the fast delivery!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to probabilities and pick the most likely class
probs = softmax(outputs.logits, dim=1)
label = torch.argmax(probs).item()

classes = ["Appreciation", "Complaint", "Feedback"]
print("Predicted:", classes[label])
```
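
The same model can also be served through the `pipeline` API. Note that, depending on the saved config, the returned labels may appear as `LABEL_0`/`LABEL_1`/`LABEL_2` and need mapping to the class names in the same order as above:

```python
from transformers import pipeline

# One-line inference; returns a list of {"label": ..., "score": ...} dicts
classifier = pipeline("text-classification",
                      model="harshithan/fb-post-classifier-roberta_v1")
print(classifier("The checkout page keeps crashing on mobile."))
```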

## 🧾 License
MIT License

## πŸ™‹β€β™€οΈ Author
This model was fine-tuned by @harshithan.

## πŸ“š Academic Disclaimer
This model was developed as part of an academic experimentation project. It is intended solely for educational and research purposes.
The model has not been validated for production use and may not generalize to real-world Facebook or customer support data beyond the scope of the assignment.