---
license: apache-2.0
tags:
- sentiment-analysis
- distillation
- small-model
- smollm
- nlp
model-index:
- name: distilled-smollm-sentiment-analyzer
  results:
  - task:
      type: sentiment-analysis
    dataset:
      name: Custom Distillation Dataset
      type: text
    metrics:
    - name: Accuracy
      type: accuracy
      value: "~65% (relative accuracy, compared with teacher model gemma3:12b)"
---
# Distilled SmolLM Sentiment Analyzer
This model is a distilled version of a larger sentiment analysis model, fine-tuned on custom datasets using the [Hugging Face Transformers](https://huggingface.co/docs/transformers) library. It is designed for **efficient, lightweight sentiment analysis** tasks in resource-constrained environments.
**Key Features:**
- Compact model architecture (`SmolLM`)
- Distilled for speed and smaller size
- Fine-tuned for sentiment classification tasks
- Supports labels: `negative`, `neutral`, `positive`
---
## Model Details
| Attribute | Details |
|:----------|:--------|
| Model | Distilled SmolLM Sentiment Analyzer |
| Base Model | SmolLM |
| Task | Sentiment Analysis (3-class: negative, neutral, positive) |
| Dataset | Custom Yelp Review + Distilled Dataset |
| Framework | Hugging Face Transformers |
| Distillation Method | Knowledge Distillation (see the sketch below) |
| Accuracy | ~75% relative accuracy (compared with teacher model gemma3:12b) |
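
The exact distillation recipe is not documented in this card. The snippet below is a minimal sketch of a standard logit-level knowledge-distillation objective (Hinton-style), with `temperature` and `alpha` as illustrative hyperparameters; it assumes teacher logits over the same three sentiment classes are available. Since the teacher here is gemma3:12b, a generative model, the actual training may instead have used teacher-generated labels on the distillation dataset.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Blend a soft-target KL term (teacher vs. student) with hard-label cross-entropy."""
    # Soften both distributions with the temperature before comparing them.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    # The KL term is scaled by T^2 to keep gradient magnitudes comparable.
    kd_loss = F.kl_div(soft_student, soft_targets, reduction="batchmean") * temperature**2
    # Standard cross-entropy against the ground-truth sentiment labels.
    ce_loss = F.cross_entropy(student_logits, labels)
    return alpha * kd_loss + (1 - alpha) * ce_loss
```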
---
## Usage
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load the tokenizer and the fine-tuned classification model from the Hub.
tokenizer = AutoTokenizer.from_pretrained("AhilanPonnusamy/distilled-smollm-sentiment-analyzer")
model = AutoModelForSequenceClassification.from_pretrained("AhilanPonnusamy/distilled-smollm-sentiment-analyzer")

# Tokenize the input text and run a forward pass without tracking gradients.
inputs = tokenizer("The movie was amazing!", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Pick the highest-scoring class and map it to a sentiment label.
logits = outputs.logits
predicted_class_id = logits.argmax().item()
label_map = {0: "negative", 1: "neutral", 2: "positive"}
print("Predicted sentiment:", label_map[predicted_class_id])
```
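
For quick experiments, the same model can also be run through the Transformers `pipeline` API. This is a convenience sketch; the label strings it returns depend on the `id2label` mapping stored in the model config, so it may emit `LABEL_0`/`LABEL_1`/`LABEL_2`, which follow the same 0 = negative, 1 = neutral, 2 = positive convention used above.

```python
from transformers import pipeline

# text-classification wraps tokenization, the forward pass, and softmax scoring.
classifier = pipeline(
    "text-classification",
    model="AhilanPonnusamy/distilled-smollm-sentiment-analyzer",
)

# Example output: [{'label': 'LABEL_2', 'score': 0.91}]
print(classifier("The movie was amazing!"))
```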