# Distilled SmolLM Sentiment Analyzer
This model is a distilled version of a larger sentiment analysis model, fine-tuned on custom datasets using the Hugging Face Transformers library. It is designed for efficient, lightweight sentiment analysis tasks in resource-constrained environments.
## Key Features
- Compact model architecture (SmolLM), distilled for speed and smaller size
- Fine-tuned for sentiment classification tasks
- Supports labels: `negative`, `neutral`, `positive`
## Model Details
| Model | Distilled SmolLM Sentiment Analyzer |
|---|---|
| Base Model | SmolLM |
| Task | Sentiment Analysis (3-class: negative, neutral, positive) |
| Dataset | Custom Yelp Review + Distilled Dataset |
| Framework | Hugging Face Transformers |
| Distillation Method | Knowledge Distillation |
| Accuracy | ~75% (relative accuracy, compared with teacher model gemma3:12b) |
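
The card lists the distillation method as knowledge distillation but does not document the exact recipe. For orientation, here is a minimal sketch of the standard soft-label distillation objective; the temperature `T` and mixing weight `alpha` are illustrative assumptions, not values confirmed for this model:

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard soft-label KD loss: a weighted sum of KL divergence to the
    teacher's temperature-softened distribution and hard-label cross-entropy.
    T and alpha are illustrative hyperparameters, not this model's settings."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # scaled by T^2 to keep gradient magnitudes comparable across temperatures
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```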
## Usage
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("AhilanPonnusamy/distilled-smollm-sentiment-analyzer")
model = AutoModelForSequenceClassification.from_pretrained("AhilanPonnusamy/distilled-smollm-sentiment-analyzer")

inputs = tokenizer("The movie was amazing!", return_tensors="pt")

# Run inference without tracking gradients
with torch.no_grad():
    outputs = model(**inputs)

logits = outputs.logits
predicted_class_id = logits.argmax().item()

# Map class IDs to the model's sentiment labels
label_map = {0: "negative", 1: "neutral", 2: "positive"}
print("Predicted sentiment:", label_map[predicted_class_id])
```
## Evaluation Results
- Accuracy on Custom Distillation Dataset (self-reported): ~65% (relative accuracy, compared with teacher model gemma3:12b)
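
The card does not define "relative accuracy"; one plausible reading is the rate at which the student's predicted labels agree with the teacher's, as in this hypothetical sketch (`student_preds` and `teacher_preds` are assumed lists of label IDs, not names from this repository):

```python
def relative_accuracy(student_preds, teacher_preds):
    """Fraction of examples where the student predicts the same label as the
    teacher (one possible interpretation of 'relative accuracy')."""
    assert len(student_preds) == len(teacher_preds)
    matches = sum(s == t for s, t in zip(student_preds, teacher_preds))
    return matches / len(student_preds)

# Toy example: student and teacher agree on 3 of 4 label IDs
print(relative_accuracy([0, 1, 2, 2], [0, 1, 1, 2]))  # 0.75
```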