Model Details
- Base Model: distilbert-base-uncased
- Task: Text Classification (Sentiment Analysis)
- Dataset: stanfordnlp/imdb
- Fine-tuning Method: Full Fine-tuning
- Training samples: 5000
- Epochs: 3
- Batch Size: 8
- Learning Rate: {LEARNING_RATE}
Model Description
This model is a fine-tuned version of distilbert-base-uncased for sentiment analysis, trained on the stanfordnlp/imdb dataset.
Uses
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")
model = AutoModelForSequenceClassification.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")

# Example usage
text = "This movie is amazing! I really enjoyed watching it."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)

with torch.no_grad():
    outputs = model(**inputs)

predictions = torch.nn.functional.softmax(outputs.logits, dim=-1)
predicted_class = torch.argmax(predictions, dim=-1).item()
confidence = torch.max(predictions).item()

# 0 = negative, 1 = positive
sentiment = "positive" if predicted_class == 1 else "negative"
print(f"Sentiment: {sentiment} (confidence: {confidence:.3f})")
```
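For quick predictions without the manual softmax step, the same checkpoint should also work with the Transformers pipeline API. A minimal sketch; note that unless id2label is set in the uploaded config, the pipeline reports the generic LABEL_0/LABEL_1 names:

```python
from transformers import pipeline

# Build a text-classification pipeline around the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="imrgurmeet/fine-tuned-sentiment-model",
)

# Returns a list of dicts such as [{"label": "LABEL_1", "score": 0.98}];
# LABEL_0 = negative, LABEL_1 = positive unless id2label is set in the config.
print(classifier("This movie is amazing! I really enjoyed watching it."))
```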
Training Details
- Optimizer: AdamW
- Weight Decay: 0.01
- Warmup Steps: 500
- Mixed Precision: enabled when training on a CUDA GPU, disabled on CPU
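For reference, here is a minimal TrainingArguments sketch consistent with the settings above. The learning rate is left as a labeled placeholder because this card does not record its value, and output_dir reuses the local path from the summary below:

```python
import torch
from transformers import TrainingArguments

LEARNING_RATE = 5e-5  # placeholder: the actual value is not recorded in this card

training_args = TrainingArguments(
    output_dir="./fine-tuned-model",   # local path from the summary below
    num_train_epochs=3,
    per_device_train_batch_size=8,
    learning_rate=LEARNING_RATE,
    weight_decay=0.01,
    warmup_steps=500,
    fp16=torch.cuda.is_available(),    # mixed precision only on a CUDA GPU
)
# Trainer uses AdamW by default, matching the optimizer listed above.
```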
Training Data
The model was fine-tuned on 5,000 reviews from the stanfordnlp/imdb dataset, a collection of English movie reviews labeled as positive or negative.
Intended Use
This model is intended for sentiment analysis of English text. It classifies text into two categories:
- 0: Negative sentiment
- 1: Positive sentiment
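If the uploaded config does not already carry this mapping (an assumption, not verified here), it can be written into the checkpoint so that downstream tools such as the pipeline API report "negative"/"positive" instead of LABEL_0/LABEL_1:

```python
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")

# Attach the index-to-sentiment mapping documented above to the config.
model.config.id2label = {0: "negative", 1: "positive"}
model.config.label2id = {"negative": 0, "positive": 1}

# Save locally; push_to_hub could then publish the updated config.
model.save_pretrained("./fine-tuned-model")
```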
Limitations
- Trained primarily on movie reviews; may not generalize to other domains
- Weak on neutral or mixed sentiment (one mitigation is sketched below)
- Designed for English text only
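One way to soften the neutral/mixed-sentiment weakness is to treat low-confidence predictions as inconclusive. A sketch with a hypothetical 0.75 cutoff; the threshold is illustrative, not tuned:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")
model = AutoModelForSequenceClassification.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")

def analyze_with_threshold(text, threshold=0.75):
    """Return 'positive'/'negative', or 'uncertain' when confidence is low."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)
    confidence, predicted_class = torch.max(probs, dim=-1)
    if confidence.item() < threshold:  # 0.75 is an illustrative cutoff, not tuned
        return "uncertain"
    return "positive" if predicted_class.item() == 1 else "negative"
```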
Training Infrastructure
- Platform: Google Colab
- GPU: Colab-assigned CUDA GPU when available, otherwise CPU (see the inference sketch below)
- Framework: PyTorch + Transformers
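If you also run inference on a GPU, moving both the model and the tokenized inputs to the device avoids silently falling back to CPU execution. A minimal sketch:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Pick the GPU when one is available, mirroring the runtime detection above.
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")
model = AutoModelForSequenceClassification.from_pretrained(
    "imrgurmeet/fine-tuned-sentiment-model"
).to(device)

inputs = tokenizer("Great film!", return_tensors="pt", truncation=True, padding=True).to(device)
with torch.no_grad():
    logits = model(**inputs).logits
print(torch.argmax(logits, dim=-1).item())  # 0 = negative, 1 = positive
```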
Summary
Fine-tuning completed successfully.
Your Model:
- Repository: imrgurmeet/fine-tuned-sentiment-model
- URL: https://huggingface.co/imrgurmeet/fine-tuned-sentiment-model
- Local path: ./fine-tuned-model
Next Steps:
- Test your model on new data (see the evaluation sketch below)
- Share it with the community
- Use it in your applications
- Continue fine-tuning if needed
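To test the model on new data, the held-out IMDB test split is a natural starting point. A sketch using the datasets library; the 200-example subsample is arbitrary, chosen only to keep the run short:

```python
import torch
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")
model = AutoModelForSequenceClassification.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")
model.eval()

# Small, arbitrary subsample of the IMDB test split to keep the check fast.
test_set = load_dataset("stanfordnlp/imdb", split="test").shuffle(seed=42).select(range(200))

correct = 0
for example in test_set:
    inputs = tokenizer(example["text"], return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    correct += int(torch.argmax(logits, dim=-1).item() == example["label"])

print(f"Accuracy on {len(test_set)} test examples: {correct / len(test_set):.3f}")
```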
Complete Inference Code
Simple version:
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load once
tokenizer = AutoTokenizer.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")
model = AutoModelForSequenceClassification.from_pretrained("imrgurmeet/fine-tuned-sentiment-model")

# Use anywhere
def analyze(text):
    inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    prediction = torch.argmax(outputs.logits, dim=-1).item()
    return "positive" if prediction == 1 else "negative"

# Test it
print(analyze("I love this!"))       # positive
print(analyze("This is terrible!"))  # negative
print(analyze("It's okay"))          # positive or negative
```
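For many texts at once, batching the tokenizer call is faster than calling analyze in a loop. A minimal sketch that reuses the tokenizer and model loaded above:

```python
def analyze_batch(texts):
    # Pad to the longest text in the batch so all sequences share one tensor.
    inputs = tokenizer(texts, return_tensors="pt", truncation=True, padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    predictions = torch.argmax(logits, dim=-1).tolist()
    return ["positive" if p == 1 else "negative" for p in predictions]

print(analyze_batch(["I love this!", "This is terrible!"]))
```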