# DistilBERT Sentiment Analysis Model
This model is a fine-tuned version of DistilBERT for sentiment analysis on the IMDb dataset. It classifies movie reviews as either positive or negative based on the text content.
## Model Details
- Model Type: DistilBERT (a smaller, faster variant of BERT)
- Task: Binary sentiment analysis (positive/negative)
- Fine-Tuning Dataset: IMDb movie reviews with positive/negative labels
## Model Performance
The model was fine-tuned on the IMDb dataset and performs well on binary sentiment classification (positive/negative). No quantitative evaluation metrics are reported here.
## Usage
To use this model, load it from the Hugging Face Model Hub with the `transformers` library:
```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
classifier = pipeline(
    "sentiment-analysis",
    model="dorukan/distilbert-base-uncased-bert-finetuned-imdb",
)

# Classify a single review
result = classifier("This movie was amazing!")
print(result)
```
This prints a list of predictions, each a dict containing the predicted label and a confidence score for the given text.
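For downstream use it is often convenient to collapse the label/score pair into a single signed value. The sketch below assumes the standard `pipeline` output shape (`[{'label': ..., 'score': ...}]`); note that the exact label strings depend on this model's config and may be `POSITIVE`/`NEGATIVE` or generic `LABEL_0`/`LABEL_1`, so check one prediction before relying on the mapping. The helper name `to_signed_score` is illustrative, not part of the library:

```python
def to_signed_score(pred: dict) -> float:
    """Map one pipeline prediction to a signed sentiment score.

    Positive labels map to +score, negative labels to -score.
    Assumes labels like 'POSITIVE'/'NEGATIVE' or 'LABEL_1'/'LABEL_0';
    verify against your model's config.id2label before using.
    """
    label = pred["label"].upper()
    is_positive = label.startswith("POS") or label == "LABEL_1"
    return pred["score"] if is_positive else -pred["score"]


# Example with a hand-written prediction in the pipeline's output format
print(to_signed_score({"label": "POSITIVE", "score": 0.98}))   # 0.98
print(to_signed_score({"label": "NEGATIVE", "score": 0.75}))   # -0.75
```

This keeps thresholding and aggregation (e.g. averaging scores over many reviews) simple, at the cost of hiding the raw label.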
## License
This model is licensed under the MIT License. For more information, see the LICENSE file.
## Acknowledgments
- DistilBERT: A smaller version of BERT, created by the Hugging Face team.
- IMDb Dataset: A collection of movie reviews used for sentiment classification, widely used in NLP tasks.
You can find more details about the model at the Hugging Face model page.