---
license: mit
language:
- en
base_model:
- distilbert/distilbert-base-uncased
---
# DistilBERT Sentiment Analysis Model

This model is a fine-tuned version of **DistilBERT** for sentiment analysis on the **IMDb** dataset. It classifies movie reviews as either **positive** or **negative** based on the text content.

## Model Details

- **Model Type**: DistilBERT (a smaller and faster variant of BERT)
- **Task**: Sentiment Analysis
- **Dataset**: IMDb (movie reviews labeled positive or negative)
- **Fine-Tuned On**: IMDb dataset (see the training sketch below)
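
The training script itself is not part of this card, so the snippet below is only a minimal sketch of how a comparable fine-tune could be reproduced with the `Trainer` API; the hyperparameters (learning rate, batch size, epochs) are illustrative assumptions, not the values used for this model.

```python
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    DataCollatorWithPadding,
    TrainingArguments,
    Trainer,
)

# IMDb: 25k labeled train reviews and 25k test reviews (0 = negative, 1 = positive)
imdb = load_dataset("imdb")

checkpoint = "distilbert/distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

def tokenize(batch):
    # Truncate long reviews to DistilBERT's 512-token limit
    return tokenizer(batch["text"], truncation=True)

tokenized = imdb.map(tokenize, batched=True)

# Hyperparameters below are illustrative assumptions, not the values used for this model.
args = TrainingArguments(
    output_dir="distilbert-imdb",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    num_train_epochs=2,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
)
trainer.train()
```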

## Model Performance

This model was fine-tuned on the IMDb dataset for binary sentiment classification and performs well on positive/negative movie-review classification. No quantitative metrics are reported in this card; the sketch below shows one way to measure accuracy yourself.
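
This is a minimal sketch, assuming the `datasets` and `evaluate` libraries are installed; the mapping from the pipeline's string labels to IMDb's integer labels is an assumption and should be adjusted to the model's actual `id2label` config.

```python
from datasets import load_dataset
from transformers import pipeline
import evaluate

classifier = pipeline(
    "sentiment-analysis",
    model="dorukan/distilbert-base-uncased-bert-finetuned-imdb",
)

# A shuffled 500-review slice keeps the check quick; use the full split for a real number.
test = load_dataset("imdb", split="test").shuffle(seed=42).select(range(500))

preds = classifier(test["text"], truncation=True, batch_size=16)

# Map the pipeline's string labels back to IMDb's integer labels (0 = negative, 1 = positive).
# The label names depend on this model's config; adjust if they differ.
label_to_int = {"NEGATIVE": 0, "POSITIVE": 1, "LABEL_0": 0, "LABEL_1": 1}
pred_ids = [label_to_int[p["label"]] for p in preds]

accuracy = evaluate.load("accuracy")
print(accuracy.compute(predictions=pred_ids, references=test["label"]))
```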

## Usage

To use this model, you can load it from the Hugging Face Model Hub using the `transformers` library:

```python
from transformers import pipeline

# Load the model
classifier = pipeline('sentiment-analysis', model='dorukan/distilbert-base-uncased-bert-finetuned-imdb')

# Example usage
result = classifier("This movie was amazing!")
print(result)
```

This prints a list with one dictionary per input, each containing the predicted `label` and its confidence `score` (for example, `[{'label': ..., 'score': ...}]`).
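
If you prefer to skip the `pipeline` wrapper, the tokenizer and classification head can also be loaded directly. The sketch below assumes PyTorch; the label names come from the model's config and may be the generic `LABEL_0`/`LABEL_1`.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "dorukan/distilbert-base-uncased-bert-finetuned-imdb"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("This movie was amazing!", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Turn logits into class probabilities and look the winning index up in the config.
probs = torch.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())
print(model.config.id2label[pred_id], float(probs[pred_id]))
```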


## License

This model is licensed under the MIT License. For more information, see the LICENSE file.

## Acknowledgments

- **DistilBERT**: A smaller version of BERT, created by the Hugging Face team.
- **IMDb Dataset**: A collection of movie reviews used for sentiment classification, widely used in NLP tasks.

You can find more details about the model at the [Hugging Face model page](https://huggingface.co/dorukan/distil-bert-imdb).