Files changed (1)
README.md DELETED (+0 -74)
---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: gpt2-imdb-sentiment-classifier
  results:
  - task:
      name: Text Classification
      type: text-classification
    dataset:
      name: imdb
      type: imdb
      args: plain_text
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.9394
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gpt2-imdb-sentiment-classifier

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1703
- Accuracy: 0.9394

## Model description

More information needed

## Intended uses & limitations

This model is comparable to [distilbert-imdb](https://huggingface.co/lvwerra/distilbert-imdb) and was trained with exactly the same [script](https://huggingface.co/lvwerra/distilbert-imdb/blob/main/distilbert-imdb-training.ipynb).
It achieves slightly lower loss (0.1703 vs. 0.1903) and slightly higher accuracy (0.9394 vs. 0.928).
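As with any two-class sequence classifier, the model's raw output is a pair of logits that must be converted to a label. A minimal sketch of that post-processing step, in plain Python; the `ID2LABEL` mapping is an assumption (the card does not document it), following the usual IMDb convention of 0 = negative, 1 = positive:

```python
import math

# Hypothetical label mapping; the card does not state id2label,
# but IMDb classifiers conventionally use 0 = negative, 1 = positive.
ID2LABEL = {0: "NEGATIVE", 1: "POSITIVE"}

def softmax(logits):
    """Convert raw classifier logits to probabilities."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_label(logits):
    """Map a two-logit output to a (label, confidence) pair."""
    probs = softmax(logits)
    idx = max(range(len(probs)), key=probs.__getitem__)
    return ID2LABEL[idx], probs[idx]

label, score = predict_label([-1.2, 2.3])
```

In practice the same mapping is what a `text-classification` pipeline applies for you after the forward pass.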
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
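The `linear` scheduler decays the learning rate from its initial value to zero over the course of training. A minimal sketch of that schedule, assuming zero warmup steps (the card lists no `warmup_steps`):

```python
def linear_lr(step, total_steps, base_lr=5e-05):
    """Linearly decay from base_lr at step 0 to 0 at total_steps.

    Assumes no warmup phase, since the card lists no warmup_steps.
    """
    return base_lr * max(0.0, 1.0 - step / total_steps)

# With the 1563 optimizer steps of the single training epoch:
lrs = [linear_lr(s, 1563) for s in (0, 781, 1563)]
```

This mirrors the shape of the schedule the Trainer applies; the real implementation additionally supports a warmup ramp before the decay.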
### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1967        | 1.0   | 1563 | 0.1703          | 0.9394   |
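The step count in the table is consistent with one pass over the 25,000-example IMDb train split at batch size 16. A quick arithmetic check, assuming no gradient accumulation:

```python
import math

train_examples = 25_000  # size of the IMDb train split
batch_size = 16          # train_batch_size from the hyperparameters above
num_epochs = 1

steps_per_epoch = math.ceil(train_examples / batch_size)  # 25000 / 16 -> 1562.5, rounded up
total_steps = steps_per_epoch * num_epochs                # matches the 1563 in the table
```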
### Framework versions

- Transformers 4.18.0
- Pytorch 1.13.1+cu117
- Datasets 2.9.0
- Tokenizers 0.12.1