---
license: mit
base_model: gpt2-medium
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: gmra_model_gpt2-medium_14082023T134929
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# gmra_model_gpt2-medium_14082023T134929

This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3831
- Accuracy: 0.9438

## Model description

More information needed

## Intended uses & limitations

More information needed
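
A minimal inference sketch is shown below. It assumes the model was fine-tuned for sequence classification (the accuracy metric and `Trainer` workflow suggest this) and that it is available under a hypothetical Hub repo id; replace `your-username` with the actual namespace.

```python
# Minimal inference sketch; the repo id and the sequence-classification head
# are assumptions, not confirmed by this card.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "your-username/gmra_model_gpt2-medium_14082023T134929"  # hypothetical

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("Example input text to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(dim=-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```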

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching configuration sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
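
The sketch below maps the hyperparameters above onto `TrainingArguments` from Transformers 4.31. The output directory is taken from the model name; dataset loading and `compute_metrics` are not shown because the training data is not documented here.

```python
# Sketch of the Trainer configuration implied by the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gmra_model_gpt2-medium_14082023T134929",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # effective train batch size of 32
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    evaluation_strategy="epoch",     # evaluation reported once per epoch
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```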

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 284  | 0.2626          | 0.9069   |
| 0.3464        | 2.0   | 568  | 0.2263          | 0.9262   |
| 0.3464        | 3.0   | 852  | 0.2545          | 0.9394   |
| 0.1022        | 4.0   | 1137 | 0.2577          | 0.9464   |
| 0.1022        | 5.0   | 1421 | 0.3485          | 0.9420   |
| 0.0292        | 6.0   | 1705 | 0.3445          | 0.9429   |
| 0.0292        | 7.0   | 1989 | 0.3127          | 0.9464   |
| 0.0125        | 8.0   | 2274 | 0.4068          | 0.9411   |
| 0.0085        | 9.0   | 2558 | 0.3853          | 0.9438   |
| 0.0085        | 9.99  | 2840 | 0.3831          | 0.9438   |


### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3