alexander-hm committed on
Commit 6a48a97 · verified · 1 Parent(s): 502f53e

End of training

Files changed (7)
  1. README.md +115 -0
  2. all_results.json +12 -0
  3. completed +0 -0
  4. eval_results.json +7 -0
  5. metrics.json +1 -0
  6. train_results.json +8 -0
  7. trainer_state.json +0 -0
README.md ADDED
@@ -0,0 +1,115 @@
+ ---
+ base_model: huggyllama/llama-7b
+ library_name: peft
+ license: other
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: llama-7b_oasst1_l0.0002_64
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # llama-7b_oasst1_l0.0002_64
+
+ This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on the oasst1 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.6145
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 0
+ - gradient_accumulation_steps: 16
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - lr_scheduler_warmup_ratio: 0.03
+ - training_steps: 10000
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-------:|:----:|:---------------:|
+ | 1.5015 | 0.0018 | 1 | 1.7367 |
+ | 1.5123 | 0.3392 | 187 | 1.3207 |
+ | 1.1391 | 0.6783 | 374 | 1.3086 |
+ | 1.4068 | 1.0175 | 561 | 1.3091 |
+ | 1.2847 | 1.3566 | 748 | 1.3037 |
+ | 1.2433 | 1.6958 | 935 | 1.3003 |
+ | 0.9507 | 2.0349 | 1122 | 1.3159 |
+ | 1.0924 | 2.3741 | 1309 | 1.3710 |
+ | 0.9754 | 2.7132 | 1496 | 1.3433 |
+ | 0.858 | 3.0524 | 1683 | 1.3880 |
+ | 0.8205 | 3.3915 | 1870 | 1.3864 |
+ | 0.9249 | 3.7307 | 2057 | 1.4946 |
+ | 0.6185 | 4.0698 | 2244 | 1.5166 |
+ | 0.7531 | 4.4090 | 2431 | 1.4576 |
+ | 0.9268 | 4.7481 | 2618 | 1.4874 |
+ | 0.2016 | 5.0873 | 2805 | 1.6889 |
+ | 0.4437 | 5.4264 | 2992 | 1.6356 |
+ | 0.818 | 5.7656 | 3179 | 1.5275 |
+ | 0.5957 | 6.1047 | 3366 | 1.8285 |
+ | 0.2364 | 6.4439 | 3553 | 1.8515 |
+ | 0.3734 | 6.7830 | 3740 | 1.7053 |
+ | 0.3691 | 7.1222 | 3927 | 1.8442 |
+ | 0.4452 | 7.4613 | 4114 | 1.9495 |
+ | 0.2076 | 7.8005 | 4301 | 1.9195 |
+ | 0.2793 | 8.1397 | 4488 | 1.9103 |
+ | 0.2388 | 8.4788 | 4675 | 1.9957 |
+ | 0.4627 | 8.8180 | 4862 | 2.0253 |
+ | 0.1041 | 9.1571 | 5049 | 1.9997 |
+ | 0.1822 | 9.4963 | 5236 | 2.0561 |
+ | 0.242 | 9.8354 | 5423 | 2.1230 |
+ | 0.1277 | 10.1746 | 5610 | 2.1026 |
+ | 0.1238 | 10.5137 | 5797 | 2.1111 |
+ | 0.1503 | 10.8529 | 5984 | 2.2355 |
+ | 0.1341 | 11.1920 | 6171 | 2.2269 |
+ | 0.1374 | 11.5312 | 6358 | 2.2022 |
+ | 0.1162 | 11.8703 | 6545 | 2.3055 |
+ | 0.1062 | 12.2095 | 6732 | 2.3849 |
+ | 0.1457 | 12.5486 | 6919 | 2.2853 |
+ | 0.1185 | 12.8878 | 7106 | 2.3576 |
+ | 0.0897 | 13.2269 | 7293 | 2.4654 |
+ | 0.1202 | 13.5661 | 7480 | 2.3938 |
+ | 0.1729 | 13.9052 | 7667 | 2.3956 |
+ | 0.083 | 14.2444 | 7854 | 2.4934 |
+ | 0.0805 | 14.5835 | 8041 | 2.5021 |
+ | 0.1386 | 14.9227 | 8228 | 2.4270 |
+ | 0.1107 | 15.2618 | 8415 | 2.5474 |
+ | 0.0821 | 15.6010 | 8602 | 2.5688 |
+ | 0.0774 | 15.9401 | 8789 | 2.5323 |
+ | 0.0953 | 16.2793 | 8976 | 2.5760 |
+ | 0.0841 | 16.6185 | 9163 | 2.5870 |
+ | 0.0784 | 16.9576 | 9350 | 2.5858 |
+ | 0.0673 | 17.2968 | 9537 | 2.5586 |
+ | 0.131 | 17.6359 | 9724 | 2.5801 |
+ | 0.0789 | 17.9751 | 9911 | 2.6012 |
+
+
+ ### Framework versions
+
+ - PEFT 0.12.1.dev0
+ - Transformers 4.45.0.dev0
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.0
+ - Tokenizers 0.19.1
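For context, the hyperparameter list in the README above maps directly onto `transformers.TrainingArguments`. A minimal sketch follows; the actual training script is not part of this commit, so the `output_dir` and everything beyond the listed values are assumptions:

```python
# Sketch only: reconstructs the card's listed hyperparameters as TrainingArguments.
# The real run also needs a model, dataset, and a PEFT/LoRA config not shown here.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-7b_oasst1_l0.0002_64",  # assumed; matches the model name
    learning_rate=2e-4,                # learning_rate: 0.0002
    per_device_train_batch_size=1,     # train_batch_size: 1
    per_device_eval_batch_size=1,      # eval_batch_size: 1
    seed=0,
    gradient_accumulation_steps=16,    # 1 x 16 = total_train_batch_size 16
    max_steps=10_000,                  # training_steps: 10000
    lr_scheduler_type="constant",
    warmup_ratio=0.03,                 # the plain "constant" schedule ignores warmup;
                                       # "constant_with_warmup" would apply it
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```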
all_results.json ADDED
@@ -0,0 +1,12 @@
+ {
+     "epoch": 18.13647698934482,
+     "eval_loss": 2.614521026611328,
+     "eval_runtime": 151.4293,
+     "eval_samples_per_second": 6.604,
+     "eval_steps_per_second": 6.604,
+     "total_flos": 2.272703333604655e+18,
+     "train_loss": 0.43576861339397727,
+     "train_runtime": 182687.1281,
+     "train_samples_per_second": 0.876,
+     "train_steps_per_second": 0.055
+ }
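A quick sanity check on these numbers: 10,000 optimizer steps over 18.136 epochs is about 551 steps per epoch, which at the effective batch size of 16 implies a training split of roughly 8,800 examples (assuming each optimizer step consumes one effective batch).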
completed ADDED
File without changes
eval_results.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "epoch": 18.13647698934482,
+     "eval_loss": 2.614521026611328,
+     "eval_runtime": 151.4293,
+     "eval_samples_per_second": 6.604,
+     "eval_steps_per_second": 6.604
+ }
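For reference, an eval loss of 2.6145 corresponds to a perplexity of exp(2.6145) ≈ 13.7, and the runtime figures imply an evaluation split of about 1,000 examples (6.604 samples/s × 151.43 s ≈ 1,000). Samples/s equaling steps/s is consistent with the eval_batch_size of 1.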
metrics.json ADDED
@@ -0,0 +1 @@
+ {"run_name": "huggyllama/llama-7b_oasst1_l0.0002_64", "train_runtime": 182687.1281, "train_samples_per_second": 0.876, "train_steps_per_second": 0.055, "total_flos": 2.272703333604655e+18, "train_loss": 0.43576861339397727, "epoch": 18.13647698934482, "eval_loss": 2.614521026611328, "eval_runtime": 151.4293, "eval_samples_per_second": 6.604, "eval_steps_per_second": 6.604}
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 18.13647698934482,
+     "total_flos": 2.272703333604655e+18,
+     "train_loss": 0.43576861339397727,
+     "train_runtime": 182687.1281,
+     "train_samples_per_second": 0.876,
+     "train_steps_per_second": 0.055
+ }
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff
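Since the card itself has no usage section yet, here is a minimal sketch of loading the adapter with PEFT. The repo id "alexander-hm/llama-7b_oasst1_l0.0002_64" is inferred from the commit author and model name, and the prompt template is a guess based on common oasst1 fine-tunes; neither is confirmed by this commit:

```python
# Sketch: load the LoRA adapter on top of the base model and generate.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "huggyllama/llama-7b",
    torch_dtype=torch.float16,
    device_map="auto",
)
# Repo id assumed from commit author + model name; adjust if the adapter lives elsewhere.
model = PeftModel.from_pretrained(base, "alexander-hm/llama-7b_oasst1_l0.0002_64")
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")

# Assumed oasst1-style prompt template, not taken from this commit.
prompt = "### Human: Give me three uses for a paperclip.### Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```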