AnonyResearcher committed
Commit 465600e · 1 Parent(s): 20e2f56
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.json filter=lfs diff=lfs merge=lfs -text
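(The new rule stores matching JSON files as Git LFS pointers — the small `version` / `oid` / `size` stubs shown for the JSON files below — rather than as full blobs; such a rule is typically produced by `git lfs track "*.json"`.)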
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ license: llama3.1
+ base_model: meta-llama/Llama-3.1-8B-Instruct
+ tags:
+ - alignment-handbook
+ - generated_from_trainer
+ datasets:
+ - meng-lab/Llama-3.1-8B-Instruct-humaneval
+ model-index:
+ - name: Llama-3.1-8B-Instruct-sft-5e-3-epoch-100-human-eval-final
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/uva-llm/huggingface/runs/b08pi5fy)
+ # Llama-3.1-8B-Instruct-sft-5e-3-epoch-100-human-eval-final
+
+ This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct) on the meng-lab/Llama-3.1-8B-Instruct-humaneval dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 5.3754
+ - Loss Layer 4 Head: 1.6774
+ - Loss Layer 8 Head: 1.3806
+ - Loss Layer 12 Head: 1.2795
+ - Loss Layer 16 Head: 0.6378
+ - Loss Layer 20 Head: 0.3110
+ - Loss Layer 24 Head: 0.1844
+ - Loss Layer 28 Head: 0.0864
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
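+ ## How to use
+
+ A minimal generation sketch, assuming the checkpoint loads through the standard `transformers` causal-LM API. The per-layer-head losses reported above suggest auxiliary decoding heads that may require the authors' own code; the repository id below is a placeholder to fill in.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "<this-repository-id>"  # placeholder: replace with this repo's Hub id
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # assumption: the card does not state a dtype
+     device_map="auto",
+ )
+
+ # Plain greedy decoding through the final LM head.
+ prompt = "Write a Python function that checks whether a string is a palindrome."
+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+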
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.005
+ - train_batch_size: 1
+ - eval_batch_size: 2
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 4
+ - gradient_accumulation_steps: 32
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 100
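+
+ As a cross-check, the effective train batch size is train_batch_size × num_devices × gradient_accumulation_steps = 1 × 4 × 32 = 128, matching total_train_batch_size above. A hedged sketch of how these settings would map onto `transformers.TrainingArguments` (standard argument names; not the authors' actual launcher):
+
+ ```python
+ from transformers import TrainingArguments
+
+ # The hyperparameters above expressed as TrainingArguments.
+ # Effective train batch = 1 per device * 4 GPUs * 32 accumulation steps = 128.
+ # The listed Adam betas (0.9, 0.999) and epsilon (1e-08) are the library defaults.
+ args = TrainingArguments(
+     output_dir="out",  # placeholder
+     learning_rate=5e-3,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=2,
+     seed=42,
+     gradient_accumulation_steps=32,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     num_train_epochs=100,
+ )
+ ```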
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Loss Layer 4 Head | Loss Layer 8 Head | Loss Layer 12 Head | Loss Layer 16 Head | Loss Layer 20 Head | Loss Layer 24 Head | Loss Layer 28 Head |
+ |:-------------:|:-------:|:----:|:---------------:|:-----------------:|:-----------------:|:------------------:|:------------------:|:------------------:|:------------------:|:------------------:|
+ | 7.7477 | 9.6823 | 200 | 7.6952 | 1.9941 | 1.7442 | 1.9609 | 1.0923 | 0.4414 | 0.2459 | 0.4381 |
+ | 5.8078 | 19.3646 | 400 | 6.4289 | 1.9090 | 1.5288 | 1.4099 | 0.9812 | 0.3976 | 0.2383 | 0.1448 |
+ | 4.8435 | 29.0469 | 600 | 5.9964 | 1.8480 | 1.5236 | 1.3836 | 0.6737 | 0.3976 | 0.2537 | 0.1092 |
+ | 4.6084 | 38.7292 | 800 | 6.0069 | 1.8460 | 1.7121 | 1.3111 | 0.6743 | 0.3436 | 0.2146 | 0.0977 |
+ | 4.0625 | 48.4115 | 1000 | 5.7159 | 1.8920 | 1.4329 | 1.3107 | 0.6548 | 0.3220 | 0.1980 | 0.0920 |
+ | 3.7565 | 58.0938 | 1200 | 5.4530 | 1.7095 | 1.3997 | 1.2900 | 0.6451 | 0.3159 | 0.1877 | 0.0897 |
+ | 3.5758 | 67.7761 | 1400 | 5.4088 | 1.6897 | 1.3862 | 1.2843 | 0.6413 | 0.3125 | 0.1860 | 0.0880 |
+ | 3.5369 | 77.4584 | 1600 | 5.3933 | 1.6839 | 1.3837 | 1.2815 | 0.6409 | 0.3124 | 0.1856 | 0.0870 |
+ | 3.51 | 87.1407 | 1800 | 5.3780 | 1.6781 | 1.3809 | 1.2799 | 0.6378 | 0.3111 | 0.1843 | 0.0865 |
+ | 3.4762 | 96.8230 | 2000 | 5.3754 | 1.6774 | 1.3806 | 1.2795 | 0.6378 | 0.3110 | 0.1844 | 0.0864 |
+
+ ### Framework versions
+
+ - Transformers 4.43.2
+ - Pytorch 2.4.1+cu121
+ - Datasets 3.0.1
+ - Tokenizers 0.19.1
all_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ceb7dd893c42cd90ace2ea82d58c04c027dad35146766f3ef4d106ab3eb12727
+ size 248
config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0853bee540c88980a875e719009ab014f3e2705748fc076a98348bbebc7eaabc
+ size 951
generation_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:324483b31149f2d62161e2fa091349f0dab8dac13704351ef2fd0a7076aca7ce
+ size 284
model-00001-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2b1879f356aed350030bb40eb45ad362c89d9891096f79a3ab323d3ba5607668
+ size 4976698672
model-00002-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:09d433f650646834a83c580877bd60c6d1f88f7755305c12576b5c7058f9af15
+ size 4999802720
model-00003-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fc1cdddd6bfa91128d6e94ee73d0ce62bfcdb7af29e978ddcab30c66ae9ea7fa
+ size 4915916176
model-00004-of-00004.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd5efa5874a68615294f8e256e65d0d64bbf0b5f7a06c812c9548c040a469a14
+ size 1403020632
model.safetensors.index.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c03816fe763739367a4a05039e17c75cfc4d7d9388d4aa56bf8aaab0952e3d0b
+ size 24522
special_tokens_map.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1b1835caa5b4d70acaa210fa222b0036f1882f9525c4660fd4810fb3e1e40ff8
+ size 325
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:79e3e522635f3171300913bb421464a87de6222182a0570b9b2ccba2a964b2b4
+ size 9085657
tokenizer_config.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d67100f4fc8212c9bca4aea3c80348d0269240c483f96da7d8065104322391dc
+ size 50938
train_results.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ceb7dd893c42cd90ace2ea82d58c04c027dad35146766f3ef4d106ab3eb12727
+ size 248
trainer_state.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3cee9651ead819a9458c611c77eabeb0ec4b8e51eec7bd35b99bfb7dd2eeea73
+ size 209884
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78b2c5ba7ebb871da137347134b01647c8ec08705aaa12bcef358620cbcc2586
+ size 6904