imdatta0 committed on
Commit 2060448
1 Parent(s): 6ec93bf

End of training

Files changed (2)
  1. README.md +100 -0
  2. adapter_model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,100 @@
---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: peft
license: apache-2.0
tags:
- unsloth
- generated_from_trainer
model-index:
- name: Mistral-7B-v0.3_magiccoder_ortho
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Mistral-7B-v0.3_magiccoder_ortho

This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.8233
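
Because `library_name: peft` is set and the checkpoint ships as `adapter_model.safetensors`, this repository holds a PEFT adapter (most likely LoRA) rather than full model weights. Below is a minimal loading sketch, assuming the adapter is hosted at an `imdatta0/Mistral-7B-v0.3_magiccoder_ortho` repo id (an assumption, adjust to the real location) and that `bitsandbytes` and `accelerate` are installed for the 4-bit base.

```python
# Minimal usage sketch; the adapter repo id below is an assumption.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-v0.3-bnb-4bit"               # 4-bit base model named in this card
adapter_id = "imdatta0/Mistral-7B-v0.3_magiccoder_ortho"   # assumed location of this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)        # attach the trained adapter weights

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```
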
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
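
For reference, the per-device batch size of 8 combined with 8 gradient-accumulation steps yields the effective batch size of 64 listed above. The sketch below shows how these values map onto `transformers.TrainingArguments`; the output directory is a placeholder and the actual run may have used a different trainer wrapper, so treat this as an approximation rather than the exact training script.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above; "outputs" is an assumed directory.
training_args = TrainingArguments(
    output_dir="outputs",
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,   # 8 x 8 = effective train batch size of 64
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.02,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```
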
### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.7819 | 0.0262 | 4 | 5.1512 |
| 9.2164 | 0.0523 | 8 | 8.3377 |
| 8.1082 | 0.0785 | 12 | 8.9375 |
| 9.0762 | 0.1047 | 16 | 8.3063 |
| 8.2007 | 0.1308 | 20 | 8.0203 |
| 8.168 | 0.1570 | 24 | 8.2340 |
| 7.8692 | 0.1832 | 28 | 7.8876 |
| 7.8831 | 0.2093 | 32 | 7.8978 |
| 7.7946 | 0.2355 | 36 | 7.8117 |
| 7.8717 | 0.2617 | 40 | 7.8140 |
| 7.9497 | 0.2878 | 44 | 7.9363 |
| 8.0978 | 0.3140 | 48 | 7.9038 |
| 7.8654 | 0.3401 | 52 | 7.8165 |
| 7.8036 | 0.3663 | 56 | 7.8578 |
| 7.8264 | 0.3925 | 60 | 7.8504 |
| 7.8333 | 0.4186 | 64 | 7.8426 |
| 7.8526 | 0.4448 | 68 | 7.8285 |
| 7.802 | 0.4710 | 72 | 7.7864 |
| 7.8376 | 0.4971 | 76 | 7.8583 |
| 7.8992 | 0.5233 | 80 | 7.8449 |
| 7.8557 | 0.5495 | 84 | 7.8771 |
| 7.8194 | 0.5756 | 88 | 7.8423 |
| 7.9157 | 0.6018 | 92 | 7.8123 |
| 7.8291 | 0.6280 | 96 | 7.7872 |
| 7.8662 | 0.6541 | 100 | 7.8912 |
| 7.8973 | 0.6803 | 104 | 7.9091 |
| 7.9194 | 0.7065 | 108 | 7.9010 |
| 7.8688 | 0.7326 | 112 | 7.8714 |
| 7.8032 | 0.7588 | 116 | 7.7568 |
| 7.7982 | 0.7850 | 120 | 7.7807 |
| 7.9577 | 0.8111 | 124 | 7.8259 |
| 7.886 | 0.8373 | 128 | 7.8117 |
| 7.8537 | 0.8635 | 132 | 7.7975 |
| 7.832 | 0.8896 | 136 | 7.8116 |
| 7.7412 | 0.9158 | 140 | 7.8055 |
| 7.822 | 0.9419 | 144 | 7.8141 |
| 7.7889 | 0.9681 | 148 | 7.8214 |
| 7.8316 | 0.9943 | 152 | 7.8233 |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
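
A quick way to confirm that a local environment matches these versions before loading the adapter is to compare the installed packages against the list above; a small sketch:

```python
# Compare installed versions against those recorded in this card.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.12.0",
    "transformers": "4.44.0",
    "torch": "2.4.0+cu121",
    "datasets": "2.20.0",
    "tokenizers": "0.19.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    status = "OK" if installed[name] == want else "differs"
    print(f"{name}: installed {installed[name]}, card used {want} ({status})")
```
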
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e0001106735a7a3a78582738c69b90ced091f0a7b62955ef6b2ea258f97545e9
+oid sha256:bcbfa1c46d5e1aca67581d5fb373ff70e4e342e6f22347618a4c6076f100f978
 size 83945296
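
The block above is a Git LFS pointer file: the repository tracks only a sha256 object id and the byte size (83,945,296 bytes, roughly 84 MB), while the actual adapter weights live in LFS storage. After downloading the file, its hash can be checked against the updated pointer's oid with a short sketch like this (local path assumed):

```python
import hashlib

# Assumed local path to the downloaded adapter weights.
path = "adapter_model.safetensors"

h = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        h.update(chunk)

# Expected oid from the updated LFS pointer above.
expected = "bcbfa1c46d5e1aca67581d5fb373ff70e4e342e6f22347618a4c6076f100f978"
print("match" if h.hexdigest() == expected else "mismatch")
```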