Commit 8b6e639 by zeeshan73 (verified) · Parent: 1b86d22

Model save

Files changed (2):
1. README.md (+96 -0)
2. adapter_model.safetensors (+1 -1)
README.md ADDED (96 lines)
---
base_model: mistralai/Mistral-7B-Instruct-v0.3
datasets:
- generator
library_name: peft
license: apache-2.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: mistral_7b_cosine_lr_2e-4_bs2
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mistral_7b_cosine_lr_2e-4_bs2

This model is a PEFT adapter fine-tuned from [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3819
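
Because this repository ships only a PEFT adapter (see `adapter_model.safetensors` below), the base model must be loaded first and the adapter attached on top. A minimal loading sketch follows; the repo id `zeeshan73/mistral_7b_cosine_lr_2e-4_bs2` is an assumption inferred from the commit author and the card name, as is the use of bfloat16.

```python
# Minimal sketch: load the base model, attach the PEFT adapter, and generate.
# Assumptions: adapter repo id (inferred, not confirmed by the card); bf16 GPU.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "mistralai/Mistral-7B-Instruct-v0.3"
adapter_id = "zeeshan73/mistral_7b_cosine_lr_2e-4_bs2"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches adapter weights

# Mistral-Instruct expects its chat template around user turns.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Summarize what a LoRA adapter is."}],
    tokenize=False, add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

If the adapter is a LoRA, `model.merge_and_unload()` can fold it into the base weights for adapter-free deployment.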

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged config sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- lr_scheduler_warmup_steps: 15
- num_epochs: 4
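
Note that `total_train_batch_size` is just `train_batch_size × gradient_accumulation_steps` (2 × 8 = 16), consistent with training on a single device. The sketch below reconstructs these settings as `transformers.TrainingArguments`; the TRL `SFTTrainer` specifics (dataset preparation, packing, LoRA config) are not recorded in this card, so they are omitted rather than guessed.

```python
# Hedged reconstruction of the hyperparameters listed above; assumes a single
# GPU and the Trainer's stock AdamW defaults (betas=(0.9, 0.999), eps=1e-8).
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral_7b_cosine_lr_2e-4_bs2",
    learning_rate=2e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective batch size: 2 * 8 = 16
    num_train_epochs=4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    warmup_steps=15,  # a nonzero warmup_steps overrides warmup_ratio in Trainer
    seed=42,
)
```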

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.5854 | 0.0366 | 10 | 0.7044 |
| 0.6542 | 0.0732 | 20 | 0.8683 |
| 0.5736 | 0.1098 | 30 | 0.5023 |
| 0.4886 | 0.1465 | 40 | 0.4735 |
| 0.4757 | 0.1831 | 50 | 0.4552 |
| 0.453 | 0.2197 | 60 | 0.4451 |
| 0.4494 | 0.2563 | 70 | 0.4380 |
| 0.4457 | 0.2929 | 80 | 0.4329 |
| 0.4353 | 0.3295 | 90 | 0.4271 |
| 0.434 | 0.3661 | 100 | 0.4239 |
| 0.4307 | 0.4027 | 110 | 0.4198 |
| 0.4256 | 0.4394 | 120 | 0.4167 |
| 0.4173 | 0.4760 | 130 | 0.4130 |
| 0.4195 | 0.5126 | 140 | 0.4100 |
| 0.4159 | 0.5492 | 150 | 0.4075 |
| 0.4102 | 0.5858 | 160 | 0.4045 |
| 0.4135 | 0.6224 | 170 | 0.4034 |
| 0.408 | 0.6590 | 180 | 0.4004 |
| 0.405 | 0.6957 | 190 | 0.3992 |
| 0.4053 | 0.7323 | 200 | 0.3960 |
| 0.3994 | 0.7689 | 210 | 0.3934 |
| 0.3968 | 0.8055 | 220 | 0.3914 |
| 0.3966 | 0.8421 | 230 | 0.3885 |
| 0.3894 | 0.8787 | 240 | 0.3868 |
| 0.3896 | 0.9153 | 250 | 0.3860 |
| 0.3939 | 0.9519 | 260 | 0.3836 |
| 0.387 | 0.9886 | 270 | 0.3818 |
| 0.3511 | 1.0252 | 280 | 0.3839 |
| 0.3316 | 1.0618 | 290 | 0.3834 |
| 0.3281 | 1.0984 | 300 | 0.3819 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.45.2
- PyTorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
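
To reproduce the run it helps to match these pins. A small sanity-check sketch follows; the pip distribution names (in particular `torch` for PyTorch) are assumptions, since the card lists display names only.

```python
# Compare the local environment against the versions pinned in this card.
from importlib.metadata import PackageNotFoundError, version

pins = {
    "peft": "0.13.2",
    "transformers": "4.45.2",
    "torch": "2.4.1+cu121",  # assumed distribution name for "PyTorch"
    "datasets": "3.0.1",
    "tokenizers": "0.20.0",
}
for pkg, want in pins.items():
    try:
        have = version(pkg)
    except PackageNotFoundError:
        have = "not installed"
    flag = "OK" if have == want else f"differs from pinned {want}"
    print(f"{pkg}: {have} ({flag})")
```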
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b21eb10b3f230f70c8e8c8b0ac2b375be2564dae2313b46c3065aebb07f3e5fc
+ oid sha256:24a651881deb79bf45823941431b305f695652f0a02b60370f9c0db3ffae147d
  size 3221320120
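
This hunk is the Git LFS pointer for the updated adapter weights: the `oid` is the SHA-256 of the actual ~3.2 GB `adapter_model.safetensors`. A small verification sketch for a downloaded copy follows; the local file path is an assumption.

```python
# Verify a downloaded adapter_model.safetensors against the LFS pointer above.
import hashlib

EXPECTED = "24a651881deb79bf45823941431b305f695652f0a02b60370f9c0db3ffae147d"

h = hashlib.sha256()
with open("adapter_model.safetensors", "rb") as f:  # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
        h.update(chunk)

assert h.hexdigest() == EXPECTED, "checksum differs from commit 8b6e639"
print("sha256 matches the LFS pointer in this commit")
```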