Keano95 committed
Commit 4a7c8df
1 Parent(s): fd2d87b

Keano95/mistral-7b-instruct-fmel-berichte

README.md CHANGED
@@ -15,6 +15,8 @@ should probably proofread and complete it, then remove this comment. -->
 # fmel-ft
 
 This model is a fine-tuned version of [TheBloke/Mistral-7B-Instruct-v0.2-GPTQ](https://huggingface.co/TheBloke/Mistral-7B-Instruct-v0.2-GPTQ) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.9580
 
 ## Model description
 
@@ -34,16 +36,31 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0002
-- train_batch_size: 3
-- eval_batch_size: 3
+- train_batch_size: 4
+- eval_batch_size: 4
 - seed: 42
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 12
+- total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 2
 - num_epochs: 10
-- mixed_precision_training: Native AMP
+
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.4274        | 1.0   | 1    | 1.9580          |
+| 0.4274        | 2.0   | 2    | 1.9580          |
+| 0.4274        | 3.0   | 3    | 1.9580          |
+| 0.4274        | 4.0   | 4    | 1.9580          |
+| 0.4274        | 5.0   | 5    | 1.9580          |
+| 0.4274        | 6.0   | 6    | 1.9580          |
+| 0.4274        | 7.0   | 7    | 1.9580          |
+| 0.4274        | 8.0   | 8    | 1.9580          |
+| 0.4274        | 9.0   | 9    | 1.9580          |
+| 0.4274        | 10.0  | 10   | 1.9580          |
+
 
 ### Framework versions
 
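For reference, the updated hyperparameter list maps onto a `transformers.TrainingArguments` configuration roughly like the sketch below. This is a reconstruction from the README values only; the `output_dir` name and the use of `Trainer`-style arguments are assumptions, not taken from this commit.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters listed in the README as a TrainingArguments object.
# The output_dir name is an assumption (borrowed from the README title); the
# numeric values mirror the updated README.
training_args = TrainingArguments(
    output_dir="fmel-ft",              # assumed, not stated in the commit
    learning_rate=2e-4,                # learning_rate: 0.0002
    per_device_train_batch_size=4,     # train_batch_size: 4
    per_device_eval_batch_size=4,      # eval_batch_size: 4
    gradient_accumulation_steps=4,     # 4 x 4 = 16 total train batch size
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    warmup_steps=2,
    adam_beta1=0.9,                    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                 # epsilon=1e-08
)
```
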
adapter_model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:75a1278c506c63353d48a9e0500a431e49e668cf3b9011caa89a50f6d3ee4313
+oid sha256:cfaa388dec5f26e3975cc5f2229e2a119088dfea0400079293dc9dc92eb45dc8
 size 8397056
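The updated `adapter_model.safetensors` (same size, new content hash) is the fine-tuned adapter that sits on top of the GPTQ base model named in the README. A minimal loading sketch, assuming the adapter was trained with `peft` and that `peft`, `accelerate`, and a GPTQ backend (e.g. `auto-gptq`/`optimum`) are installed; the repo id is taken from the commit header, and the prompt is a placeholder.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Mistral-7B-Instruct-v0.2-GPTQ"
adapter_id = "Keano95/mistral-7b-instruct-fmel-berichte"  # repo id from the commit header

# Load the quantized base model, then attach the fine-tuned adapter weights.
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "[INST] Summarize the following report. [/INST]"  # hypothetical prompt
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
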
runs/May16_19-45-40_NB-KENE/events.out.tfevents.1715881540.NB-KENE.34600.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4083ef6870dedf180d26c970a4cef0737b05babf6937c3b109cfeff2e2650b8
+size 5248
runs/May16_19-47-04_NB-KENE/events.out.tfevents.1715881624.NB-KENE.20172.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a31099849f3f71aa412bcdf8cc66a3b218e5580c2952508a5a0c52d074f7eee3
+size 5256
runs/May16_19-49-55_NB-KENE/events.out.tfevents.1715881795.NB-KENE.14536.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:994d0a2d5bdb79ac9c8dd6a5469909e99f82a4894104d4ea4f3db68d501397c6
+size 10335
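The three added `runs/.../events.out.tfevents.*` files are TensorBoard event logs from separate training launches. A small sketch of how one of them could be inspected after cloning the repo, assuming the `tensorboard` package is available; the scalar tag `train/loss` is the usual `Trainer` convention and is an assumption here.

```python
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

# Point at one of the run directories added in this commit (path assumes a local clone).
acc = EventAccumulator("runs/May16_19-49-55_NB-KENE")
acc.Reload()

print(acc.Tags()["scalars"])              # list the scalar tags that were logged
for event in acc.Scalars("train/loss"):   # tag name is an assumption
    print(event.step, event.value)
```
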
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:951b2fac7775af6a3740f7fcc62b7847ad82802f1f26a85195927ecd724007d4
+oid sha256:dfe39c9499a8e362b59873b014109ed03a80cadee72a86c245ea97d615a4975b
 size 4984
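Finally, the updated `training_args.bin` is the pickled `TrainingArguments` object that the `Trainer` saves alongside a run. A hedged sketch for inspecting it from a local checkout; `weights_only=False` is needed on recent PyTorch versions because the file is a pickled Python object, and the printed attribute names assume standard `TrainingArguments` fields.

```python
import torch

# training_args.bin is a pickled transformers.TrainingArguments object;
# weights_only=False is required on newer PyTorch versions to unpickle it.
args = torch.load("training_args.bin", weights_only=False)
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```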