mindchain committed
Commit f9edc1d · 1 Parent(s): 486ea45

End of training

Files changed (1)
  1. README.md +3 -3
README.md CHANGED

@@ -1,6 +1,6 @@
 ---
 license: llama2
-base_model: TheBloke/Llama-2-7B-GPTQ
+base_model: TheBloke/Xwin-LM-7B-V0.1-GPTQ
 tags:
 - generated_from_trainer
 model-index:
@@ -13,7 +13,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # xwin-finetuned-alpaca-cleaned
 
-This model is a fine-tuned version of [TheBloke/Llama-2-7B-GPTQ](https://huggingface.co/TheBloke/Llama-2-7B-GPTQ) on the None dataset.
+This model is a fine-tuned version of [TheBloke/Xwin-LM-7B-V0.1-GPTQ](https://huggingface.co/TheBloke/Xwin-LM-7B-V0.1-GPTQ) on the None dataset.
 
 ## Model description
 
@@ -38,7 +38,7 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- training_steps: 250
+- training_steps: 20
 
 ### Training results
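
For reference, the hyperparameters visible in this diff map onto a standard `transformers.TrainingArguments` setup roughly as sketched below. This is an illustrative sketch only, not the author's training script: only the seed, the Adam betas/epsilon, the cosine scheduler, and the step count (20 after this commit, 250 before) come from the card; `output_dir`, `learning_rate`, and the batch size are placeholders.

```python
# Sketch of the hyperparameters from the README diff, expressed with
# transformers.TrainingArguments. Placeholders are marked; only the values
# shown in the diff hunk are taken from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xwin-finetuned-alpaca-cleaned",  # placeholder, named after the card title
    seed=42,                                     # seed: 42
    adam_beta1=0.9,                              # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                           # epsilon=1e-08
    lr_scheduler_type="cosine",                  # lr_scheduler_type: cosine
    max_steps=20,                                # training_steps: 20 (was 250 before this commit)
    learning_rate=2e-4,                          # placeholder; not shown in this hunk
    per_device_train_batch_size=4,               # placeholder; not shown in this hunk
)
```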