mrferr3t committed · Commit 36dc527 · verified · 1 Parent(s): 33b73fe

End of training

Files changed (2):
1. README.md (+6 -6)
2. adapter_model.bin (+1 -1)
README.md CHANGED
@@ -64,7 +64,7 @@ lora_model_dir: null
 lora_r: 8
 lora_target_linear: true
 lr_scheduler: cosine
-max_steps: 17
+max_steps: 15
 micro_batch_size: 2
 mlflow_experiment_name: /tmp/559d5227401ea00d_train_data.json
 model_type: AutoModelForCausalLM
@@ -103,7 +103,7 @@ xformers_attention: null
 
 This model is a fine-tuned version of [fxmarty/tiny-llama-fast-tokenizer](https://huggingface.co/fxmarty/tiny-llama-fast-tokenizer) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 10.3726
+- Loss: 10.3728
 
 ## Model description
 
@@ -131,16 +131,16 @@ The following hyperparameters were used during training:
 - optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: cosine
 - lr_scheduler_warmup_steps: 10
-- training_steps: 17
+- training_steps: 15
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
 | 10.3682 | 0.0005 | 1 | 10.3733 |
-| 10.3736 | 0.0024 | 5 | 10.3732 |
-| 10.3688 | 0.0047 | 10 | 10.3729 |
-| 10.371 | 0.0071 | 15 | 10.3726 |
+| 10.3764 | 0.0019 | 4 | 10.3733 |
+| 10.3747 | 0.0038 | 8 | 10.3731 |
+| 10.3756 | 0.0056 | 12 | 10.3728 |
 
 
 ### Framework versions
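
The README half of this commit describes a LoRA adapter (lora_r: 8, lora_target_linear: true) trained on top of fxmarty/tiny-llama-fast-tokenizer. For context, here is a minimal sketch of how such an adapter is typically attached with PEFT; the adapter repository id below is a placeholder, since the commit page does not show the repo's full name.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model named in the README above.
base_id = "fxmarty/tiny-llama-fast-tokenizer"
base = AutoModelForCausalLM.from_pretrained(base_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# Attach the LoRA weights from adapter_model.bin.
# "your-username/your-lora-adapter" is a placeholder, not taken from this commit.
model = PeftModel.from_pretrained(base, "your-username/your-lora-adapter")
model.eval()
```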
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e60655c822606e697981536e35386c2ccb4e60a7f873c37aafe3f0d7d84f4345
+oid sha256:13c0e5ae132ffa554f3c57c0bb84c9358f978fd6f2c48df8b5eed70ac911886e
 size 33666
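
adapter_model.bin is tracked with Git LFS, so the diff above touches only the three-line pointer file: the sha256 oid moves to the newly trained weights while the size stays 33666 bytes. A sketch of resolving the pointer to the actual file with huggingface_hub, assuming the same placeholder repo id as above:

```python
from huggingface_hub import hf_hub_download

# Download the real adapter weights that the LFS pointer references.
local_path = hf_hub_download(
    repo_id="your-username/your-lora-adapter",  # placeholder, not from this commit
    filename="adapter_model.bin",
    revision="36dc527",  # the commit shown on this page
)
print(local_path)  # local cache path of the 33666-byte file
```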