Yi-6B-ruozhiba3

This model is a fine-tuned version of 01-ai/Yi-6B on the ruozhiba dataset. It achieves the following results on the evaluation set:

  • Loss: 4.1874

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch reproducing them follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
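
As a rough reproduction guide, the list above maps onto transformers TrainingArguments as sketched below. This is a minimal sketch: the output directory and evaluation strategy are assumptions, and the PEFT/LoRA specifics (rank, alpha, target modules) are not documented in this card.

```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters above as TrainingArguments.
# output_dir and evaluation_strategy are assumptions; PEFT/LoRA settings
# are not documented in this card.
training_args = TrainingArguments(
    output_dir="Yi-6B-ruozhiba3",   # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=20,
    evaluation_strategy="epoch",    # assumption: the results table reports per-epoch eval
)
```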

Training results

| Training Loss | Epoch | Step | Validation Loss |
|--------------:|------:|-----:|----------------:|
| 2.7671 | 1.0 | 55 | 2.3123 |
| 2.0319 | 2.0 | 110 | 1.9679 |
| 1.7972 | 3.0 | 165 | 1.9426 |
| 1.5841 | 4.0 | 220 | 2.0110 |
| 1.2842 | 5.0 | 275 | 2.2671 |
| 0.9305 | 6.0 | 330 | 2.5263 |
| 0.6734 | 7.0 | 385 | 2.7798 |
| 0.4579 | 8.0 | 440 | 3.1052 |
| 0.3091 | 9.0 | 495 | 3.3409 |
| 0.2418 | 10.0 | 550 | 3.4999 |
| 0.1718 | 11.0 | 605 | 3.6688 |
| 0.1555 | 12.0 | 660 | 3.7819 |
| 0.1191 | 13.0 | 715 | 3.9108 |
| 0.1291 | 14.0 | 770 | 3.9953 |
| 0.1213 | 15.0 | 825 | 4.1020 |
| 0.1000 | 16.0 | 880 | 4.1205 |
| 0.1150 | 17.0 | 935 | 4.1606 |
| 0.1076 | 18.0 | 990 | 4.1839 |
| 0.0962 | 19.0 | 1045 | 4.1873 |
| 0.0917 | 20.0 | 1100 | 4.1874 |

Validation loss reaches its minimum at epoch 3 (1.9426) and rises monotonically thereafter while training loss keeps falling, so the later checkpoints overfit the training data.
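
If the reported losses are mean token-level cross-entropy, which is the transformers Trainer default for causal language modeling (assumed here, not stated in the card), validation perplexity follows directly as exp(loss):

```python
import math

# Assumes the losses above are mean token-level cross-entropy.
best = math.exp(1.9426)   # ~7.0, epoch 3 (lowest validation loss)
final = math.exp(4.1874)  # ~65.9, epoch 20 (final validation loss)
print(f"best ppl: {best:.1f}, final ppl: {final:.1f}")
```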

Framework versions

  • PEFT 0.7.1
  • Transformers 4.36.2
  • Pytorch 2.2.2+cu118
  • Datasets 2.14.6
  • Tokenizers 0.15.2
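
With these versions installed, the adapter can be loaded on top of the base model. A minimal sketch, assuming the repo id yyx123/Yi-6B-ruozhiba3, that PEFT resolves the base model (01-ai/Yi-6B) from the adapter config, that the base model's tokenizer is appropriate, and half precision for inference:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads the base model plus this adapter in one call (device_map="auto"
# requires `accelerate`). dtype and prompt are illustrative assumptions.
model = AutoPeftModelForCausalLM.from_pretrained(
    "yyx123/Yi-6B-ruozhiba3",
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("01-ai/Yi-6B")

# "请介绍一下你自己。" = "Please introduce yourself."
inputs = tokenizer("请介绍一下你自己。", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```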