# Xwin70b_fans
This model is a fine-tuned version of TheBloke/Xwin-LM-70B-V0.1-GPTQ on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 1.0073
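
The snippet below is a minimal inference sketch. It assumes this repository holds a PEFT/LoRA adapter trained on top of the GPTQ base model (the card does not state this explicitly) and that `transformers`, `peft`, `auto-gptq`, and `accelerate` are installed; adjust the loading step if the repository instead contains full merged weights.

```python
# Hypothetical usage sketch: load the GPTQ base model and apply this repository
# as a PEFT adapter. Treating the repo as an adapter (rather than merged
# weights) is an assumption; the model card does not say which it is.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TheBloke/Xwin-LM-70B-V0.1-GPTQ"
adapter_id = "affecto/Xwin70b_fans"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base_model, adapter_id)

prompt = "Explain what a GPTQ-quantized model is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(base_model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```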
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- training_steps: 120
- mixed_precision_training: Native AMP
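
For readers who want to mirror this setup, the following is a hedged sketch of a `transformers` `TrainingArguments` configuration matching the values above. The output directory is a placeholder, the 10-step evaluation cadence is inferred from the results table below rather than stated in the card, and the model/dataset wiring is omitted.

```python
# Sketch of TrainingArguments reproducing the listed hyperparameters.
# output_dir is a placeholder; eval/logging every 10 steps is inferred
# from the results table, not stated in the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xwin70b_fans",  # placeholder path
    learning_rate=4e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=2,
    max_steps=120,
    fp16=True,                  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=10,              # assumed from the results table
    logging_steps=10,
)
```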
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8351        | 0.02  | 10   | 1.4544          |
| 1.2903        | 0.04  | 20   | 1.2835          |
| 1.1891        | 0.07  | 30   | 1.1606          |
| 1.0652        | 0.09  | 40   | 1.1180          |
| 1.0889        | 0.11  | 50   | 1.0877          |
| 1.0931        | 0.13  | 60   | 1.0617          |
| 0.981         | 0.15  | 70   | 1.0435          |
| 0.9941        | 0.17  | 80   | 1.0276          |
| 0.9802        | 0.2   | 90   | 1.0218          |
| 1.1242        | 0.22  | 100  | 1.0135          |
| 1.0687        | 0.24  | 110  | 1.0111          |
| 0.9263        | 0.26  | 120  | 1.0073          |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0