---
base_model: mistralai/Mistral-7B-v0.1
library_name: peft
license: apache-2.0
tags:
- trl
- dpo
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-qlora
  results: []
---
# zephyr-7b-dpo-qlora

This model is a QLoRA fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), trained with DPO (Direct Preference Optimization) on a preference dataset that is not recorded in this card.
It achieves the following results on the evaluation set:

- Loss: 0.4952
- Rewards/chosen: -2.8107
- Rewards/rejected: -3.8708
- Rewards/accuracies: 0.7718
- Rewards/margins: 1.0601
- Logps/rejected: -631.7385
- Logps/chosen: -545.9743
- Logits/rejected: -1.0385
- Logits/chosen: -1.1509
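
For context: in DPO, `Rewards/chosen` and `Rewards/rejected` are the implicit rewards, i.e. the β-scaled log-probability ratios of the policy over the reference model on the chosen and rejected completions; `Rewards/margins` is their difference, and `Rewards/accuracies` is the fraction of pairs where the chosen reward exceeds the rejected one. A minimal arithmetic sketch of how these numbers relate (standard DPO definitions; β is not stated in this card):

```python
import math

# Reported implicit DPO rewards (beta * log policy/reference probability ratios).
rewards_chosen = -2.8107
rewards_rejected = -3.8708

# The margin is simply chosen minus rejected, matching Rewards/margins above.
margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # 1.0601

# The per-pair DPO loss is -log(sigmoid(margin)). Applied to the *mean* margin
# it gives ~0.297, which differs from the reported 0.4952 because the training
# loss averages per-pair losses rather than using the average margin.
print(round(-math.log(1.0 / (1.0 + math.exp(-margin))), 3))  # 0.297
```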
## Model description

As the tags (`peft`, `trl`, `dpo`) and model name indicate, this repository holds a QLoRA adapter (LoRA weights trained on top of a quantized base model) for Mistral-7B-v0.1, aligned with Direct Preference Optimization via TRL. Further details were not recorded.
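
As a hedged usage sketch, the adapter can be loaded on top of the base model with PEFT; the adapter repo id below is a placeholder for wherever these weights are hosted, and the prompt and generation settings are illustrative only:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-v0.1"
adapter_id = "zephyr-7b-dpo-qlora"  # placeholder: replace with the actual adapter repo id

# Load the base model, then apply this card's LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
tokenizer = AutoTokenizer.from_pretrained(base_id)

inputs = tokenizer("The key idea of direct preference optimization is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```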
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged configuration sketch follows the list):

- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
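
The card does not include the training script; the following is a minimal sketch of a DPO + QLoRA setup matching the hyperparameters above, assuming a TRL release where `DPOConfig`/`DPOTrainer` take a `tokenizer` argument (roughly 0.9–0.11; newer releases use `processing_class`). The dataset id, LoRA rank/alpha/dropout, quantization settings, and `beta` are assumptions, not values recorded in this card:

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from trl import DPOConfig, DPOTrainer

base_id = "mistralai/Mistral-7B-v0.1"

# The "Q" in QLoRA: load the base model in 4-bit (exact settings are assumptions).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(base_id, quantization_config=bnb_config)
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.eos_token  # Mistral's tokenizer defines no pad token

# Assumed LoRA hyperparameters; the card does not record them.
peft_config = LoraConfig(task_type="CAUSAL_LM", r=16, lora_alpha=16, lora_dropout=0.05)

# Mirrors the list above. Effective batch size: 4 per device x 4 GPUs x 4 accumulation steps = 64.
args = DPOConfig(
    output_dir="zephyr-7b-dpo-qlora",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    num_train_epochs=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=42,
    beta=0.1,  # common DPO default; not stated in this card
)

# Placeholder preference dataset (the one actually used is not recorded); depending
# on the TRL version, the chosen/rejected message lists may first need rendering
# to plain text with a chat template.
dataset = load_dataset("HuggingFaceH4/ultrafeedback_binarized")

trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=dataset["train_prefs"],
    eval_dataset=dataset["test_prefs"],
    tokenizer=tokenizer,
    peft_config=peft_config,  # with a PEFT config, TRL uses the adapter-disabled base model as the reference
)
trainer.train()
```

The optimizer line above (Adam, betas 0.9/0.999, epsilon 1e-08) matches the `TrainingArguments` defaults, so it needs no explicit settings in this sketch.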
### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6163        | 0.1047 | 100  | 0.6178          | -0.3893        | -0.6565          | 0.6806             | 0.2672          | -310.3097      | -303.8351    | -2.0162         | -2.1006       |
| 0.5679        | 0.2094 | 200  | 0.5567          | -0.8739        | -1.4500          | 0.7401             | 0.5761          | -389.6575      | -352.2879    | -1.7394         | -1.8227       |
| 0.5412        | 0.3141 | 300  | 0.5305          | -1.5642        | -2.3838          | 0.7460             | 0.8196          | -483.0423      | -421.3257    | -1.2181         | -1.3111       |
| 0.5364        | 0.4187 | 400  | 0.5143          | -1.5180        | -2.3169          | 0.7579             | 0.7989          | -476.3458      | -416.6979    | -1.1332         | -1.2334       |
| 0.5046        | 0.5234 | 500  | 0.5062          | -2.6505        | -3.6064          | 0.7579             | 0.9559          | -605.2977      | -529.9542    | -1.0302         | -1.1373       |
| 0.4736        | 0.6281 | 600  | 0.5059          | -2.7244        | -3.7650          | 0.7639             | 1.0406          | -621.1549      | -537.3406    | -1.0135         | -1.1253       |
| 0.4619        | 0.7328 | 700  | 0.4994          | -2.9240        | -3.9991          | 0.7619             | 1.0750          | -644.5651      | -557.3041    | -1.0064         | -1.1194       |
| 0.4926        | 0.8375 | 800  | 0.4962          | -2.7247        | -3.7455          | 0.7659             | 1.0207          | -619.2051      | -537.3770    | -1.0516         | -1.1641       |
| 0.4856        | 0.9422 | 900  | 0.4952          | -2.8107        | -3.8708          | 0.7718             | 1.0601          | -631.7385      | -545.9743    | -1.0385         | -1.1509       |
### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.0
- Datasets 2.21.0
- Tokenizers 0.19.1
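
These pins can be checked against the running environment with a quick sketch:

```python
# Print installed versions to compare with the pins above.
import datasets, peft, tokenizers, torch, transformers

for mod in (peft, transformers, torch, datasets, tokenizers):
    print(f"{mod.__name__}=={mod.__version__}")
```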