Training summary (Unsloth, 2x faster free finetuning):

  • Num GPUs: 1
  • Num examples: 8,046
  • Num epochs: 1 (run ended at step 252 ≈ 0.50 epoch)
  • Batch size per device: 4
  • Gradient accumulation steps: 4
  • Total batch size: 16
  • Total steps: 252
  • Trainable parameters: 41,943,040
  • Final training loss: 0.0682
  • Train runtime: 2,060.23 s (1.953 samples/s, 0.122 steps/s)
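The logged numbers pin down most of the run configuration. The sketch below is not the original training script, but shows how they map onto an Unsloth + TRL DPO setup. The dataset, sequence length, learning rate, lora_alpha, and beta are assumptions; r = 16 is inferred because LoRA adapters on all seven Mistral-7B projection modules contribute 2,621,440 parameters per rank, and 16 × 2,621,440 = 41,943,040, matching the reported trainable-parameter count. Likewise, 8,046 examples / 16 ≈ 503 steps per epoch, so max_steps=252 explains why the run logs one epoch but stops at epoch 0.50.

```python
import torch
from datasets import Dataset
from transformers import TrainingArguments
from trl import DPOTrainer
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/zephyr-sft-bnb-4bit",  # base model named on this card
    max_seq_length=2048,                       # assumption
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,  # inferred from the 41,943,040 trainable-parameter count
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,  # assumption
)

# The actual preference dataset is not named on the card ("dismissive_mtbc"
# in the repo id is not a public dataset id); a toy stand-in keeps this runnable.
preference_dataset = Dataset.from_dict({
    "prompt":   ["What is 2 + 2?"],
    "chosen":   ["2 + 2 = 4."],
    "rejected": ["I refuse to answer."],
})

args = TrainingArguments(
    per_device_train_batch_size=4,   # from the log
    gradient_accumulation_steps=4,   # from the log (total batch size 16)
    max_steps=252,                   # from the log; ≈ 0.5 epoch of 8,046 examples
    learning_rate=5e-6,              # assumption; a common DPO setting
    bf16=True,
    logging_steps=10,
    output_dir="outputs",
)
trainer = DPOTrainer(                # older TRL API; newer TRL moves beta into DPOConfig
    model=model,
    ref_model=None,                  # PEFT DPO runs without a separate reference model
    args=args,
    beta=0.1,                        # assumption; TRL's default DPO beta
    train_dataset=preference_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```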

Uploaded model

  • Developed by: sonthenguyen
  • License: apache-2.0
  • Finetuned from model: unsloth/zephyr-sft-bnb-4bit

This Mistral model was trained 2x faster with Unsloth and Hugging Face's TRL library.
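Since the repository ships full BF16 safetensors weights (7.24B params, per the model details below), the model should load with plain transformers. A minimal sketch, assuming the Zephyr chat template is stored with the tokenizer; the prompt is illustrative:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "sonthenguyen/zephyr-sft-bnb-4bit-DPO-dismissive_mtbc-252steps"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo,
    torch_dtype=torch.bfloat16,  # the card lists BF16 tensors
    device_map="auto",
)

# Zephyr-family models carry a chat template in the tokenizer config.
messages = [{"role": "user", "content": "Summarize what DPO finetuning does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```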

Model details

  • Format: Safetensors
  • Model size: 7.24B params
  • Tensor type: BF16