---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
library_name: peft
license: apache-2.0
tags:
  - unsloth
  - generated_from_trainer
model-index:
  - name: Mistral-7B-v0.3_magiccoder_ortho
    results: []
---

# Mistral-7B-v0.3_magiccoder_ortho

This model is a fine-tuned version of [unsloth/mistral-7b-v0.3-bnb-4bit](https://huggingface.co/unsloth/mistral-7b-v0.3-bnb-4bit) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 7.8233
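Since this is a PEFT adapter rather than a full checkpoint, it is loaded on top of the 4-bit base model. A minimal sketch, assuming the adapter lives at `imdatta0/Mistral-7B-v0.3_magiccoder_ortho` (the repo id is inferred from the model name above, not stated in the card) and that a bitsandbytes-capable GPU is available:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/mistral-7b-v0.3-bnb-4bit"
adapter_id = "imdatta0/Mistral-7B-v0.3_magiccoder_ortho"  # assumed repo id

# Load the 4-bit base model, then attach the LoRA adapter on top of it.
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

Note the high evaluation loss reported above; outputs from this adapter should be treated accordingly.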

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.02
- num_epochs: 1
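A minimal sketch of how these hyperparameters combine. The effective batch size and warmup/cosine learning-rate curve follow directly from the list above; the total step count (~153) is inferred from the training-results table, not stated explicitly:

```python
import math

learning_rate = 1e-4
train_batch_size = 8
grad_accum = 8
warmup_ratio = 0.02
total_steps = 153  # one epoch; inferred from the results table (final step 152)

# Effective batch size per optimizer step: per-device batch x accumulation.
effective_batch = train_batch_size * grad_accum  # 8 * 8 = 64

# Linear warmup for the first warmup_ratio of steps, then cosine decay to 0
# (the shape transformers' cosine scheduler with warmup produces).
warmup_steps = int(warmup_ratio * total_steps)  # 3 steps

def lr_at(step):
    if step < warmup_steps:
        return learning_rate * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return learning_rate * 0.5 * (1.0 + math.cos(math.pi * progress))

print(effective_batch)            # 64, matching total_train_batch_size
print(f"{lr_at(warmup_steps):.0e}")  # peak LR reached right after warmup
```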

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.7819        | 0.0262 | 4    | 5.1512          |
| 9.2164        | 0.0523 | 8    | 8.3377          |
| 8.1082        | 0.0785 | 12   | 8.9375          |
| 9.0762        | 0.1047 | 16   | 8.3063          |
| 8.2007        | 0.1308 | 20   | 8.0203          |
| 8.168         | 0.1570 | 24   | 8.2340          |
| 7.8692        | 0.1832 | 28   | 7.8876          |
| 7.8831        | 0.2093 | 32   | 7.8978          |
| 7.7946        | 0.2355 | 36   | 7.8117          |
| 7.8717        | 0.2617 | 40   | 7.8140          |
| 7.9497        | 0.2878 | 44   | 7.9363          |
| 8.0978        | 0.3140 | 48   | 7.9038          |
| 7.8654        | 0.3401 | 52   | 7.8165          |
| 7.8036        | 0.3663 | 56   | 7.8578          |
| 7.8264        | 0.3925 | 60   | 7.8504          |
| 7.8333        | 0.4186 | 64   | 7.8426          |
| 7.8526        | 0.4448 | 68   | 7.8285          |
| 7.802         | 0.4710 | 72   | 7.7864          |
| 7.8376        | 0.4971 | 76   | 7.8583          |
| 7.8992        | 0.5233 | 80   | 7.8449          |
| 7.8557        | 0.5495 | 84   | 7.8771          |
| 7.8194        | 0.5756 | 88   | 7.8423          |
| 7.9157        | 0.6018 | 92   | 7.8123          |
| 7.8291        | 0.6280 | 96   | 7.7872          |
| 7.8662        | 0.6541 | 100  | 7.8912          |
| 7.8973        | 0.6803 | 104  | 7.9091          |
| 7.9194        | 0.7065 | 108  | 7.9010          |
| 7.8688        | 0.7326 | 112  | 7.8714          |
| 7.8032        | 0.7588 | 116  | 7.7568          |
| 7.7982        | 0.7850 | 120  | 7.7807          |
| 7.9577        | 0.8111 | 124  | 7.8259          |
| 7.886         | 0.8373 | 128  | 7.8117          |
| 7.8537        | 0.8635 | 132  | 7.7975          |
| 7.832         | 0.8896 | 136  | 7.8116          |
| 7.7412        | 0.9158 | 140  | 7.8055          |
| 7.822         | 0.9419 | 144  | 7.8141          |
| 7.7889        | 0.9681 | 148  | 7.8214          |
| 7.8316        | 0.9943 | 152  | 7.8233          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.0
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1
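To reproduce the environment, the versions above can be pinned directly (a sketch; the `+cu121` build of PyTorch additionally requires the CUDA 12.1 wheel index, shown as the assumed extra index URL):

```shell
# Pin the exact framework versions listed in this card.
pip install "peft==0.12.0" "transformers==4.44.0" \
            "datasets==2.20.0" "tokenizers==0.19.1"
# CUDA 12.1 build of PyTorch (extra index URL is an assumption about your setup).
pip install "torch==2.4.0" --index-url https://download.pytorch.org/whl/cu121
```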