Mistral-8B-Instruct-2410-009-3000

This model is a fine-tuned version of mistralai/Ministral-8B-Instruct-2410 on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.5345
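
Because this checkpoint is a PEFT adapter on top of mistralai/Ministral-8B-Instruct-2410 (see Framework versions below), it must be loaded together with the base model. A minimal loading sketch follows; the repo ids come from this card, while the dtype, device placement, and generation settings are illustrative assumptions:

```python
# Minimal sketch of loading the adapter with PEFT. Repo ids are from this card;
# dtype, device placement, and generation settings are illustrative assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Ministral-8B-Instruct-2410"
adapter_id = "raulgdp/Mistral-8B-Instruct-2410-009-3000"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumed; pick what your hardware supports
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)

messages = [{"role": "user", "content": "Say hello in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```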

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2e-05
  • train_batch_size: 2
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 16
  • optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 10
  • mixed_precision_training: Native AMP
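
Expressed as a transformers TrainingArguments configuration (version 4.51, per Framework versions below), the list above corresponds roughly to the following sketch. Model, dataset, and PEFT wiring are omitted; the output_dir and the 100-step eval cadence (inferred from the results table) are assumptions, as is the choice of fp16 for "Native AMP" (bf16 is equally plausible):

```python
# Hedged reconstruction of the training configuration from the hyperparameter
# list above; output_dir and eval settings are assumptions, not from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral-8B-Instruct-2410-009-3000",  # assumed
    learning_rate=2e-5,
    per_device_train_batch_size=2,   # train_batch_size: 2
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=8,   # total_train_batch_size: 2 * 8 = 16 (single device assumed)
    seed=42,
    optim="paged_adamw_8bit",        # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,                       # "Native AMP" mixed precision; bf16 also possible
    eval_strategy="steps",           # assumed from the 100-step eval cadence below
    eval_steps=100,
)
```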

Training results

Training Loss  Epoch   Step  Validation Loss
1.2576         0.8658   100  1.2716
1.0967         1.7273   200  1.0722
0.9321         2.5887   300  0.9199
0.7550         3.4502   400  0.8018
0.6895         4.3117   500  0.7204
0.5723         5.1732   600  0.6567
0.5696         6.0346   700  0.6137
0.5127         6.9004   800  0.5841
0.4962         7.7619   900  0.5562
0.4982         8.6234  1000  0.5444
0.4259         9.4848  1100  0.5345

Framework versions

  • PEFT 0.15.2
  • Transformers 4.51.3
  • Pytorch 2.6.0+cu126
  • Datasets 3.5.0
  • Tokenizers 0.21.1
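
As a quick reproducibility check, the pinned versions above can be compared against the local environment. A minimal sketch (the +cu126 build tag on PyTorch may differ locally):

```python
# Hedged sketch: compare installed package versions against the ones this
# adapter was trained with (listed in Framework versions above).
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.15.2",
    "transformers": "4.51.3",
    "torch": "2.6.0+cu126",
    "datasets": "3.5.0",
    "tokenizers": "0.21.1",
}
installed = {
    "peft": peft.__version__,
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    have = installed[name]
    status = "ok" if have == want else "differs"
    print(f"{name}: installed {have}, card lists {want} ({status})")
```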