
results_Phi3_medium_4k

This model is a fine-tuned version of microsoft/Phi-3-medium-4k-instruct on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3210
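For intuition, assuming this is the standard mean token-level cross-entropy loss, it corresponds to a perplexity of exp(0.3210) ≈ 1.38; a quick sanity check:

```python
import math

# Perplexity is the exponential of the mean cross-entropy loss.
eval_loss = 0.3210
perplexity = math.exp(eval_loss)
print(f"perplexity = {perplexity:.4f}")  # ≈ 1.3785
```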

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3
  • mixed_precision_training: Native AMP
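As a sketch of what the linear scheduler does with these settings: the learning rate decays linearly from 5e-05 to 0 over training. The total step count of ~2817 is an assumption inferred from the results table (100 steps ≈ 0.1065 epochs, so ~939 steps/epoch × 3 epochs); the Trainer's default of zero warmup steps is also assumed:

```python
def linear_lr(step, total_steps=2817, base_lr=5e-5, warmup_steps=0):
    """Linear warmup followed by linear decay to zero, mirroring the
    "linear" scheduler type. total_steps=2817 is inferred from the
    training log (~939 steps per epoch x 3 epochs), not stated in the card."""
    if step < warmup_steps:
        # Linear warmup phase (unused here since warmup_steps=0).
        return base_lr * step / max(1, warmup_steps)
    # Linear decay from base_lr down to 0 at total_steps.
    remaining = max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))
    return base_lr * remaining

print(linear_lr(0))     # 5e-05 at the start of training
print(linear_lr(2817))  # 0.0 at the end of training
```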

Training results

Training Loss | Epoch  | Step | Validation Loss
2.1017        | 0.1065 | 100  | 2.1255
2.042         | 0.2130 | 200  | 2.0092
1.8529        | 0.3195 | 300  | 1.8467
1.7071        | 0.4260 | 400  | 1.6110
1.3058        | 0.5325 | 500  | 1.4370
1.0229        | 0.6390 | 600  | 1.1798
0.7982        | 0.7455 | 700  | 1.0212
0.9674        | 0.8520 | 800  | 0.8781
0.7444        | 0.9585 | 900  | 0.7509
0.6362        | 1.0650 | 1000 | 0.6309
0.4962        | 1.1715 | 1100 | 0.5848
0.3617        | 1.2780 | 1200 | 0.5107
0.3606        | 1.3845 | 1300 | 0.4532
0.4413        | 1.4909 | 1400 | 0.4173
0.4594        | 1.5974 | 1500 | 0.3998
0.3132        | 1.7039 | 1600 | 0.3795
0.313         | 1.8104 | 1700 | 0.3667
0.2656        | 1.9169 | 1800 | 0.3534
0.2892        | 2.0234 | 1900 | 0.3488
0.2654        | 2.1299 | 2000 | 0.3452
0.3642        | 2.2364 | 2100 | 0.3376
0.3124        | 2.3429 | 2200 | 0.3365
0.3334        | 2.4494 | 2300 | 0.3319
0.1839        | 2.5559 | 2400 | 0.3290
0.2809        | 2.6624 | 2500 | 0.3259
0.3199        | 2.7689 | 2600 | 0.3237
0.3017        | 2.8754 | 2700 | 0.3213
0.3106        | 2.9819 | 2800 | 0.3210

Framework versions

  • PEFT 0.11.1
  • Transformers 4.41.2
  • Pytorch 2.1.2+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1
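Since this is a PEFT adapter rather than a full model, it is loaded on top of the base model. A minimal sketch using the repo id from this page (the dtype and device settings are illustrative, not prescribed by the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model the adapter was fine-tuned from.
base = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-medium-4k-instruct",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-medium-4k-instruct")

# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base, "kostasman1/results_Phi3_medium_4k")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that generation requires enough memory for the 14B base model; the adapter itself adds only a small number of parameters.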
