# lora
This model is a fine-tuned version of microsoft/Phi-3-mini-4k-instruct on the flock_task5_tranning dataset. It achieves the following results on the evaluation set:
- Loss: 0.7054
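This checkpoint is distributed as a PEFT (LoRA) adapter rather than a full model, so it is loaded on top of the base microsoft/Phi-3-mini-4k-instruct weights. A minimal loading sketch, assuming the adapter repository id `jerseyjerry/task-5-microsoft-Phi-3-mini-4k-instruct-20250224` and a GPU with bfloat16 support (adjust dtype and device placement as needed):

```python
# Sketch: load the base Phi-3 model and attach this LoRA adapter with PEFT.
# The dtype and device choices below are assumptions, not part of the card.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "jerseyjerry/task-5-microsoft-Phi-3-mini-4k-instruct-20250224"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

# Phi-3 is a chat model, so build the prompt through the tokenizer's chat template.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```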
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (mirrored in the configuration sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 10
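For readability, the list above corresponds roughly to the following `transformers` `TrainingArguments`; this is a reconstruction from the card, not the exact training script, and the LoRA-specific settings (rank, alpha, target modules) are not reported here. The distributed settings (`distributed_type: multi-GPU`, `num_devices: 2`) come from the launcher (e.g. `accelerate` or `torchrun`) rather than from these arguments.

```python
# Sketch: the hyperparameters above expressed as transformers TrainingArguments.
# output_dir is a placeholder; eval/logging cadence is inferred from the results table below.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="lora-phi3-mini",       # placeholder output path
    learning_rate=5e-5,
    per_device_train_batch_size=2,     # x 2 GPUs x 2 accumulation steps = total train batch size 8
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    num_train_epochs=10,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
    eval_strategy="steps",             # evaluation every 10 steps, matching the table below
    eval_steps=10,
    logging_steps=10,
)
```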
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.287         | 0.6897 | 10   | 1.4997          |
| 1.3264        | 1.3448 | 20   | 1.3904          |
| 0.9293        | 2.0    | 30   | 1.2929          |
| 1.2359        | 2.6897 | 40   | 1.2001          |
| 1.1123        | 3.3448 | 50   | 1.1125          |
| 0.8456        | 4.0    | 60   | 1.0130          |
| 0.92          | 4.6897 | 70   | 0.9277          |
| 0.9875        | 5.3448 | 80   | 0.8583          |
| 0.6225        | 6.0    | 90   | 0.8013          |
| 0.825         | 6.6897 | 100  | 0.7550          |
| 0.6775        | 7.3448 | 110  | 0.7281          |
| 0.6614        | 8.0    | 120  | 0.7120          |
| 0.6203        | 8.6897 | 130  | 0.7067          |
| 0.6521        | 9.3448 | 140  | 0.7054          |
### Framework versions
- PEFT 0.12.0
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
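A small convenience check that a local environment matches the versions above (a hypothetical helper, not part of the original card):

```python
# Sketch: verify installed package versions against the ones listed above.
import datasets, peft, tokenizers, torch, transformers

expected = {
    "peft": "0.12.0",
    "transformers": "4.48.3",
    "torch": "2.6.0",       # card lists 2.6.0+cu124; the CUDA suffix depends on the install
    "datasets": "3.2.0",
    "tokenizers": "0.21.0",
}
for name, want in expected.items():
    have = __import__(name).__version__
    status = "OK" if have.startswith(want) else "MISMATCH"
    print(f"{name}: installed {have}, card lists {want} -> {status}")
```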