Discussion-Phi-4-multimodal-instruct-audio-dimp-tag

This model is a fine-tuned version of microsoft/Phi-4-multimodal-instruct on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 33.4176

Model description

More information needed

Intended uses & limitations

More information needed
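
No official usage guidance is provided. As a starting point, the following is a minimal, hedged loading sketch: it assumes the repository id shown in the model tree (TakalaWang/Discussion-Phi-4-multimodal-instruct-audio-dimp-tag-beta2) and that the checkpoint follows the base model's custom-code loading pattern; the preprocessing and prompting expected for audio inputs are not documented here.

```python
# Minimal loading sketch, not an official example. The repo id and the
# trust_remote_code requirement (inherited from the base model,
# microsoft/Phi-4-multimodal-instruct) are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "TakalaWang/Discussion-Phi-4-multimodal-instruct-audio-dimp-tag-beta2"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint is stored in BF16
    trust_remote_code=True,
)
```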

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch reproducing them follows the list):

  • learning_rate: 4e-05
  • train_batch_size: 1
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 16
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.95) and epsilon=1e-07; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 50
  • num_epochs: 3
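
As a reproducibility aid, the list above can be expressed as a transformers TrainingArguments configuration. This is an assumption-laden sketch, not the author's actual training script: the output directory is hypothetical, the model/dataset wiring is omitted, and bf16 training is inferred from the checkpoint's BF16 tensor type rather than stated in the source.

```python
# Hedged reconstruction of the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="phi4-audio-dimp-tag",  # hypothetical path
    learning_rate=4e-05,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,    # effective train batch size: 16
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-07,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=3,
    bf16=True,  # assumption: inferred from the checkpoint's BF16 tensors
)
```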

Training results

Training Loss | Epoch  | Step | Validation Loss
------------- | ------ | ---- | ---------------
1160211.625   | 0.1117 |   10 | 7426.7354
0.6156        | 0.2235 |   20 | 31.8351
12.5708       | 0.3352 |   30 | 38.5527
120.2882      | 0.4469 |   40 | 34.2812
0.5927        | 0.5587 |   50 | 30.9922
2.379         | 0.6704 |   60 | 30.3796
0.6282        | 0.7821 |   70 | 29.5946
0.4296        | 0.8939 |   80 | 28.8131
0.1972        | 1.0    |   90 | 26.5031
0.2999        | 1.1117 |  100 | 29.2309
1.3594        | 1.2235 |  110 | 30.3200
0.5613        | 1.3352 |  120 | 27.7579
0.1148        | 1.4469 |  130 | 30.7001
2.4258        | 1.5587 |  140 | 31.5338
0.2405        | 1.6704 |  150 | 32.9445
0.1763        | 1.7821 |  160 | 33.0411
0.3052        | 1.8939 |  170 | 29.8340
0.0634        | 2.0    |  180 | 30.9146
0.1446        | 2.1117 |  190 | 34.0906
0.1127        | 2.2235 |  200 | 34.1306
2.9534        | 2.3352 |  210 | 31.8096
0.1518        | 2.4469 |  220 | 35.4243
0.148         | 2.5587 |  230 | 33.2750
0.0649        | 2.6704 |  240 | 32.6477
0.0999        | 2.7821 |  250 | 33.6155
0.3312        | 2.8939 |  260 | 33.4176

Framework versions

  • Transformers 4.48.2
  • PyTorch 2.4.1+cu124
  • Datasets 3.6.0
  • Tokenizers 0.21.1

Model details

  • Format: Safetensors
  • Model size: 5.57B params
  • Tensor type: BF16