okai-musiclang-content-t5-small_finetune

This model is a fine-tuned version of sandernotenbaert/okai-musiclang-content-t5-small_finetune on an unspecified dataset. (The card metadata lists the model as its own base model, so the original base checkpoint is not recorded.) It achieves the following results on the evaluation set:

  • Loss: 1.5041
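
A minimal usage sketch for loading the checkpoint with the transformers library follows; the input string is a hypothetical placeholder, since the expected musiclang token format is not documented in this card.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "sandernotenbaert/okai-musiclang-content-t5-small_finetune"

# Load the tokenizer and seq2seq model from the Hugging Face Hub.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical input: the real musiclang token format is not documented here.
inputs = tokenizer("CHORD_I BAR_1 ...", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```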

Model description

A small T5-style sequence-to-sequence model with roughly 7.55M parameters, distributed as float32 safetensors weights. Further details are not yet provided.

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 16
  • total_train_batch_size: 64
  • optimizer: Adafactor (OptimizerNames.ADAFACTOR, no additional optimizer arguments)
  • lr_scheduler_type: constant_with_warmup
  • lr_scheduler_warmup_ratio: 0.02
  • num_epochs: 5
  • mixed_precision_training: Native AMP
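
The list above maps onto standard transformers training arguments roughly as sketched below; this is a reconstruction, not the author's actual script. output_dir is a placeholder, Trainer and dataset wiring are omitted, and the total train batch size of 64 is the product of train_batch_size and gradient_accumulation_steps (4 × 16) rather than a separate setting.

```python
from transformers import Seq2SeqTrainingArguments

# Approximate reconstruction of the hyperparameters listed above (a sketch).
training_args = Seq2SeqTrainingArguments(
    output_dir="okai-musiclang-content-t5-small_finetune",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=4,   # train_batch_size: 4
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,
    gradient_accumulation_steps=16,  # effective train batch size: 4 * 16 = 64
    optim="adafactor",               # OptimizerNames.ADAFACTOR, no extra args
    lr_scheduler_type="constant_with_warmup",
    warmup_ratio=0.02,
    num_train_epochs=5,
    fp16=True,                       # "Native AMP" mixed precision
)
```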

Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 1.7839        | 0.2226 | 500   | 1.6513          |
| 1.6576        | 0.4452 | 1000  | 1.6615          |
| 1.6396        | 0.6679 | 1500  | 1.6650          |
| 1.7168        | 0.8905 | 2000  | 1.6315          |
| 1.7366        | 1.1131 | 2500  | 1.6234          |
| 1.7171        | 1.3357 | 3000  | 1.6028          |
| 1.6238        | 1.5583 | 3500  | 1.6130          |
| 1.6217        | 1.7810 | 4000  | 1.6218          |
| 1.7077        | 2.0036 | 4500  | 1.5784          |
| 1.7034        | 2.2262 | 5000  | 1.5792          |
| 1.6049        | 2.4488 | 5500  | 1.5866          |
| 1.6018        | 2.6714 | 6000  | 1.5869          |
| 1.6628        | 2.8941 | 6500  | 1.5628          |
| 1.653         | 3.1171 | 7000  | 1.5606          |
| 1.6575        | 3.3397 | 7500  | 1.5381          |
| 1.64          | 3.5619 | 8000  | 1.5395          |
| 1.6455        | 3.7845 | 8500  | 1.5163          |
| 1.6308        | 4.0076 | 9000  | 1.5311          |
| 1.6324        | 4.2302 | 9500  | 1.5118          |
| 1.5481        | 4.4528 | 10000 | 1.5092          |
| 1.547         | 4.6754 | 10500 | 1.5109          |
| 1.5584        | 4.8981 | 11000 | 1.5041          |
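
Assuming the reported losses are token-level cross-entropy (the usual case for a T5 fine-tune), the final validation loss of 1.5041 corresponds to a perplexity of roughly exp(1.5041) ≈ 4.50, as the short check below illustrates.

```python
import math

# Perplexity from cross-entropy loss (assumes token-level cross-entropy).
final_val_loss = 1.5041
print(math.exp(final_val_loss))  # ≈ 4.50
```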

Framework versions

  • Transformers 4.55.0
  • PyTorch 2.6.0+cu124
  • Datasets 4.0.0
  • Tokenizers 0.21.2
