okai-musiclang-content-t5-small

This model is a fine-tuned version of sandernotenbaert/okai-musiclang-content-t5-small on an unspecified dataset; the base model is listed as the model itself, which suggests continued training from an earlier checkpoint of the same repository. It achieves the following results on the evaluation set:

  • Loss: 1.1968
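
This is a T5-style sequence-to-sequence checkpoint, so it should load with the standard transformers auto classes. Below is a minimal, hedged loading-and-generation sketch; the expected MusicLang input token format is not documented in this card, so the prompt is a placeholder:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo_id = "sandernotenbaert/okai-musiclang-content-t5-small"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

# Placeholder input: the card does not document the expected
# MusicLang token format, so substitute a real prompt here.
inputs = tokenizer("your musiclang tokens here", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```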

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 8e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine_with_restarts
  • lr_scheduler_warmup_ratio: 0.1
  • lr_scheduler_warmup_steps: 100
  • num_epochs: 3
  • mixed_precision_training: Native AMP
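
These values map directly onto the transformers training arguments. The sketch below is a hedged reconstruction: the numeric values come from the list above, while output_dir and the eval/logging cadence are assumptions (the 500-step eval cadence is inferred from the results table below).

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: output_dir and the Trainer/dataset wiring are assumptions;
# the numeric values mirror the hyperparameter list above.
args = Seq2SeqTrainingArguments(
    output_dir="okai-musiclang-content-t5-small",  # assumed
    learning_rate=8e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # 16 x 4 = 64 effective train batch size
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.1,
    warmup_steps=100,                # when nonzero, overrides warmup_ratio
    num_train_epochs=3,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",           # inferred from the 500-step eval cadence
    eval_steps=500,
    logging_steps=500,               # assumed, matches the logged training loss
)
```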

Training results

Training Loss    Epoch     Step    Validation Loss
1.4556           0.2226     500    1.3230
1.4166           0.4452    1000    1.3003
1.3956           0.6679    1500    1.2741
1.3865           0.8905    2000    1.2605
1.3589           1.1131    2500    1.2368
1.3514           1.3357    3000    1.2370
1.3349           1.5583    3500    1.2322
1.3163           1.7809    4000    1.2099
1.3183           2.0036    4500    1.2062
1.3204           2.2262    5000    1.2014
1.3030           2.4488    5500    1.2008
1.3205           2.6714    6000    1.1953
1.3108           2.8940    6500    1.1968
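
The headline evaluation loss reported above (1.1968) corresponds to the final logged validation step (step 6500, epoch 2.8940).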

Framework versions

  • Transformers 4.54.1
  • PyTorch 2.6.0+cu124
  • Datasets 4.0.0
  • Tokenizers 0.21.2
Model size

  • 7.55M parameters (Safetensors, F32 tensors)
