DinoAmoros-large-2025_05_06_37720-prova_bs16_freeze_monolabel

This model is a fine-tuned version of facebook/dinov2-large on an unspecified dataset. It achieves the following results on the evaluation set:

  • Loss: 2.9011
  • F1 Micro: 0.4
  • F1 Macro: 0.2667
  • Accuracy: 0.4
  • Learning Rate: 1e-05
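
Since this is a DINOv2-large checkpoint fine-tuned for single-label (monolabel) image classification, inference should work through the standard transformers classification classes. A minimal sketch (the input image is a placeholder, and the label set is not documented here):

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "Amoros/DinoAmoros-large-2025_05_06_37720-prova_bs16_freeze_monolabel"
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)
model.eval()

image = Image.open("example.jpg")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Monolabel setup: take the argmax over the class logits
predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```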

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 150
  • mixed_precision_training: Native AMP
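
The "freeze" suffix in the run name suggests the DINOv2 backbone was kept frozen and only the classification head was trained, although the card does not state this explicitly. A minimal sketch of how the hyperparameters above map onto the transformers Trainer API, under that assumption (the label count, output directory, and dataset variables are placeholders):

```python
from transformers import AutoModelForImageClassification, TrainingArguments

num_labels = 10  # placeholder: the real label count is not documented

model = AutoModelForImageClassification.from_pretrained(
    "facebook/dinov2-large", num_labels=num_labels
)

# Assumption: "freeze" means the DINOv2 backbone stays fixed and only
# the newly initialized classification head receives gradient updates.
for param in model.dinov2.parameters():
    param.requires_grad = False

args = TrainingArguments(
    output_dir="prova_bs16_freeze_monolabel",
    learning_rate=1e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=150,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,            # Native AMP
    optim="adamw_torch",  # betas=(0.9, 0.999) and epsilon=1e-08 are the defaults
)

# Training would then proceed via the standard Trainer loop, e.g.:
# trainer = Trainer(model=model, args=args,
#                   train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```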

Training results

| Training Loss | Epoch | Step | Validation Loss | F1 Micro | F1 Macro | Accuracy | Learning Rate |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:--------:|:-------------:|
| No log        | 1.0   | 1    | 3.1371          | 0.1      | 0.0667   | 0.1      | 0.001         |
| No log        | 2.0   | 2    | 3.1559          | 0.1      | 0.05     | 0.1      | 0.001         |
| No log        | 3.0   | 3    | 3.0740          | 0.2      | 0.1524   | 0.2      | 0.001         |
| No log        | 4.0   | 4    | 3.0686          | 0.4      | 0.2333   | 0.4      | 0.001         |
| No log        | 5.0   | 5    | 2.9377          | 0.5      | 0.2714   | 0.5      | 0.001         |
| No log        | 6.0   | 6    | 2.8814          | 0.4      | 0.2286   | 0.4      | 0.001         |
| No log        | 7.0   | 7    | 2.8164          | 0.3      | 0.1587   | 0.3      | 0.001         |
| No log        | 8.0   | 8    | 2.7270          | 0.2      | 0.1333   | 0.2      | 0.001         |
| No log        | 9.0   | 9    | 2.6143          | 0.4      | 0.2667   | 0.4      | 0.001         |
| No log        | 10.0  | 10   | 2.4839          | 0.4      | 0.2667   | 0.4      | 0.001         |
| No log        | 11.0  | 11   | 2.3587          | 0.4      | 0.2667   | 0.4      | 0.001         |
| No log        | 12.0  | 12   | 2.2280          | 0.5      | 0.3048   | 0.5      | 0.001         |
| No log        | 13.0  | 13   | 2.0928          | 0.5      | 0.3048   | 0.5      | 0.001         |
| No log        | 14.0  | 14   | 2.0064          | 0.5      | 0.3048   | 0.5      | 0.001         |
| No log        | 15.0  | 15   | 1.8491          | 0.5      | 0.3048   | 0.5      | 0.001         |
| No log        | 16.0  | 16   | 1.7906          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 17.0  | 17   | 1.7055          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 18.0  | 18   | 1.6328          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 19.0  | 19   | 1.5969          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 20.0  | 20   | 1.5740          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 21.0  | 21   | 1.5900          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 22.0  | 22   | 1.6051          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 23.0  | 23   | 1.6009          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 24.0  | 24   | 1.5874          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 25.0  | 25   | 1.5908          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 26.0  | 26   | 1.5866          | 0.6      | 0.4048   | 0.6      | 0.001         |
| No log        | 27.0  | 27   | 1.5541          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 28.0  | 28   | 1.5380          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 29.0  | 29   | 1.5012          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 30.0  | 30   | 1.4620          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 31.0  | 31   | 1.4533          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 32.0  | 32   | 1.4629          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 33.0  | 33   | 1.4447          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 34.0  | 34   | 1.4385          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 35.0  | 35   | 1.4305          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 36.0  | 36   | 1.4212          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 37.0  | 37   | 1.4205          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 38.0  | 38   | 1.4142          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 39.0  | 39   | 1.3952          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 40.0  | 40   | 1.3966          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 41.0  | 41   | 1.4088          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 42.0  | 42   | 1.4072          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 43.0  | 43   | 1.4033          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 44.0  | 44   | 1.4202          | 0.6      | 0.4048   | 0.6      | 0.0001        |
| No log        | 45.0  | 45   | 1.4144          | 0.5      | 0.3048   | 0.5      | 0.0001        |
| No log        | 46.0  | 46   | 1.4181          | 0.5      | 0.3048   | 0.5      | 1e-05         |
| No log        | 47.0  | 47   | 1.4177          | 0.5      | 0.3048   | 0.5      | 1e-05         |
| No log        | 48.0  | 48   | 1.4321          | 0.5      | 0.3048   | 0.5      | 1e-05         |
| No log        | 49.0  | 49   | 1.4261          | 0.5      | 0.3048   | 0.5      | 1e-05         |
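
The F1 Micro, F1 Macro, and Accuracy columns follow the usual single-label evaluation pattern. A minimal compute_metrics sketch using scikit-learn (the exact implementation used for this run is not published):

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    # Single-label classification: predict one class per example
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1_micro": f1_score(labels, preds, average="micro"),
        "f1_macro": f1_score(labels, preds, average="macro"),
    }
```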

Framework versions

  • Transformers 4.48.0
  • PyTorch 2.6.0+cu118
  • Datasets 3.0.2
  • Tokenizers 0.21.1