mask2former-finetuned-ER-Mito-LD4

This model is a fine-tuned version of facebook/mask2former-swin-base-IN21k-ade-semantic on the Dnq2025/Mask2former_Pretrain dataset. It achieves the following results on the evaluation set:

  • Loss: 33.9148
  • Dummy: 1.0
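
The checkpoint can be loaded through the standard transformers semantic-segmentation API. Below is a minimal inference sketch; the input image path is a placeholder, and everything else follows the stock Mask2Former usage pattern:

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

ckpt = "Dnq2025/mask2former-finetuned-ER-Mito-LD4"
processor = AutoImageProcessor.from_pretrained(ckpt)
model = Mask2FormerForUniversalSegmentation.from_pretrained(ckpt)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Merge the predicted mask/class pairs into a per-pixel label map
# at the original image resolution (image.size is (W, H)).
segmentation = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]  # LongTensor of shape (H, W); values index model.config.id2label
```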

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto TrainingArguments):

  • learning_rate: 0.0004
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 1337
  • optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: polynomial
  • training_steps: 6450
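
For reference, a sketch of how these values map onto transformers.TrainingArguments; the output directory is hypothetical and the model/dataset wiring is omitted:

```python
from transformers import TrainingArguments

# Sketch only: reproduces the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="mask2former-finetuned-ER-Mito-LD4",  # hypothetical path
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1337,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="polynomial",
    max_steps=6450,
)
```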

Training results

| Training Loss | Epoch | Step | Validation Loss | Dummy |
|:-------------:|:-----:|:----:|:---------------:|:-----:|
| 53.0042 | 1.0 | 129 | 41.6702 | 1.0 |
| 41.5032 | 2.0 | 258 | 35.3152 | 1.0 |
| 37.8318 | 3.0 | 387 | 33.2929 | 1.0 |
| 33.1734 | 4.0 | 516 | 31.6052 | 1.0 |
| 31.0889 | 5.0 | 645 | 32.0792 | 1.0 |
| 30.5091 | 6.0 | 774 | 29.4252 | 1.0 |
| 27.7742 | 7.0 | 903 | 29.3660 | 1.0 |
| 27.1136 | 8.0 | 1032 | 28.6043 | 1.0 |
| 25.1614 | 9.0 | 1161 | 28.0848 | 1.0 |
| 24.7794 | 10.0 | 1290 | 28.1507 | 1.0 |
| 23.636 | 11.0 | 1419 | 28.3853 | 1.0 |
| 22.7494 | 12.0 | 1548 | 27.2592 | 1.0 |
| 22.7129 | 13.0 | 1677 | 29.8838 | 1.0 |
| 21.1747 | 14.0 | 1806 | 28.1624 | 1.0 |
| 20.9589 | 15.0 | 1935 | 27.9121 | 1.0 |
| 20.2591 | 16.0 | 2064 | 26.6467 | 1.0 |
| 20.1436 | 17.0 | 2193 | 26.9901 | 1.0 |
| 19.5047 | 18.0 | 2322 | 29.2895 | 1.0 |
| 18.4257 | 19.0 | 2451 | 27.0489 | 1.0 |
| 18.6316 | 20.0 | 2580 | 27.3730 | 1.0 |
| 18.037 | 21.0 | 2709 | 28.0853 | 1.0 |
| 17.6324 | 22.0 | 2838 | 26.6344 | 1.0 |
| 17.19 | 23.0 | 2967 | 28.1709 | 1.0 |
| 17.5784 | 24.0 | 3096 | 26.3646 | 1.0 |
| 16.3714 | 25.0 | 3225 | 28.6477 | 1.0 |
| 16.2177 | 26.0 | 3354 | 29.9328 | 1.0 |
| 15.8326 | 27.0 | 3483 | 27.1418 | 1.0 |
| 15.7345 | 28.0 | 3612 | 28.5265 | 1.0 |
| 14.918 | 29.0 | 3741 | 30.8378 | 1.0 |
| 15.2316 | 30.0 | 3870 | 28.5173 | 1.0 |
| 14.6576 | 31.0 | 3999 | 29.0688 | 1.0 |
| 14.5837 | 32.0 | 4128 | 29.7354 | 1.0 |
| 13.7819 | 33.0 | 4257 | 28.6140 | 1.0 |
| 14.851 | 34.0 | 4386 | 30.7131 | 1.0 |
| 14.1454 | 35.0 | 4515 | 29.3673 | 1.0 |
| 13.5445 | 36.0 | 4644 | 30.1412 | 1.0 |
| 13.3725 | 37.0 | 4773 | 29.7489 | 1.0 |
| 13.8976 | 38.0 | 4902 | 32.2482 | 1.0 |
| 13.2317 | 39.0 | 5031 | 33.3837 | 1.0 |
| 12.8382 | 40.0 | 5160 | 31.9261 | 1.0 |
| 12.8798 | 41.0 | 5289 | 31.0644 | 1.0 |
| 12.5615 | 42.0 | 5418 | 32.6052 | 1.0 |
| 12.4595 | 43.0 | 5547 | 32.6710 | 1.0 |
| 12.9861 | 44.0 | 5676 | 32.3271 | 1.0 |
| 12.3429 | 45.0 | 5805 | 33.1802 | 1.0 |
| 11.6031 | 46.0 | 5934 | 33.3981 | 1.0 |
| 12.7182 | 47.0 | 6063 | 33.2806 | 1.0 |
| 11.8251 | 48.0 | 6192 | 33.9491 | 1.0 |
| 12.4439 | 49.0 | 6321 | 33.4338 | 1.0 |
| 11.9834 | 50.0 | 6450 | 33.8444 | 1.0 |

Framework versions

  • Transformers 4.50.0.dev0
  • PyTorch 2.4.1
  • Datasets 3.3.2
  • Tokenizers 0.21.0