# swin-large-mask2former-finetuned-Mitoliked-Mito-Innertongue
This model is a fine-tuned version of Dnq2025/swin-large-mask2former-finetuned-ER-Mito-LD8 on the Dnq2025/Mask2former_Finetune dataset. It achieves the following results on the evaluation set:
- Mean Iou: 0.3878
- Loss: 39.8613
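
The snippet below is a minimal inference sketch, assuming the checkpoint follows the standard Hugging Face Mask2Former semantic-segmentation layout; the input image path is a hypothetical placeholder.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

checkpoint = "Dnq2025/swin-large-mask2former-finetuned-Mitoliked-Mito-Innertongue"
processor = AutoImageProcessor.from_pretrained(checkpoint)
model = Mask2FormerForUniversalSegmentation.from_pretrained(checkpoint)
model.eval()

# Hypothetical input image; replace with your own data.
image = Image.open("example_slice.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# Post-process the query outputs into a (H, W) semantic segmentation map.
seg_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]  # PIL size is (W, H); reverse to (H, W)
)[0]
```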
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 4
- eval_batch_size: 4
- seed: 1337
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: polynomial
- training_steps: 3225
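
As a rough guide to reproducing this configuration, here is how the values above map onto `transformers.TrainingArguments`. This is a hedged sketch: the actual training script and data pipeline are not part of this card, and the output directory name is an assumption.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="swin-large-mask2former-finetuned-Mitoliked-Mito-Innertongue",
    learning_rate=4e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=1337,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="polynomial",
    max_steps=3225,
)
```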
### Training results
| Training Loss | Epoch   | Step | Mean Iou | Validation Loss |
|:-------------:|:-------:|:----:|:--------:|:---------------:|
| No log        | 1.1364  | 100  | 0.3146   | 37.8845         |
| 46.6024       | 2.2727  | 200  | 0.3627   | 36.0673         |
| 33.1722       | 3.4091  | 300  | 0.3700   | 34.6181         |
| 33.1722       | 4.5455  | 400  | 0.3653   | 34.6646         |
| 28.6661       | 5.6818  | 500  | 0.3817   | 33.8163         |
| 25.6461       | 6.8182  | 600  | 0.3686   | 34.5651         |
| 25.6461       | 7.9545  | 700  | 0.3872   | 34.6853         |
| 22.9594       | 9.0909  | 800  | 0.4054   | 34.6685         |
| 20.9253       | 10.2273 | 900  | 0.4176   | 33.7321         |
| 20.9253       | 11.3636 | 1000 | 0.4234   | 33.3689         |
| 19.4883       | 12.5    | 1100 | 0.4244   | 34.2274         |
| 18.0467       | 13.6364 | 1200 | 0.3938   | 36.7450         |
| 18.0467       | 14.7727 | 1300 | 0.4114   | 35.1488         |
| 16.8268       | 15.9091 | 1400 | 0.3904   | 36.7316         |
| 16.15         | 17.0455 | 1500 | 0.3792   | 36.5393         |
| 16.15         | 18.1818 | 1600 | 0.3835   | 36.0319         |
| 15.2578       | 19.3182 | 1700 | 0.4292   | 36.3998         |
| 14.6085       | 20.4545 | 1800 | 0.3808   | 36.4097         |
| 14.6085       | 21.5909 | 1900 | 0.4252   | 37.5680         |
| 13.9805       | 22.7273 | 2000 | 0.3874   | 36.9057         |
| 13.4133       | 23.8636 | 2100 | 0.3949   | 37.5872         |
| 13.4133       | 25.0    | 2200 | 0.4442   | 36.8461         |
| 12.9534       | 26.1364 | 2300 | 0.3961   | 37.5527         |
| 12.4781       | 27.2727 | 2400 | 0.4437   | 36.9928         |
| 12.4781       | 28.4091 | 2500 | 0.3937   | 38.2435         |
| 12.0214       | 29.5455 | 2600 | 0.3901   | 39.0895         |
| 11.791        | 30.6818 | 2700 | 0.3845   | 40.6167         |
| 11.791        | 31.8182 | 2800 | 0.4469   | 37.9631         |
| 11.5261       | 32.9545 | 2900 | 0.3883   | 40.8935         |
| 11.3609       | 34.0909 | 3000 | 0.3869   | 41.0478         |
| 11.3609       | 35.2273 | 3100 | 0.3972   | 39.0594         |
| 11.1008       | 36.3636 | 3200 | 0.3880   | 40.7524         |
### Framework versions
- Transformers 4.50.0.dev0
- Pytorch 2.4.1
- Datasets 3.3.2
- Tokenizers 0.21.0