# segformer-b5-finetuned-ade20k-hgo-coord_40epochs_distortion_ver2_global_norm_with_void_2
This model is a fine-tuned version of NICOPOI-9/segformer-b5-finetuned-ade20k-hgo-coord_40epochs_distortion_ver2_global_norm_with_void on the NICOPOI-9/Modphad_Perlin_two_void_coord_global_norm dataset. It achieves the following results on the evaluation set:
- Loss: 0.5762
- Mean Iou: 0.7337
- Mean Accuracy: 0.8461
- Overall Accuracy: 0.8575
- Accuracy [0,0]: 0.7998
- Accuracy [0,1]: 0.8504
- Accuracy [1,0]: 0.8997
- Accuracy [1,1]: 0.8844
- Accuracy [0,2]: 0.8526
- Accuracy [0,3]: 0.8521
- Accuracy [1,2]: 0.8084
- Accuracy [1,3]: 0.9050
- Accuracy [2,0]: 0.8267
- Accuracy [2,1]: 0.8468
- Accuracy [2,2]: 0.8350
- Accuracy [2,3]: 0.8306
- Accuracy [3,0]: 0.8400
- Accuracy [3,1]: 0.7673
- Accuracy [3,2]: 0.8188
- Accuracy [3,3]: 0.8239
- Accuracy Void: 0.9419
- Iou [0,0]: 0.7337
- Iou [0,1]: 0.7652
- Iou [1,0]: 0.7422
- Iou [1,1]: 0.7281
- Iou [0,2]: 0.7272
- Iou [0,3]: 0.7295
- Iou [1,2]: 0.7277
- Iou [1,3]: 0.7569
- Iou [2,0]: 0.7143
- Iou [2,1]: 0.7429
- Iou [2,2]: 0.6988
- Iou [2,3]: 0.7222
- Iou [3,0]: 0.7301
- Iou [3,1]: 0.6912
- Iou [3,2]: 0.7063
- Iou [3,3]: 0.6497
- Iou Void: 0.9062
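A minimal inference sketch for a SegFormer checkpoint like this one, using the standard `transformers` classes. The model id is taken from this card; the image path `example.png` is a placeholder, and the post-processing (bilinear upsampling plus per-pixel argmax) is the usual SegFormer recipe, not something stated in the card.

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

model_id = "NICOPOI-9/segformer-b5-finetuned-ade20k-hgo-coord_40epochs_distortion_ver2_global_norm_with_void_2"
processor = AutoImageProcessor.from_pretrained(model_id)
model = SegformerForSemanticSegmentation.from_pretrained(model_id)
model.eval()

image = Image.open("example.png").convert("RGB")  # placeholder input
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# SegFormer emits logits at 1/4 resolution; upsample back to the
# image size before taking the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred = upsampled.argmax(dim=1)[0]  # (H, W) label map
```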
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy [0,0] | Accuracy [0,1] | Accuracy [1,0] | Accuracy [1,1] | Accuracy [0,2] | Accuracy [0,3] | Accuracy [1,2] | Accuracy [1,3] | Accuracy [2,0] | Accuracy [2,1] | Accuracy [2,2] | Accuracy [2,3] | Accuracy [3,0] | Accuracy [3,1] | Accuracy [3,2] | Accuracy [3,3] | Accuracy Void | Iou [0,0] | Iou [0,1] | Iou [1,0] | Iou [1,1] | Iou [0,2] | Iou [0,3] | Iou [1,2] | Iou [1,3] | Iou [2,0] | Iou [2,1] | Iou [2,2] | Iou [2,3] | Iou [3,0] | Iou [3,1] | Iou [3,2] | Iou [3,3] | Iou Void |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.217 | 7.3260 | 4000 | 0.6802 | 0.6581 | 0.7921 | 0.8101 | 0.7788 | 0.8741 | 0.8607 | 0.8700 | 0.8868 | 0.7358 | 0.7611 | 0.8486 | 0.6603 | 0.7697 | 0.7525 | 0.6950 | 0.7871 | 0.6547 | 0.7968 | 0.8116 | 0.9222 | 0.6949 | 0.6915 | 0.6969 | 0.6722 | 0.6550 | 0.6221 | 0.6278 | 0.6964 | 0.5823 | 0.6768 | 0.5810 | 0.6195 | 0.6778 | 0.5851 | 0.6295 | 0.5928 | 0.8861 |
0.4679 | 14.6520 | 8000 | 0.6734 | 0.6678 | 0.7989 | 0.8151 | 0.8117 | 0.8561 | 0.8435 | 0.8534 | 0.7939 | 0.8547 | 0.7110 | 0.8527 | 0.7230 | 0.8044 | 0.7351 | 0.7588 | 0.8436 | 0.6400 | 0.7842 | 0.8072 | 0.9072 | 0.7111 | 0.7229 | 0.6768 | 0.6546 | 0.7015 | 0.6760 | 0.6186 | 0.6878 | 0.6497 | 0.6805 | 0.6336 | 0.6463 | 0.6766 | 0.5641 | 0.6259 | 0.5469 | 0.8790 |
0.4097 | 21.9780 | 12000 | 0.5488 | 0.7153 | 0.8322 | 0.8470 | 0.8047 | 0.8701 | 0.9003 | 0.8719 | 0.8662 | 0.8697 | 0.7690 | 0.8969 | 0.7867 | 0.8156 | 0.8258 | 0.8060 | 0.8173 | 0.6883 | 0.7867 | 0.8295 | 0.9423 | 0.7302 | 0.7713 | 0.7330 | 0.7135 | 0.7221 | 0.7232 | 0.6894 | 0.7465 | 0.6896 | 0.7288 | 0.6401 | 0.7135 | 0.7153 | 0.6307 | 0.6919 | 0.6196 | 0.9021 |
0.0841 | 29.3040 | 16000 | 0.5797 | 0.7280 | 0.8404 | 0.8532 | 0.8086 | 0.9026 | 0.9058 | 0.8856 | 0.8771 | 0.8675 | 0.8249 | 0.8914 | 0.7677 | 0.8369 | 0.8095 | 0.8100 | 0.8269 | 0.7680 | 0.8010 | 0.7752 | 0.9284 | 0.7258 | 0.7544 | 0.7290 | 0.6917 | 0.7211 | 0.7154 | 0.7163 | 0.7629 | 0.6782 | 0.7413 | 0.7102 | 0.7191 | 0.7427 | 0.6846 | 0.7210 | 0.6629 | 0.9002 |
0.0578 | 36.6300 | 20000 | 0.5762 | 0.7337 | 0.8461 | 0.8575 | 0.7998 | 0.8504 | 0.8997 | 0.8844 | 0.8526 | 0.8521 | 0.8084 | 0.9050 | 0.8267 | 0.8468 | 0.8350 | 0.8306 | 0.8400 | 0.7673 | 0.8188 | 0.8239 | 0.9419 | 0.7337 | 0.7652 | 0.7422 | 0.7281 | 0.7272 | 0.7295 | 0.7277 | 0.7569 | 0.7143 | 0.7429 | 0.6988 | 0.7222 | 0.7301 | 0.6912 | 0.7063 | 0.6497 | 0.9062 |
### Framework versions
- Transformers 4.48.3
- PyTorch 2.1.0
- Datasets 3.2.0
- Tokenizers 0.21.0