# segformer-b5-finetuned-ade20k-hgo-coord_40epochs_distortion_ver2_global_norm_with_void
This model is a fine-tuned version of NICOPOI-9/segformer-b5-finetuned-ade20k-morphpadver1-hgo-coord_40epochs_distortion_ver2_global_norm on the NICOPOI-9/Modphad_Perlin_two_void_coord_global_norm dataset. It achieves the following results on the evaluation set:
- Loss: 0.6311
- Mean Iou: 0.6911
- Mean Accuracy: 0.8163
- Overall Accuracy: 0.8286
- Accuracy [0,0]: 0.7939
- Accuracy [0,1]: 0.8321
- Accuracy [1,0]: 0.8829
- Accuracy [1,1]: 0.8507
- Accuracy [0,2]: 0.8508
- Accuracy [0,3]: 0.8123
- Accuracy [1,2]: 0.7400
- Accuracy [1,3]: 0.8753
- Accuracy [2,0]: 0.8248
- Accuracy [2,1]: 0.7955
- Accuracy [2,2]: 0.8149
- Accuracy [2,3]: 0.8180
- Accuracy [3,0]: 0.8247
- Accuracy [3,1]: 0.6637
- Accuracy [3,2]: 0.7559
- Accuracy [3,3]: 0.8296
- Accuracy Void: 0.9120
- Iou [0,0]: 0.7079
- Iou [0,1]: 0.7563
- Iou [1,0]: 0.6888
- Iou [1,1]: 0.6479
- Iou [0,2]: 0.7244
- Iou [0,3]: 0.6633
- Iou [1,2]: 0.6849
- Iou [1,3]: 0.7019
- Iou [2,0]: 0.6984
- Iou [2,1]: 0.6967
- Iou [2,2]: 0.6674
- Iou [2,3]: 0.6785
- Iou [3,0]: 0.7044
- Iou [3,1]: 0.6013
- Iou [3,2]: 0.6736
- Iou [3,3]: 0.5709
- Iou Void: 0.8820
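As a sanity check, the reported Mean IoU is the unweighted average of the 17 per-class IoU values listed above (the 16 `[row,col]` grid classes plus Void):

```python
# Per-class IoU values from the evaluation results above.
ious = [0.7079, 0.7563, 0.6888, 0.6479, 0.7244, 0.6633, 0.6849, 0.7019,
        0.6984, 0.6967, 0.6674, 0.6785, 0.7044, 0.6013, 0.6736, 0.5709,
        0.8820]

# Unweighted mean over the 17 classes reproduces the reported Mean IoU.
mean_iou = sum(ious) / len(ious)
print(round(mean_iou, 4))  # 0.6911
```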
## Model description
More information needed
## Intended uses & limitations
More information needed
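As a usage sketch, the checkpoint can be served with `transformers`' `SegformerForSemanticSegmentation`. The snippet below is a hypothetical illustration: it builds a randomly initialised SegFormer with this card's 17 labels (16 `[row,col]` classes plus Void) so it runs without downloading weights; for real inference, load the fine-tuned weights instead with `SegformerForSemanticSegmentation.from_pretrained("NICOPOI-9/segformer-b5-finetuned-ade20k-hgo-coord_40epochs_distortion_ver2_global_norm_with_void")`.

```python
import torch
from transformers import SegformerConfig, SegformerForSemanticSegmentation

# Randomly initialised stand-in (default SegFormer-B0 sizes, not B5) so the
# sketch runs offline; 17 labels = 16 [row,col] grid classes + Void.
config = SegformerConfig(num_labels=17)
model = SegformerForSemanticSegmentation(config)
model.eval()

pixel_values = torch.randn(1, 3, 512, 512)  # one dummy RGB image
with torch.no_grad():
    # SegFormer predicts at 1/4 of the input resolution:
    # logits has shape (1, 17, 128, 128) here.
    logits = model(pixel_values=pixel_values).logits

# Upsample to the input resolution and take the per-pixel argmax.
upsampled = torch.nn.functional.interpolate(
    logits, size=(512, 512), mode="bilinear", align_corners=False)
pred = upsampled.argmax(dim=1)  # (1, 512, 512) map of class indices
```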
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 40
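With a linear scheduler and no warmup recorded, the learning rate would fall linearly from 6e-05 to 0 over the 40 epochs. A minimal sketch, assuming roughly 546 optimizer steps per epoch (inferred from the step/epoch columns in the results table, not stated in the card):

```python
# Linear decay schedule implied by the hyperparameters above (no warmup).
base_lr = 6e-05
steps_per_epoch = 546          # inferred from the results table, not stated
total_steps = 40 * steps_per_epoch

def linear_lr(step: int) -> float:
    """Learning rate after `step` optimizer updates under linear decay."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))            # 6e-05 at the start of training
print(linear_lr(total_steps))  # 0.0 at the end of epoch 40
```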
### Training results
Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy [0,0] | Accuracy [0,1] | Accuracy [1,0] | Accuracy [1,1] | Accuracy [0,2] | Accuracy [0,3] | Accuracy [1,2] | Accuracy [1,3] | Accuracy [2,0] | Accuracy [2,1] | Accuracy [2,2] | Accuracy [2,3] | Accuracy [3,0] | Accuracy [3,1] | Accuracy [3,2] | Accuracy [3,3] | Accuracy Void | Iou [0,0] | Iou [0,1] | Iou [1,0] | Iou [1,1] | Iou [0,2] | Iou [0,3] | Iou [1,2] | Iou [1,3] | Iou [2,0] | Iou [2,1] | Iou [2,2] | Iou [2,3] | Iou [3,0] | Iou [3,1] | Iou [3,2] | Iou [3,3] | Iou Void |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0.5689 | 7.3260 | 4000 | 0.9452 | 0.5040 | 0.6606 | 0.6993 | 0.7091 | 0.8427 | 0.8018 | 0.7929 | 0.6478 | 0.6410 | 0.5079 | 0.7260 | 0.5133 | 0.5678 | 0.4949 | 0.5141 | 0.7155 | 0.5155 | 0.5520 | 0.7735 | 0.9149 | 0.5608 | 0.5653 | 0.5446 | 0.5330 | 0.5264 | 0.4943 | 0.4334 | 0.5497 | 0.4396 | 0.4336 | 0.4078 | 0.4455 | 0.5335 | 0.4221 | 0.4396 | 0.4033 | 0.8360 |
0.8173 | 14.6520 | 8000 | 0.7003 | 0.6217 | 0.7654 | 0.7838 | 0.7374 | 0.8480 | 0.8475 | 0.8334 | 0.7976 | 0.7906 | 0.6299 | 0.8280 | 0.7217 | 0.7784 | 0.7006 | 0.6688 | 0.7747 | 0.6278 | 0.7141 | 0.8217 | 0.8917 | 0.6370 | 0.6769 | 0.6321 | 0.6150 | 0.6458 | 0.6219 | 0.5592 | 0.6612 | 0.6041 | 0.5702 | 0.5712 | 0.5747 | 0.6088 | 0.5572 | 0.6203 | 0.5551 | 0.8579 |
0.5925 | 21.9780 | 12000 | 0.6862 | 0.6493 | 0.7872 | 0.8009 | 0.7349 | 0.8587 | 0.8745 | 0.8574 | 0.8111 | 0.7840 | 0.7200 | 0.8547 | 0.7256 | 0.7701 | 0.7584 | 0.7438 | 0.7938 | 0.6107 | 0.7748 | 0.8240 | 0.8854 | 0.6446 | 0.7452 | 0.6555 | 0.6208 | 0.6626 | 0.6337 | 0.6291 | 0.6654 | 0.6211 | 0.6548 | 0.6136 | 0.6445 | 0.6546 | 0.5548 | 0.6200 | 0.5612 | 0.8566 |
0.195 | 29.3040 | 16000 | 0.6138 | 0.6896 | 0.8165 | 0.8283 | 0.7908 | 0.8529 | 0.8958 | 0.8764 | 0.8675 | 0.8016 | 0.7888 | 0.8499 | 0.7803 | 0.8406 | 0.7975 | 0.8032 | 0.7908 | 0.7219 | 0.7600 | 0.7587 | 0.9039 | 0.7050 | 0.7237 | 0.6883 | 0.6719 | 0.7054 | 0.6607 | 0.6947 | 0.7173 | 0.6633 | 0.6986 | 0.6576 | 0.6643 | 0.6869 | 0.6407 | 0.6552 | 0.6147 | 0.8744 |
0.1056 | 36.6300 | 20000 | 0.6311 | 0.6911 | 0.8163 | 0.8286 | 0.7939 | 0.8321 | 0.8829 | 0.8507 | 0.8508 | 0.8123 | 0.7400 | 0.8753 | 0.8248 | 0.7955 | 0.8149 | 0.8180 | 0.8247 | 0.6637 | 0.7559 | 0.8296 | 0.9120 | 0.7079 | 0.7563 | 0.6888 | 0.6479 | 0.7244 | 0.6633 | 0.6849 | 0.7019 | 0.6984 | 0.6967 | 0.6674 | 0.6785 | 0.7044 | 0.6013 | 0.6736 | 0.5709 | 0.8820 |
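The step and epoch columns in the table are mutually consistent: every logged checkpoint satisfies step / epoch ≈ 546, a hypothetical steps-per-epoch inferred from the table (with train_batch_size 1, this would correspond to roughly 546 training samples):

```python
# (step, epoch) pairs from the logged checkpoints above; each ratio is the
# (inferred, not card-stated) number of optimizer updates per epoch.
checkpoints = [(4000, 7.3260), (8000, 14.6520), (12000, 21.9780),
               (16000, 29.3040), (20000, 36.6300)]
for step, epoch in checkpoints:
    print(round(step / epoch, 1))  # 546.0 for every row
```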
### Framework versions
- Transformers 4.48.3
- Pytorch 2.1.0
- Datasets 3.2.0
- Tokenizers 0.21.0