---
license: other
base_model: nvidia/mit-b5
tags:
- vision
- image-segmentation
- generated_from_trainer
model-index:
- name: segformer-b5-seed42-outputs
  results: []
---
# segformer-b5-seed42-outputs
This model is a fine-tuned version of nvidia/mit-b5 on the unreal-hug/REAL_DATASET_SEG_401_6_lbls dataset. It achieves the following results on the evaluation set:
- Loss: 0.2833
- Mean Iou: 0.3430
- Mean Accuracy: 0.4050
- Overall Accuracy: 0.6546
- Accuracy Unlabeled: nan
- Accuracy Lv: 0.7625
- Accuracy Rv: 0.6171
- Accuracy Ra: 0.7072
- Accuracy La: 0.6623
- Accuracy Vs: 0.0
- Accuracy As: 0.0
- Accuracy Mk: 0.0227
- Accuracy Tk: nan
- Accuracy Asd: 0.3003
- Accuracy Vsd: 0.4268
- Accuracy Ak: 0.5517
- Iou Unlabeled: 0.0
- Iou Lv: 0.7175
- Iou Rv: 0.5629
- Iou Ra: 0.6665
- Iou La: 0.5980
- Iou Vs: 0.0
- Iou As: 0.0
- Iou Mk: 0.0207
- Iou Tk: nan
- Iou Asd: 0.2802
- Iou Vsd: 0.3970
- Iou Ak: 0.5307
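To run inference with the checkpoint that produced the numbers above, a minimal sketch is shown below. The checkpoint path and input image are placeholders (point them at your local output directory or the published repo id); post-processing simply upsamples the logits and takes a per-pixel argmax.

```python
import torch
from PIL import Image
from transformers import SegformerForSemanticSegmentation, SegformerImageProcessor

# Placeholder paths: substitute the actual checkpoint directory / Hub repo id and an input image.
checkpoint = "segformer-b5-seed42-outputs"
image = Image.open("example_input.png").convert("RGB")

processor = SegformerImageProcessor.from_pretrained(checkpoint)
model = SegformerForSemanticSegmentation.from_pretrained(checkpoint)
model.eval()

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # (1, num_labels, H/4, W/4)

# Upsample to the input resolution and take the per-pixel argmax to get a label map.
upsampled = torch.nn.functional.interpolate(
    logits, size=image.size[::-1], mode="bilinear", align_corners=False
)
pred_mask = upsampled.argmax(dim=1)[0]  # (H, W) tensor of class ids
```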
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
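As a starting point, the dataset referenced above can be pulled straight from the Hub; its split and column names are not documented here, so this sketch only loads the dataset and prints its structure for inspection.

```python
from datasets import load_dataset

# Dataset id taken from this card; check the printed splits/columns before wiring up training.
ds = load_dataset("unreal-hug/REAL_DATASET_SEG_401_6_lbls")
print(ds)
```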
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- training_steps: 1000
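For reference, these settings map onto the Transformers `TrainingArguments` roughly as sketched below; the `output_dir` is an assumption, and the `Trainer`'s default AdamW optimizer uses the betas/epsilon listed above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="segformer-b5-seed42-outputs",  # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    max_steps=1000,
)
```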
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mean Iou | Mean Accuracy | Overall Accuracy | Accuracy Unlabeled | Accuracy Lv | Accuracy Rv | Accuracy Ra | Accuracy La | Accuracy Vs | Accuracy As | Accuracy Mk | Accuracy Tk | Accuracy Asd | Accuracy Vsd | Accuracy Ak | Iou Unlabeled | Iou Lv | Iou Rv | Iou Ra | Iou La | Iou Vs | Iou As | Iou Mk | Iou Tk | Iou Asd | Iou Vsd | Iou Ak |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.4921 | 0.62 | 100 | 0.4897 | 0.0906 | 0.1245 | 0.3551 | nan | 0.7217 | 0.0098 | 0.0751 | 0.4344 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0043 | 0.0 | 0.5846 | 0.0097 | 0.0746 | 0.3230 | 0.0 | 0.0 | 0.0 | nan | 0.0 | 0.0 | 0.0043 |
| 0.3534 | 1.25 | 200 | 0.3565 | 0.2420 | 0.3017 | 0.5592 | nan | 0.7947 | 0.4293 | 0.5941 | 0.3492 | 0.0 | 0.0 | 0.0 | nan | 0.1737 | 0.1500 | 0.5262 | 0.0 | 0.7343 | 0.3850 | 0.3997 | 0.3289 | 0.0 | 0.0 | 0.0 | nan | 0.1681 | 0.1456 | 0.5007 |
| 0.4663 | 1.88 | 300 | 0.3434 | 0.3620 | 0.4497 | 0.7082 | nan | 0.8026 | 0.7564 | 0.6551 | 0.7392 | 0.0 | 0.0 | 0.0 | nan | 0.3118 | 0.5764 | 0.6560 | 0.0 | 0.7545 | 0.6572 | 0.6039 | 0.5921 | 0.0 | 0.0 | 0.0 | nan | 0.2819 | 0.4908 | 0.6019 |
| 0.1737 | 2.5 | 400 | 0.3055 | 0.3331 | 0.4090 | 0.6394 | nan | 0.7469 | 0.6281 | 0.5765 | 0.6122 | 0.0 | 0.0 | 0.0004 | nan | 0.2401 | 0.6135 | 0.6724 | 0.0 | 0.7075 | 0.5704 | 0.5194 | 0.5310 | 0.0 | 0.0 | 0.0003 | nan | 0.2292 | 0.5279 | 0.5789 |
| 0.1954 | 3.12 | 500 | 0.3052 | 0.2570 | 0.2980 | 0.5174 | nan | 0.6624 | 0.4973 | 0.4223 | 0.5361 | 0.0 | 0.0 | 0.0022 | nan | 0.1117 | 0.3193 | 0.4284 | 0.0 | 0.6289 | 0.4592 | 0.4113 | 0.4902 | 0.0 | 0.0 | 0.0022 | nan | 0.1107 | 0.3024 | 0.4216 |
| 0.2666 | 3.75 | 600 | 0.3177 | 0.3808 | 0.4720 | 0.7175 | nan | 0.7675 | 0.7191 | 0.8483 | 0.7341 | 0.0 | 0.0 | 0.0950 | nan | 0.3086 | 0.6065 | 0.6405 | 0.0 | 0.7200 | 0.6353 | 0.6912 | 0.6409 | 0.0 | 0.0 | 0.0845 | nan | 0.2905 | 0.5245 | 0.6024 |
| 0.2214 | 4.38 | 700 | 0.2971 | 0.3748 | 0.4463 | 0.7178 | nan | 0.8524 | 0.6207 | 0.7488 | 0.7353 | 0.0 | 0.0 | 0.025 | nan | 0.3236 | 0.5440 | 0.6130 | 0.0 | 0.7909 | 0.5707 | 0.6987 | 0.6564 | 0.0 | 0.0 | 0.0235 | nan | 0.3015 | 0.4902 | 0.5907 |
| 0.2624 | 5.0 | 800 | 0.2833 | 0.3430 | 0.4050 | 0.6546 | nan | 0.7625 | 0.6171 | 0.7072 | 0.6623 | 0.0 | 0.0 | 0.0227 | nan | 0.3003 | 0.4268 | 0.5517 | 0.0 | 0.7175 | 0.5629 | 0.6665 | 0.5980 | 0.0 | 0.0 | 0.0207 | nan | 0.2802 | 0.3970 | 0.5307 |
| 0.3578 | 5.62 | 900 | 0.2847 | 0.3329 | 0.3926 | 0.6257 | nan | 0.7276 | 0.5712 | 0.6573 | 0.6410 | 0.0016 | 0.0 | 0.0227 | nan | 0.3125 | 0.4450 | 0.5470 | 0.0 | 0.6860 | 0.5210 | 0.6234 | 0.5790 | 0.0015 | 0.0 | 0.0210 | nan | 0.2906 | 0.4122 | 0.5275 |
| 0.2736 | 6.25 | 1000 | 0.2861 | 0.3393 | 0.4010 | 0.6425 | nan | 0.7587 | 0.5808 | 0.6702 | 0.6477 | 0.0014 | 0.0 | 0.0244 | nan | 0.3087 | 0.4702 | 0.5477 | 0.0 | 0.7133 | 0.5292 | 0.6319 | 0.5844 | 0.0014 | 0.0 | 0.0225 | nan | 0.2877 | 0.4328 | 0.5295 |
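The IoU and accuracy columns above are standard semantic-segmentation metrics. A small sketch of how such values can be computed with the `evaluate` library's `mean_iou` metric is given below; the label count and the toy masks are illustrative assumptions, not the actual evaluation code.

```python
import numpy as np
import evaluate

metric = evaluate.load("mean_iou")

# Toy 2x2 predicted and reference masks; a real run would use the model's per-pixel
# argmax against the ground-truth segmentation maps for the whole evaluation set.
predictions = [np.array([[0, 1], [1, 2]])]
references = [np.array([[0, 1], [2, 2]])]

results = metric.compute(
    predictions=predictions,
    references=references,
    num_labels=12,     # assumption: unlabeled plus the 11 classes listed in the table
    ignore_index=255,  # pixels with this value are excluded from the scores
    reduce_labels=False,
)
# Classes that never appear come out as nan per-category scores, matching the nan cells above.
print(results["mean_iou"], results["per_category_iou"])
```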
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
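If exact reproduction matters, pinning these versions is the safest option; a quick check of an existing environment (assuming the packages are installed) could look like:

```python
import datasets
import tokenizers
import torch
import transformers

# Versions reported in this card.
print(transformers.__version__)  # 4.37.2
print(torch.__version__)         # 2.1.2+cu121
print(datasets.__version__)      # 2.16.1
print(tokenizers.__version__)    # 0.15.0
```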