Update README.md
README.md
CHANGED
@@ -19,13 +19,6 @@ SegFormer model with a MiT-B2 backbone fine-tuned on Coralscapes at resolution 1
 
 ### Model Description
 
-Training is conducted following the original SegFormer [implementation](https://proceedings.neurips.cc/paper_files/paper/2021/file/64f1f27bf1b4ec22924fd0acb550c235-Paper.pdf), using a batch size of 8 for 265 epochs,
-using the AdamW optimizer with an initial learning rate of 6e-5, a weight decay of 1e-2, and a polynomial learning rate scheduler with a power of 1.
-During training, images are randomly scaled by a factor between 1 and 2, flipped horizontally with a probability of 0.5, and randomly cropped to 1024×1024 pixels.
-Input images are normalized using the ImageNet mean and standard deviation. For evaluation, a non-overlapping sliding-window strategy is employed,
-using a window size of 1024×1024.
-
-
 - **Model type:** SegFormer
 - **License:** [More Information Needed]
 - **Finetuned from model:** [SegFormer (b2-sized) encoder pre-trained-only (`nvidia/mit-b2`)](https://huggingface.co/nvidia/mit-b2)
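The paragraph removed above quotes a concrete training recipe (batch size 8 for 265 epochs, AdamW with lr 6e-5 and weight decay 1e-2, polynomial decay with power 1, starting from the `nvidia/mit-b2` encoder). The snippet below is a minimal sketch of that configuration using standard PyTorch and 🤗 Transformers APIs; `num_labels` and `steps_per_epoch` are placeholders, the data augmentations are not shown, and this is not the authors' actual training script.

```python
import torch
from transformers import SegformerForSemanticSegmentation

# Start from the pre-trained MiT-B2 encoder named in the model card;
# the segmentation head is randomly initialized.
model = SegformerForSemanticSegmentation.from_pretrained(
    "nvidia/mit-b2",
    num_labels=40,  # placeholder: set to the actual number of Coralscapes classes
)

# AdamW with the hyperparameters quoted in the removed paragraph.
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5, weight_decay=1e-2)

# Polynomial decay with power 1 (i.e. linear) over the full schedule:
# batch size 8 for 265 epochs; steps_per_epoch depends on the dataset size.
steps_per_epoch = 1000  # placeholder
scheduler = torch.optim.lr_scheduler.PolynomialLR(
    optimizer, total_iters=265 * steps_per_epoch, power=1.0
)
```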
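The removed text also describes evaluation with a non-overlapping 1024×1024 sliding window. Below is a minimal sketch of that strategy, assuming a `SegformerForSemanticSegmentation` model in eval mode and a `(3, H, W)` image tensor already normalized with the ImageNet mean and standard deviation; padding of edge tiles is omitted for brevity.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sliding_window_logits(model, image, window=1024):
    """Run non-overlapping 1024x1024 tiles through the model and stitch the logits."""
    _, h, w = image.shape
    logits = None
    for top in range(0, h, window):
        for left in range(0, w, window):
            tile = image[:, top:top + window, left:left + window].unsqueeze(0)
            out = model(pixel_values=tile).logits  # (1, num_labels, h/4, w/4)
            # SegFormer logits come out at 1/4 resolution; upsample to the tile size.
            out = F.interpolate(out, size=tile.shape[-2:], mode="bilinear", align_corners=False)
            if logits is None:
                logits = torch.zeros(out.shape[1], h, w)
            logits[:, top:top + tile.shape[-2], left:left + tile.shape[-1]] = out[0]
    return logits
```

Calling `sliding_window_logits(model.eval(), image)` and taking `argmax(dim=0)` over the returned tensor yields a full-resolution class map for an image larger than the training crop.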