---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: prove_melanomaprova_melanoma
  results:
  - task:
      name: Image Classification
      type: image-classification
    dataset:
      name: imagefolder
      type: imagefolder
      config: default
      split: train
      args: default
    metrics:
    - name: Accuracy
      type: accuracy
      value: 0.8466666666666667
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# prove_melanomaprova_melanoma

This model is a fine-tuned version of [UnipaPolitoUnimore/vit-large-patch32-384-melanoma](https://huggingface.co/UnipaPolitoUnimore/vit-large-patch32-384-melanoma) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5191
- Accuracy: 0.8467
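
Since the checkpoint is a ViT image classifier, it can typically be driven through the `image-classification` pipeline. The sketch below is illustrative only: the model path and the image filename are placeholders, not values taken from this card.

```python
from transformers import pipeline

# Point this at the local output directory or Hub repo id of this checkpoint (placeholder path).
classifier = pipeline(
    "image-classification",
    model="path/to/prove_melanomaprova_melanoma",
)

# Classify a single skin-lesion image; the filename is a placeholder.
predictions = classifier("lesion.jpg")
print(predictions)  # list of {"label": ..., "score": ...} dicts, highest score first
```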

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
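
The card metadata only records that an `imagefolder` dataset was used. For reference, such datasets are usually loaded from a class-per-directory layout as sketched below; the directory path and split handling are assumptions, not details from this card.

```python
from datasets import load_dataset

# Images arranged as <data_dir>/<label_name>/<image>.jpg become a labeled dataset
# (the data_dir path is a placeholder).
dataset = load_dataset("imagefolder", data_dir="data/melanoma")

print(dataset)                    # DatasetDict, typically with a "train" split
print(dataset["train"].features)  # includes an "image" column and a class "label" column
```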

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch reproducing them follows this list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 40
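
As a rough guide, these settings map onto `transformers.TrainingArguments` as shown below. The `output_dir` and the per-epoch evaluation/save strategies are assumptions; everything else mirrors the list above.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="prove_melanomaprova_melanoma",  # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=4,   # 16 x 4 = 64 total train batch size
    num_train_epochs=40,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption, consistent with the per-epoch rows below
    save_strategy="epoch",           # assumption
)
```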

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8964 | 0.99 | 31 | 1.0906 | 0.52 |
| 0.6588 | 1.98 | 62 | 1.0817 | 0.52 |
| 0.6774 | 2.98 | 93 | 0.9474 | 0.52 |
| 0.7785 | 4.0 | 125 | 0.8185 | 0.6267 |
| 0.6732 | 4.99 | 156 | 0.7531 | 0.7267 |
| 0.5438 | 5.98 | 187 | 0.6972 | 0.7333 |
| 0.5497 | 6.98 | 218 | 0.6714 | 0.7533 |
| 0.4161 | 8.0 | 250 | 0.6440 | 0.7667 |
| 0.4968 | 8.99 | 281 | 0.6438 | 0.78 |
| 0.5861 | 9.98 | 312 | 0.6266 | 0.7933 |
| 0.5182 | 10.98 | 343 | 0.6158 | 0.7867 |
| 0.6797 | 12.0 | 375 | 0.6237 | 0.8133 |
| 0.622 | 12.99 | 406 | 0.5858 | 0.8333 |
| 0.6419 | 13.98 | 437 | 0.5735 | 0.8267 |
| 0.3727 | 14.98 | 468 | 0.5641 | 0.8133 |
| 0.3822 | 16.0 | 500 | 0.5520 | 0.8267 |
| 0.4766 | 16.99 | 531 | 0.5642 | 0.8267 |
| 0.4791 | 17.98 | 562 | 0.5309 | 0.8267 |
| 0.3918 | 18.98 | 593 | 0.5749 | 0.8267 |
| 0.3847 | 20.0 | 625 | 0.5317 | 0.84 |
| 0.3722 | 20.99 | 656 | 0.5719 | 0.8267 |
| 0.5402 | 21.98 | 687 | 0.5316 | 0.84 |
| 0.4358 | 22.98 | 718 | 0.5292 | 0.8333 |
| 0.2957 | 24.0 | 750 | 0.5172 | 0.8467 |
| 0.4801 | 24.99 | 781 | 0.5376 | 0.84 |
| 0.3656 | 25.98 | 812 | 0.5118 | 0.8333 |
| 0.3956 | 26.98 | 843 | 0.5081 | 0.8533 |
| 0.3343 | 28.0 | 875 | 0.5198 | 0.8533 |
| 0.3839 | 28.99 | 906 | 0.5269 | 0.8467 |
| 0.4286 | 29.98 | 937 | 0.5163 | 0.8467 |
| 0.2736 | 30.98 | 968 | 0.5359 | 0.8333 |
| 0.3465 | 32.0 | 1000 | 0.5277 | 0.84 |
| 0.4244 | 32.99 | 1031 | 0.5385 | 0.8333 |
| 0.308 | 33.98 | 1062 | 0.5141 | 0.8533 |
| 0.3494 | 34.98 | 1093 | 0.5129 | 0.8533 |
| 0.3851 | 36.0 | 1125 | 0.5199 | 0.84 |
| 0.3949 | 36.99 | 1156 | 0.5250 | 0.84 |
| 0.3235 | 37.98 | 1187 | 0.5142 | 0.8533 |
| 0.3076 | 38.98 | 1218 | 0.5166 | 0.8533 |
| 0.3679 | 39.68 | 1240 | 0.5191 | 0.8467 |

### Framework versions

- Transformers 4.28.1
- PyTorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3