pvt-small-224-ConcreteClassifier-PVT

This model is a fine-tuned version of Xrenya/pvt-small-224 on the imagefolder dataset. It achieves the following results on the evaluation set (a minimal inference sketch follows the metrics):

  • Loss: 1.9419
  • Accuracy: 0.1767
  • F1: 0.0429
  • Precision: 0.0252
  • Recall: 0.1429

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a hedged TrainingArguments sketch follows the list):

  • learning_rate: 0.001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: constant
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 30
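
For reference, these settings map onto a Hugging Face TrainingArguments/Trainer configuration roughly as sketched below. The output directory, the per-epoch evaluation strategy, and the omitted dataset/metric wiring are assumptions, not taken from the original training script.

```python
from transformers import AutoModelForImageClassification, Trainer, TrainingArguments

# Base checkpoint named in the card; the number of concrete classes is not
# stated here, so num_labels is left to the dataset-specific setup.
model = AutoModelForImageClassification.from_pretrained("Xrenya/pvt-small-224")

training_args = TrainingArguments(
    output_dir="pvt-small-224-ConcreteClassifier-PVT",  # assumed output path
    learning_rate=1e-3,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 2 * 4 = 8
    lr_scheduler_type="constant",
    warmup_ratio=0.1,                # listed in the card; unused by a constant schedule
    num_train_epochs=30,
    evaluation_strategy="epoch",     # consistent with the per-epoch results table below
)

trainer = Trainer(
    model=model,
    args=training_args,
    # train_dataset=..., eval_dataset=..., compute_metrics=...  (omitted here)
)
# trainer.train()
```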

Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.981         | 1.0   | 1927  | 1.9584          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.951         | 2.0   | 3854  | 1.9447          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9799        | 3.0   | 5781  | 1.9498          | 0.1362   | 0.0342 | 0.0195    | 0.1429 |
| 1.9458        | 4.0   | 7708  | 1.9412          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9444        | 5.0   | 9635  | 1.9408          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9441        | 6.0   | 11562 | 1.9427          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9379        | 7.0   | 13489 | 1.9433          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9529        | 8.0   | 15416 | 1.9432          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9305        | 9.0   | 17343 | 1.9463          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.94          | 10.0  | 19270 | 1.9412          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.945         | 11.0  | 21197 | 1.9432          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9294        | 12.0  | 23124 | 1.9444          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9339        | 13.0  | 25051 | 1.9415          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.934         | 14.0  | 26978 | 1.9408          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9275        | 15.0  | 28905 | 1.9423          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9539        | 16.0  | 30832 | 1.9440          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9584        | 17.0  | 32759 | 1.9412          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9409        | 18.0  | 34686 | 1.9405          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9522        | 19.0  | 36613 | 1.9405          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9296        | 20.0  | 38540 | 1.9410          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9272        | 21.0  | 40467 | 1.9412          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9399        | 22.0  | 42394 | 1.9413          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9258        | 23.0  | 44321 | 1.9413          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9481        | 24.0  | 46248 | 1.9422          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.948         | 25.0  | 48175 | 1.9423          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.918         | 26.0  | 50102 | 1.9416          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.938         | 27.0  | 52029 | 1.9414          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9207        | 28.0  | 53956 | 1.9410          | 0.1556   | 0.0385 | 0.0222    | 0.1429 |
| 1.9472        | 29.0  | 55883 | 1.9404          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
| 1.9355        | 30.0  | 57810 | 1.9419          | 0.1767   | 0.0429 | 0.0252    | 0.1429 |
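
The per-epoch Accuracy, F1, Precision, and Recall above could be produced by a Trainer compute_metrics callback built on the evaluate library. The sketch below is an assumption about how such a callback might look; in particular, the macro averaging for the multiclass metrics is not stated in the card.

```python
import numpy as np
import evaluate

# Metric objects from the evaluate library (names match the columns above).
accuracy = evaluate.load("accuracy")
f1 = evaluate.load("f1")
precision = evaluate.load("precision")
recall = evaluate.load("recall")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair handed over by the Trainer.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # "macro" averaging is an assumption for the multiclass metrics.
    return {
        "accuracy": accuracy.compute(predictions=preds, references=labels)["accuracy"],
        "f1": f1.compute(predictions=preds, references=labels, average="macro")["f1"],
        "precision": precision.compute(predictions=preds, references=labels, average="macro")["precision"],
        "recall": recall.compute(predictions=preds, references=labels, average="macro")["recall"],
    }
```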

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.1.0
  • Datasets 2.17.1
  • Tokenizers 0.15.2