metadata
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: hushem_40x_deit_base_rms_001_fold4
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: test
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.8095238095238095

hushem_40x_deit_base_rms_001_fold4

This model is a fine-tuned version of facebook/deit-base-patch16-224 on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.7808
  • Accuracy: 0.8095
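
For reference, a minimal inference sketch using the Transformers pipeline API is shown below. The repository id and the image path are assumptions (the id is inferred from the model name and uploader); substitute your own values.

```python
from transformers import pipeline

# Assumed Hub repository id; replace with the actual id or a local checkpoint path.
classifier = pipeline(
    "image-classification",
    model="hkivancoral/hushem_40x_deit_base_rms_001_fold4",
)

# "sample.jpg" is a placeholder for an input image.
for prediction in classifier("sample.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.4f}")
```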

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
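
The card identifies the data only by the generic imagefolder dataset type. Below is a rough sketch of how such a dataset is typically loaded with the datasets library; the directory path is a placeholder, and the one-subdirectory-per-class layout is the standard imagefolder convention rather than something this card specifies.

```python
from datasets import load_dataset

# Placeholder path; imagefolder expects one subdirectory per class,
# each containing that class's images.
dataset = load_dataset("imagefolder", data_dir="path/to/hushem_fold4")

train_ds = dataset["train"]
# A separate "test" split exists only if the directory layout provides one.
eval_ds = dataset.get("test", train_ds)
print(train_ds.features["label"].names)  # class names inferred from folder names
```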

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.001
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
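
A minimal Trainer sketch mirroring these settings is given below. It assumes the train_ds and eval_ds splits from the data-loading sketch above; the optimizer is left at the Trainer default, which matches the listed Adam betas and epsilon. The transform, collator, and label count are a plausible reconstruction, not the author's actual training script, and the accuracy metric in the results table would additionally require a compute_metrics callback, omitted here.

```python
import torch
from transformers import (
    AutoImageProcessor,
    AutoModelForImageClassification,
    Trainer,
    TrainingArguments,
)

# Assumes `train_ds` and `eval_ds` from the imagefolder loading sketch above.
processor = AutoImageProcessor.from_pretrained("facebook/deit-base-patch16-224")
labels = train_ds.features["label"].names  # class names taken from the folder layout

def transform(batch):
    # Resize and normalize PIL images into the pixel_values DeiT expects.
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

train_ds = train_ds.with_transform(transform)
eval_ds = eval_ds.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["labels"] for ex in examples]),
    }

model = AutoModelForImageClassification.from_pretrained(
    "facebook/deit-base-patch16-224",
    num_labels=len(labels),
    ignore_mismatched_sizes=True,  # swap the 1000-class ImageNet head for a new one
)

# Values below mirror the hyperparameter list in this card.
args = TrainingArguments(
    output_dir="hushem_40x_deit_base_rms_001_fold4",
    learning_rate=1e-3,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",
    remove_unused_columns=False,  # keep the "image" column for the on-the-fly transform
)

trainer = Trainer(
    model=model,
    args=args,
    data_collator=collate_fn,
    train_dataset=train_ds,
    eval_dataset=eval_ds,
)
trainer.train()
```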

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.4095 | 1.0 | 219 | 1.4091 | 0.2381 |
| 1.3846 | 2.0 | 438 | 1.3865 | 0.2381 |
| 1.2802 | 3.0 | 657 | 1.3372 | 0.2381 |
| 1.1537 | 4.0 | 876 | 1.4032 | 0.2619 |
| 1.177 | 5.0 | 1095 | 1.3147 | 0.4286 |
| 1.1719 | 6.0 | 1314 | 0.9703 | 0.6667 |
| 1.0403 | 7.0 | 1533 | 1.2271 | 0.4762 |
| 0.9188 | 8.0 | 1752 | 0.9431 | 0.5714 |
| 0.8565 | 9.0 | 1971 | 1.0056 | 0.5952 |
| 0.8519 | 10.0 | 2190 | 0.7845 | 0.6429 |
| 0.7519 | 11.0 | 2409 | 0.7049 | 0.6905 |
| 0.8514 | 12.0 | 2628 | 0.6628 | 0.7857 |
| 0.8808 | 13.0 | 2847 | 0.8006 | 0.7381 |
| 0.796 | 14.0 | 3066 | 0.7332 | 0.6905 |
| 0.7213 | 15.0 | 3285 | 0.7486 | 0.6905 |
| 0.663 | 16.0 | 3504 | 0.4390 | 0.7857 |
| 0.5845 | 17.0 | 3723 | 0.9856 | 0.5952 |
| 0.5228 | 18.0 | 3942 | 0.6588 | 0.7381 |
| 0.5581 | 19.0 | 4161 | 0.6093 | 0.8571 |
| 0.518 | 20.0 | 4380 | 0.5316 | 0.6905 |
| 0.5058 | 21.0 | 4599 | 0.7052 | 0.7381 |
| 0.453 | 22.0 | 4818 | 0.6155 | 0.7143 |
| 0.4128 | 23.0 | 5037 | 0.7141 | 0.7381 |
| 0.44 | 24.0 | 5256 | 0.6896 | 0.7619 |
| 0.3933 | 25.0 | 5475 | 0.6353 | 0.7619 |
| 0.3648 | 26.0 | 5694 | 0.7225 | 0.8095 |
| 0.2677 | 27.0 | 5913 | 0.6987 | 0.8810 |
| 0.3023 | 28.0 | 6132 | 0.8143 | 0.8333 |
| 0.332 | 29.0 | 6351 | 0.8300 | 0.8333 |
| 0.2772 | 30.0 | 6570 | 0.6339 | 0.7619 |
| 0.1878 | 31.0 | 6789 | 0.6694 | 0.8333 |
| 0.2152 | 32.0 | 7008 | 0.7930 | 0.7619 |
| 0.2378 | 33.0 | 7227 | 0.7856 | 0.7619 |
| 0.1874 | 34.0 | 7446 | 0.6614 | 0.8571 |
| 0.2043 | 35.0 | 7665 | 0.7218 | 0.8095 |
| 0.122 | 36.0 | 7884 | 1.0415 | 0.8333 |
| 0.1837 | 37.0 | 8103 | 1.2016 | 0.7381 |
| 0.1148 | 38.0 | 8322 | 0.8289 | 0.7857 |
| 0.0825 | 39.0 | 8541 | 1.4711 | 0.7381 |
| 0.0828 | 40.0 | 8760 | 0.9405 | 0.8810 |
| 0.0736 | 41.0 | 8979 | 1.4104 | 0.8810 |
| 0.0864 | 42.0 | 9198 | 1.1297 | 0.8333 |
| 0.0176 | 43.0 | 9417 | 1.2293 | 0.7857 |
| 0.0392 | 44.0 | 9636 | 1.3878 | 0.8095 |
| 0.0272 | 45.0 | 9855 | 1.2021 | 0.8571 |
| 0.0125 | 46.0 | 10074 | 2.3102 | 0.7619 |
| 0.0149 | 47.0 | 10293 | 1.8621 | 0.7857 |
| 0.0032 | 48.0 | 10512 | 1.7899 | 0.8333 |
| 0.0016 | 49.0 | 10731 | 1.9528 | 0.8095 |
| 0.0001 | 50.0 | 10950 | 1.7808 | 0.8095 |

Framework versions

  • Transformers 4.32.1
  • PyTorch 2.1.0+cu121
  • Datasets 2.12.0
  • Tokenizers 0.13.2