# vit-base-food101
This model is a fine-tuned version of google/vit-base-patch16-224 on the ethz/food101 dataset. It achieves the following results on the evaluation set:
- Loss: 0.7395
- Accuracy: 0.8017
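For reference, a minimal inference sketch with the transformers `pipeline` API, assuming the checkpoint is published under the repo id `alimoh02/vit-base-food101`; the placeholder image below is illustrative only, so the predicted labels are not meaningful:

```python
from PIL import Image
from transformers import pipeline

# Load the fine-tuned checkpoint as an image-classification pipeline.
# Repo id assumed from this card's metadata; any RGB image works as input.
classifier = pipeline("image-classification", model="alimoh02/vit-base-food101")

# Placeholder image (a real food photo would give meaningful predictions).
image = Image.new("RGB", (224, 224), color=(200, 160, 90))

# Top-3 Food-101 classes with confidence scores.
preds = classifier(image, top_k=3)
for p in preds:
    print(f"{p['label']}: {p['score']:.3f}")
```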
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch (AdamW) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
2.5327 | 0.1320 | 500 | 2.3914 | 0.5946 |
1.5713 | 0.2640 | 1000 | 1.5558 | 0.6978 |
1.2869 | 0.3960 | 1500 | 1.2575 | 0.7271 |
1.1479 | 0.5280 | 2000 | 1.1093 | 0.7476 |
1.0838 | 0.6600 | 2500 | 1.0286 | 0.7571 |
0.9623 | 0.7920 | 3000 | 0.9798 | 0.7641 |
0.9855 | 0.9240 | 3500 | 0.9395 | 0.7670 |
0.9263 | 1.0560 | 4000 | 0.9113 | 0.7723 |
0.8691 | 1.1880 | 4500 | 0.8844 | 0.7782 |
0.8025 | 1.3200 | 5000 | 0.8694 | 0.7768 |
0.7783 | 1.4520 | 5500 | 0.8574 | 0.7820 |
0.7774 | 1.5839 | 6000 | 0.8457 | 0.7799 |
0.7716 | 1.7159 | 6500 | 0.8309 | 0.7871 |
0.8445 | 1.8479 | 7000 | 0.8230 | 0.7868 |
0.8214 | 1.9799 | 7500 | 0.8107 | 0.7902 |
0.7226 | 2.1119 | 8000 | 0.8077 | 0.7897 |
0.7712 | 2.2439 | 8500 | 0.8015 | 0.7914 |
0.7306 | 2.3759 | 9000 | 0.7970 | 0.7889 |
0.6829 | 2.5079 | 9500 | 0.7919 | 0.7912 |
0.7593 | 2.6399 | 10000 | 0.7883 | 0.7901 |
0.6856 | 2.7719 | 10500 | 0.7802 | 0.7943 |
0.7156 | 2.9039 | 11000 | 0.7765 | 0.7976 |
0.6688 | 3.0359 | 11500 | 0.7735 | 0.7978 |
0.6245 | 3.1679 | 12000 | 0.7711 | 0.7972 |
0.668 | 3.2999 | 12500 | 0.7679 | 0.7989 |
0.6732 | 3.4319 | 13000 | 0.7657 | 0.7985 |
0.686 | 3.5639 | 13500 | 0.7645 | 0.7982 |
0.7121 | 3.6959 | 14000 | 0.7612 | 0.7984 |
0.6513 | 3.8279 | 14500 | 0.7599 | 0.7993 |
0.6963 | 3.9599 | 15000 | 0.7585 | 0.7993 |
0.7219 | 4.0919 | 15500 | 0.7554 | 0.7999 |
0.6253 | 4.2239 | 16000 | 0.7526 | 0.8016 |
0.6278 | 4.3559 | 16500 | 0.7504 | 0.8026 |
0.6605 | 4.4879 | 17000 | 0.7502 | 0.8028 |
0.6447 | 4.6199 | 17500 | 0.7493 | 0.8028 |
0.6469 | 4.7518 | 18000 | 0.7463 | 0.8040 |
0.6745 | 4.8838 | 18500 | 0.7462 | 0.8028 |
0.5882 | 5.0158 | 19000 | 0.7463 | 0.7995 |
0.6241 | 5.1478 | 19500 | 0.7428 | 0.8046 |
0.62 | 5.2798 | 20000 | 0.7439 | 0.8013 |
0.6435 | 5.4118 | 20500 | 0.7422 | 0.8018 |
0.6273 | 5.5438 | 21000 | 0.7418 | 0.8030 |
0.623 | 5.6758 | 21500 | 0.7415 | 0.8050 |
0.6181 | 5.8078 | 22000 | 0.7385 | 0.8055 |
0.6382 | 5.9398 | 22500 | 0.7388 | 0.8071 |
0.587 | 6.0718 | 23000 | 0.7379 | 0.8058 |
0.603 | 6.2038 | 23500 | 0.7374 | 0.8038 |
0.6334 | 6.3358 | 24000 | 0.7366 | 0.8054 |
0.613 | 6.4678 | 24500 | 0.7364 | 0.8048 |
0.5917 | 6.5998 | 25000 | 0.7355 | 0.8051 |
0.6167 | 6.7318 | 25500 | 0.7352 | 0.8059 |
0.6121 | 6.8638 | 26000 | 0.7347 | 0.8066 |
0.6133 | 6.9958 | 26500 | 0.7342 | 0.8059 |
0.6304 | 7.1278 | 27000 | 0.7338 | 0.8057 |
0.6041 | 7.2598 | 27500 | 0.7342 | 0.8063 |
0.6333 | 7.3918 | 28000 | 0.7334 | 0.8059 |
0.6234 | 7.5238 | 28500 | 0.7335 | 0.8061 |
0.5961 | 7.6558 | 29000 | 0.7334 | 0.8073 |
0.61 | 7.7878 | 29500 | 0.7333 | 0.8070 |
0.6586 | 7.9197 | 30000 | 0.7331 | 0.8070 |
### Framework versions
- Transformers 4.50.0
- PyTorch 2.6.0
- Datasets 3.4.1
- Tokenizers 0.21.1