# dinov2-base-imagenet1k-1-layer-head-finetuned-100-galaxy_mnist
This model is a fine-tuned version of facebook/dinov2-base-imagenet1k-1-layer on the matthieulel/galaxy_mnist dataset. It achieves the following results on the evaluation set:
- Loss: 0.4478
- Accuracy: 0.8335
- Precision: 0.8342
- Recall: 0.8335
- F1: 0.8337
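
The card does not include a usage snippet, so here is a minimal inference sketch. It assumes the checkpoint and its image processor config are published under the repo id shown in the title and that the model was saved with an image-classification head; adjust the repo id and input image to your setup.

```python
from transformers import AutoImageProcessor, AutoModelForImageClassification
from PIL import Image
import torch

# Assumed repo id, taken from the model card title.
repo_id = "matthieulel/dinov2-base-imagenet1k-1-layer-head-finetuned-100-galaxy_mnist"

processor = AutoImageProcessor.from_pretrained(repo_id)
model = AutoModelForImageClassification.from_pretrained(repo_id)

# Any 3-channel galaxy cutout; "galaxy.png" is a placeholder path.
image = Image.open("galaxy.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = model.config.id2label[logits.argmax(-1).item()]
print(predicted_class)
```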
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
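
While the card leaves this section blank, the dataset is named above. A small sketch of loading it is shown below; it assumes the dataset is hosted as `matthieulel/galaxy_mnist` on the Hub with standard splits and an image/label schema.

```python
from datasets import load_dataset

# Assumed dataset id, taken from the model description above.
dataset = load_dataset("matthieulel/galaxy_mnist")

print(dataset)              # inspect the available splits and features
print(dataset["train"][0])  # one example: a galaxy image plus its class label
```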
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 30
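
The training script is not included in the card; the sketch below shows how these hyperparameters would map onto `transformers.TrainingArguments`. The `output_dir` and the per-epoch evaluation setting are illustrative assumptions, not taken from the original run.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="dinov2-base-imagenet1k-1-layer-head-finetuned-100-galaxy_mnist",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=4,   # 64 * 4 = 256 effective train batch size
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=30,
    evaluation_strategy="epoch",     # evaluate once per epoch, matching the results table
)
```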
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5876        | 0.99  | 31   | 1.5424          | 0.257    | 0.2412    | 0.257  | 0.2234 |
| 1.2929        | 1.98  | 62   | 1.2250          | 0.4485   | 0.4644    | 0.4485 | 0.4223 |
| 0.9596        | 2.98  | 93   | 0.9329          | 0.677    | 0.6826    | 0.677  | 0.6683 |
| 0.7585        | 4.0   | 125  | 0.7491          | 0.759    | 0.7590    | 0.759  | 0.7590 |
| 0.6636        | 4.99  | 156  | 0.6671          | 0.78     | 0.7816    | 0.78   | 0.7797 |
| 0.621         | 5.98  | 187  | 0.6224          | 0.7895   | 0.7923    | 0.7895 | 0.7889 |
| 0.6004        | 6.98  | 218  | 0.5860          | 0.7895   | 0.7916    | 0.7895 | 0.7901 |
| 0.5454        | 8.0   | 250  | 0.5620          | 0.797    | 0.8009    | 0.797  | 0.7969 |
| 0.5357        | 8.99  | 281  | 0.5372          | 0.804    | 0.8045    | 0.804  | 0.8039 |
| 0.5137        | 9.98  | 312  | 0.5223          | 0.805    | 0.8061    | 0.805  | 0.8050 |
| 0.4968        | 10.98 | 343  | 0.5123          | 0.812    | 0.8125    | 0.812  | 0.8122 |
| 0.5295        | 12.0  | 375  | 0.5009          | 0.8165   | 0.8182    | 0.8165 | 0.8169 |
| 0.4882        | 12.99 | 406  | 0.4921          | 0.8185   | 0.8197    | 0.8185 | 0.8178 |
| 0.4839        | 13.98 | 437  | 0.4868          | 0.817    | 0.8184    | 0.817  | 0.8175 |
| 0.4857        | 14.98 | 468  | 0.4819          | 0.818    | 0.8207    | 0.818  | 0.8182 |
| 0.4692        | 16.0  | 500  | 0.4781          | 0.8205   | 0.8233    | 0.8205 | 0.8210 |
| 0.4559        | 16.99 | 531  | 0.4689          | 0.8265   | 0.8276    | 0.8265 | 0.8265 |
| 0.4689        | 17.98 | 562  | 0.4675          | 0.825    | 0.8267    | 0.825  | 0.8251 |
| 0.4695        | 18.98 | 593  | 0.4666          | 0.82     | 0.8229    | 0.82   | 0.8204 |
| 0.4772        | 20.0  | 625  | 0.4631          | 0.821    | 0.8238    | 0.821  | 0.8214 |
| 0.4757        | 20.99 | 656  | 0.4571          | 0.8315   | 0.8323    | 0.8315 | 0.8318 |
| 0.4633        | 21.98 | 687  | 0.4537          | 0.832    | 0.8324    | 0.832  | 0.8320 |
| 0.4407        | 22.98 | 718  | 0.4547          | 0.826    | 0.8285    | 0.826  | 0.8261 |
| 0.4525        | 24.0  | 750  | 0.4508          | 0.831    | 0.8319    | 0.831  | 0.8313 |
| 0.4556        | 24.99 | 781  | 0.4494          | 0.8305   | 0.8317    | 0.8305 | 0.8307 |
| 0.4468        | 25.98 | 812  | 0.4478          | 0.8335   | 0.8342    | 0.8335 | 0.8337 |
| 0.4579        | 26.98 | 843  | 0.4481          | 0.8325   | 0.8336    | 0.8325 | 0.8328 |
| 0.4749        | 28.0  | 875  | 0.4472          | 0.832    | 0.8331    | 0.832  | 0.8323 |
| 0.4427        | 28.99 | 906  | 0.4472          | 0.8325   | 0.8337    | 0.8325 | 0.8328 |
| 0.4652        | 29.76 | 930  | 0.4468          | 0.8325   | 0.8336    | 0.8325 | 0.8328 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1