---
library_name: transformers
license: apache-2.0
base_model: timm/levit_128.fb_dist_in1k
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: levit_128.fb_dist_in1k-finetuned-stroke-binary
  results: []
datasets:
- BTX24/tekno21-brain-stroke-dataset-binary
pipeline_tag: image-classification
---

# levit_128.fb_dist_in1k-finetuned-stroke-binary

This model is a fine-tuned version of [timm/levit_128.fb_dist_in1k](https://huggingface.co/timm/levit_128.fb_dist_in1k) on the [BTX24/tekno21-brain-stroke-dataset-binary](https://huggingface.co/datasets/BTX24/tekno21-brain-stroke-dataset-binary) dataset for binary stroke detection. An inference example is given under "How to use" at the end of this card.

It achieves the following results on the evaluation set:
- Accuracy: 0.8598
- F1: 0.8577
- Precision: 0.8602
- Recall: 0.8598

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` appears at the end of this card):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 36
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.7002        | 0.6202 | 100  | nan             | 0.5690   | 0.5387 | 0.5349    | 0.5690 |
| 0.681         | 1.2357 | 200  | nan             | 0.5834   | 0.5331 | 0.5372    | 0.5834 |
| 0.6874        | 1.8558 | 300  | nan             | 0.6002   | 0.5596 | 0.5665    | 0.6002 |
| 0.6774        | 2.4713 | 400  | nan             | 0.6124   | 0.5811 | 0.5867    | 0.6124 |
| 0.6533        | 3.0868 | 500  | nan             | 0.6852   | 0.6694 | 0.6767    | 0.6852 |
| 0.6368        | 3.7070 | 600  | nan             | 0.7205   | 0.7153 | 0.7153    | 0.7205 |
| 0.6196        | 4.3225 | 700  | nan             | 0.7603   | 0.7471 | 0.7650    | 0.7603 |
| 0.5663        | 4.9426 | 800  | nan             | 0.7883   | 0.7843 | 0.7864    | 0.7883 |
| 0.5196        | 5.5581 | 900  | nan             | 0.8078   | 0.7972 | 0.8206    | 0.8078 |
| 0.4704        | 6.1736 | 1000 | nan             | 0.8363   | 0.8317 | 0.8396    | 0.8363 |
| 0.4715        | 6.7938 | 1100 | nan             | 0.8349   | 0.8292 | 0.8409    | 0.8349 |
| 0.452         | 7.4093 | 1200 | nan             | 0.8503   | 0.8479 | 0.8505    | 0.8503 |
| 0.4538        | 8.0248 | 1300 | nan             | 0.8598   | 0.8577 | 0.8602    | 0.8598 |

### Framework versions

- Transformers 4.48.3
- PyTorch 2.6.0+cu124
- Datasets 3.4.0
- Tokenizers 0.21.0
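## How to use

The card's `pipeline_tag` is `image-classification`, so the checkpoint should load through the standard `transformers` pipeline. The sketch below is a minimal example, not the author's published snippet: the Hub repo id is an assumption inferred from the model name and the dataset owner's namespace, and the input path is a placeholder.

```python
from transformers import pipeline

# Assumed repo id (model name under the dataset owner's namespace); replace
# with the actual Hub path or a local checkpoint directory.
MODEL_ID = "BTX24/levit_128.fb_dist_in1k-finetuned-stroke-binary"

# Loads the fine-tuned LeViT weights together with the image processor
# saved in the checkpoint.
classifier = pipeline("image-classification", model=MODEL_ID)

# Classify a single brain CT slice; returns a list of {label, score} dicts,
# one per class, sorted by score.
predictions = classifier("path/to/brain_ct_slice.png")
print(predictions)
```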
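## Configuration sketches

The hyperparameter list above maps almost one-to-one onto `TrainingArguments`. The following is a hedged reconstruction, not the author's training script: `output_dir` is a placeholder, and the steps-based evaluation cadence is inferred from the 100-step spacing in the results table.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="levit_128.fb_dist_in1k-finetuned-stroke-binary",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 8 * 4 = 32
    seed=42,
    optim="adamw_torch",            # AdamW with default betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=36,
    fp16=True,                      # "Native AMP" mixed precision
    eval_strategy="steps",          # inferred: metrics reported every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```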
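The card does not state how F1, precision, and recall were averaged. Recall coinciding with accuracy in every row of the results table is the signature of weighted averaging, so the metric function below assumes `average="weighted"`; it follows the `compute_metrics` contract expected by `transformers.Trainer`.

```python
import numpy as np
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def compute_metrics(eval_pred):
    """Return the four metrics reported on this card.

    Weighted averaging is an assumption; with it, recall reduces to
    accuracy, which matches the results table.
    """
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    precision, recall, f1, _ = precision_recall_fscore_support(
        labels, preds, average="weighted", zero_division=0
    )
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1,
        "precision": precision,
        "recall": recall,
    }
```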