# deberta-semeval25_noHINDI08_fold1
This model is a fine-tuned version of microsoft/deberta-v3-base on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how these averaged metrics can be computed follows the list):
- Loss: 8.6334
- Precision Samples: 0.1425
- Recall Samples: 0.6027
- F1 Samples: 0.2138
- Precision Macro: 0.7631
- Recall Macro: 0.3912
- F1 Macro: 0.2407
- Precision Micro: 0.1417
- Recall Micro: 0.5017
- F1 Micro: 0.2210
- Precision Weighted: 0.4982
- Recall Weighted: 0.5017
- F1 Weighted: 0.1534
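
The samples, macro, micro, and weighted averages above suggest a multi-label classification setup. As a minimal, hypothetical sketch (not the card author's evaluation script), metrics with these averaging modes can be computed with scikit-learn on binary indicator arrays:

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical binary indicator arrays: rows = examples, columns = labels.
y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])
y_pred = np.array([[1, 0, 0], [0, 1, 1], [1, 0, 0]])

for avg in ["samples", "macro", "micro", "weighted"]:
    p = precision_score(y_true, y_pred, average=avg, zero_division=0)
    r = recall_score(y_true, y_pred, average=avg, zero_division=0)
    f = f1_score(y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg}: precision={p:.4f} recall={r:.4f} f1={f:.4f}")
```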
## Model description
More information needed
## Intended uses & limitations
More information needed
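
A minimal loading sketch, assuming the checkpoint exposes a multi-label sequence-classification head (sigmoid over the logits); the 0.5 threshold and the example text are placeholders:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "g-assismoraes/deberta-semeval25_noHINDI08_fold1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

inputs = tokenizer("Example input sentence.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Assuming a multi-label head: sigmoid + threshold; adjust the cut-off as needed.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [model.config.id2label[i] for i, p in enumerate(probs.tolist()) if p > 0.5]
print(predicted)
```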
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
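
The settings above map onto `transformers.TrainingArguments` roughly as follows. This is a hedged reconstruction, not the author's script; dataset loading, tokenization, the model head, and the `compute_metrics` function are not documented in this card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the listed hyperparameters (Transformers 4.46).
training_args = TrainingArguments(
    output_dir="deberta-semeval25_noHINDI08_fold1",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",  # the results table reports one evaluation per epoch
)
# Trainer(model=..., args=training_args, train_dataset=..., eval_dataset=..., compute_metrics=...)
```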
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision Samples | Recall Samples | F1 Samples | Precision Macro | Recall Macro | F1 Macro | Precision Micro | Recall Micro | F1 Micro | Precision Weighted | Recall Weighted | F1 Weighted |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 10.2396 | 1.0 | 16 | 10.0868 | 0.2276 | 0.2620 | 0.2274 | 0.9818 | 0.2040 | 0.1893 | 0.1991 | 0.1488 | 0.1703 | 0.8781 | 0.1488 | 0.0504 |
| 9.7888 | 2.0 | 32 | 9.7800 | 0.1572 | 0.3097 | 0.1939 | 0.9713 | 0.2159 | 0.1910 | 0.1572 | 0.2007 | 0.1763 | 0.8323 | 0.2007 | 0.0564 |
| 9.8301 | 3.0 | 48 | 9.5462 | 0.1417 | 0.4021 | 0.1913 | 0.9185 | 0.2486 | 0.1981 | 0.1374 | 0.2734 | 0.1829 | 0.7274 | 0.2734 | 0.0726 |
| 8.8954 | 4.0 | 64 | 9.3043 | 0.1299 | 0.4700 | 0.1906 | 0.8807 | 0.2806 | 0.2071 | 0.1314 | 0.3356 | 0.1889 | 0.6371 | 0.3356 | 0.0915 |
| 8.2962 | 5.0 | 80 | 9.0484 | 0.1491 | 0.5053 | 0.1960 | 0.8385 | 0.2988 | 0.2153 | 0.1425 | 0.3772 | 0.2068 | 0.5943 | 0.3772 | 0.1132 |
| 9.058 | 6.0 | 96 | 8.8965 | 0.1614 | 0.5379 | 0.2029 | 0.8023 | 0.3286 | 0.2287 | 0.1379 | 0.4152 | 0.2071 | 0.5466 | 0.4152 | 0.1299 |
| 8.838 | 7.0 | 112 | 8.7711 | 0.1740 | 0.5536 | 0.1997 | 0.7904 | 0.3338 | 0.2278 | 0.1366 | 0.4291 | 0.2072 | 0.5285 | 0.4291 | 0.1370 |
| 8.6318 | 8.0 | 128 | 8.6842 | 0.1418 | 0.5808 | 0.2103 | 0.7841 | 0.3657 | 0.2368 | 0.1402 | 0.4706 | 0.2160 | 0.5197 | 0.4706 | 0.1469 |
| 8.0185 | 9.0 | 144 | 8.6595 | 0.1406 | 0.5987 | 0.2115 | 0.7743 | 0.3924 | 0.2405 | 0.1393 | 0.5017 | 0.2180 | 0.5082 | 0.5017 | 0.1528 |
| 8.3047 | 10.0 | 160 | 8.6334 | 0.1425 | 0.6027 | 0.2138 | 0.7631 | 0.3912 | 0.2407 | 0.1417 | 0.5017 | 0.2210 | 0.4982 | 0.5017 | 0.1534 |
### Framework versions
- Transformers 4.46.0
- PyTorch 2.3.1
- Datasets 2.21.0
- Tokenizers 0.20.1