# xml-roberta-large-finetuned-corregido-tokenizadorES-mama
This model is a fine-tuned version of FacebookAI/xlm-roberta-large. It achieves the following results on the evaluation set:
- Loss: 0.0460
- Precision: 0.9110
- Recall: 0.9394
- F1: 0.9250
- Accuracy: 0.9866
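The card does not state the task explicitly, but the per-entity precision/recall/F1 metrics alongside token accuracy suggest token classification (NER). A minimal inference sketch under that assumption (the label set and example sentence are hypothetical, not taken from this card):

```python
# Minimal inference sketch -- assumes this is a token-classification (NER)
# model fine-tuned on Spanish clinical text; the exact label set is not
# documented in this card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="anvorja/xml-roberta-large-finetuned-corregido-tokenizadorES-mama",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# Hypothetical Spanish clinical sentence for illustration.
print(ner("Paciente de 54 años con carcinoma ductal infiltrante de mama izquierda."))
```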
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 30
- mixed_precision_training: Native AMP
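The per-device batch size and gradient accumulation steps combine into the total train batch size listed above, and the warmup ratio translates into a concrete step count (2550 total steps, per the final row of the results table below). A quick arithmetic check:

```python
# Effective batch size: per-device batch size x gradient accumulation steps.
# Values taken from the hyperparameter list above; single-GPU training assumed.
train_batch_size = 8
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64

# Warmup steps implied by the warmup ratio of 0.2 over 2550 total steps.
total_steps = 2550
warmup_steps = int(total_steps * 0.2)
print(warmup_steps)  # 510
```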
## Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|---|---|---|---|---|---|---|---|
2.6643 | 1.0 | 86 | 2.5267 | 0.0 | 0.0 | 0.0 | 0.5837 |
1.9359 | 2.0 | 172 | 1.9628 | 0.0 | 0.0 | 0.0 | 0.6009 |
1.6777 | 3.0 | 258 | 1.6615 | 0.0357 | 0.0242 | 0.0289 | 0.6589 |
1.4374 | 4.0 | 344 | 1.3392 | 0.0945 | 0.0474 | 0.0632 | 0.7071 |
1.1194 | 5.0 | 430 | 1.0558 | 0.1712 | 0.1270 | 0.1458 | 0.7528 |
0.9551 | 6.0 | 516 | 0.8403 | 0.2826 | 0.2219 | 0.2486 | 0.7989 |
0.7683 | 7.0 | 602 | 0.6553 | 0.3573 | 0.3927 | 0.3742 | 0.8331 |
0.6506 | 8.0 | 688 | 0.4905 | 0.5 | 0.5087 | 0.5043 | 0.8727 |
0.5431 | 9.0 | 774 | 0.3954 | 0.5765 | 0.5941 | 0.5852 | 0.8949 |
0.4028 | 10.0 | 860 | 0.3061 | 0.6303 | 0.6632 | 0.6463 | 0.9178 |
0.3332 | 11.0 | 946 | 0.2540 | 0.6569 | 0.7296 | 0.6913 | 0.9313 |
0.2715 | 12.0 | 1032 | 0.2007 | 0.7223 | 0.7707 | 0.7457 | 0.9461 |
0.2678 | 13.0 | 1118 | 0.1619 | 0.7506 | 0.8013 | 0.7751 | 0.9557 |
0.2267 | 14.0 | 1204 | 0.1468 | 0.7608 | 0.8318 | 0.7948 | 0.9603 |
0.1875 | 15.0 | 1290 | 0.1357 | 0.7759 | 0.8413 | 0.8073 | 0.9640 |
0.1753 | 16.0 | 1376 | 0.1166 | 0.8112 | 0.8651 | 0.8372 | 0.9692 |
0.1616 | 17.0 | 1462 | 0.0967 | 0.8204 | 0.8788 | 0.8486 | 0.9731 |
0.1337 | 18.0 | 1548 | 0.0854 | 0.8389 | 0.8951 | 0.8661 | 0.9762 |
0.1298 | 19.0 | 1634 | 0.0676 | 0.8623 | 0.9014 | 0.8814 | 0.9804 |
0.1115 | 20.0 | 1720 | 0.0701 | 0.8687 | 0.9135 | 0.8905 | 0.9808 |
0.1139 | 21.0 | 1806 | 0.0602 | 0.8916 | 0.9278 | 0.9093 | 0.9830 |
0.114 | 22.0 | 1892 | 0.0543 | 0.8957 | 0.9278 | 0.9114 | 0.9842 |
0.0944 | 23.0 | 1978 | 0.0569 | 0.8922 | 0.9341 | 0.9127 | 0.9843 |
0.0893 | 24.0 | 2064 | 0.0517 | 0.8986 | 0.9346 | 0.9163 | 0.9852 |
0.0836 | 25.0 | 2150 | 0.0476 | 0.9057 | 0.9367 | 0.9210 | 0.9862 |
0.0841 | 26.0 | 2236 | 0.0489 | 0.9062 | 0.9367 | 0.9212 | 0.9859 |
0.0865 | 27.0 | 2322 | 0.0459 | 0.9095 | 0.9378 | 0.9234 | 0.9866 |
0.0859 | 28.0 | 2408 | 0.0464 | 0.9096 | 0.9394 | 0.9243 | 0.9866 |
0.0796 | 29.0 | 2494 | 0.0461 | 0.9101 | 0.9394 | 0.9245 | 0.9866 |
0.0774 | 29.6550 | 2550 | 0.0460 | 0.9110 | 0.9394 | 0.9250 | 0.9866 |
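The F1 column is the harmonic mean of the precision and recall columns; for example, the final evaluation row can be verified directly:

```python
# F1 as the harmonic mean of precision and recall (final evaluation row above).
precision = 0.9110
recall = 0.9394

f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # 0.925
```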
## Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1