# NER-finetuning-BETO-PRO
This model is a fine-tuned version of NazaGara/NER-fine-tuned-BETO on the conll2002 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2388
- Precision: 0.8489
- Recall: 0.8557
- F1: 0.8523
- Accuracy: 0.9697
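
For quick experimentation, here is a minimal inference sketch using the standard `transformers` token-classification pipeline (the aggregation strategy and the Spanish example sentence are illustrative choices, not part of this card):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a token-classification (NER) pipeline.
# aggregation_strategy="simple" merges word-piece tokens into whole entities.
ner = pipeline(
    "token-classification",
    model="raulgdp/NER-finetuning-BETO-PRO",
    aggregation_strategy="simple",
)

# CoNLL-2002 covers Spanish and Dutch; a Spanish sentence suits a BETO model.
print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
```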
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows this list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
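
The card does not include the training script; the sketch below shows how these hyperparameters might map onto `TrainingArguments`/`Trainer`. The `"es"` (Spanish) CoNLL-2002 configuration is an assumption, though it is consistent with a Spanish BETO base model and with the 1041 optimizer steps per epoch in the results table (≈8,300 training sentences at batch size 8). Adam with the stated betas/epsilon and the linear schedule match the `Trainer` defaults.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

# Assumption: the Spanish configuration of CoNLL-2002.
dataset = load_dataset("conll2002", "es")
labels = dataset["train"].features["ner_tags"].feature.names

base = "NazaGara/NER-fine-tuned-BETO"
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForTokenClassification.from_pretrained(base, num_labels=len(labels))

def tokenize_and_align(batch):
    # Re-tokenize pre-split words into word pieces and align the word-level
    # NER tags: special tokens and non-first sub-word pieces get -100 so the
    # loss ignores them.
    enc = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    enc["labels"] = []
    for i, tags in enumerate(batch["ner_tags"]):
        prev, aligned = None, []
        for word_id in enc.word_ids(i):
            aligned.append(tags[word_id] if word_id is not None and word_id != prev else -100)
            prev = word_id
        enc["labels"].append(aligned)
    return enc

tokenized = dataset.map(tokenize_and_align, batched=True)

# Mirrors the hyperparameter list above.
args = TrainingArguments(
    output_dir="NER-finetuning-BETO-PRO",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```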
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0507        | 1.0   | 1041  | 0.1411          | 0.8326    | 0.8536 | 0.8430 | 0.9707   |
| 0.0308        | 2.0   | 2082  | 0.1721          | 0.8263    | 0.8405 | 0.8334 | 0.9679   |
| 0.0205        | 3.0   | 3123  | 0.1766          | 0.8446    | 0.8516 | 0.8481 | 0.9692   |
| 0.0139        | 4.0   | 4164  | 0.2043          | 0.8422    | 0.8460 | 0.8441 | 0.9684   |
| 0.0127        | 5.0   | 5205  | 0.1907          | 0.8414    | 0.8548 | 0.8481 | 0.9698   |
| 0.0084        | 6.0   | 6246  | 0.2069          | 0.8427    | 0.8470 | 0.8448 | 0.9696   |
| 0.0056        | 7.0   | 7287  | 0.2275          | 0.8533    | 0.8610 | 0.8571 | 0.9700   |
| 0.0044        | 8.0   | 8328  | 0.2307          | 0.8408    | 0.8534 | 0.8471 | 0.9698   |
| 0.0026        | 9.0   | 9369  | 0.2343          | 0.8469    | 0.8504 | 0.8487 | 0.9695   |
| 0.0024        | 10.0  | 10410 | 0.2388          | 0.8489    | 0.8557 | 0.8523 | 0.9697   |
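
The precision, recall, and F1 columns are entity-level scores of the kind computed by `seqeval`, the usual choice for CoNLL-style NER evaluation (whether this exact library was used here is an assumption). A toy sketch of that computation, with made-up tag sequences:

```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Made-up gold and predicted sequences in the IOB2 scheme used by CoNLL-2002.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "B-ORG", "O"]]

print(precision_score(y_true, y_pred))  # entity-level precision: 0.5
print(recall_score(y_true, y_pred))     # entity-level recall: 0.5
print(f1_score(y_true, y_pred))         # entity-level F1: 0.5
print(accuracy_score(y_true, y_pred))   # token-level accuracy: 0.8
```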
### Framework versions
- Transformers 4.44.0
- Pytorch 2.4.0
- Datasets 2.20.0
- Tokenizers 0.19.1