Update README.md
README.md CHANGED
@@ -52,7 +52,11 @@ It achieves the following results on the evaluation set:

## Model description

-
+ This model is a fine-tuned version of BETO (a BERT-based Spanish language model) for named entity recognition (NER),
+ trained on the CoNLL-2002 dataset. BETO is a language model pre-trained specifically for Spanish, based on the BERT architecture.
+ By fine-tuning BETO on CoNLL-2002, the model has been adapted to recognize and classify named entities such as persons,
+ organizations, locations, and miscellaneous entities in Spanish text. Fine-tuning adjusts the pre-trained model weights to better
+ fit the NER task, improving the model's performance and accuracy on Spanish text.

## Intended uses & limitations

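The card itself does not show how to call the fine-tuned model, so a minimal usage sketch follows. The model id `your-username/beto-finetuned-ner` and the example sentence are placeholders, not values taken from this repository:

```python
# Minimal usage sketch, not an official example: the model id below is a
# placeholder for wherever this checkpoint is published on the Hugging Face Hub.
from transformers import pipeline

# Token-classification pipeline; "simple" aggregation merges word pieces back
# into whole entities (PER, ORG, LOC, MISC in the CoNLL-2002 tag set).
ner = pipeline(
    "token-classification",
    model="your-username/beto-finetuned-ner",  # placeholder, not the real id
    aggregation_strategy="simple",
)

print(ner("Gabriel García Márquez nació en Aracataca, Colombia."))
# Expected output shape (scores omitted):
# [{'entity_group': 'PER', 'word': 'Gabriel García Márquez', ...},
#  {'entity_group': 'LOC', 'word': 'Aracataca', ...},
#  {'entity_group': 'LOC', 'word': 'Colombia', ...}]
```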
@@ -60,7 +64,8 @@ More information needed

## Training and evaluation data

-
+ Training was performed on a GPU with 22.5 GB of memory, 53 GB of system RAM, and 200 GB of disk space.
+ This setup ensured efficient handling of the large dataset and of the computational demands of fine-tuning the model.

## Training procedure

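The diff does not include the training script itself. For reference, a condensed sketch of a typical BETO fine-tune on CoNLL-2002 with `transformers` and `datasets` is shown below; the checkpoint id `dccuchile/bert-base-spanish-wwm-cased` (the commonly published BETO weights), the Hub dataset name `conll2002`, and the hyperparameters are assumptions, not values confirmed by this card:

```python
# Condensed fine-tuning sketch, not the exact script used for this model.
# Assumed: the public BETO checkpoint "dccuchile/bert-base-spanish-wwm-cased"
# and the Spanish config of the Hub-hosted "conll2002" dataset; the
# hyperparameters below are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForTokenClassification,
    AutoTokenizer,
    DataCollatorForTokenClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("conll2002", "es")
label_names = dataset["train"].features["ner_tags"].feature.names  # O, B-PER, ...

checkpoint = "dccuchile/bert-base-spanish-wwm-cased"  # assumed BETO checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForTokenClassification.from_pretrained(
    checkpoint, num_labels=len(label_names)
)

def tokenize_and_align(batch):
    # CoNLL-2002 is pre-split into words; re-tokenize into word pieces and
    # align the word-level NER tags: only the first piece of each word keeps
    # its label, every other position gets -100 so the loss ignores it.
    tokenized = tokenizer(batch["tokens"], truncation=True, is_split_into_words=True)
    labels = []
    for i, tags in enumerate(batch["ner_tags"]):
        previous, label_ids = None, []
        for word_id in tokenized.word_ids(batch_index=i):
            if word_id is None or word_id == previous:
                label_ids.append(-100)
            else:
                label_ids.append(tags[word_id])
            previous = word_id
        labels.append(label_ids)
    tokenized["labels"] = labels
    return tokenized

tokenized = dataset.map(tokenize_and_align, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="beto-ner-conll2002", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForTokenClassification(tokenizer),
    tokenizer=tokenizer,
)
trainer.train()
```

The `-100` label on non-initial word pieces is the conventional way to keep subword tokens out of the cross-entropy loss, so each word contributes exactly one prediction.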