ROUBERTa cased

This is a RoBERTa-base language model trained from scratch exclusively on Uruguayan press text [1].
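A minimal usage sketch with the Hugging Face `transformers` library, assuming the model id `pln-udelar/rouberta-base-uy22-cased` shown on this card. Since this is a RoBERTa-style masked language model, it can be queried through the standard fill-mask pipeline (the example sentence below is illustrative, not from the card):

```python
from transformers import pipeline

# Load the model from the Hub (id taken from this card) and run
# masked-token prediction; RoBERTa models use the <mask> token.
fill = pipeline("fill-mask", model="pln-udelar/rouberta-base-uy22-cased")

# Each prediction is a dict with the filled-in token and its score.
predictions = fill("Montevideo es la <mask> de Uruguay.")
for p in predictions[:3]:
    print(p["token_str"], round(p["score"], 3))
```

For fine-tuning on downstream tasks, the checkpoint can likewise be loaded with `AutoModelForSequenceClassification` or any other standard `transformers` head.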

Cite this work

@inproceedings{rouberta2024,
  title={A Language Model Trained on Uruguayan Spanish News Text},
  author={Filevich, Juan Pablo and Marco, Gonzalo and Castro, Santiago and Chiruzzo, Luis and Ros{\'a}, Aiala},
  booktitle={Proceedings of the Second International Workshop Towards Digital Language Equality (TDLE): Focusing on Sustainability @ LREC-COLING 2024},
  pages={53--60},
  year={2024}
}

[1] huggingface.co/datasets/pln-udelar/uy22

