Transformer language model for Croatian and Serbian

Trained for two epochs (500k steps) on 6 GB of Croatian and Serbian text drawn from the Leipzig, OSCAR, and srWac datasets.

Model               | #params | Arch. | Training data
--------------------|---------|-------|-----------------------------------------------
Andrija/SRoBERTa-L  | 80M     | Third | Leipzig Corpus, OSCAR and srWac (6 GB of text)
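A minimal usage sketch, assuming the `transformers` library is installed and the `Andrija/SRoBERTa-L` weights are downloadable from the Hugging Face Hub; the Croatian example sentence is illustrative only:

```python
from transformers import pipeline

# SRoBERTa-L is a RoBERTa-style masked language model,
# so it is queried through the fill-mask task.
fill_mask = pipeline("fill-mask", model="Andrija/SRoBERTa-L")

# RoBERTa-style tokenizers use "<mask>" as the mask token.
# "Zagreb je glavni grad <mask>." = "Zagreb is the capital of <mask>."
for prediction in fill_mask("Zagreb je glavni grad <mask>."):
    print(prediction["token_str"], prediction["score"])
```

Each prediction is a dict containing the filled-in token (`token_str`) and its probability (`score`).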