roberta-base-tibetan

Model Description

This is a RoBERTa model pre-trained on Tibetan texts. Training took 40 hours and 44 minutes on an NVIDIA A100-SXM4-40GB GPU. You can fine-tune roberta-base-tibetan for downstream tasks such as POS-tagging and dependency parsing.
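
For illustration, here is a minimal sketch of how the checkpoint might be loaded for such a token-classification task. The label set below is hypothetical, and the classification head is freshly initialized, so it must still be trained on labeled Tibetan data (e.g. with transformers.Trainer).

from transformers import AutoTokenizer, AutoModelForTokenClassification

labels = ["NOUN", "VERB", "ADJ"]  # hypothetical label set; use your corpus's tag set
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-base-tibetan",
    num_labels=len(labels),
)
# The classification head is randomly initialized here; fine-tune it
# on labeled Tibetan data before using it for prediction.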

How to Use

from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the pre-trained tokenizer and masked language model
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-tibetan")
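
As a quick check that the model loaded correctly, here is a minimal masked-token prediction sketch. The input sentence is a placeholder; substitute real Tibetan text containing the tokenizer's mask token.

import torch

# Placeholder input for illustration; replace "..." with real Tibetan text
text = "... " + tokenizer.mask_token + " ..."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Find the mask position and take the highest-scoring vocabulary entry
mask_index = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))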