# modernbert-base-classical-chinese-traditional

## Model Description
This is a ModernBERT model pre-trained on Classical Chinese texts from Kanripo (the Kanseki Repository). Training took 14 hours 2 minutes on eight NVIDIA A100-SXM4-40GB GPUs. You can fine-tune modernbert-base-classical-chinese-traditional for downstream tasks such as sentence segmentation, POS tagging, dependency parsing, and so on; a minimal fine-tuning sketch follows.
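As one illustration of such fine-tuning, the sketch below frames sentence segmentation as token classification. The two-label scheme (`B`/`E`) and the training step are assumptions for the example, not part of this model card; any labeled corpus and `transformers.Trainer` setup would do.

```py
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical label set: tag each character as Beginning or End of a sentence
labels = ["B", "E"]

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-base-classical-chinese-traditional")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/modernbert-base-classical-chinese-traditional",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
    label2id={l: i for i, l in enumerate(labels)},
)
# The classification head above is freshly initialized; train it on a
# labeled Classical Chinese corpus with transformers.Trainer (not shown).
```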
## How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and the masked-language-model checkpoint
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-base-classical-chinese-traditional")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-base-classical-chinese-traditional")
```
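To sanity-check the masked-language-modeling head, a `fill-mask` pipeline can be used as sketched below. The example sentence is an arbitrary Classical Chinese phrase chosen for illustration; the mask token is taken from the tokenizer rather than hard-coded.

```py
from transformers import pipeline

fill = pipeline("fill-mask", model="KoichiYasuoka/modernbert-base-classical-chinese-traditional")
# Mask one character and print the top candidate replacements
text = "孟子見梁惠" + fill.tokenizer.mask_token + "王"
print(fill(text))
```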