---
language:
- "lzh"
tags:
- "classical chinese"
- "literary chinese"
- "ancient chinese"
- "masked-lm"
license: "apache-2.0"
pipeline_tag: "fill-mask"
mask_token: "[MASK]"
widget:
- text: "孟子[MASK]梁惠王"
---

# modernbert-small-classical-chinese-traditional

## Model Description

This is a ModernBERT model pre-trained on [Kanripo](https://www.kanripo.org) texts. Training took 7 hours 8 minutes on eight NVIDIA A100-SXM4-40GB GPUs. You can fine-tune `modernbert-small-classical-chinese-traditional` for downstream tasks such as sentence segmentation, POS-tagging, and dependency parsing.

## How to Use

```py
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/modernbert-small-classical-chinese-traditional")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/modernbert-small-classical-chinese-traditional")
```
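As a quick sanity check, you can run the widget example through the standard `fill-mask` pipeline instead of loading the tokenizer and model separately. This is a minimal sketch; the candidates and scores shown depend on the trained weights.

```py
from transformers import pipeline

# Wrap the model in a fill-mask pipeline
fill_mask = pipeline(
    "fill-mask",
    model="KoichiYasuoka/modernbert-small-classical-chinese-traditional",
)

# The widget example from the model card: "孟子[MASK]梁惠王"
# (from the opening of Mencius, 孟子見梁惠王, so 見 is a likely top candidate)
for result in fill_mask("孟子[MASK]梁惠王"):
    print(result["token_str"], result["score"])
```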