roberta-base-chinese

Model Description

This is a RoBERTa model pre-trained on Chinese Wikipedia texts (both simplified and traditional). Training took 48 hours 56 minutes on an NVIDIA A100-SXM4-40GB GPU. You can fine-tune roberta-base-chinese for downstream tasks such as POS-tagging and dependency-parsing, as sketched below.
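
A minimal sketch of how such fine-tuning might be set up, assuming a token-level task like POS-tagging; the classification head and num_labels=17 (the Universal POS tag set) are illustrative assumptions, not something shipped with this checkpoint:

from transformers import AutoTokenizer, AutoModelForTokenClassification

# Hypothetical setup for a token-level task such as POS-tagging;
# num_labels=17 assumes the Universal POS tag set and is not part of this model.
tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-chinese")
model = AutoModelForTokenClassification.from_pretrained(
    "KoichiYasuoka/roberta-base-chinese",
    num_labels=17,
)
# The model can then be trained on a tagged corpus, e.g. with transformers.Trainer.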

How to Use

from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("KoichiYasuoka/roberta-base-chinese")
model = AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/roberta-base-chinese")
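
Once loaded, the model can be exercised with the fill-mask pipeline. A minimal sketch; the example sentence is illustrative, and the mask token is read from the tokenizer rather than hard-coded:

from transformers import pipeline

unmasker = pipeline("fill-mask", model="KoichiYasuoka/roberta-base-chinese")
mask = unmasker.tokenizer.mask_token  # use whatever mask token this tokenizer defines
for prediction in unmasker(f"中国的首都是{mask}京。"):  # "The capital of China is [MASK]jing."
    print(prediction["token_str"], prediction["score"])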
