---
language: ja
tags:
- exbert
license: cc-by-sa-4.0
widget:
- text: 早稲田 大学 で 自然 言語 処理 を [MASK] する 。
---
# nlp-waseda/roberta-base-japanese

## Model description
This is a Japanese RoBERTa model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
```
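The snippet above only loads the tokenizer and model; as a minimal usage sketch, the `fill-mask` pipeline can predict the masked word directly (this downloads the model on first run, and the input must already be word-segmented by Juman++, as in the widget example):

```python
from transformers import pipeline

# Fill-mask sketch: the input sentence is already segmented into words
# (spaces between words), as this model expects.
fill_mask = pipeline("fill-mask", model="nlp-waseda/roberta-base-japanese")
predictions = fill_mask("早稲田 大学 で 自然 言語 処理 を [MASK] する 。")
for p in predictions:
    # Each prediction carries the candidate token and its score.
    print(p["token_str"], round(p["score"], 3))
```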
## Tokenization

The input text should be segmented into words by Juman++ in advance. Each word is then tokenized into subwords by SentencePiece.
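As a sketch of this two-stage scheme, the sentence below is assumed to have already been segmented into space-separated words by Juman++; the tokenizer then only applies the SentencePiece subword split (loading the tokenizer requires downloading it from the Hub):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")

# Already word-segmented input (spaces between words, e.g. from Juman++);
# tokenize() applies only the SentencePiece subword step.
tokens = tokenizer.tokenize("早稲田 大学 で 自然 言語 処理 を 研究 する 。")
print(tokens)
```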