---
language: ja
tags:
  - exbert
license: cc-by-sa-4.0
widget:
  - text: "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
---

# nlp-waseda/roberta-base-japanese

## Model description

This is a Japanese RoBERTa model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.

## How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
```
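
A minimal masked-word prediction sketch built on the snippet above. The example sentence and the decoding steps are illustrative, not part of the original card; the input is assumed to be pre-segmented with Juman++, as described under Tokenization below.

```python
import torch

# the input must already be segmented into words by Juman++ (see Tokenization)
sentence = "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
encoding = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    logits = model(**encoding).logits

# find the [MASK] position and take the highest-scoring token for it
mask_index = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```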

## Tokenization

The input text should be segmented into words by Juman++ in advance, as in the sketch below. Each word is then tokenized into subwords by SentencePiece.
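
A hedged sketch of the segmentation step using pyknp, a Python binding for Juman/Juman++ (it assumes both Juman++ and pyknp are installed; the sample sentence is illustrative):

```python
from pyknp import Juman  # Python binding for Juman/Juman++

jumanpp = Juman()  # invokes the jumanpp binary by default
result = jumanpp.analysis("早稲田大学で自然言語処理を研究する。")

# join surface forms with spaces to produce the pre-segmented input
segmented = " ".join(mrph.midasi for mrph in result.mrph_list())
print(segmented)  # e.g. 早稲田 大学 で 自然 言語 処理 を 研究 する 。
```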

## Vocabulary

## Training procedure