---
language: ja
tags:
- exbert
license: cc-by-sa-4.0
widget:
- text: "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
---
# nlp-waseda/roberta-base-japanese
## Model description
This is a Japanese RoBERTa model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Load the tokenizer and the model with its masked-LM head.
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
```
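As a minimal sketch of a fill-mask prediction (one possible approach, not an official recipe): the input sentence is the pre-segmented widget example from this card, and the top prediction for the `[MASK]` position is decoded by hand.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")

# Pre-segmented sentence (words separated by spaces); in practice the
# segmentation comes from Juman++, see the Tokenization section below.
text = "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the masked position and decode the highest-scoring token.
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(top_ids))
```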
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. Each word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
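A sketch of the segmentation step, assuming Juman++ is installed locally together with its [pyknp](https://github.com/ku-nlp/pyknp) Python binding (pyknp is not mentioned in the original card; any Juman++ interface would do):

```python
from pyknp import Juman

jumanpp = Juman()  # requires a local Juman++ installation

# Segment a raw sentence into space-separated words before tokenization.
raw = "早稲田大学で自然言語処理を研究する。"
segmented = " ".join(m.midasi for m in jumanpp.analysis(raw).mrph_list())
print(segmented)  # e.g. "早稲田 大学 で 自然 言語 処理 を 研究 する 。"
```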
## Vocabulary
## Training procedure