Created README.md.
---
language: ja
tags:
- exbert
license: cc-by-sa-4.0
widget:
- text: "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
---
# nlp-waseda/roberta-base-japanese

## Model description

This is a Japanese RoBERTa model pretrained on Japanese Wikipedia and the Japanese portion of CC-100.
## How to use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
```
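Once loaded, the pair can be used for masked word prediction. The snippet below is a minimal sketch that is not part of the original card: it assumes PyTorch is installed and that the model behaves like a standard masked LM, and it simply decodes the top prediction for the `[MASK]` slot in the widget sentence above.

```python
import torch

# Input must already be segmented into words with Juman++ (see Tokenization below).
text = "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Find the [MASK] position and decode the highest-scoring token for it.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```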
## Tokenization

The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. Each word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
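As an illustration that is not part of the original card, one way to run this segmentation from Python is the pyknp binding for Juman++. The sketch below assumes `pip install pyknp` and a local Juman++ installation with the `jumanpp` command on the PATH; the `segment` helper name is ours.

```python
from pyknp import Juman  # Python binding for Juman++ (assumed installed)

jumanpp = Juman()  # defaults to the jumanpp command in recent pyknp versions

def segment(text: str) -> str:
    """Segment raw Japanese text into whitespace-separated words with Juman++."""
    result = jumanpp.analysis(text)
    return " ".join(m.midasi for m in result.mrph_list())

# The segmented string can then be fed to the tokenizer loaded above.
segmented = segment("早稲田大学で自然言語処理を研究する。")
inputs = tokenizer(segmented, return_tensors="pt")
```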
## Vocabulary

## Training procedure