dkawahara committed on
Commit 7245ff3 (1 parent: f2626e8)

Updated README.md.

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -20,8 +20,8 @@ This is a Japanese RoBERTa large model pretrained on Japanese Wikipedia and the
 You can use this model for masked language modeling as follows:
 ```python
 from transformers import AutoTokenizer, AutoModelForMaskedLM
-tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-base-japanese")
-model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-base-japanese")
+tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese")
+model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese")
 
 sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
 encoding = tokenizer(sentence, return_tensors='pt')
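For context, here is a minimal sketch of how the updated snippet would be completed to actually predict the masked word. The decoding lines after `encoding = tokenizer(...)` are an assumption for illustration, not part of the README excerpt shown in this diff:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Checkpoint name as updated by this commit
tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/roberta-large-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/roberta-large-japanese")

# Per the README: input must be segmented into words by Juman++ in advance.
# The sentence reads: "[MASK] natural language processing at Waseda University."
sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。'
encoding = tokenizer(sentence, return_tensors='pt')

# Assumed continuation (not shown in the diff): rank candidates for [MASK]
with torch.no_grad():
    logits = model(**encoding).logits
mask_position = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5_ids = logits[0, mask_position[0]].topk(5).indices.tolist()
print(tokenizer.convert_ids_to_tokens(top5_ids))
```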