---
language: ja
license: cc-by-sa-4.0
datasets:
- wikipedia
- cc100
- oscar
mask_token: "[MASK]"
widget:
- text: "早稲田 大学 で 自然 言語 処理 を [MASK] する 。"
---

# nlp-waseda/bigbird-base-japanese

## Model description

This is a Japanese BigBird base model pretrained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.

## How to use

You can use this model for masked language modeling as follows:

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/bigbird-base-japanese")
model = AutoModelForMaskedLM.from_pretrained("nlp-waseda/bigbird-base-japanese")

sentence = '早稲田 大学 で 自然 言語 処理 を [MASK] する 。'  # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')

# Predict the most likely token at the [MASK] position.
with torch.no_grad():
    logits = model(**encoding).logits
mask_index = (encoding.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_index].argmax(dim=-1)))
```

You can also fine-tune this model on downstream tasks.
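
As a rough illustration only (not taken from the original card), a minimal fine-tuning sketch for text classification with the `transformers` `Trainer` could look like the following; the data, label count, and training settings are placeholders, and the input texts are assumed to be pre-segmented with Juman++ as described in the Tokenization section.

```python
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "nlp-waseda/bigbird-base-japanese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Placeholder data: texts must be segmented into words by Juman++ in advance.
texts = ["早稲田 大学 で 自然 言語 処理 を 研究 する 。"]
labels = [1]
encodings = tokenizer(texts, truncation=True, padding=True)


class ToyDataset(torch.utils.data.Dataset):
    """Wraps tokenized inputs and labels for the Trainer."""

    def __init__(self, encodings, labels):
        self.encodings, self.labels = encodings, labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, idx):
        item = {k: torch.tensor(v[idx]) for k, v in self.encodings.items()}
        item["labels"] = torch.tensor(self.labels[idx])
        return item


args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=8)
trainer = Trainer(model=model, args=args, train_dataset=ToyDataset(encodings, labels))
trainer.train()
```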

## Tokenization

The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. Juman++ 2.0.0-rc3 was used for pretraining. Each word is then tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
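
For example, raw text can be segmented with Juman++ through the `pyknp` wrapper before it is passed to the tokenizer. This is an illustrative sketch, not part of the original card, and assumes Juman++ and `pyknp` are installed.

```python
from pyknp import Juman

# Assumes the Juman++ binary is installed and on PATH; pyknp wraps the command.
jumanpp = Juman()

raw_text = "早稲田大学で自然言語処理を研究する。"

# Segment the sentence into words and join them with spaces,
# matching the whitespace-separated format this model expects.
words = [mrph.midasi for mrph in jumanpp.analysis(raw_text).mrph_list()]
segmented = " ".join(words)
print(segmented)  # e.g. "早稲田 大学 で 自然 言語 処理 を 研究 する 。"
```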

`BertJapaneseTokenizer` now supports automatic word segmentation with `JumanppTokenizer` and subword tokenization with `SentencepieceTokenizer`. If you do not want to preprocess the input yourself, you can use [nlp-waseda/roberta-base-japanese-with-auto-jumanpp](https://huggingface.co/nlp-waseda/roberta-base-japanese-with-auto-jumanpp), which applies these steps automatically.

## Vocabulary

The vocabulary consists of 32,000 tokens, including words from [JumanDIC](https://github.com/ku-nlp/JumanDIC) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
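
As a small illustrative check (not part of the original card), you can inspect the vocabulary size and see how a pre-segmented sentence is split into word and subword pieces:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("nlp-waseda/bigbird-base-japanese")
print(tokenizer.vocab_size)  # expected to be 32000

# Words already in the vocabulary stay whole; other words fall back to
# unigram-LM subword pieces (the exact splits depend on the trained model).
print(tokenizer.tokenize("早稲田 大学 で 自然 言語 処理 を 研究 する 。"))
```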

## Training procedure

This model was trained on Japanese Wikipedia (as of 20210920) and the Japanese portion of CC-100. Pretraining took a week on eight NVIDIA A100 GPUs.

The following hyperparameters were used during pretraining:
- learning_rate: 1e-4
- per_device_train_batch_size: 256
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 4096
- max_seq_length: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 700000
- warmup_steps: 10000
- mixed_precision_training: Native AMP
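
For reference only, these hyperparameters would roughly correspond to the following `transformers` `TrainingArguments`; this mapping is an assumption rather than the actual training script, and `max_seq_length` is a data-preprocessing setting, not a `TrainingArguments` field.

```python
from transformers import TrainingArguments

# Approximate mapping of the listed hyperparameters; launched on 8 GPUs,
# the effective batch size is 256 * 8 * 2 = 4096 sequences per step.
training_args = TrainingArguments(
    output_dir="bigbird-base-japanese-pretraining",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=256,
    gradient_accumulation_steps=2,
    max_steps=700_000,
    warmup_steps=10_000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,  # "Native AMP" mixed precision
)
```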

## Performance on JGLUE

See the [Baseline Scores](https://github.com/yahoojapan/JGLUE#baseline-scores) of JGLUE.