Go Inoue committed aed98c2 (1 parent: 73ccda1)

Update README.md

Files changed (1): README.md (+7, -5)
 
widget:
- text: "الهدف من الحياة هو [MASK] ."
---

# CAMeLBERT: A collection of pre-trained models for Arabic NLP tasks

## Model description

**CAMeLBERT** is a collection of BERT models pre-trained on Arabic texts with different sizes and variants.
The details are described in the paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."*
We release eight models with different sizes and variants as follows:

||Model|Variant|Size|#Word|
|-|-|-|-|-|
|…|…|…|…|…|
||`bert-base-camelbert-msa-eighth`|MSA|14GB|1.6B|
||`bert-base-camelbert-msa-sixteenth`|MSA|6GB|746M|

This model card describes **CAMeLBERT-DA** (`bert-base-camelbert-da`), a model pre-trained on the DA dataset.

## Intended uses
You can use the released model for either masked language modeling or next sentence prediction.
 
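For next sentence prediction, the checkpoint's standard BERT NSP head can be loaded directly. The following is a minimal sketch, assuming the hub identifier `CAMeL-Lab/bert-base-camelbert-da` (inferred from the model name in this card) and two illustrative Arabic sentences:

```python
from transformers import AutoTokenizer, BertForNextSentencePrediction

# Hub identifier assumed from the model name in this card.
model_name = "CAMeL-Lab/bert-base-camelbert-da"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BertForNextSentencePrediction.from_pretrained(model_name)

# Two illustrative sentences; the task scores whether the second follows the first.
first = "الهدف من الحياة هو السعادة ."
second = "العلم نور ."

encoding = tokenizer(first, second, return_tensors="pt")
outputs = model(**encoding)
logits = outputs[0]  # index 0: "is the next sentence", index 1: "is random"
print(logits.softmax(dim=-1))
```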
You can use this model directly with a pipeline for masked language modeling:
```python
...
 'token_str': 'الحب'}]
```
 
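A minimal sketch of such a fill-mask call, assuming the hub identifier `CAMeL-Lab/bert-base-camelbert-da` and using the sentence from the card's widget:

```python
from transformers import pipeline

# Hub identifier assumed from the model name in this card.
unmasker = pipeline("fill-mask", model="CAMeL-Lab/bert-base-camelbert-da")

# Each candidate for the [MASK] token comes back with a score, a token id,
# and the decoded token string.
for prediction in unmasker("الهدف من الحياة هو [MASK] ."):
    print(prediction["token_str"], round(prediction["score"], 4))
```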
*Note*: to download our models, you need `transformers>=3.5.0`. Otherwise, you can download the models manually.

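If upgrading `transformers` is not possible, one way to fetch the files manually is with the `huggingface_hub` client; a sketch under that assumption (the repository id is likewise an assumption):

```python
from huggingface_hub import snapshot_download

# Download the full model repository into the local cache and return its path;
# the path can then be passed to from_pretrained(). Repository id is assumed.
local_path = snapshot_download("CAMeL-Lab/bert-base-camelbert-da")
print(local_path)
```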
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModel
...
```
 
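A self-contained version of this snippet might look as follows, assuming the hub identifier `CAMeL-Lab/bert-base-camelbert-da` and an arbitrary Arabic input sentence:

```python
from transformers import AutoTokenizer, AutoModel

# Hub identifier assumed from the model name in this card.
model_name = "CAMeL-Lab/bert-base-camelbert-da"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Any Arabic text works here; this sentence is only an illustration.
text = "مرحبا بالعالم ."
encoded_input = tokenizer(text, return_tensors="pt")

# The first element of the output holds the token-level hidden states,
# shaped (batch_size, sequence_length, hidden_size).
output = model(**encoded_input)
last_hidden_state = output[0]
print(last_hidden_state.shape)
```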
 
## Training data
- DA
  - A collection of dialectal Arabic data described in [our paper](https://github.com/CAMeL-Lab/CAMeLBERT).

## Training procedure
We use [the original implementation](https://github.com/google-research/bert) released by Google for pre-training.