Update README.md
# roberta-medium-amharic

This model has the same architecture as [xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) and was pretrained from scratch on the Amharic subsets of the [oscar](https://huggingface.co/datasets/oscar), [mc4](https://huggingface.co/datasets/mc4), and [amharic-sentences-corpus](https://huggingface.co/datasets/rasyosef/amharic-sentences-corpus) datasets, for a total of **290 million tokens**. The tokenizer was trained from scratch on the same corpus and has a vocabulary size of 32k.
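Since this checkpoint is a RoBERTa-style masked language model, it can be queried through the `transformers` fill-mask pipeline. A minimal sketch follows; the hub id `rasyosef/roberta-medium-amharic` is an assumption inferred from the model name and should be adjusted if the actual repo path differs.

```python
# Sketch of masked-token prediction with the Hugging Face `transformers` library.
# NOTE: MODEL_ID is an assumption based on the model name, not confirmed by the card.
MODEL_ID = "rasyosef/roberta-medium-amharic"

def predict_masked(text: str, model_id: str = MODEL_ID, top_k: int = 5):
    """Return the top_k predictions for the <mask> token in `text`."""
    # Imported lazily so this module loads even without `transformers` installed.
    from transformers import pipeline
    fill_mask = pipeline("fill-mask", model=model_id, top_k=top_k)
    return fill_mask(text)

# Example call (downloads the checkpoint on first use):
# predict_masked("አዲስ አበባ የኢትዮጵያ <mask> ናት።")
```

Each prediction is a dict with `token_str`, `score`, and the filled `sequence`, so the results can be ranked or filtered directly.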