---
dataset_info:
features:
- name: non_vocalized
dtype: string
- name: vocalized
dtype: string
- name: source
dtype: string
splits:
- name: train
num_bytes: 3609941776
num_examples: 1463790
- name: valid
num_bytes: 74699622
num_examples: 30181
- name: test
num_bytes: 37176837
num_examples: 15091
download_size: 3516498736
dataset_size: 3721818235
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
- split: test
path: data/test-*
license: mit
language:
- ar
pretty_name: Arabic Tashkeel Dataset
---
# Arabic Tashkeel Dataset
This is a fairly large dataset gathered from six main sources:
- [`tashkeela`](https://huggingface.co/datasets/community-datasets/tashkeela) **(1.79GB - 45.05%)**: The entire Tashkeela dataset, repurposed into sentences. Some rows were omitted because they had a low rate of diacritic (tashkeel) characters.
- `shamela` **(1.67GB - 42.10%)**: Random pages from over 2,000 books in the [Shamela Library](https://shamela.ws/). Pages were selected using the filtering function below, so only pages with a high diacritics rate were kept.
- `wikipedia` **(269.94MB - 6.64%)**: A collection of Wikipedia articles. Diacritics were added using OpenAI's [GPT-4o mini](https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/) model. At the time of writing, many other LLMs were tried (such as GPT-4o, Claude 3 Haiku, Claude 3.5 Sonnet, and Llama 3.1 70b, among others), and this one (surprisingly) scored the highest on a subset of the Tashkeela dataset.
- `ashaar` **(117.86MB - 2.90%)**: A merge of [APCD](https://huggingface.co/datasets/arbml/APCD), [APCDv2](https://huggingface.co/datasets/arbml/APCDv2), [Ashaar_diacritized](https://huggingface.co/datasets/arbml/Ashaar_diacritized), and [Ashaar_meter](https://huggingface.co/datasets/arbml/Ashaar_meter). Most rows from these datasets were excluded; only those with sufficient diacritics were retained.
- [`quran-riwayat`](https://huggingface.co/datasets/Abdou/quran-riwayat) **(71.73MB - 1.77%)**: Six different riwayat (canonical readings) of the Quran.
- [`hadith`](https://huggingface.co/datasets/arbml/LK_Hadith) **(62.69MB - 1.54%)**: Leeds University and King Saud University (LK) Hadith Corpus.
To filter out samples with partial or no tashkeel, we only retain sentences where the number of diacritic characters is at least 70% of the number of Arabic letters, using this function:
```python
import pyarabic.araby as araby

# Determine whether a text is sufficiently diacritized.
def has_diacritics(text):
    tashkeel_chars = set(araby.TASHKEEL)
    arabic_chars = set("ابتثجحخدذرزسشصضطظعغفقكلمنهويىءآأؤإئ")
    # The text counts as diacritized if the number of tashkeel characters
    # is at least 70% of the number of Arabic letters.
    return sum(1 for c in text if c in tashkeel_chars) >= 0.7 * sum(1 for c in text if c in arabic_chars)
```
The function relies on the `pyarabic` library; make sure to install it:
```
$ pip install pyarabic
```
## Main Uses
This dataset can be used to train models to automatically add diacritics (perform tashkeel) to Arabic text.
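Each row pairs a bare sentence (`non_vocalized`) with its diacritized form (`vocalized`) and names its `source`, so stripping the tashkeel marks from the target should recover the input. The row below is a fabricated illustration of the schema, not an actual sample:

```python
# The harakat range approximates the set of tashkeel marks.
TASHKEEL = {chr(c) for c in range(0x064B, 0x0653)}

def strip_tashkeel(text):
    # Remove diacritic marks, keeping letters and spaces intact.
    return "".join(c for c in text if c not in TASHKEEL)

# Hypothetical row following the dataset's features.
row = {
    "non_vocalized": "كتب الولد",
    "vocalized": "كَتَبَ الوَلَدُ",
    "source": "tashkeela",
}

# Stripping diacritics from the target recovers the input.
assert strip_tashkeel(row["vocalized"]) == row["non_vocalized"]
```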
## Limitations
Over 90% of the dataset consists primarily of religious texts in Classical Arabic. As a result, models trained on this data are well-suited for vocalizing such texts but may struggle with Modern Standard Arabic. Wikipedia articles were added to help ease this issue, though they may not entirely resolve it. |