---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: label
    dtype: int64
  - name: rater_profile
    sequence: float64
  splits:
  - name: train
    num_bytes: 1198658
    num_examples: 3847
  download_size: 640645
  dataset_size: 1198658
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-classification
language:
- fa
pretty_name: Persian Text Readability Dataset
size_categories:
- 1K<n<10K
---
## Dataset Summary

This is a re-upload of the Persian Text Readability Dataset, originally created and published by Mohammadi & Khasteh (2020). It provides sentence-level readability annotations for Persian (Farsi) texts. Each data point includes:

- A text in Persian
- A label (readability level): `0` for easy, `1` for medium, `2` for hard
- A rater profile: the average readability label distribution of the raters who annotated that specific text

All texts included have over 80% agreement between at least three human annotators. The dataset is intended for training and evaluating readability assessment models in the Persian language.
## Supported Tasks and Leaderboards

- Task: Readability classification
- Input: A Persian sentence
- Output: A readability level (`0`, `1`, or `2`)

This dataset supports both standard classification models and those that take annotator bias into account (using the `rater_profile` field).
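
As an illustration of the task, here is a minimal baseline sketch using TF-IDF features and logistic regression on the train split. It is only a sketch: the Hub repository ID is a placeholder, and character n-grams are used to sidestep Persian word tokenization; this is not the modeling approach of the original paper.

```python
from datasets import load_dataset
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder repository ID -- replace with the actual Hub path of this re-upload.
ds = load_dataset("user/persian-text-readability", split="train")
texts, labels = ds["text"], ds["label"]

# Hold out 20% of the train split for evaluation, keeping the label balance.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.2, random_state=0, stratify=labels
)

# Character n-gram TF-IDF avoids committing to a Persian word tokenizer.
vectorizer = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
classifier = LogisticRegression(max_iter=1000)
classifier.fit(vectorizer.fit_transform(X_train), y_train)

predictions = classifier.predict(vectorizer.transform(X_test))
print(classification_report(y_test, predictions, target_names=["easy", "medium", "hard"]))
```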
## Languages

- Text language: `fa` (Persian / Farsi)
## Dataset Structure

Each data point is a dictionary with the following fields:

```json
{
  "text": "متن فارسی نمونه",
  "label": 1,
  "rater_profile": [0.1, 0.5, 0.4]
}
```

- `text`: A single Persian sentence or short passage.
- `label`: The final readability level (`0`: easy, `1`: medium, `2`: hard).
- `rater_profile`: A 3-element float list showing the average readability preferences of the annotators.
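
A minimal loading sketch for inspecting these fields with the `datasets` library; the repository ID below is a placeholder for wherever this re-upload is hosted.

```python
from datasets import load_dataset

# Placeholder repository ID -- replace with the actual Hub path of this re-upload.
ds = load_dataset("user/persian-text-readability", split="train")

LEVEL_NAMES = {0: "easy", 1: "medium", 2: "hard"}

for example in ds.select(range(3)):
    profile = example["rater_profile"]  # 3 floats: [easy, medium, hard]
    print(example["text"])
    print("label:", LEVEL_NAMES[example["label"]], "| rater_profile:", profile)
```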
## Dataset Stats

| Level | # of texts | Avg. words per text |
|---|---|---|
| 0 (easy) | 2,953 | 28.8 |
| 1 (medium) | 572 | 39.8 |
| 2 (hard) | 322 | 62.1 |
| Total | 3,847 | 33.2 |
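
The counts in this table can be recomputed from the train split roughly as below; the exact word-count convention is not specified, so whitespace splitting here is an assumption, and the repository ID is again a placeholder.

```python
from collections import defaultdict

from datasets import load_dataset

# Placeholder repository ID -- replace with the actual Hub path of this re-upload.
ds = load_dataset("user/persian-text-readability", split="train")

counts, word_totals = defaultdict(int), defaultdict(int)
for example in ds:
    label = example["label"]
    counts[label] += 1
    # Whitespace word count; the table above may use a different tokenizer.
    word_totals[label] += len(example["text"].split())

for label in sorted(counts):
    print(f"level {label}: {counts[label]} texts, "
          f"{word_totals[label] / counts[label]:.1f} avg words")
print(f"total: {sum(counts.values())} texts, "
      f"{sum(word_totals.values()) / sum(counts.values()):.1f} avg words")
```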
## Source Data

### Annotation Process

Texts were manually rated by undergraduate students at the K. N. Toosi University of Technology. Each text was rated by at least three annotators. Only texts where at least 80% of the raters agreed on the label were included.
### Rater Profile

The `rater_profile` field helps capture rater bias. For example, `[0.1, 0.5, 0.4]` means the raters of that text tend to give:

- 10% of their scores as "easy"
- 50% as "medium"
- 40% as "hard"

This can be useful in modeling subjective readability with annotator-specific information.
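
One way to use `rater_profile` is as a soft target in place of (or alongside) the hard `label`. The sketch below shows a soft-label cross-entropy in PyTorch; it illustrates the idea and is not the training setup of the original paper.

```python
import torch
import torch.nn.functional as F

# Model logits for a batch of 4 texts over the 3 readability levels.
logits = torch.randn(4, 3)

# Soft targets taken from each example's rater_profile field.
soft_targets = torch.tensor([
    [0.1, 0.5, 0.4],
    [0.8, 0.2, 0.0],
    [0.0, 0.3, 0.7],
    [0.6, 0.3, 0.1],
])

# Cross-entropy against the rater distribution: -sum_k p_k * log q_k.
loss = -(soft_targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()
print(loss.item())
```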
## Citation

Please cite the following if you use this dataset:

```bibtex
@inproceedings{mohammadi2020machine,
  title={A machine learning approach to Persian text readability assessment using a crowdsourced dataset},
  author={Mohammadi, Hamid and Khasteh, Seyed Hossein},
  booktitle={2020 28th Iranian Conference on Electrical Engineering (ICEE)},
  pages={1--7},
  year={2020},
  organization={IEEE}
}
```
## Acknowledgements

We express our deep appreciation to the undergraduate computer engineering students at the K. N. Toosi University of Technology who annotated the dataset.
## Licensing

This dataset is a re-upload. Licensing terms are inherited from the original work. Please ensure compliance with any applicable usage conditions described in the original publication or source repository: https://github.com/sandstorm12/persian_readability_dataset