CAMeLBERT+Word+CE Readability Model

Model description

CAMeLBERT+Word+CE is a readability assessment model built by fine-tuning the CAMeLBERT-msa model with cross-entropy (CE) loss. For fine-tuning, we used the Word input variant from BAREC-Corpus-v1.0. Our fine-tuning procedure and hyperparameters can be found in our paper "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment."

Intended uses

You can use the CAMeLBERT+Word+CE model with the transformers text-classification pipeline.

How to use

To use the model with a transformers pipeline:

>>> from transformers import pipeline
>>> readability = pipeline("text-classification", model="CAMeL-Lab/readability-camelbert-word-CE")
>>> text = 'ูˆ ู‚ุงู„ ู„ู‡ ุงู†ู‡ ูŠุญุจ ุงูƒู„ ุงู„ุทุนุงู… ุจูƒุซุฑู‡'
>>> # The pipeline returns a label like "LABEL_9"; strip the "LABEL_" prefix
>>> # (6 characters) and shift the 0-based class index to a 1-based level.
>>> readability_level = int(readability(text)[0]['label'][6:]) + 1
>>> print("readability level: {}".format(readability_level))
readability level: 10
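If you prefer not to repeat the slicing arithmetic inline, the label-to-level conversion from the example above can be factored into a small helper. This is a sketch, not part of the released model code; it only assumes that the pipeline emits labels of the form "LABEL_<index>" with 0-based class indices, as in the example above.

```python
def label_to_level(label: str) -> int:
    """Convert a pipeline label such as 'LABEL_9' to a 1-based readability level.

    Assumes labels follow the default "LABEL_<index>" naming, where <index>
    is the 0-based class index.
    """
    # Strip the "LABEL_" prefix, then shift the 0-based index to 1-based.
    return int(label[len("LABEL_"):]) + 1


print(label_to_level("LABEL_9"))  # prints 10
```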

Citation

@inproceedings{elmadani-etal-2025-readability,
    title = "A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment",
    author = "Elmadani, Khalid N.  and
      Habash, Nizar  and
      Taha-Thomure, Hanada",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2025",
    year = "2025",
    address = "Vienna, Austria",
    publisher = "Association for Computational Linguistics"
}