
# Model Card for DistilBERT German Text Complexity

This model is a version of distilbert-base-german-cased fine-tuned for text complexity prediction on a scale from 1 to 7.

## Direct Use

To use this model, run our eval_distilbert.py script.
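
Alternatively, the model can be loaded directly with the transformers library. The sketch below assumes the checkpoint was saved with a single-output sequence-classification head, so the raw logit is the predicted complexity score; the `model_id` is a placeholder for this model's Hub repository id.

```python
# Minimal sketch, assuming a single-output regression head;
# replace model_id with this model's Hub repository id.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "<this-model's-hub-id>"  # placeholder, not verified

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
model.eval()

sentence = "Das Gericht hat die Klage abgewiesen."
inputs = tokenizer(sentence, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(f"Predicted complexity (1-7): {logits.squeeze().item():.2f}")
```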

## Training Details

The model is a fine-tuned version of distilbert-base-german-cased and a contribution to the GermEval 2022 shared task on text complexity prediction. It was fine-tuned on the dataset of Naderi et al. (2019). For further details, see our KONVENS paper; a rough sketch of this kind of fine-tuning setup follows below.
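
For orientation, a regression fine-tune of this kind can be set up with the transformers Trainer. The sketch below is illustrative only: the toy sentences stand in for the Naderi et al. (2019) data, and the hyperparameters are assumptions, not the configuration reported in the paper.

```python
# Illustrative regression fine-tuning sketch; data and hyperparameters
# are assumptions, not the setup used for this model.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

base = "distilbert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(base)
# num_labels=1 gives a single-output head trained with MSE loss (regression).
model = AutoModelForSequenceClassification.from_pretrained(base, num_labels=1)

# Toy stand-ins for sentence/complexity pairs on the 1-7 scale.
data = Dataset.from_dict({
    "text": ["Der Hund bellt.",
             "Die epistemologischen Implikationen bleiben umstritten."],
    "labels": [1.5, 6.2],
})
data = data.map(lambda ex: tokenizer(ex["text"], truncation=True,
                                     padding="max_length", max_length=64))

args = TrainingArguments(output_dir="distilbert-complexity",
                         num_train_epochs=1, per_device_train_batch_size=2)
Trainer(model=model, args=args, train_dataset=data).train()
```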

## Citation

Please cite our GermEval 2022 paper if you use this model. BibTeX:

@inproceedings{anschutz-groh-2022-tum,
    title = "{TUM} Social Computing at {G}erm{E}val 2022: Towards the Significance of Text Statistics and Neural Embeddings in Text Complexity Prediction",
    author = {Ansch{\"u}tz, Miriam  and
      Groh, Georg},
    booktitle = "Proceedings of the GermEval 2022 Workshop on Text Complexity Assessment of German Text",
    month = sep,
    year = "2022",
    address = "Potsdam, Germany",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.germeval-1.4",
    pages = "21--26",
}