sbx/KB-bert-base-swedish-cased_PI-detection-basic

This model was developed as part of the Mormor Karl research environment. It assesses whether each token is likely to belong to a piece of personal or sensitive information and assigns it the appropriate class (sensitive or not).
Note that this model performs best on the domain it was trained on, i.e. second-language learner essays. It also does not guarantee that all personal information will be detected; it should only be used to assist personal information detection and classification, not to fully automate it.

Model Details

Model Sources

  • Paper: The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling (https://aclanthology.org/2025.nodalida-1.70/)

Uses

Direct Use

This model should be used to assist annotation or de-identification efforts conducted by humans as a pre-annotation step. The detection and classification of personal information can be followed by removal or replacement of said elements.
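The masking-or-replacement step described above can be sketched as follows. This is a minimal illustration of the pre-annotation workflow, not the model's documented interface: the per-token label names ("SENSITIVE" / "NOT_SENSITIVE") and the `mask_sensitive` helper are illustrative assumptions.

```python
# Sketch of a pre-annotation step: tokens the model flags as sensitive are
# replaced with a placeholder, and the draft is then reviewed by a human.
# The label names below are assumptions for illustration, not the model's
# documented output schema.

def mask_sensitive(tokens, labels, placeholder="[MASKED]"):
    """Replace tokens labeled as sensitive with a placeholder for human review."""
    return [placeholder if label == "SENSITIVE" else token
            for token, label in zip(tokens, labels)]

# Example: a Swedish learner-essay fragment with hypothetical predictions.
tokens = ["Jag", "heter", "Anna", "och", "bor", "i", "Lund", "."]
labels = ["NOT_SENSITIVE", "NOT_SENSITIVE", "SENSITIVE", "NOT_SENSITIVE",
          "NOT_SENSITIVE", "NOT_SENSITIVE", "SENSITIVE", "NOT_SENSITIVE"]

print(" ".join(mask_sensitive(tokens, labels)))
# → Jag heter [MASKED] och bor i [MASKED] .
```

A human annotator then checks the masked draft for missed or wrongly flagged tokens, in line with the assist-not-automate guidance above.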

Bias, Risks, and Limitations

This model has only been trained on one domain (L2 learner essays) and does not necessarily perform well on other domains. Due to the limitations of the training data, it may also perform worse on rare or previously unseen types of personal information.
Given how high-stakes personal information detection is, and since 100% accuracy cannot be guaranteed, this model should be used to assist, not fully automate, annotation or de-identification procedures.

Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.

Citation

BibTeX:

@inproceedings{szawerna-etal-2025-devils,
    title = "The Devil{'}s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling",
    author = "Szawerna, Maria Irena  and
      Dobnik, Simon  and
      Mu{\~n}oz S{\'a}nchez, Ricardo  and
      Volodina, Elena",
    editor = "Johansson, Richard  and
      Stymne, Sara",
    booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
    month = mar,
    year = "2025",
    address = "Tallinn, Estonia",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2025.nodalida-1.70/",
    pages = "697--708",
    ISBN = "978-9908-53-109-0",
    abstract = "In this paper, we experiment with the effect of different levels of detailedness or granularity{---}understood as i) the number of classes, and ii) the classes' semantic depth in the sense of hypernym and hyponym relations {---} of the annotation of Personally Identifiable Information (PII) on automatic detection and labeling of such information. We fine-tune a Swedish BERT model on a corpus of Swedish learner essays annotated with a total of six PII tagsets at varying levels of granularity. We also investigate whether the presence of grammatical and lexical correction annotation in the tokens and class prevalence have an effect on predictions. We observe that the fewer total categories there are, the better the overall results are, but having a more diverse annotation facilitates fewer misclassifications for tokens containing correction annotation. We also note that the classes' internal diversity has an effect on labeling. We conclude from the results that while labeling based on the detailed annotation is difficult because of the number of classes, it is likely that models trained on such annotation rely more on the semantic content captured by contextual word embeddings rather than just the form of the tokens, making them more robust against nonstandard language."
}

APA:

Maria Irena Szawerna, Simon Dobnik, Ricardo Muñoz Sánchez, and Elena Volodina. 2025. The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 697–708, Tallinn, Estonia. University of Tartu Library.

Model Card Authors

Maria Irena Szawerna (Turtilla)
