sbx/KB-bert-base-swedish-cased_PI-detection-detailed
This model was developed as part of the Mormor Karl research environment.
For each token, it assesses whether the token is likely to belong to a piece of personal or sensitive information and assigns it the appropriate class (firstname male, firstname female, firstname unknown, initials, middlename, surname, school, work, other institution, area, city, geo, country, place, region, street nr, zip code, transport name, transport nr, age digits, age string, date digits, day, month digit, month word, year, phone nr, email, url, personid nr, account nr, license nr, other nr seq, extra, prof, edu, fam, sensitive, or none).
Note that this model performs best on the domain it was trained on, i.e. second-language learner essays. It also does not guarantee that all personal information will be detected, and should therefore only be used to assist personal information detection and classification, not to fully automate it.
Model Details
Model Description
- Developed by: Språkbanken Text, as part of the Mormor Karl research environment
- Shared by: Maria Irena Szawerna (Turtilla)
- Model type: BERT for token classification
- Language(s): Swedish
- License: GPL-3.0
- Finetuned from model: KB/bert-base-swedish-cased
Model Sources
- Paper: The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling (https://aclanthology.org/2025.nodalida-1.70/)
Uses
Direct Use
This model should be used to assist annotation or de-identification efforts conducted by humans as a pre-annotation step. The detection and classification of personal information can be followed by removal or replacement of said elements.
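A minimal sketch of such a pre-annotation step, assuming the Hugging Face `transformers` library. The `redact` helper and the label names shown in it are illustrative assumptions; the actual labels produced by the pipeline come from the model's configuration (see the class list above):

```python
def redact(text, entities):
    """Replace detected personal-information spans with class placeholders.

    `entities` is a list of dicts with `start`, `end`, and `entity_group`
    keys, as returned by a transformers token-classification pipeline with
    an aggregation strategy enabled.
    """
    out = []
    last = 0
    for ent in sorted(entities, key=lambda e: e["start"]):
        out.append(text[last:ent["start"]])       # keep text before the span
        out.append(f"[{ent['entity_group']}]")    # replace span with its class
        last = ent["end"]
    out.append(text[last:])                       # keep the remainder
    return "".join(out)


if __name__ == "__main__":
    # Imported here so the helper above can be used without transformers.
    from transformers import pipeline

    tagger = pipeline(
        "token-classification",
        model="sbx/KB-bert-base-swedish-cased_PI-detection-detailed",
        aggregation_strategy="simple",  # merge word pieces into word-level spans
    )
    text = "Anna bor i Göteborg."  # illustrative example sentence
    print(redact(text, tagger(text)))
```

The placeholder replacement shown here is only one option; a human annotator could instead review the predicted spans and remove or pseudonymize them.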
Bias, Risks, and Limitations
This model has only been trained on one domain (L2 learner essays). It does not necessarily perform well on other domains. Due to the limitations of the training data, it may also perform worse on rare or previously unseen types of personal information.
Given how high-stakes personal information detection is, and since 100% accuracy cannot be guaranteed, this model should be used to assist, not automate, annotation or de-identification procedures.
Recommendations
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model.
Citation
BibTeX:
@inproceedings{szawerna-etal-2025-devils,
    title = "The Devil{'}s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling",
    author = "Szawerna, Maria Irena and Dobnik, Simon and Mu{\~n}oz S{\'a}nchez, Ricardo and Volodina, Elena",
    editor = "Johansson, Richard and Stymne, Sara",
    booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
    month = mar,
    year = "2025",
    address = "Tallinn, Estonia",
    publisher = "University of Tartu Library",
    url = "https://aclanthology.org/2025.nodalida-1.70/",
    pages = "697--708",
    ISBN = "978-9908-53-109-0",
    abstract = "In this paper, we experiment with the effect of different levels of detailedness or granularity{---}understood as i) the number of classes, and ii) the classes' semantic depth in the sense of hypernym and hyponym relations{---}of the annotation of Personally Identifiable Information (PII) on automatic detection and labeling of such information. We fine-tune a Swedish BERT model on a corpus of Swedish learner essays annotated with a total of six PII tagsets at varying levels of granularity. We also investigate whether the presence of grammatical and lexical correction annotation in the tokens and class prevalence have an effect on predictions. We observe that the fewer total categories there are, the better the overall results are, but having a more diverse annotation facilitates fewer misclassifications for tokens containing correction annotation. We also note that the classes' internal diversity has an effect on labeling. We conclude from the results that while labeling based on the detailed annotation is difficult because of the number of classes, it is likely that models trained on such annotation rely more on the semantic content captured by contextual word embeddings rather than just the form of the tokens, making them more robust against nonstandard language."
}
APA:
Maria Irena Szawerna, Simon Dobnik, Ricardo Muñoz Sánchez, and Elena Volodina. 2025. The Devil’s in the Details: the Detailedness of Classes Influences Personal Information Detection and Labeling. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 697–708, Tallinn, Estonia. University of Tartu Library.
Model Card Authors
Maria Irena Szawerna (Turtilla)