TSebbag committed
Commit 6f2dc0a · verified · 1 Parent(s): 82da452

Update citation

Files changed (1):
  1. README.md +21 -8
README.md CHANGED
@@ -44,13 +44,26 @@ It has been manually annotated by 5 non-expert annotators using Label Studio.
 
 If you use this dataset, please cite the following paper:
 
-```
-Thomas Sebbag, Solen Quiniou, Nicolas Stucky, Emmanuel Morin.
-AdminSet and AdminBERT: a Dataset and a Pre-trained Language Model to Explore the Unstructured Maze of French Administrative Documents,
-Proceedings of the 31st International Conference on Computational Linguistics (COLING 2025).
-```
-
-
 <!-- ```bibtex
-
+@inproceedings{sebbag-etal-2025-adminset,
+    title = "{A}dmin{S}et and {A}dmin{BERT}: a Dataset and a Pre-trained Language Model to Explore the Unstructured Maze of {F}rench Administrative Documents",
+    author = "Sebbag, Thomas and
+      Quiniou, Solen and
+      Stucky, Nicolas and
+      Morin, Emmanuel",
+    editor = "Rambow, Owen and
+      Wanner, Leo and
+      Apidianaki, Marianna and
+      Al-Khalifa, Hend and
+      Eugenio, Barbara Di and
+      Schockaert, Steven",
+    booktitle = "Proceedings of the 31st International Conference on Computational Linguistics",
+    month = jan,
+    year = "2025",
+    address = "Abu Dhabi, UAE",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2025.coling-main.27/",
+    pages = "392--406",
+    abstract = "In recent years, Pre-trained Language Models (PLMs) have been widely used to analyze various documents, playing a crucial role in Natural Language Processing (NLP). However, administrative texts have rarely been used in information extraction tasks, even though this resource is available as open data in many countries. Most of these texts contain many specific domain terms. Moreover, especially in France, they are unstructured because many administrations produce them without a standardized framework. Due to this fact, current language models do not process these documents correctly. In this paper, we propose AdminBERT, the first French pre-trained language models for the administrative domain. Since interesting information in such texts corresponds to named entities and the relations between them, we compare this PLM with general domain language models, fine-tuned on the Named Entity Recognition (NER) task applied to administrative texts, as well as to a Large Language Model (LLM) and to a language model with an architecture different from the BERT one. We show that taking advantage of a PLM for French administrative data increases the performance in the administrative and general domains, on these texts. We also release AdminBERT as well as AdminSet, the pre-training corpus of administrative texts in French and the subset AdminSet-NER, the first NER dataset consisting exclusively of administrative texts in French."
+}
 ``` -->