Goader committed · verified
Commit ae6ccb9 · 1 Parent(s): 9e8d2bc

Update README.md

Files changed (1): README.md (+20 -0)
README.md CHANGED
@@ -97,6 +97,26 @@ encoded = tokenizer('Тарас мав чотири яблука. Марічка
 output = model(**encoded)
 ```

+## 📚 Citation
+
+```bibtex
+@inproceedings{haltiuk-smywinski-pohl-2025-path,
+    title = "On the Path to Make {U}krainian a High-Resource Language",
+    author = "Haltiuk, Mykola and
+      Smywi{\'n}ski-Pohl, Aleksander",
+    editor = "Romanyshyn, Mariana",
+    booktitle = "Proceedings of the Fourth Ukrainian Natural Language Processing Workshop (UNLP 2025)",
+    month = jul,
+    year = "2025",
+    address = "Vienna, Austria (online)",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2025.unlp-1.14/",
+    pages = "120--130",
+    ISBN = "979-8-89176-269-5",
+    abstract = "Recent advances in multilingual language modeling have highlighted the importance of high-quality, large-scale datasets in enabling robust performance across languages. However, many low- and mid-resource languages, including Ukrainian, remain significantly underrepresented in existing pretraining corpora. We present Kobza, a large-scale Ukrainian text corpus containing nearly 60 billion tokens, aimed at improving the quality and scale of Ukrainian data available for training multilingual language models. We constructed Kobza from diverse, high-quality sources and applied rigorous deduplication to maximize data utility. Using this dataset, we pre-trained Modern-LiBERTa, the first Ukrainian transformer encoder capable of handling long contexts (up to 8192 tokens). Modern-LiBERTa achieves competitive results on various standard Ukrainian NLP benchmarks, particularly benefiting tasks that require broader contextual understanding or background knowledge. Our goal is to support future efforts to develop robust Ukrainian language models and to encourage greater inclusion of Ukrainian data in multilingual NLP research."
+}
+```
+
 <!-- ## Citation -->

 <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->