Update README.md
README.md CHANGED
@@ -9,14 +9,15 @@ pipeline_tag: fill-mask
 tags: []
 ---

-
+<h1 align="center">Modern-LiBERTa</h1>

+<h2 align="center">On the Path to Make Ukrainian a High-Resource Language <a href="https://aclanthology.org/2025.unlp-1.14/">[paper]</a></h2>


 <!-- Provide a quick summary of what the model is/does. -->
-Modern-LiBERTa is a ModernBERT encoder model designed specifically for **Ukrainian**, with support for **long contexts up to 8,192 tokens**. It was introduced in the paper On the Path to Make Ukrainian a High-Resource Language presented at the [UNLP](https://unlp.org.ua/) @ [ACL 2025](https://2025.aclweb.org/).
+Modern-LiBERTa is a ModernBERT encoder model designed specifically for **Ukrainian**, with support for **long contexts up to 8,192 tokens**. It was introduced in the paper [On the Path to Make Ukrainian a High-Resource Language](https://aclanthology.org/2025.unlp-1.14/) presented at the [UNLP](https://unlp.org.ua/) @ [ACL 2025](https://2025.aclweb.org/).

-The model is pre-trained on **Kobza
+The model is pre-trained on **Kobza** [[HF](https://huggingface.co/datasets/Goader/kobza)], a large-scale Ukrainian corpus of nearly 60 billion tokens. Modern-LiBERTa builds on the [ModernBERT](https://arxiv.org/abs/2412.13663) architecture and is the first Ukrainian language model to support long-context encoding efficiently.

 The goal of this work is to **make Ukrainian a first-class citizen in multilingual and monolingual NLP**, enabling robust performance on complex tasks that require broader context and knowledge access.

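The Kobza line added in this hunk links a corpus of nearly 60 billion tokens, so downloading it outright is rarely practical. A minimal sketch of streaming a few records instead, assuming the dataset ID from the link above and a `text` field (the field name is not confirmed by this diff):

```python
# Stream a handful of Kobza records without materializing the ~60B-token corpus.
# Dataset ID comes from the link in the hunk above; the "text" field is an assumption.
from datasets import load_dataset

kobza = load_dataset("Goader/kobza", split="train", streaming=True)

for i, record in enumerate(kobza):
    print(record.get("text", record))  # fall back to the raw record if the schema differs
    if i >= 2:
        break
```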
@@ -97,7 +98,7 @@ encoded = tokenizer('Тарас мав чотири яблука. Марічка
 output = model(**encoded)
 ```

-##
+## Citation

 ```bibtex
 @inproceedings{haltiuk-smywinski-pohl-2025-path,
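The context lines above show only fragments of the README's usage snippet (the sentence in the hunk header is truncated and is left as-is). For orientation, a minimal fill-mask sketch consistent with those lines; the model ID below is a placeholder, since this diff does not name the repository, and the example sentence is illustrative rather than the README's:

```python
# Minimal fill-mask flow matching the encoded/output lines in the hunk above.
# "Goader/modern-liberta-large" is a placeholder ID, not confirmed by this diff.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "Goader/modern-liberta-large"  # placeholder; substitute the actual repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Illustrative sentence: "Taras had four [MASK]." (echoes the visible fragment)
text = f"Тарас мав чотири {tokenizer.mask_token}."
encoded = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    output = model(**encoded)

# Decode the top prediction at the masked position.
mask_pos = (encoded["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top_id = output.logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(top_id))
```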
@@ -117,30 +118,6 @@ output = model(**encoded)
 }
 ```

-<!-- ## Citation -->
-
-<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
-
-<!-- ```
-@inproceedings{haltiuk-smywinski-pohl-2024-liberta,
-    title = "{L}i{BERT}a: Advancing {U}krainian Language Modeling through Pre-training from Scratch",
-    author = "Haltiuk, Mykola and
-      Smywi{\'n}ski-Pohl, Aleksander",
-    editor = "Romanyshyn, Mariana and
-      Romanyshyn, Nataliia and
-      Hlybovets, Andrii and
-      Ignatenko, Oleksii",
-    booktitle = "Proceedings of the Third Ukrainian Natural Language Processing Workshop (UNLP) @ LREC-COLING 2024",
-    month = may,
-    year = "2024",
-    address = "Torino, Italia",
-    publisher = "ELRA and ICCL",
-    url = "https://aclanthology.org/2024.unlp-1.14",
-    pages = "120--128",
-    abstract = "Recent advancements in Natural Language Processing (NLP) have spurred remarkable progress in language modeling, predominantly benefiting English. While Ukrainian NLP has long grappled with significant challenges due to limited data and computational resources, recent years have seen a shift with the emergence of new corpora, marking a pivotal moment in addressing these obstacles. This paper introduces LiBERTa Large, the inaugural BERT Large model pre-trained entirely from scratch only on Ukrainian texts. Leveraging extensive multilingual text corpora, including a substantial Ukrainian subset, LiBERTa Large establishes a foundational resource for Ukrainian NLU tasks. Our model outperforms existing multilingual and monolingual models pre-trained from scratch for Ukrainian, demonstrating competitive performance against those relying on cross-lingual transfer from English. This achievement underscores our ability to achieve superior performance through pre-training from scratch with additional enhancements, obviating the need to rely on decisions made for English models to efficiently transfer weights. We establish LiBERTa Large as a robust baseline, paving the way for future advancements in Ukrainian language modeling.",
-}
-``` -->
-
 ## Licence

 CC-BY 4.0