jjzha committed
Commit a00dc3f · verified · 1 Parent(s): acd45a9

Update README.md

Files changed (1):
  1. README.md +29 -4
README.md CHANGED
@@ -114,16 +114,41 @@ Total training time amounted to 8,928 GPU hours, with an average carbon efficie
 
  SnakModel was continuously pre-trained on a diverse collection of Danish corpora comprising 350M documents and 13.6B words. The `instruct` version is further tuned on 3.7M Danish instruction-answer pairs.
 
- [Details to follow in Q1 2025]
-
  **Data Freshness**
 
  The pre-training data has a cutoff of January 2024.
 
  ## Evaluation Results
 
- [Released in Q1 2025]
+ | Model | LA (mF1) | NER (μF1) | Senti (mF1) | Summ (BERTScore) | CSR (Acc.) | QA (F1) | TM (Acc.) | CT (Acc.) | AVG |
+ | -------------------------- | --------: | --------: | ----------: | ---------------: | ---------: | --------: | --------: | --------: | --------: |
+ | LLaMA2-7B\_base | 33.43 | 22.31 | 61.54 | 65.50 | 29.76 | 63.54 | 38.69 | 57.05 | 46.48 |
+ | LLaMA2-7B\_chat | 47.42 | 24.63 | 62.35 | 66.15 | **32.24** | 61.34 | 46.67 | 55.18 | 49.50 |
+ | LLaMA2-7B\_base + INST\_da | 36.10 | 28.48 | 62.86 | 66.43 | 29.04 | 64.40 | 49.10 | 58.46 | 49.35 |
+ | LLaMA2-7B\_chat + INST\_da | 43.40 | 29.70 | 65.92 | 65.81 | 30.95 | 62.46 | 57.26 | 55.59 | 51.39 |
+ | Viking-7B | 33.67 | 17.18 | 49.48 | 61.96 | 25.11 | 56.29 | 23.97 | 34.90 | 37.82 |
+ | SnakModel-7B\_base | **56.28** | 19.91 | 57.42 | 58.95 | 30.47 | 18.52 | **69.14** | 60.93 | 46.45 |
+ | SnakModel-7B\_inst | 52.91 | **29.76** | **66.70** | **66.61** | 29.46 | **64.66** | **71.05** | **71.88** | **56.63** |
 
  ## Citation
 
- [Released in Q1 2025]
+ ```
+ @inproceedings{zhang-etal-2025-snakmodel,
+     title = "{SnakModel}: {Lessons} Learned from Training an Open {Danish} Large Language Model",
+     author = {Zhang, Mike and
+       M{\"u}ller-Eberstein, Max and
+       Bassignana, Elisa and
+       Goot, Rob van der},
+     editor = "Johansson, Richard and
+       Stymne, Sara",
+     booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
+     month = mar,
+     year = "2025",
+     address = "Tallinn, Estonia",
+     publisher = "University of Tartu Library",
+     url = "https://aclanthology.org/2025.nodalida-1.80/",
+     pages = "812--825",
+     ISBN = "978-9908-53-109-0",
+     abstract = "We present SnakModel, a Danish large language model (LLM) based on Llama2-7B, which we continuously pre-train on 13.6B Danish words, and further tune on 3.7M Danish instructions. As best practices for creating LLMs for smaller language communities have yet to be established, we examine the effects of early modeling and training decisions on downstream performance throughout the entire training pipeline, including (1) the creation of a strictly curated corpus of Danish text from diverse sources; (2) the language modeling and instruction-tuning training process itself, including the analysis of intermediate training dynamics, and ablations across different hyperparameters; (3) an evaluation on eight language and culturally-specific tasks. Across these experiments SnakModel achieves the highest overall performance, outperforming multiple contemporary Llama2-7B-based models. By making SnakModel, the majority of our pre-training corpus, and the associated code available under open licenses, we hope to foster further research and development in Danish Natural Language Processing, and establish training guidelines for languages with similar resource constraints."
+ }
+ ```
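
The README above documents the `instruct` variant tuned on 3.7M Danish instruction-answer pairs. As an illustration only, here is a minimal sketch of loading it with the Hugging Face `transformers` library; the repository id `NLPnorth/snakmodel-7b-instruct`, the plain-text prompt, and the generation settings are assumptions and are not taken from this diff.

```
# Minimal usage sketch (not part of the README diff). The repo id below is an
# assumed placeholder; substitute the actual model id from the Hugging Face Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "NLPnorth/snakmodel-7b-instruct"  # assumption, for illustration only

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# A simple Danish instruction; the exact prompt template expected by the
# instruct model is not specified in this diff, so plain text is used here.
prompt = "Hvad er hovedstaden i Danmark?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```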