jjzha committed
Commit e848422 · verified · 1 Parent(s): ddc106b

Update README.md

Files changed (1)
  1. README.md +29 -4
README.md CHANGED
@@ -73,16 +73,41 @@ Total training time accounted to 8,928 GPU hours, with an average carbon efficie
 
 SnakModel was continuously pre-trained on a diverse collection of Danish corpora comprising 350M documents and 13.6B words. The `instruct` version is further tuned on 3.7M Danish instruction-answer pairs.
 
- [Details to follow in Q1 2025]
-
 **Data Freshness**
 
 The pre-training data has a cutoff of January 2024.
 
 ## Evaluation Results
 
- [Released in Q1 2025]
+ | Model | LA (mF1) | NER (μF1) | Senti (mF1) | Summ (BERTScore) | CSR (Acc.) | QA (F1) | TM (Acc.) | CT (Acc.) | AVG |
+ | -------------------------- | --------: | --------: | ----------: | ---------------: | ---------: | --------: | --------: | --------: | --------: |
+ | LLaMA2-7B\_base | 33.43 | 22.31 | 61.54 | 65.50 | 29.76 | 63.54 | 38.69 | 57.05 | 46.48 |
+ | LLaMA2-7B\_chat | 47.42 | 24.63 | 62.35 | 66.15 | **32.24** | 61.34 | 46.67 | 55.18 | 49.50 |
+ | LLaMA2-7B\_base + INST₍d₎ₐ | 36.10 | 28.48 | 62.86 | 66.43 | 29.04 | 64.40 | 49.10 | 58.46 | 49.35 |
+ | LLaMA2-7B\_chat + INST₍d₎ₐ | 43.40 | 29.70 | 65.92 | 65.81 | 30.95 | 62.46 | 57.26 | 55.59 | 51.39 |
+ | Viking-7B | 33.67 | 17.18 | 49.48 | 61.96 | 25.11 | 56.29 | 23.97 | 34.90 | 37.82 |
+ | SnakModel-7B\_base | **56.28** | 19.91 | 57.42 | 58.95 | 30.47 | 18.52 | **69.14** | 60.93 | 46.45 |
+ | SnakModel-7B\_inst | 52.91 | **29.76** | **66.70** | **66.61** | 29.46 | **64.66** | **71.05** | **71.88** | **56.63** |
 
 ## Citation
 
- [Released in Q1 2025]
+ ```
+ @inproceedings{zhang-etal-2025-snakmodel,
+     title = "{SnakModel}: {Lessons} Learned from Training an Open {Danish} Large Language Model",
+     author = {Zhang, Mike and
+       M{\"u}ller-Eberstein, Max and
+       Bassignana, Elisa and
+       Goot, Rob van der},
+     editor = "Johansson, Richard and
+       Stymne, Sara",
+     booktitle = "Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)",
+     month = mar,
+     year = "2025",
+     address = "Tallinn, Estonia",
+     publisher = "University of Tartu Library",
+     url = "https://aclanthology.org/2025.nodalida-1.80/",
+     pages = "812--825",
+     ISBN = "978-9908-53-109-0",
+     abstract = "We present SnakModel, a Danish large language model (LLM) based on Llama2-7B, which we continuously pre-train on 13.6B Danish words, and further tune on 3.7M Danish instructions. As best practices for creating LLMs for smaller language communities have yet to be established, we examine the effects of early modeling and training decisions on downstream performance throughout the entire training pipeline, including (1) the creation of a strictly curated corpus of Danish text from diverse sources; (2) the language modeling and instruction-tuning training process itself, including the analysis of intermediate training dynamics, and ablations across different hyperparameters; (3) an evaluation on eight language and culturally-specific tasks. Across these experiments SnakModel achieves the highest overall performance, outperforming multiple contemporary Llama2-7B-based models. By making SnakModel, the majority of our pre-training corpus, and the associated code available under open licenses, we hope to foster further research and development in Danish Natural Language Processing, and establish training guidelines for languages with similar resource constraints."
+ }
+ ```
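
The table added in this commit mixes averaging schemes: LA and Senti report macro-averaged F1 (mF1), while NER reports micro-averaged F1 (μF1). Below is a minimal sketch of the difference between the two, assuming scikit-learn is installed; the toy labels are invented for illustration and are not taken from the SnakModel evaluation suite.

```python
# Sketch: macro- vs. micro-averaged F1 (the mF1 / μF1 columns in the table).
# The labels below are illustrative toy data only.
from sklearn.metrics import f1_score

# Toy 3-class predictions with a heavy class imbalance:
# the classifier always predicts the majority class "NEG".
y_true = ["NEG", "NEG", "NEG", "NEG", "NEU", "POS"]
y_pred = ["NEG", "NEG", "NEG", "NEG", "NEG", "NEG"]

# Macro-F1 averages per-class F1, so the rare classes (NEU, POS) count as
# much as the frequent one; micro-F1 pools all decisions and is dominated
# by the majority class.
macro = f1_score(y_true, y_pred, average="macro", zero_division=0)
micro = f1_score(y_true, y_pred, average="micro", zero_division=0)
print(f"macro-F1: {macro:.2f}")  # ~0.27: two classes are never predicted
print(f"micro-F1: {micro:.2f}")  # ~0.67: most instances are NEG and correct
```

Macro-averaging is the usual choice when class distributions are skewed, since it does not let the majority class dominate the score.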