---
license: cc-by-nc-4.0
language:
- de
- frr
base_model:
- facebook/nllb-200-distilled-600M
pipeline_tag: translation
---

# Northern Frisian translation model
This is an [NLLB-200-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) model fine-tuned for translating between German and
the Northern Frisian dialects of Mooringer Frasch and Wiringhiirder Freesk, following [this great blog post](https://cointegrated.medium.com/a37fc706b865).
## Data

1. Mooring <-> German
   The Mooring dataset for fine-tuning consisted of 9339 sentence pairs.
   Most examples (roughly 5100) were taken directly from
   ["Rüm Hart"](https://www.nordfriiskfutuur.eu/fileadmin/Content/Nordfriisk_Futuur/E-Books/N._A._Johannsen__Ruem_hart.pdf)
   published by the Nordfriisk Instituut. For sentence splitting, the Python
   [sentence-splitter library](https://pypi.org/project/sentence-splitter/) was used (see the sketch after this list). The splitting wasn't perfect,
   especially in cases of direct speech, so manual re-alignment and further splitting were necessary.
   In addition, the texts about larks from Föögle önj Nordfraschlönj (Marie Tångeberg, 1992), a translation of the
   story Bulemanns Haus by Theodor Storm, and roughly 3000 examples taken from the Frasch Uurdebök
   (Friesisches Wörterbuch, Neumünster 1988) were added.
   Finally, a little under 180 very simple self-written examples were used as the evaluation dataset.
+
29
+ 3. Wiringhiirder <-> German
30
+ The Wiringhiirder dataset consisted of 7529 sentence pairs taken from the books
31
+ ["Di muon fuon e halie"](https://www.nordfriiskfutuur.eu/fileadmin/Content/Nordfriisk_Futuur/E-Books/Peter_Jensen__Di_muon_fuon_e_halie.pdf)
32
+ and ["Di tofel"](https://www.nordfriiskfutuur.eu/fileadmin/Content/Nordfriisk_Futuur/E-Books/Peter_Jensen__Di_tofel.pdf)
33
+ by Peter Jensen published by the Nordfriisk Instituut. Similar measures were taken as for Rüm Hart above.
34
+ For evaluation sentences were collected from Wikipedia, however the evaluation set remains very small and is barely enough to detect
35
+ overfitting.
36
+
37
+
38
+ ## Usage
39
+ How to use the model:
40
+ ```python
41
+ !pip install transformers==4.33
42
+
43
+ from transformers import AutoModelForSeq2SeqLM, NllbTokenizer
44
+
45
+ def create_tokenizer_with_new_langs(model_id, new_langs):
46
+ tokenizer = NllbTokenizer.from_pretrained(model_id)
47
+ for lang in new_langs:
48
+ old_len = len(tokenizer) - int(new_lang in tokenizer.added_tokens_encoder)
49
+ new_token_id = old_len - 1
50
+ if new_lang in tokenizer.added_tokens_encoder:
51
+ new_token_id = tokenizer.added_tokens_encoder[new_lang] - 1
52
+ tokenizer.lang_code_to_id[new_lang] = new_token_id
53
+ tokenizer.id_to_lang_code[new_token_id] = new_lang
54
+ # always move "mask" to the last position
55
+ tokenizer.fairseq_tokens_to_ids["<mask>"] = len(tokenizer.sp_model) + len(tokenizer.lang_code_to_id) + tokenizer.fairseq_offset
56
+
57
+ tokenizer.fairseq_tokens_to_ids.update(tokenizer.lang_code_to_id)
58
+ tokenizer.fairseq_ids_to_tokens = {v: k for k, v in tokenizer.fairseq_tokens_to_ids.items()}
59
+ if new_lang not in tokenizer._additional_special_tokens:
60
+ tokenizer._additional_special_tokens.append(new_lang)
61
+ # clear the added token encoder; otherwise a new token may end up there by mistake
62
+ tokenizer.added_tokens_encoder = {}
63
+ tokenizer.added_tokens_decoder = {}
64
+
65
+ return tokenizer
66
+
67
+ def translate(
68
+ text,
69
+ tokenizer,
70
+ model,
71
+ src_lang='moo_Latn',
72
+ tgt_lang='deu_Latn',
73
+ a=32,
74
+ b=3,
75
+ max_input_length=1024,
76
+ num_beams=4,
77
+ **kwargs
78
+ ):
79
+ tokenizer.src_lang = src_lang
80
+ tokenizer.tgt_lang = tgt_lang
81
+ inputs = tokenizer(text, return_tensors='pt', padding=True, truncation=True, max_length=max_input_length)
82
+ result = model.generate(
83
+ **inputs.to(model.device),
84
+ forced_bos_token_id=tokenizer.convert_tokens_to_ids(tgt_lang),
85
+ max_new_tokens=int(a + b * inputs.input_ids.shape[1]),
86
+ num_beams=num_beams,
87
+ **kwargs
88
+ )
89
+ return tokenizer.batch_decode(result, skip_special_tokens=True)
90
+
91
+ path = "CmdCody/nllb-deu-frr"
92
+ tokenizer = create_tokenizer_with_new_langs(path, ['moo_Latn', 'wir_Latn'])
93
+ model = AutoModelForSeq2SeqLM.from_pretrained(path)
94
+
95
+ translate("Momme booget önj Naibel", tokenizer=tokenizer, model=model)
96
+ ```
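
The translation direction is controlled by the `src_lang` and `tgt_lang` arguments; for example, for German to Mooring (the German sentence here is only an illustration):

```python
translate("Momme wohnt in Niebüll", tokenizer=tokenizer, model=model,
          src_lang='deu_Latn', tgt_lang='moo_Latn')
```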

## Training
The model was trained in a Google Colab notebook for 4 epochs with a batch size of 16, following the above-mentioned blog post with two notable adaptations:
1. The data iteration was changed to make sure that the model sees each example in the dataset exactly once per epoch.
2. After tokenization and batching, the complete dataset is shuffled before each epoch so that all translation directions are mixed. However, each batch only contains examples for one direction (see the sketch below).
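
The training code is not included in this repository; the following is a minimal sketch of that batching scheme, assuming the sentence pairs are already grouped per translation direction (`pairs_by_direction` and the training step are placeholders):

```python
import random

def make_epoch_batches(pairs_by_direction, batch_size=16, seed=0):
    # build batches that each contain a single translation direction ...
    batches = []
    for direction, pairs in pairs_by_direction.items():
        for i in range(0, len(pairs), batch_size):
            batches.append((direction, pairs[i:i + batch_size]))
    # ... then shuffle the batches so the directions are mixed within the epoch,
    # while every example is still seen exactly once per epoch
    random.Random(seed).shuffle(batches)
    return batches

for epoch in range(4):
    for direction, batch in make_epoch_batches(pairs_by_direction, seed=epoch):
        ...  # tokenize with the direction's language codes and take one training step
```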

## Evaluation
Metrics on the evaluation data sets:

|            | BLEU  | chrF++ |
|------------|-------|--------|
| Moo -> Deu | 55.78 | 70.73  |
| Deu -> Moo | 50.19 | 67.76  |
| Wir -> Deu | 67.22 | 80.16  |
| Deu -> Wir | 42.35 | 61.08  |

Note: As mentioned above, the Wiringhiirder evaluation set is very small and the resulting metrics should not be compared with the Mooring metrics.
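
Scores of this kind can be computed with [sacrebleu](https://pypi.org/project/sacrebleu/); a sketch for one direction, where `eval_sources` and `references` stand in for the evaluation sentences and their gold translations:

```python
import sacrebleu

hypotheses = translate(eval_sources, tokenizer=tokenizer, model=model)  # e.g. Moo -> Deu
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
chrf = sacrebleu.corpus_chrf(hypotheses, [references], word_order=2)  # word_order=2 gives chrF++
print(bleu.score, chrf.score)
```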