lbourdois committed
Commit 19a5e4d · verified · 1 Parent(s): 9822fa5

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve how the model is referenced on the Hub. Note that the README announces 29 languages, but only 13 are explicitly listed, so I was only able to add those 13.

Files changed (1): README.md (+74 -61)
README.md CHANGED
@@ -1,61 +1,74 @@
- ---
- base_model:
- - Lawnakk/BBALAW1.6
- - Qwen/Qwen2.5-7B
- library_name: transformers
- tags:
- - mergekit
- - merge
-
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [Lawnakk/BBALAW1.6](https://huggingface.co/Lawnakk/BBALAW1.6)
- * [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- slices:
- - sources:
-   - model: Lawnakk/BBALAW1.6
-     layer_range:
-     - 0
-     - 28
-   - model: Qwen/Qwen2.5-7B
-     layer_range:
-     - 0
-     - 28
- merge_method: slerp
- base_model: Qwen/Qwen2.5-7B
- parameters:
-   t:
-   - filter: self_attn
-     value:
-     - 0
-     - 0.5
-     - 0.3
-     - 0.7
-     - 1
-   - filter: mlp
-     value:
-     - 1
-     - 0.5
-     - 0.7
-     - 0.3
-     - 0
-   - value: 0.5
- dtype: bfloat16
- ```
+ ---
+ base_model:
+ - Lawnakk/BBALAW1.6
+ - Qwen/Qwen2.5-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [Lawnakk/BBALAW1.6](https://huggingface.co/Lawnakk/BBALAW1.6)
+ * [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ slices:
+ - sources:
+   - model: Lawnakk/BBALAW1.6
+     layer_range:
+     - 0
+     - 28
+   - model: Qwen/Qwen2.5-7B
+     layer_range:
+     - 0
+     - 28
+ merge_method: slerp
+ base_model: Qwen/Qwen2.5-7B
+ parameters:
+   t:
+   - filter: self_attn
+     value:
+     - 0
+     - 0.5
+     - 0.3
+     - 0.7
+     - 1
+   - filter: mlp
+     value:
+     - 1
+     - 0.5
+     - 0.7
+     - 0.3
+     - 0
+   - value: 0.5
+ dtype: bfloat16
+ ```
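
For readers unfamiliar with SLERP merging, here is a minimal sketch of the interpolation the config above applies to each pair of weight tensors. It illustrates the standard spherical-interpolation formula under the assumption that each tensor is treated as a flattened vector; the function and variable names are hypothetical and do not reflect mergekit's actual API.

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate two weight tensors treated as flat vectors.

    t = 0 returns v0, t = 1 returns v1; intermediate values follow the
    great-circle arc between the two directions.
    """
    a, b = v0.ravel(), v1.ravel()
    # Angle between the two vectors, from the normalized dot product.
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))
    if np.sin(theta) < eps:
        # Nearly colinear vectors: fall back to plain linear interpolation.
        merged = (1.0 - t) * a + t * b
    else:
        merged = (np.sin((1.0 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)
    return merged.reshape(v0.shape)

# Example: blend one layer's attention weights halfway between the two models.
w_base = np.random.randn(64, 64).astype(np.float32)   # stand-in for Qwen/Qwen2.5-7B weights
w_other = np.random.randn(64, 64).astype(np.float32)  # stand-in for Lawnakk/BBALAW1.6 weights
w_merged = slerp(0.5, w_base, w_other)
```

In the config, the five-element `value` lists under `t` presumably define an interpolation schedule over layer depth for tensors matching each filter (`self_attn`, `mlp`), with the trailing `value: 0.5` acting as the default `t` for all remaining tensors.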