lbourdois committed
Commit 914bac4 · verified · 1 Parent(s): 9500be0

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.
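As a side note, this kind of metadata change can also be made programmatically rather than by editing README.md by hand. Below is a minimal sketch using the `metadata_update` helper from `huggingface_hub`; the `repo_id` is a placeholder (this model's actual repo id is not shown here), and `create_pr=True` is assumed so the change arrives as a pull request like this one.

```python
# Minimal sketch: add the 13 explicitly listed languages to the model
# card's YAML metadata via the Hub API. The repo_id is a placeholder.
from huggingface_hub import metadata_update

languages = ["zho", "eng", "fra", "spa", "por", "deu", "ita",
             "rus", "jpn", "kor", "vie", "tha", "ara"]

metadata_update(
    repo_id="some-user/some-merged-model",  # hypothetical repo id
    metadata={"language": languages},
    overwrite=True,   # replace any existing `language` field
    create_pr=True,   # open a PR instead of committing to main
)
```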

Files changed (1)
README.md +113 -100
README.md CHANGED
@@ -1,100 +1,113 @@
 ---
 base_model:
 - allura-org/Qwen2.5-32b-RP-Ink
 - huihui-ai/QwQ-32B-Preview-abliterated
 - AiCloser/Qwen2.5-32B-AGI
 - Kaoeiri/Qwenwify-32B-v3
 - Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B
 - Sao10K/32B-Qwen2.5-Kunou-v1
 - EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
 - huihui-ai/Qwen2.5-32B-Instruct-abliterated
 - OpenBuddy/openbuddy-qwq-32b-v24.2-200k
 - Qwen/Qwen2.5-32B-Instruct
 - Dans-DiscountModels/Qwen2.5-32B-ChatML
 library_name: transformers
 tags:
 - mergekit
 - merge
-
+language:
+- zho
+- eng
+- fra
+- spa
+- por
+- deu
+- ita
+- rus
+- jpn
+- kor
+- vie
+- tha
+- ara
 ---
 # merge
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
 
 ## Merge Details
 ### Merge Method
 
 This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method with [Qwen/Qwen2.5-32B-Instruct](https://huggingface.co/Qwen/Qwen2.5-32B-Instruct) as the base.
 
 ### Models Merged
 
 The following models were included in the merge:
 * [allura-org/Qwen2.5-32b-RP-Ink](https://huggingface.co/allura-org/Qwen2.5-32b-RP-Ink)
 * [huihui-ai/QwQ-32B-Preview-abliterated](https://huggingface.co/huihui-ai/QwQ-32B-Preview-abliterated)
 * [AiCloser/Qwen2.5-32B-AGI](https://huggingface.co/AiCloser/Qwen2.5-32B-AGI)
 * [Kaoeiri/Qwenwify-32B-v3](https://huggingface.co/Kaoeiri/Qwenwify-32B-v3)
 * [Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B)
 * [Sao10K/32B-Qwen2.5-Kunou-v1](https://huggingface.co/Sao10K/32B-Qwen2.5-Kunou-v1)
 * [EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2)
 * [huihui-ai/Qwen2.5-32B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-32B-Instruct-abliterated)
 * [OpenBuddy/openbuddy-qwq-32b-v24.2-200k](https://huggingface.co/OpenBuddy/openbuddy-qwq-32b-v24.2-200k)
 * [Dans-DiscountModels/Qwen2.5-32B-ChatML](https://huggingface.co/Dans-DiscountModels/Qwen2.5-32B-ChatML)
 
 ### Configuration
 
 The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
   - model: Kaoeiri/Qwenwify-32B-v3
     parameters:
       weight: 1.0
       density: 0.86
   - model: EVA-UNIT-01/EVA-Qwen2.5-32B-v0.2
     parameters:
       weight: 0.50
       density: 0.42
   - model: Sao10K/32B-Qwen2.5-Kunou-v1
     parameters:
       weight: 0.30
       density: 0.75
   - model: Dans-DiscountModels/Qwen2.5-32B-ChatML
     parameters:
       weight: 0.10
       density: 0.65
   - model: OpenBuddy/openbuddy-qwq-32b-v24.2-200k
     parameters:
       weight: 0.25
       density: 0.75
   - model: Saxo/Linkbricks-Horizon-AI-Japanese-Base-32B
     parameters:
       weight: 0.20
       density: 0.82
   - model: allura-org/Qwen2.5-32b-RP-Ink
     parameters:
       weight: 0.28
       density: 0.78
   - model: AiCloser/Qwen2.5-32B-AGI
     parameters:
       weight: 0.12
       density: 0.68
   - model: huihui-ai/QwQ-32B-Preview-abliterated
     parameters:
       weight: 0.14
       density: 0.65
   - model: huihui-ai/Qwen2.5-32B-Instruct-abliterated
     parameters:
       weight: 0.23
       density: 0.75
 
 merge_method: dare_ties
 base_model: Qwen/Qwen2.5-32B-Instruct
 parameters:
   density: 0.90
   epsilon: 0.07
   lambda: 1.35
 random_seed: 42
 dtype: bfloat16
 tokenizer_source: union
 
 ```
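For anyone who wants to reproduce the merge, the configuration above is a standard mergekit config. Below is a minimal sketch assuming mergekit's documented Python entry points (`MergeConfiguration` and `run_merge`); the file and output paths are placeholders, and the CLI equivalent would be `mergekit-yaml config.yaml ./merged-model --cuda`.

```python
# Minimal reproduction sketch, assuming the YAML above is saved as
# config.yaml and mergekit is installed (pip install mergekit).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./merged-model",  # output directory (placeholder)
    options=MergeOptions(
        cuda=True,            # run the merge on GPU; set False for CPU
        copy_tokenizer=True,  # write a tokenizer alongside the merged weights
        lazy_unpickle=True,   # reduce peak RAM while loading checkpoints
    ),
)
```

Note that merging ten 32B checkpoints is disk- and memory-intensive; expect to need enough storage for every source model plus the bfloat16 output.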