lbourdois committed
Commit 31c8e85 · verified · 1 Parent(s): eff185b

Improve language tag

Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve how the model is referenced. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
  1. README.md +54 -40
README.md CHANGED
@@ -1,40 +1,54 @@
- ---
- license: other
- license_name: qwen
- license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
- base_model:
- - Qwen/Qwen2.5-14B-Instruct
- base_model_relation: quantized
- tags:
- - VPTQ
- - Quantized
- - Quantization
- ---
-
- **Disclaimer**:
-
- The model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)).
-
- The model itself is sourced from a community release.
-
- It is intended only for experimental purposes.
-
- Users are responsible for any consequences arising from the use of this model.
-
- **Note**:
-
- The PPL test results are for reference only and were collected using the GPTQ testing script.
-
- ```json
- {
-   "ctx_2048": {
-     "wikitext2": 5.8772149085998535
-   },
-   "ctx_4096": {
-     "wikitext2": 5.4326276779174805
-   },
-   "ctx_8192": {
-     "wikitext2": 5.163432598114014
-   }
- }
- ```
+ ---
+ license: other
+ license_name: qwen
+ license_link: https://huggingface.co/Qwen/Qwen2.5-14B-Instruct/blob/main/LICENSE
+ base_model:
+ - Qwen/Qwen2.5-14B-Instruct
+ base_model_relation: quantized
+ tags:
+ - VPTQ
+ - Quantized
+ - Quantization
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+
+ **Disclaimer**:
+
+ The model is reproduced based on the paper *VPTQ: Extreme Low-bit Vector Post-Training Quantization for Large Language Models* ([GitHub](https://github.com/microsoft/vptq), [arXiv](https://arxiv.org/abs/2409.17066)).
+
+ The model itself is sourced from a community release.
+
+ It is intended only for experimental purposes.
+
+ Users are responsible for any consequences arising from the use of this model.
+
+ **Note**:
+
+ The PPL test results are for reference only and were collected using the GPTQ testing script.
+
+ ```json
+ {
+   "ctx_2048": {
+     "wikitext2": 5.8772149085998535
+   },
+   "ctx_4096": {
+     "wikitext2": 5.4326276779174805
+   },
+   "ctx_8192": {
+     "wikitext2": 5.163432598114014
+   }
+ }
+ ```
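
For anyone who wants to sanity-check numbers like those in the JSON block above, here is a minimal sketch of the usual chunked wikitext2 perplexity loop. It is not the exact GPTQ testing script the README refers to: the `vptq` loader call, the placeholder repository id, and the non-overlapping chunking are assumptions, so treat it as an experimental starting point only.

```python
# Hedged sketch: approximate wikitext2 PPL check for a VPTQ checkpoint.
# Assumptions: the checkpoint loads through the `vptq` package and a plain
# non-overlapping chunked evaluation is close enough to the GPTQ script.
import torch
from datasets import load_dataset
from transformers import AutoTokenizer

import vptq  # pip install vptq

repo_id = "<this-quantized-model-repo-id>"  # hypothetical placeholder, replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = vptq.AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")
model.eval()

# Concatenate the wikitext2 test split and tokenize it once.
test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
ids = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

ctx = 2048  # repeat with 4096 and 8192 for the other entries above
nlls = []
for i in range(ids.size(1) // ctx):
    chunk = ids[:, i * ctx : (i + 1) * ctx].to(model.device)
    with torch.no_grad():
        # Transformers returns the mean next-token cross-entropy when labels are passed.
        loss = model(chunk, labels=chunk).loss
    nlls.append(loss.float() * ctx)

ppl = torch.exp(torch.stack(nlls).sum() / (len(nlls) * ctx))
print(f"wikitext2 perplexity @ ctx {ctx}: {ppl.item():.4f}")
```

If the `vptq` loader is not available in your environment, the same loop works with any model object exposing the standard Hugging Face `forward(input_ids, labels=...)` interface; only the loading line changes.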