lbourdois committed
Commit 307b822 · verified · 1 Parent(s): 36a74b0

Improve language tag


Hi! Since the model is multilingual, this PR adds languages other than English to the language tag to improve how the model is referenced on the Hub. Note that the README announces 29 languages, but only 13 are explicitly listed, so I was only able to add those 13.
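For reference, the same metadata edit can be scripted. Below is a minimal sketch using the `huggingface_hub` `ModelCard` API; the calls shown are standard `huggingface_hub`, but this exact invocation is illustrative and not part of this PR (it assumes a token with permission to open PRs on the target repo):

```python
from huggingface_hub import ModelCard

# Load the current README.md (YAML front matter + body) from the Hub.
card = ModelCard.load("mlx-community/Qwen2.5-72B-Instruct-4bit")

# Replace the lone "en" entry with the 13 languages explicitly listed
# in the upstream Qwen2.5 README (ISO 639-3 codes).
card.data.language = [
    "zho", "eng", "fra", "spa", "por", "deu", "ita",
    "rus", "jpn", "kor", "vie", "tha", "ara",
]

# Open a pull request with the updated metadata instead of pushing
# directly to the main branch.
card.push_to_hub(
    "mlx-community/Qwen2.5-72B-Instruct-4bit",
    commit_message="Improve language tag",
    create_pr=True,
)
```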

Files changed (1)
1. README.md  +50 -38
README.md CHANGED
@@ -1,38 +1,50 @@
- ---
- base_model: Qwen/Qwen2.5-72B-Instruct
- language:
- - en
- license: other
- license_name: qwen
- license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
- pipeline_tag: text-generation
- tags:
- - chat
- - mlx
- ---
-
- # mlx-community/Qwen2.5-72B-Instruct-4bit
-
- The Model [mlx-community/Qwen2.5-72B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen2.5-72B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) using mlx-lm version **0.18.2**.
-
- ## Use with mlx
-
- ```bash
- pip install mlx-lm
- ```
-
- ```python
- from mlx_lm import load, generate
-
- model, tokenizer = load("mlx-community/Qwen2.5-72B-Instruct-4bit")
-
- prompt="hello"
-
- if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
-     messages = [{"role": "user", "content": prompt}]
-     prompt = tokenizer.apply_chat_template(
-         messages, tokenize=False, add_generation_prompt=True
-     )
-
- response = generate(model, tokenizer, prompt=prompt, verbose=True)
- ```
+ ---
+ base_model: Qwen/Qwen2.5-72B-Instruct
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ license: other
+ license_name: qwen
+ license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
+ pipeline_tag: text-generation
+ tags:
+ - chat
+ - mlx
+ ---
+
+ # mlx-community/Qwen2.5-72B-Instruct-4bit
+
+ The Model [mlx-community/Qwen2.5-72B-Instruct-4bit](https://huggingface.co/mlx-community/Qwen2.5-72B-Instruct-4bit) was converted to MLX format from [Qwen/Qwen2.5-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-72B-Instruct) using mlx-lm version **0.18.2**.
+
+ ## Use with mlx
+
+ ```bash
+ pip install mlx-lm
+ ```
+
+ ```python
+ from mlx_lm import load, generate
+
+ model, tokenizer = load("mlx-community/Qwen2.5-72B-Instruct-4bit")
+
+ prompt="hello"
+
+ if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
+     messages = [{"role": "user", "content": prompt}]
+     prompt = tokenizer.apply_chat_template(
+         messages, tokenize=False, add_generation_prompt=True
+     )
+
+ response = generate(model, tokenizer, prompt=prompt, verbose=True)
+ ```