Update README.md
README.md CHANGED
````diff
@@ -1,11 +1,11 @@
----
-license: mit
-library_name: mlx
-tags:
-- mlx
-base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
-pipeline_tag: text-generation
----
+---
+license: mit
+library_name: mlx
+tags:
+- mlx
+base_model: deepseek-ai/DeepSeek-R1-0528-Qwen3-8B
+pipeline_tag: text-generation
+---
 
 # mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ
 
@@ -13,6 +13,8 @@ This model [mlx-community/DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ](https://huggingfac
 converted to MLX format from [deepseek-ai/DeepSeek-R1-0528-Qwen3-8B](https://huggingface.co/deepseek-ai/DeepSeek-R1-0528-Qwen3-8B)
 using mlx-lm version **0.25.2**.
 
+AWQ Parameters: --bits 4 --group-size 64 --embed-bits 4 --embed-group-size 32 --num-samples 256 --sequence-length 1024 --n-grid 50
+
 ## Use with mlx
 
 ```bash
````
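The "AWQ Parameters" line added by this commit lists the flags used for the 4-bit AWQ quantization. As a rough illustration, the sketch below shows how those parameters might be passed to mlx-lm's AWQ quantization script; the `mlx_lm.awq` entry point and the `--model`/`--mlx-path` arguments are assumptions not stated in the card, while the numeric values are taken verbatim from the README.

```bash
# Hypothetical reconstruction of the quantization run behind this card.
# The numeric parameters come from the README's "AWQ Parameters" line;
# the entry point and the --model / --mlx-path flags are assumptions.
pip install "mlx-lm>=0.25.2"

python -m mlx_lm.awq \
  --model deepseek-ai/DeepSeek-R1-0528-Qwen3-8B \
  --mlx-path DeepSeek-R1-0528-Qwen3-8B-4bit-AWQ \
  --bits 4 \
  --group-size 64 \
  --embed-bits 4 \
  --embed-group-size 32 \
  --num-samples 256 \
  --sequence-length 1024 \
  --n-grid 50
```

Read as standard AWQ settings, `--bits`/`--group-size` would control the main weight quantization, `--embed-bits`/`--embed-group-size` the embedding quantization, `--num-samples`/`--sequence-length` the calibration data, and `--n-grid` the scale-search grid, though the card itself does not spell this out.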