---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-0.6B
tags:
- mlx
---

# Qwen3-0.6B-dwq6c-mlx

Tuned for fast output with no emoticons; trained on 168 samples (average loss 0.2032, final loss 0.1682).

Performance evaluation:

| Task          | acc   | acc_norm | stderr |
|---------------|-------|----------|--------|
| arc_challenge | 0.273 | 0.290    | 0.013  |
| arc_easy      | 0.412 | 0.360    | 0.0098 |
| boolq         | 0.378 | n/a      | 0.0084 |
| hellaswag     | 0.366 | 0.425    | 0.0049 |
| openbookqa    | 0.190 | 0.342    | 0.0212 |
| piqa          | 0.657 | 0.649    | 0.011  |
| winogrande    | 0.542 | n/a      | 0.014  |
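The task names match standard lm-evaluation-harness benchmarks. If you want to re-run them locally, mlx-lm ships an evaluation entry point that wraps lm-eval; the exact flags used for the numbers above are not stated in this card, so treat the following as a sketch:

```bash
# Sketch only: requires the lm-eval package to be installed alongside mlx-lm.
# The settings (shots, batch size, limits) used for the reported scores are not documented here.
mlx_lm.evaluate \
    --model Qwen3-0.6B-dwq6c-mlx \
    --tasks arc_challenge arc_easy boolq hellaswag openbookqa piqa winogrande
```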

This model [Qwen3-0.6B-dwq6c-mlx](https://huggingface.co/Qwen3-0.6B-dwq6c-mlx) was
converted to MLX format from [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B)
using mlx-lm version **0.26.0**.
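For reference, a plain mlx-lm conversion of the base model looks roughly like the command below. The exact quantization recipe behind this dwq6c variant is not documented in the card, so the flags shown are illustrative only, not the settings actually used for this repo.

```bash
# Illustrative only: stock mlx-lm conversion with 6-bit quantization.
# The dwq6c recipe used for this repo is not specified here.
mlx_lm.convert \
    --hf-path Qwen/Qwen3-0.6B \
    --mlx-path Qwen3-0.6B-mlx-q6 \
    -q --q-bits 6
```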

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

# Load the quantized model and its tokenizer from the Hub (or a local path)
model, tokenizer = load("Qwen3-0.6B-dwq6c-mlx")

prompt = "hello"

# Wrap the prompt in the model's chat template when one is defined
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
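
The model can also be run from the command line for a quick check, using the generate entry point installed with mlx-lm:

```bash
# Quick sanity check from the shell
mlx_lm.generate \
    --model Qwen3-0.6B-dwq6c-mlx \
    --prompt "hello" \
    --max-tokens 256
```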