---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3-0.6B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3-0.6B
tags:
- mlx
---

# Qwen3-0.6B-dwq6c-mlx

Tuned for fast output with no emoticons; trained on 168 samples (average loss 0.2032, final loss 0.1682).

Performance evaluation:

| Task          | acc   | acc_norm | stderr |
|---------------|-------|----------|--------|
| arc_challenge | 0.273 | 0.290    | 0.013  |
| arc_easy      | 0.412 | 0.360    | 0.0098 |
| boolq         | 0.378 | n/a      | 0.0084 |
| hellaswag     | 0.366 | 0.425    | 0.0049 |
| openbookqa    | 0.190 | 0.342    | 0.0212 |
| piqa          | 0.657 | 0.649    | 0.011  |
| winogrande    | 0.542 | n/a      | 0.014  |

This model, [Qwen3-0.6B-dwq6c-mlx](https://huggingface.co/Qwen3-0.6B-dwq6c-mlx), was converted to MLX format from [Qwen/Qwen3-0.6B](https://huggingface.co/Qwen/Qwen3-0.6B) using mlx-lm version **0.26.0**.

## Use with mlx

```bash
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("Qwen3-0.6B-dwq6c-mlx")

prompt = "hello"

# Apply the chat template when the tokenizer provides one.
if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
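
For interactive use, tokens can be streamed as they are decoded instead of waiting for the full response. A minimal sketch, assuming a recent mlx-lm (such as the 0.26.0 used above) where `stream_generate` yields response objects exposing the decoded text:

```python
from mlx_lm import load, stream_generate

model, tokenizer = load("Qwen3-0.6B-dwq6c-mlx")

messages = [{"role": "user", "content": "hello"}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# Print each decoded chunk as soon as it is produced.
for chunk in stream_generate(model, tokenizer, prompt, max_tokens=256):
    print(chunk.text, end="", flush=True)
print()
```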
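
mlx-lm also installs a command-line entry point, so a quick smoke test needs no Python at all. A usage sketch, assuming the model path above resolves locally or on the Hugging Face Hub:

```bash
# One-off generation from the shell; --max-tokens caps the reply length.
mlx_lm.generate --model Qwen3-0.6B-dwq6c-mlx --prompt "hello" --max-tokens 256
```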