---
library_name: mlx
pipeline_tag: text-generation
inference: false  # MLX is macOS-only; HF Inference API won't run it
license: apache-2.0
base_model: openai/gpt-oss-20b
base_model_relation: quantized
language:
  - en
  - ro
tags:
  - apple-silicon
  - metal
  - arm64
  - 6-bit
  - group-size-32
  - moe
  - mpx4
  - openai
  - halley-ai
---

# gpt-oss-20b — MLX 6-bit (group size 32)

**Summary.** This is a 6-bit (**Q6**) **MLX** quantization of **gpt-oss-20b** (a sparse Mixture-of-Experts model, MPx4) with group size **32**, built for **Apple Silicon** with Metal acceleration.

- **Base model:** `openai/gpt-oss-20b` (Apache-2.0)
- **Quantization:** MLX Q6, `q_group_size=32` (some tensors remain FP16 for stability)
- **Files:** MLX weight shards + `config.json`; tokenizer files included for drop-in use
- **Footprint:** ~**18.38 GB** on disk
- **Intended use:** local inference / research on M-series Macs
- **Not intended for:** safety-critical decisions; outputs may be inaccurate or biased

## Requirements

- **Runs on:** Apple Silicon (M1 or newer) with **macOS ≥ 13.5** via **MLX (Metal)**.
- **Not supported:** Intel macOS / Linux / Windows (use a GGUF build + llama.cpp instead).
- **RAM guidance:** 32 GB minimum for Q6 (gs=32); a 24 GB MacBook Pro **won't run it**. Extra RAM improves headroom.

## How to use (MLX)

```bash
pip install mlx-lm transformers
```

```python
# Python API (uses the tokenizer bundled with this repo)
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/gpt-oss-20b-MLX-6bit-gs32")
print(generate(
    model, tokenizer,
    prompt="Explain the Chudnovsky algorithm to compute π.",
    max_tokens=256,
    max_kv_size=512,
))
```

## Performance (Apple Silicon, real-world)

LM Studio / CLI (MLX, Q6 gs=32): ~49–55 tok/s, TTFB ~0.35–0.45 s on ≈2k-token responses, measured on an M1 Max (32 GB); short fixed-length runs show lower tok/s due to startup overhead. Throughput varies with Mac model, context length, and sampler settings.
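The ~18.38 GB on-disk figure quoted above is consistent with simple bits-per-weight arithmetic. The sketch below assumes roughly 21B quantized parameters and one FP16 scale plus one FP16 bias stored per 32-weight group; both of these are assumptions for illustration, not values taken from this card.

```python
# Back-of-the-envelope Q6 gs=32 footprint (assumptions: ~21e9 quantized
# parameters; fp16 scale + fp16 bias stored per 32-weight group; tensors
# kept in FP16 are ignored).
params = 21e9
group_overhead = (16 + 16) / 32       # 1 extra bit per weight for scale/bias
bits_per_weight = 6 + group_overhead  # 7.0 effective bits per weight
size_gb = params * bits_per_weight / 8 / 1e9
print(f"~{bits_per_weight:.1f} bits/weight -> ~{size_gb:.2f} GB")  # ~18.38 GB
```

Smaller group sizes trade disk/RAM for accuracy: gs=32 stores twice as many scale/bias pairs as gs=64, which is why a gs=32 build is slightly larger at the same bit width.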
## Evaluation

Perplexity (PPL), streaming evaluation on WikiText-2; window = stride = 4096, ~100k tokens, EOS inserted between documents.
| Variant | PPL (ctx=4096) | Δ vs 8-bit (gs=64) | Δ vs 6-bit (gs=32) |
|---|---|---|---|
| MLX 8-bit (gs=64, reference) | 10.75 | — | — |
| MLX 6-bit (gs=32) | 10.46 | −2.7% | — |
| MLX 5-bit (gs=32) | 11.11 | +3.3% | +6.2% |
| MLX 4-bit (gs=32) | 13.70 | +27.4% | +31.0% |
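The evaluation protocol above (concatenate documents with EOS separators, score non-overlapping 4096-token windows since window = stride, then exponentiate the mean negative log-likelihood) can be sketched as follows. The model is stood in for by precomputed per-token NLLs, and all names here are illustrative, not mlx-lm APIs.

```python
# Sketch of the streaming perplexity protocol (illustrative names; the real
# evaluation scores tokens with the quantized model instead of toy NLLs).
import math

EOS = "<eos>"

def concat_with_eos(docs):
    """Join tokenized documents into one stream, inserting EOS between docs."""
    stream = []
    for i, doc in enumerate(docs):
        if i > 0:
            stream.append(EOS)
        stream.extend(doc)
    return stream

def streaming_ppl(nll_per_token, window=4096):
    """Perplexity over non-overlapping windows (window == stride)."""
    total_nll, count = 0.0, 0
    for start in range(0, len(nll_per_token), window):
        chunk = nll_per_token[start:start + window]
        total_nll += sum(chunk)
        count += len(chunk)
    return math.exp(total_nll / count)

# Toy check: a uniform model over a 4-symbol vocabulary must score PPL = 4.
docs = [["a", "b"], ["c", "d", "a"]]
stream = concat_with_eos(docs)
nlls = [math.log(4.0)] * len(stream)
print(streaming_ppl(nlls, window=4))  # -> 4.0
```

Because window equals stride, every token is scored exactly once; overlapping-window evaluation (stride < window) would generally report lower PPL at higher compute cost, so the numbers in the table are comparable only under this same protocol.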