---
library_name: mlx
pipeline_tag: text-generation
inference: false  # MLX is macOS-only; HF Inference API won't run it
license: apache-2.0
base_model: openai/gpt-oss-20b
base_model_relation: quantized
language:
- en
- ro
tags:
- apple-silicon
- metal
- arm64
- 4-bit
- group-size-32
- moe
- mpx4
- openai
- halley-ai
---

# gpt-oss-20b — MLX 4-bit (group size 32)

**Summary.** This is a 4-bit (**Q4**) **MLX** quantization of **gpt-oss-20b** (sparse Mixture-of-Experts, MPx4). Group size is **32**. Built for **Apple Silicon** with Metal acceleration.

- **Base model:** `openai/gpt-oss-20b` (Apache-2.0)
- **Quantization:** MLX Q4, `q_group_size=32` (some tensors remain FP16 for stability)
- **Files:** MLX weight shards + `config.json`; tokenizer files included for drop-in use
- **Footprint:** ~**13.11 GB** on disk
- **Intended use:** local inference / research on M-series Macs
- **Not intended for:** safety-critical decisions; outputs may be inaccurate or biased

## Requirements

- **Runs on:** Apple Silicon (M1 or newer) with **macOS ≥ 13.5** via **MLX (Metal)**.
- **Not supported:** Intel macOS / Linux / Windows (use a GGUF build + llama.cpp instead).
- **RAM guidance:** 24 GB+ **required** for Q4 gs=32; it does **not** work on a 16 GB Mac.

## How to use (MLX)

```bash
pip install mlx-lm transformers
```

```python
# Python API (uses the tokenizer bundled with this repo)
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/gpt-oss-20b-MLX-4bit-gs32")
print(generate(
    model, tokenizer,
    prompt="Explain the Chudnovsky algorithm to compute π.",
    max_tokens=256, max_kv_size=512
))
```

```bash
# CLI
python -m mlx_lm generate --model halley-ai/gpt-oss-20b-MLX-4bit-gs32 \
  --prompt "Explain the Chudnovsky algorithm to compute pi." \
  --max-kv-size 512 --max-tokens 256
```

## Performance (Apple Silicon, real-world)

LM Studio / CLI (MLX, Q4 gs=32), ≈2k-token responses:

- M1 Max (32 GB): ~62–72 tok/s, 0.30–0.40 s TTFB
- M4 Pro (24 GB): ~80–90 tok/s, 0.40–0.60 s TTFB
- M3 Ultra (256 GB): ~130–140 tok/s, 0.20–0.30 s TTFB

Throughput varies with Mac model, context length, and sampler settings.
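For a rough sanity check of throughput on your own machine, the sketch below reuses the `mlx_lm` Python API from above with simple wall-clock timing. It is illustrative only: the prompt is arbitrary, the elapsed time includes prompt processing, and token counts come from re-encoding the output, so it will not reproduce the LM Studio numbers exactly.

```python
# Rough throughput estimate (illustrative; timing includes prompt processing).
import time
from mlx_lm import load, generate

model, tokenizer = load("halley-ai/gpt-oss-20b-MLX-4bit-gs32")

prompt = "Explain the Chudnovsky algorithm to compute pi."
start = time.perf_counter()
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, max_kv_size=512)
elapsed = time.perf_counter() - start

# Re-encode the output with the bundled tokenizer to estimate tokens generated.
n_tokens = len(tokenizer.encode(text))
print(f"{n_tokens} tokens in {elapsed:.1f} s ≈ {n_tokens / elapsed:.1f} tok/s")
```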
## Evaluation

Perplexity (PPL), streaming evaluation on WikiText-2; window = stride = 4096, ~100k tokens, EOS inserted between documents.

| Variant | PPL (ctx=4096) | Δ vs 8-bit (gs=64) | Δ vs 6-bit (gs=32) |
|---|---|---|---|
| MLX 8-bit, gs=64 (reference) | 10.75 | – | – |
| MLX 6-bit, gs=32 | 10.46 | −2.7% | – |
| MLX 5-bit, gs=32 | 11.11 | +3.3% | +6.2% |
| MLX 4-bit, gs=32 (this model) | 13.70 | +27.4% | +31.0% |
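The numbers above come from an internal evaluation script. As a reference point, a minimal sketch of the same windowed approach (window = stride = 4096, documents joined with EOS, ~100k tokens) is shown below; it assumes the Hugging Face `datasets` package is installed and is not the exact script behind the table.

```python
# Minimal windowed-perplexity sketch (assumes `pip install datasets`;
# not the exact script used for the table above).
import math
import mlx.core as mx
from datasets import load_dataset
from mlx_lm import load

model, tokenizer = load("halley-ai/gpt-oss-20b-MLX-4bit-gs32")
ds = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")

# Join non-empty documents with EOS between them, then tokenize once.
eos = tokenizer.eos_token or ""
tokens = tokenizer.encode(eos.join(t for t in ds["text"] if t.strip()))[:100_000]

window = 4096  # window = stride = 4096
total_nll, total_tokens = 0.0, 0
for start in range(0, len(tokens) - 1, window):
    chunk = tokens[start:start + window + 1]   # +1 so every input has a target
    if len(chunk) < 2:
        break
    inputs = mx.array(chunk[:-1])[None]        # (1, T)
    targets = mx.array(chunk[1:])[None]        # (1, T)
    logits = model(inputs).astype(mx.float32)  # (1, T, vocab)
    logprobs = logits - mx.logsumexp(logits, axis=-1, keepdims=True)
    nll = -mx.take_along_axis(logprobs, targets[..., None], axis=-1)
    total_nll += nll.sum().item()
    total_tokens += targets.size

print(f"PPL ≈ {math.exp(total_nll / total_tokens):.2f}")
```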