gpt-oss-120b (6-bit quantized via MLX-LM)

A 6-bit quantized version of openai/gpt-oss-120b created with MLX-LM.
This version reduces the memory needed for inference to roughly 90 GB of RAM while retaining most of the original model's capabilities.

🛠️ Quantization Process

The model was created using the following steps:
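A representative sketch, assuming MLX-LM's standard convert command (the local output path, group size, and upload repository shown here are illustrative rather than the exact values used for this repository):

```bash
# Install the MLX-LM tooling (Apple Silicon / MLX).
pip install -U mlx-lm

# Convert the original weights to a 6-bit MLX quantization.
# -q enables quantization, --q-bits 6 selects 6-bit precision,
# and --q-group-size 64 is the MLX-LM default group size.
mlx_lm.convert \
  --hf-path openai/gpt-oss-120b \
  --mlx-path gpt-oss-120b-MLX-6bit \
  -q --q-bits 6 --q-group-size 64

# Optionally, --upload-repo converts and pushes the weights to the Hub in one step:
# mlx_lm.convert --hf-path openai/gpt-oss-120b -q --q-bits 6 \
#   --upload-repo prayanksai/gpt-oss-120b-MLX-6bit
```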

These commands use the latest MLX‑LM converter to apply a consistent 6‑bit quantization across model weights.
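Once the quantized weights are available (for example, from this repository), inference can be run with the MLX-LM CLI. A minimal sketch; the prompt and token budget are arbitrary:

```bash
# Generate text with the 6-bit model using the MLX-LM CLI.
mlx_lm.generate \
  --model prayanksai/gpt-oss-120b-MLX-6bit \
  --prompt "Summarize the benefits of 6-bit quantization." \
  --max-tokens 256
```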

Model size: 117B params · Tensor types: BF16, U32 (Safetensors)