gpt-oss-120B (6‑bit quantized via MLX‑LM)
A 6‑bit quantized version of openai/gpt-oss-120b, created with MLX‑LM.
Quantizing to 6 bits reduces the inference memory requirement to roughly 90 GB of RAM while retaining most of the original model's capabilities.
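As a rough sanity check (assuming the ~117 B total parameters reported for gpt-oss-120b), storing every weight in 6 bits works out to about 117 × 10⁹ × 6 / 8 ≈ 88 GB, which lines up with the figure above before accounting for activations and the KV cache.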
⸻
🛠️ Quantization Process
The model was created using the following steps:
- pip uninstall mlx-lm
- pip install git+https://github.com/ml-explore/mlx-lm.git@main
- mlx_lm.convert --hf-path openai/gpt-oss-120b --quantize --q-bits 6 --mlx-path gpt-oss-120b-MLX-6bit
These commands install the latest MLX‑LM from source and use its converter to apply a uniform 6‑bit quantization across the model weights.
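Once the conversion finishes, the model can be loaded with the mlx-lm Python API. The snippet below is a minimal inference sketch, not part of the original conversion steps: it assumes the quantized weights sit in the gpt-oss-120b-MLX-6bit directory produced above (the Hub repo id prayanksai/gpt-oss-120b-MLX-6bit should also work as the path) and uses only the standard load/generate helpers from mlx_lm.

```python
# Minimal inference sketch (assumption: weights in ./gpt-oss-120b-MLX-6bit,
# or the Hub repo prayanksai/gpt-oss-120b-MLX-6bit; mlx-lm installed as above).
from mlx_lm import load, generate

# Load the 6-bit quantized weights and the tokenizer.
model, tokenizer = load("gpt-oss-120b-MLX-6bit")

# Wrap the request in the model's chat template before generating.
messages = [{"role": "user", "content": "Summarize what 6-bit quantization trades off."}]
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

# Generate a completion; verbose=True also prints throughput statistics.
text = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
print(text)
```

A quick smoke test is also possible from the command line with mlx_lm.generate --model gpt-oss-120b-MLX-6bit --prompt "Hello".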