---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- vllm
- mlx
- quantized
base_model: openai/gpt-oss-120b
---

# gpt-oss-120B (6‑bit quantized via MLX‑LM)

**A 6‑bit quantized version of `openai/gpt-oss-120b` created with MLX‑LM.**  
Quantization reduces inference memory requirements to roughly 90 GB of RAM (~120 B parameters at 6 bits per weight is about 90 GB) while retaining most of the original model's capabilities.

## 🛠️ Quantization Process

The model was created using the following steps:

```bash
pip uninstall mlx-lm
pip install git+https://github.com/ml-explore/mlx-lm.git@main

mlx_lm.convert \
  --hf-path openai/gpt-oss-120b \
  --quantize \
  --q-bits 6 \
  --mlx-path gpt-oss-120b-MLX-6bit
```

These commands install the latest MLX‑LM converter from source and apply uniform 6‑bit quantization across the model weights, writing the result to `gpt-oss-120b-MLX-6bit`.
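
Once converted, the model can be loaded directly with the MLX‑LM Python API. The snippet below is a minimal sketch that assumes the local output directory `gpt-oss-120b-MLX-6bit` from the convert step above (a Hugging Face Hub repo ID works the same way):

```python
from mlx_lm import load, generate

# Load the 6-bit quantized weights produced by mlx_lm.convert.
# A Hub repo ID can be passed here instead of a local path.
model, tokenizer = load("gpt-oss-120b-MLX-6bit")

# Format the request with the model's chat template before generating.
messages = [{"role": "user", "content": "Summarize the benefits of 6-bit quantization."}]
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True)

# verbose=True streams the generated tokens to stdout as they are produced.
response = generate(model, tokenizer, prompt=prompt, max_tokens=256, verbose=True)
```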