---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
  - vllm
  - mlx
  - quantized
base_model: openai/gpt-oss-120b
---

# gpt-oss-120B (6‑bit quantized via MLX‑LM)

A 6‑bit quantized version of [openai/gpt-oss-120b](https://huggingface.co/openai/gpt-oss-120b), created with MLX‑LM.
Quantization reduces the inference memory requirement to roughly 90 GB of RAM while retaining most of the original model's capabilities.
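Once downloaded, the model can be run with the MLX‑LM Python API. A minimal sketch, assuming a local model path of `gpt-oss-120b-6bit` (illustrative) and an Apple Silicon machine with sufficient RAM:

```python
# Sketch: load the quantized model and generate text with MLX-LM.
# The model path below is illustrative, not a confirmed repo name.
from mlx_lm import load, generate

model, tokenizer = load("gpt-oss-120b-6bit")
prompt = "Explain 6-bit quantization in one sentence."
text = generate(model, tokenizer, prompt=prompt, max_tokens=100)
print(text)
```

The same weights can also be served via `mlx_lm.generate` on the command line.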

## 🛠️ Quantization Process

The model was created with the MLX‑LM converter, which applies a uniform 6‑bit quantization across the model weights.
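The original conversion commands are not preserved in this card; a typical MLX‑LM 6‑bit conversion, assuming the standard `mlx_lm.convert` CLI and an illustrative output path, looks like:

```shell
# Install MLX-LM (Apple Silicon only)
pip install mlx-lm

# Download the base model and quantize it to 6 bits
# (--mlx-path is an illustrative local output directory)
mlx_lm.convert \
    --hf-path openai/gpt-oss-120b \
    --mlx-path gpt-oss-120b-6bit \
    -q --q-bits 6
```

The `-q` flag enables quantization, and `--q-bits 6` selects the 6‑bit precision used for this release.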