Quantized yasserrmd/gpt-oss-coder-20b with llama.cpp commit 54a241f

Thanks to yasserrmd for the awesome finetuning work and to OpenAI for the original model.

There are multiple ways to use this model:

  • Run with llama.cpp: ./llama-server -m "gpt-oss-coder-MXFP4_MOE.gguf" -ngl 25 --jinja (a Python sketch for downloading and querying follows this list)
  • Use with LM Studio: just pull benhaotang/gpt-oss-coder-20B-GGUF
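
If you prefer to script the llama.cpp route, below is a minimal sketch. It assumes the huggingface_hub and requests Python packages (not part of this repo's tooling), uses the repo id and filename from the command above, and assumes llama-server is running on its default port 8080 with an OpenAI-compatible endpoint.

```python
import requests
from huggingface_hub import hf_hub_download

# Fetch the quantized GGUF from the Hub; repo id and filename mirror
# the llama-server command shown above.
model_path = hf_hub_download(
    repo_id="benhaotang/gpt-oss-coder-20B-GGUF",
    filename="gpt-oss-coder-MXFP4_MOE.gguf",
)
print(f"Start the server with: ./llama-server -m {model_path} -ngl 25 --jinja")

# Once llama-server is running (it listens on http://localhost:8080 by default),
# query its OpenAI-compatible chat completions endpoint. The prompt here is
# just an illustrative example.
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "temperature": 0.2,
    },
    timeout=300,
)
print(resp.json()["choices"][0]["message"]["content"])
```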

Model details: GGUF format, 20.9B params, gpt-oss architecture.
Quantizations available: 4-bit and 16-bit.

Base model: openai/gpt-oss-20b
