|
--- |
|
license: apache-2.0 |
|
pipeline_tag: text-generation |
|
library_name: transformers |
|
tags: |
|
- vllm |
|
--- |
|
|
|
# <span style="color: #7FFF7F;">gpt-oss-20b GGUF Models</span> |
|
|
|
|
|
## <span style="color: #7F7FFF;">Model Generation Details</span> |
|
|
|
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`cd6983d5`](https://github.com/ggerganov/llama.cpp/commit/cd6983d56d2cce94ecb86bb114ae8379a609073c). |
|
|
|
|
|
|
|
|
|
|
|
|
|
--- |
|
|
|
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;"> |
|
Click here to get info on choosing the right GGUF model format |
|
</a> |
|
|
|
--- |
|
|
|
|
|
|
|
<!--Begin Original Model Card--> |
|
|
|
|
|
|
|
|
|
|
<p align="center"> |
|
<img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg"> |
|
</p> |
|
|
|
<p align="center"> |
|
<a href="https://gpt-oss.com"><strong>Try gpt-oss</strong></a> Β· |
|
<a href="https://cookbook.openai.com/topic/gpt-oss"><strong>Guides</strong></a> Β· |
|
<a href="https://openai.com/index/gpt-oss-model-card"><strong>Model card</strong></a> Β· |
|
<a href="https://openai.com/index/introducing-gpt-oss/"><strong>OpenAI blog</strong></a> |
|
</p> |
|
|
|
<br> |
|
|
|
Welcome to the gpt-oss series, [OpenAI's open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
|
|
|
We're releasing two flavors of these open models:

- `gpt-oss-120b` – for production, general purpose, high reasoning use cases; fits on a single H100 GPU (117B parameters with 5.1B active parameters)

- `gpt-oss-20b` – for lower latency and local or specialized use cases (21B parameters with 3.6B active parameters)
|
|
|
Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format; they will not work correctly otherwise.
|
|
|
|
|
> [!NOTE] |
|
> This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model. |
|
|
|
# Highlights |
|
|
|
* **Permissive Apache 2.0 license:** Build freely without copyleft restrictions or patent risk, ideal for experimentation, customization, and commercial deployment.
|
* **Configurable reasoning effort:** Easily adjust the reasoning effort (low, medium, high) based on your specific use case and latency needs. |
|
* **Full chain-of-thought:** Gain complete access to the model's reasoning process, facilitating easier debugging and increased trust in outputs. The chain of thought is not intended to be shown to end users.
|
* **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning. |
|
* **Agentic capabilities:** Use the models' native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
|
* **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory. |
|
|
|
--- |
|
|
|
# Inference examples |
|
|
|
## Transformers |
|
|
|
You can use `gpt-oss-120b` and `gpt-oss-20b` with Transformers. If you use the Transformers chat template, it will automatically apply the [harmony response format](https://github.com/openai/harmony). If you use `model.generate` directly, you need to apply the harmony format manually using the chat template or use our [openai-harmony](https://github.com/openai/harmony) package. |
|
|
|
To get started, install the necessary dependencies to set up your environment:
|
|
|
``` |
|
pip install -U transformers kernels torch |
|
``` |
|
|
|
Once set up, you can run the model with the snippet below:
|
|
|
```py |
|
from transformers import pipeline |
|
import torch |
|
|
|
model_id = "openai/gpt-oss-20b" |
|
|
|
pipe = pipeline( |
|
"text-generation", |
|
model=model_id, |
|
torch_dtype="auto", |
|
device_map="auto", |
|
) |
|
|
|
messages = [ |
|
{"role": "user", "content": "Explain quantum mechanics clearly and concisely."}, |
|
] |
|
|
|
outputs = pipe( |
|
messages, |
|
max_new_tokens=256, |
|
) |
|
print(outputs[0]["generated_text"][-1]) |
|
``` |
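
If you call `model.generate` directly instead, the chat template still renders the harmony format for you. The following is a minimal sketch of that path (not an official snippet), reusing the same model ID:

```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
]

# The chat template renders the conversation in the harmony format
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))
```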
|
|
|
Alternatively, you can run the model via [`Transformers Serve`](https://huggingface.co/docs/transformers/main/serving) to spin up an OpenAI-compatible web server:
|
|
|
``` |
|
transformers serve |
|
transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b |
|
``` |
|
|
|
[Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers) |
|
|
|
## vLLM |
|
|
|
vLLM recommends using [uv](https://docs.astral.sh/uv/) for Python dependency management. You can use vLLM to spin up an OpenAI-compatible webserver. The following command will automatically download the model and start the server. |
|
|
|
```bash |
|
uv pip install --pre vllm==0.10.1+gptoss \ |
|
--extra-index-url https://wheels.vllm.ai/gpt-oss/ \ |
|
--extra-index-url https://download.pytorch.org/whl/nightly/cu128 \ |
|
--index-strategy unsafe-best-match |
|
|
|
vllm serve openai/gpt-oss-20b |
|
``` |
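
Once the server is up, any OpenAI-compatible client can talk to it. A minimal sketch, assuming the server's default port 8000 and the `openai` Python package:

```py
from openai import OpenAI

# A local vLLM server ignores the API key, but the client requires one
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[
        {"role": "user", "content": "Explain quantum mechanics clearly and concisely."},
    ],
)
print(response.choices[0].message.content)
```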
|
|
|
[Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm) |
|
|
|
## PyTorch / Triton |
|
|
|
To learn about how to use this model with PyTorch and Triton, check out our [reference implementations in the gpt-oss repository](https://github.com/openai/gpt-oss?tab=readme-ov-file#reference-pytorch-implementation). |
|
|
|
## Ollama |
|
|
|
If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download). |
|
|
|
```bash |
|
# gpt-oss-20b |
|
ollama pull gpt-oss:20b |
|
ollama run gpt-oss:20b |
|
``` |
|
|
|
[Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama) |
|
|
|
## LM Studio
|
|
|
If you are using [LM Studio](https://lmstudio.ai/), you can download the model with the following command.
|
|
|
```bash |
|
# gpt-oss-20b |
|
lms get openai/gpt-oss-20b |
|
``` |
|
|
|
Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners. |
|
|
|
--- |
|
|
|
# Download the model |
|
|
|
You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) using the Hugging Face CLI:
|
|
|
```shell |
|
# gpt-oss-20b |
|
huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/ |
|
pip install gpt-oss |
|
python -m gpt_oss.chat gpt-oss-20b/original/
|
``` |
|
|
|
# Reasoning levels |
|
|
|
You can adjust the reasoning level to suit your task, choosing from three levels:
|
|
|
* **Low:** Fast responses for general dialogue. |
|
* **Medium:** Balanced speed and detail. |
|
* **High:** Deep and detailed analysis. |
|
|
|
The reasoning level can be set in the system prompt, e.g., "Reasoning: high".
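
For example, with the pipeline from the Transformers section above (a minimal sketch; `pipe` is assumed to be constructed as shown there):

```py
# Illustrative only: the reasoning level travels in the system prompt
messages = [
    {"role": "system", "content": "Reasoning: high"},
    {"role": "user", "content": "Prove that the square root of 2 is irrational."},
]

outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1])
```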
|
|
|
# Tool use |
|
|
|
The gpt-oss models are excellent for: |
|
* Web browsing (using built-in browsing tools) |
|
* Function calling with defined schemas (see the sketch after this list)
|
* Agentic operations like browser tasks |
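
As a hedged sketch of function calling, the Transformers chat template can turn typed Python functions into tool schemas. Here `get_weather` is a hypothetical stub, and `tokenizer` and `model` are assumed to be loaded as in the `model.generate` sketch above:

```py
def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city.
    """
    return "sunny, 22C"  # hypothetical stub for illustration

messages = [
    {"role": "user", "content": "What's the weather in Paris?"},
]

# The template converts the function signature and docstring into a tool schema
inputs = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:]))
```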
|
|
|
# Fine-tuning |
|
|
|
Both gpt-oss models can be fine-tuned for a variety of specialized use cases. |
|
|
|
The smaller `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
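
As a rough illustration, a parameter-efficient LoRA setup with the `peft` library might look like the sketch below; the target modules and hyperparameters are assumptions, not official recommendations:

```py
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b",
    torch_dtype="auto",
    device_map="auto",
)

# Assumed attention projection names; verify against the loaded model
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# From here, train with a standard Trainer or TRL SFTTrainer loop
```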
|
|
|
|
|
<!--End Original Model Card--> |
|
|
|
--- |
|
|
|
# <span id="testllm" style="color: #7F7FFF;">π If you find these models useful</span> |
|
|
|
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**: |
|
|
|
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
|
|
|
|
|
The full open-source code for the Quantum Network Monitor Service is available in my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder) if you want to do it yourself.
|
|
|
💬 **How to test**:

Choose an **AI assistant type**:

- `TurboLLM` (GPT-4.1-mini)

- `HugLLM` (Hugging Face open-source models)

- `TestLLM` (experimental CPU-only)
|
|
|
### **What I'm Testing**

I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
|
- **Function calling** against live network services |
|
- **How small can a model go** while still handling: |
|
- Automated **Nmap security scans** |
|
- **Quantum-readiness checks** |
|
- **Network Monitoring tasks** |
|
|
|
🟡 **TestLLM** – current experimental model (llama.cpp on 2 CPU threads in a Hugging Face Docker space):

- ✅ **Zero-configuration setup**

- ⏳ ~30s load time (slow inference but **no API costs**). No token limit, since the cost is low.

- 🚧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!
|
|
|
### **Other Assistants** |
|
🟢 **TurboLLM** – uses **gpt-4.1-mini**:

- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.

- **Create custom cmd processors to run .NET code on Quantum Network Monitor Agents**

- **Real-time network diagnostics and monitoring**

- **Security Audits**

- **Penetration testing** (Nmap/Metasploit)
|
|
|
🔵 **HugLLM** – latest open-source models:

- Runs on the Hugging Face Inference API. Performs well with the latest models hosted on Novita.
|
|
|
### 💡 **Example commands you could test**:
|
1. `"Give me info on my websites SSL certificate"` |
|
2. `"Check if my server is using quantum safe encyption for communication"` |
|
3. `"Run a comprehensive security audit on my server"` |
|
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution! |
|
|
|
### Final Word |
|
|
|
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI, all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
|
|
|
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
|
|
|
I'm also open to job opportunities or sponsorship. |
|
|
|
Thank you!
|
|
|
|
|
|
|