smartguy0505 committed
Commit a70726e · verified · 1 Parent(s): ab8b14c

Fixed readme

Files changed (1):
  README.md (+15 −15)
README.md CHANGED
@@ -7,7 +7,7 @@ tags:
 ---
 
 <p align="center">
- <img alt="gpt-oss-120b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-120b.svg">
+ <img alt="gpt-oss-20b" src="https://raw.githubusercontent.com/openai/gpt-oss/main/docs/gpt-oss-20b.svg">
 </p>
 
 <p align="center">
@@ -22,14 +22,14 @@ tags:
 Welcome to the gpt-oss series, [OpenAI’s open-weight models](https://openai.com/open-models) designed for powerful reasoning, agentic tasks, and versatile developer use cases.
 
 We’re releasing two flavors of these open models:
- - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single H100 GPU (117B parameters with 5.1B active parameters)
+ - `gpt-oss-120b` — for production, general purpose, high reasoning use cases that fit into a single 80GB GPU (like NVIDIA H100 or AMD MI300X) (117B parameters with 5.1B active parameters)
 - `gpt-oss-20b` — for lower latency, and local or specialized use cases (21B parameters with 3.6B active parameters)
 
 Both models were trained on our [harmony response format](https://github.com/openai/harmony) and should only be used with the harmony format as it will not work correctly otherwise.
 
 
 > [!NOTE]
- > This model card is dedicated to the larger `gpt-oss-120b` model. Check out [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) for the smaller model.
+ > This model card is dedicated to the smaller `gpt-oss-20b` model. Check out [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) for the larger model.
 
 # Highlights
 
@@ -38,7 +38,7 @@ Both models were trained on our [harmony response format](https://github.com/ope
 * **Full chain-of-thought:** Gain complete access to the model’s reasoning process, facilitating easier debugging and increased trust in outputs. It’s not intended to be shown to end users.
 * **Fine-tunable:** Fully customize models to your specific use case through parameter fine-tuning.
 * **Agentic capabilities:** Use the models’ native capabilities for function calling, [web browsing](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#browser), [Python code execution](https://github.com/openai/gpt-oss/tree/main?tab=readme-ov-file#python), and Structured Outputs.
- * **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single H100 GPU and the `gpt-oss-20b` model run within 16GB of memory.
+ * **Native MXFP4 quantization:** The models are trained with native MXFP4 precision for the MoE layer, making `gpt-oss-120b` run on a single 80GB GPU (like NVIDIA H100 or AMD MI300X) and the `gpt-oss-20b` model run within 16GB of memory.
 
 ---
 
@@ -60,7 +60,7 @@ Once, setup you can proceed to run the model by running the snippet below:
 from transformers import pipeline
 import torch
 
- model_id = "openai/gpt-oss-120b"
+ model_id = "openai/gpt-oss-20b"
 
 pipe = pipeline(
     "text-generation",
@@ -84,7 +84,7 @@ Alternatively, you can run the model via [`Transformers Serve`](https://huggingf
 
 ```
 transformers serve
- transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-120b
+ transformers chat localhost:8000 --model-name-or-path openai/gpt-oss-20b
 ```
 
 [Learn more about how to use gpt-oss with Transformers.](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
@@ -99,7 +99,7 @@ uv pip install --pre vllm==0.10.1+gptoss \
 --extra-index-url https://download.pytorch.org/whl/nightly/cu128 \
 --index-strategy unsafe-best-match
 
- vllm serve openai/gpt-oss-120b
+ vllm serve openai/gpt-oss-20b
 ```
 
 [Learn more about how to use gpt-oss with vLLM.](https://cookbook.openai.com/articles/gpt-oss/run-vllm)
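
Note: `vllm serve` exposes an OpenAI-compatible HTTP API (default port 8000), so the served model can be queried with a standard OpenAI client. A minimal sketch, assuming the `openai` Python package:

```python
from openai import OpenAI

# vLLM does not check the API key by default, but the client requires one.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="openai/gpt-oss-20b",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```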
@@ -113,9 +113,9 @@ To learn about how to use this model with PyTorch and Triton, check out our [ref
 If you are trying to run gpt-oss on consumer hardware, you can use Ollama by running the following commands after [installing Ollama](https://ollama.com/download).
 
 ```bash
- # gpt-oss-120b
- ollama pull gpt-oss:120b
- ollama run gpt-oss:120b
+ # gpt-oss-20b
+ ollama pull gpt-oss:20b
+ ollama run gpt-oss:20b
 ```
 
 [Learn more about how to use gpt-oss with Ollama.](https://cookbook.openai.com/articles/gpt-oss/run-locally-ollama)
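
Note: besides the interactive `ollama run` session, Ollama serves a local REST API (default port 11434). A minimal sketch of calling it from Python, assuming the `requests` package:

```python
import requests

# Non-streaming chat request against Ollama's local API.
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "gpt-oss:20b",
        "messages": [{"role": "user", "content": "Why is the sky blue?"}],
        "stream": False,
    },
)
resp.raise_for_status()
print(resp.json()["message"]["content"])
```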
@@ -125,8 +125,8 @@ ollama run gpt-oss:120b
 If you are using [LM Studio](https://lmstudio.ai/) you can use the following commands to download.
 
 ```bash
- # gpt-oss-120b
- lms get openai/gpt-oss-120b
+ # gpt-oss-20b
+ lms get openai/gpt-oss-20b
 ```
 
 Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome-gpt-oss.md) for a broader collection of gpt-oss resources and inference partners.
@@ -138,8 +138,8 @@ Check out our [awesome list](https://github.com/openai/gpt-oss/blob/main/awesome
 You can download the model weights from the [Hugging Face Hub](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) directly from Hugging Face CLI:
 
 ```shell
- # gpt-oss-120b
- huggingface-cli download openai/gpt-oss-120b --include "original/*" --local-dir gpt-oss-120b/
+ # gpt-oss-20b
+ huggingface-cli download openai/gpt-oss-20b --include "original/*" --local-dir gpt-oss-20b/
 pip install gpt-oss
 python -m gpt_oss.chat model/
 ```
@@ -165,4 +165,4 @@ The gpt-oss models are excellent for:
 
 Both gpt-oss models can be fine-tuned for a variety of specialized use cases.
 
- This larger model `gpt-oss-120b` can be fine-tuned on a single H100 node, whereas the smaller [`gpt-oss-20b`](https://huggingface.co/openai/gpt-oss-20b) can even be fine-tuned on consumer hardware.
+ This smaller model `gpt-oss-20b` can be fine-tuned on consumer hardware, whereas the larger [`gpt-oss-120b`](https://huggingface.co/openai/gpt-oss-120b) can be fine-tuned on a single H100 node.
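
Note: the card does not prescribe a fine-tuning recipe. A minimal LoRA sketch for the smaller model, assuming the `trl`, `peft`, and `datasets` libraries and a hypothetical `my_dataset.jsonl`; illustrative only, not part of this commit:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import SFTConfig, SFTTrainer

# Hypothetical chat-formatted dataset; substitute your own data.
dataset = load_dataset("json", data_files="my_dataset.jsonl", split="train")

# LoRA freezes the base weights and trains small adapter matrices,
# which keeps memory needs low enough for consumer hardware.
peft_config = LoraConfig(r=16, lora_alpha=32, target_modules="all-linear")

trainer = SFTTrainer(
    model="openai/gpt-oss-20b",
    train_dataset=dataset,
    peft_config=peft_config,
    args=SFTConfig(output_dir="gpt-oss-20b-lora"),
)
trainer.train()
```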
 