Commit a4ee9cd
Parent(s): 10ec4e7

Add reasoning_effort parameter to minimal script

- Implements official reasoning_effort parameter (low/medium/high)
- Uses tokenizer.apply_chat_template with reasoning_effort
- Stores reasoning effort level in output dataset
- Updated to transformers>=4.55.0 for proper support

Files changed:
- README.md (+139 -139)
- gpt_oss_minimal.py (+18 -3)

README.md CHANGED

@@ -1,193 +1,193 @@
Old version (removed):

# OpenAI GPT OSS Models -

Generate

```bash
# Install huggingface-hub CLI using uv
uv tool install huggingface-hub

huggingface-cli login
```

## Try It Now! Copy & Run This Command:

```bash
# Generate
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```

That's it! Your dataset will be generated and pushed to `your-username/haiku-reasoning`.

## What You Get

The models output structured reasoning in separate channels:

```json
{
  "prompt": "Write a haiku about
  "think": "I need to
  "content": "Silent peaks
  "reasoning_level": "high",
  "model": "openai/gpt-oss-20b"
}
```

### Use Your Own Dataset

```bash
# Process your entire dataset
huggingface-cli job run --gpu-flavor a10g-small \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-prompts \
    --output-dataset my-responses

# Use the larger 120B model
huggingface-cli job run --gpu-flavor a100-large \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-prompts \
    --output-dataset my-responses-120b \
    --model-id openai/gpt-oss-120b
```

### Process Different Dataset Types

```bash
# Math problems with step-by-step reasoning
huggingface-cli job run --gpu-flavor a10g-small \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset math-problems \
    --output-dataset math-solutions \
    --reasoning-level high

# Code generation with explanation
huggingface-cli job run --gpu-flavor a10g-small \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset code-prompts \
    --output-dataset code-explained \
    --max-tokens 1024

# Test with just 10 samples
huggingface-cli job run --gpu-flavor a10g-small \
    uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-dataset \
    --output-dataset quick-test \
    --max-samples 10
```

```bash
uv run https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --
```

| Model | GPU Flavor | Memory | Cost/Hour | Best For |
|-------|------------|--------|-----------|----------|
| `gpt-oss-20b` | `a10g-large` | 48GB | $2.50 | 20B model (needs ~40GB) |
| `gpt-oss-20b` | `a100-large` | 80GB | $4.34 | 20B with headroom |
| `gpt-oss-120b` | `4xa100` | 320GB | $17.36 | 120B model (needs ~240GB) |
| `gpt-oss-120b` | `8xl40s` | 384GB | $23.50 | 120B maximum speed |

**Note**: The MXFP4 quantization is dequantized to bf16 during loading, which doubles memory requirements.

## Local Execution

If you have a local GPU:

```bash
uv run gpt_oss_vllm.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50

# Using Transformers
uv run gpt_oss_transformers.py \
    --input-dataset
    --output-dataset
    --prompt-column question \
    --
```

| `--input-dataset` | | |
| `--output-dataset` | Output dataset name | |
| `--prompt-column` | Column | |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--reasoning-level` | Reasoning depth | |
| `--max-samples` | Limit | |
| `--temperature` | | |
| `--max-tokens` | Max tokens to generate | `512` |

- **Open Source Models**: `openai/gpt-oss-20b` and `openai/gpt-oss-120b`
- **Structured Output**: Separate channels for reasoning (`analysis`) and response (`final`)
- **Zero Setup**: Run with a single command on HF Jobs
- **Flexible Input**: Works with any prompt dataset
- **Automatic Upload**: Results pushed directly to your Hub account

## Use Cases

1. **Training Data**: Create datasets with built-in reasoning explanations
2. **Evaluation**: Generate test sets where each answer includes its rationale
3. **Research**: Study how large models approach different types of problems
4. **Applications**: Build systems that can explain their outputs

- **`gpt_oss_transformers.py`**: Fallback if vLLM has compatibility issues

- Hugging Face token

---
New version (added):

# OpenAI GPT OSS Models - Works on Regular GPUs!

Generate synthetic datasets with transparent reasoning using OpenAI's GPT OSS models. **No H100s required** - works on L4, A100, A10G, and even T4 GPUs!

## Key Discovery

**The models work on regular datacenter GPUs!** Transformers automatically handles MXFP4 → bf16 conversion, making these models accessible on standard hardware.

## Quick Start

### Test Locally (Single Prompt)

```bash
uv run gpt_oss_transformers.py --prompt "Write a haiku about mountains"
```

### Run on HuggingFace Jobs (No GPU Required!)

```bash
# Generate haiku with reasoning (~$1.50/hr on A10G)
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset username/haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```
## What You Get

The models output structured reasoning in separate channels:

**Raw Output**:

```
analysisI need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...
assistantfinalSilent peaks climb high,
Echoing winds trace stone's breath,
Dawn paints them gold bright.
```

**Parsed Dataset**:

```json
{
  "prompt": "Write a haiku about mountains",
  "think": "I need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...",
  "content": "Silent peaks climb high,\nEchoing winds trace stone's breath,\nDawn paints them gold bright.",
  "reasoning_level": "high",
  "model": "openai/gpt-oss-20b"
}
```
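
If you want to split the channels yourself, a small parser over the raw output is enough. This is a minimal sketch based on the sample output above; the marker strings (`analysis`, `assistantfinal`) are taken from that sample rather than from a documented API, so adjust them if your raw output differs.

```python
def split_channels(raw_output: str) -> dict:
    """Split raw GPT OSS output into reasoning and final response.

    Assumes the simplified marker layout shown in "Raw Output" above:
    'analysis<reasoning>assistantfinal<response>'.
    """
    think, content = "", raw_output
    if "assistantfinal" in raw_output:
        think, content = raw_output.split("assistantfinal", 1)
    # Drop the leading 'analysis' marker from the reasoning channel
    think = think.removeprefix("analysis")
    return {"think": think.strip(), "content": content.strip()}


example = (
    "analysisI need to write a haiku about mountains. "
    "Haiku: 5-7-5 syllable structure...assistantfinalSilent peaks climb high,"
)
print(split_channels(example))
```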
## GPU Requirements

### ✅ Confirmed Working GPUs

| GPU | Memory | Status | Notes |
|-----|--------|--------|-------|
| **L4** | 24GB | ✅ Tested | Works perfectly! |
| **A100** | 40/80GB | ✅ Works | Great performance |
| **A10G** | 24GB | ✅ Recommended | Best value at $1.50/hr |
| **T4** | 16GB | ⚠️ Limited | May need 8-bit for 20B |
| **RTX 4090** | 24GB | ✅ Works | Consumer GPU support |

### Memory Requirements

- **20B model**: ~40GB VRAM when dequantized (use A100-40GB or 2xL4)
- **120B model**: ~240GB VRAM when dequantized (use 4xA100-80GB)
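
Before launching a job, it can help to compare these numbers against what your GPU actually reports. A small sketch using plain PyTorch:

```python
import torch

# Print the name and total VRAM of each visible CUDA device
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        vram_gb = props.total_memory / 1024**3
        print(f"GPU {i}: {props.name}, {vram_gb:.1f} GB VRAM")
else:
    print("No CUDA GPU detected - consider HuggingFace Jobs instead")
```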
## Examples

### Creative Writing with Reasoning

```bash
# Process haiku dataset with high reasoning
uv run gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset my-haiku-reasoning \
    --prompt-column question \
    --reasoning-level high \
    --max-samples 100
```

### Math Problems with Step-by-Step Solutions

```bash
# Generate math solutions with reasoning traces
uv run gpt_oss_transformers.py \
    --input-dataset gsm8k \
    --output-dataset math-with-reasoning \
    --prompt-column question \
    --reasoning-level high
```

### Test Different Reasoning Levels

```bash
# Compare reasoning levels
for level in low medium high; do
  echo "Testing: $level"
  uv run gpt_oss_transformers.py \
    --prompt "Explain gravity to a 5-year-old" \
    --reasoning-level $level \
    --debug
done
```
## Script Options

| Option | Description | Default |
|--------|-------------|---------|
| `--input-dataset` | HuggingFace dataset to process | - |
| `--output-dataset` | Output dataset name | - |
| `--prompt-column` | Column with prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--reasoning-level` | Reasoning depth: low/medium/high | `high` |
| `--max-samples` | Limit samples to process | None |
| `--temperature` | Sampling temperature | `0.7` |
| `--max-tokens` | Max tokens to generate | `512` |
| `--prompt` | Single prompt test (skip dataset) | - |
| `--debug` | Show raw model output | `False` |
## Technical Details

### Why It Works Without H100s

1. **Automatic MXFP4 Handling**: When your GPU doesn't support MXFP4, you'll see:
   ```
   MXFP4 quantization requires triton >= 3.4.0 and triton_kernels installed,
   we will default to dequantizing the model to bf16
   ```

2. **No Flash Attention 3 Required**: FA3 needs Hopper architecture, but models work fine without it

3. **Simple Loading**: Just use standard transformers:
   ```python
   model = AutoModelForCausalLM.from_pretrained(
       "openai/gpt-oss-20b",
       torch_dtype=torch.bfloat16,
       device_map="auto"
   )
   ```
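
Putting these pieces together, a minimal end-to-end generation call with standard transformers APIs might look like the sketch below (the prompt and token budget are placeholder values, not fixed by the scripts):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # MXFP4 weights are dequantized to bf16 on load
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a haiku about mountains"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens (the raw channel output)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```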
### Channel Output Format

The models use a simplified channel format:

- `analysis`: Chain of thought reasoning
- `commentary`: Meta operations (optional)
- `final`: User-facing response

### Reasoning Control

Control reasoning depth via system message:

```python
messages = [
    {
        "role": "system",
        "content": f"...Reasoning: {level}..."
    },
    {"role": "user", "content": prompt}
]
```
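
Newer transformers releases (>=4.55.0, per the commit notes above) also accept a `reasoning_effort` argument in `apply_chat_template`, which is what the `gpt_oss_minimal.py` change in this commit uses instead of a hand-written system message. A sketch, assuming a tokenizer and model loaded as in the earlier snippets:

```python
# Requires transformers>=4.55.0 (per the commit notes); tokenizer/model loaded as above
messages = [{"role": "user", "content": "Write a haiku about mountains"}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,          # returns a dict of tensors, so unpack with ** below
    reasoning_effort="high",   # "low", "medium", or "high"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
```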
## Best Practices

1. **Token Limits**: Use 1000+ tokens for detailed reasoning
2. **Security**: Never expose reasoning channels to end users
3. **Batch Size**: Keep at 1 for memory efficiency
4. **Reasoning Levels**:
   - `low`: Quick responses
   - `medium`: Balanced reasoning
   - `high`: Detailed chain-of-thought
## Troubleshooting

### Out of Memory

- Use a larger GPU flavor: `--flavor a100-large`
- Reduce batch size to 1
- Try 8-bit quantization for smaller GPUs (see the sketch below)
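
A rough sketch of the 8-bit route via bitsandbytes is below. Whether the 20B checkpoint actually fits and runs well on a 16GB card this way has not been verified here, so treat it as a starting point to test rather than a guarantee:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),  # needs the bitsandbytes package
    device_map="auto",
)
```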
### No GPU Available

- Use HuggingFace Jobs (no local GPU needed!)
- Or use cloud instances with GPU support

### Empty Reasoning

- Increase `--max-tokens` to 1500+
- Ensure prompts trigger reasoning

## References

- [OpenAI Cookbook: GPT OSS](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
- [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)

## The Bottom Line

**You don't need H100s!** These models work great on regular datacenter GPUs. Just run the script and start generating datasets with transparent reasoning.

---

*Last tested: 2025-08-05 on NVIDIA L4 GPUs - Working perfectly!*
gpt_oss_minimal.py CHANGED

@@ -62,6 +62,12 @@ def main():
     parser.add_argument(
         "--max-new-tokens", type=int, default=1024, help="Max tokens to generate"
     )
+    parser.add_argument(
+        "--reasoning-effort",
+        choices=["low", "medium", "high"],
+        default="medium",
+        help="Reasoning effort level (default: medium)"
+    )
     args = parser.parse_args()
 
     # Check GPU availability
@@ -108,6 +114,9 @@ def main():
         print(f"ERROR: Column '{args.prompt_column}' not found")
         print(f"Available columns: {dataset.column_names}")
         sys.exit(1)
+    # if args.random_sample:
+    #     dataset.shuffle()
+    #     print(f"Random sampling enabled. Using {args.max_samples} samples.")
 
     # Limit samples if requested
     if args.max_samples:
@@ -125,9 +134,13 @@ def main():
         # Create messages (user message only, as per official examples)
         messages = [{"role": "user", "content": prompt_text}]
 
-        # Apply chat template
+        # Apply chat template with reasoning_effort parameter
         inputs = tokenizer.apply_chat_template(
-            messages,
+            messages,
+            add_generation_prompt=True,
+            return_tensors="pt",
+            return_dict=True,
+            reasoning_effort=args.reasoning_effort  # "low", "medium", or "high"
         ).to(model.device)
 
         # Generate
@@ -151,12 +164,13 @@ def main():
                 "prompt": prompt_text,
                 "raw_output": response,
                 "model": args.model_id,
+                "reasoning_effort": args.reasoning_effort,
             }
         )
 
         # Show preview of output structure
         if i == 0:
-            print(
+            print("Sample output preview (first 200 chars):")
             print(response[:200])
             print("...")
 
@@ -173,6 +187,7 @@ def main():
     print("- prompt: Original prompt")
     print("- raw_output: Full model response with channel markers")
     print("- model: Model ID used")
+    print(f"- reasoning_effort: {args.reasoning_effort}")
     print(
         "\nTo extract final response, look for text after '<|channel|>final<|message|>'"
     )