Commit 78e2e1c
Parent(s): a4ee9cd

Update README to focus on minimal script

- Document tested and working l4x4 configuration
- Include exact command that worked
- Remove references to full script to avoid confusion
- Add clear GPU requirements table
- Explain reasoning_effort parameter
- Last tested: 2025-01-06

README.md CHANGED

@@ -1,193 +1,113 @@
-# 🚀 OpenAI GPT OSS Models -

-Generate synthetic datasets

-##

-

 ## 🚀 Quick Start

-### Test Locally (Single Prompt)
 ```bash
-
-
-
-### Run on HuggingFace Jobs (No GPU Required!)
-```bash
-# Generate haiku with reasoning (~$1.50/hr on A10G)
-hf jobs uv run --flavor a10g-small \
-  https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
   --input-dataset davanstrien/haiku_dpo \
-  --output-dataset username/haiku \
   --prompt-column question \
-  --max-samples
 ```

-##

-

-
-```
-analysisI need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...
-assistantfinalSilent peaks climb high,
-Echoing winds trace stone's breath,
-Dawn paints them gold bright.
-```

-
-
-
-```json
-{
-  "content": "Silent peaks climb high,\nEchoing winds trace stone's breath,\nDawn paints them gold bright.",
-  "reasoning_level": "high",
-  "model": "openai/gpt-oss-20b"
-}
-```

-

-
-| GPU | Memory | Status | Notes |
-|-----|--------|--------|-------|
-| **L4** | 24GB | ✅ Tested | Works perfectly! |
-| **A100** | 40/80GB | ✅ Works | Great performance |
-| **A10G** | 24GB | ✅ Recommended | Best value at $1.50/hr |
-| **T4** | 16GB | ⚠️ Limited | May need 8-bit for 20B |
-| **RTX 4090** | 24GB | ✅ Works | Consumer GPU support |

-
-
-

 ## 🎯 Examples

-###
 ```bash
-
-
   --input-dataset davanstrien/haiku_dpo \
-  --output-dataset
   --prompt-column question \
-  --reasoning-
-  --max-samples
 ```

-
 ```bash
-
-
-  --input-dataset
-  --output-dataset
   --prompt-column question \
-  --reasoning-
 ```

-
-```bash
-# Compare reasoning levels
-for level in low medium high; do
-  echo "Testing: $level"
-  uv run gpt_oss_transformers.py \
-    --prompt "Explain gravity to a 5-year-old" \
-    --reasoning-level $level \
-    --debug
-done
-```

-

-| Option | Description | Default |
-|--------|-------------|---------|
-| `--input-dataset` | HuggingFace dataset to process | - |
-| `--output-dataset` | Output dataset name | - |
-| `--prompt-column` | Column with prompts | `prompt` |
-| `--model-id` | Model to use | `openai/gpt-oss-20b` |
-| `--reasoning-level` | Reasoning depth: low/medium/high | `high` |
-| `--max-samples` | Limit samples to process | None |
-| `--temperature` | Sampling temperature | `0.7` |
-| `--max-tokens` | Max tokens to generate | `512` |
-| `--prompt` | Single prompt test (skip dataset) | - |
-| `--debug` | Show raw model output | `False` |

 ## 🔧 Technical Details

-### Why
-
-
-
-```
-we will default to dequantizing the model to bf16
-```
-
-2. **No Flash Attention 3 Required**: FA3 needs Hopper architecture, but models work fine without it
-
-3. **Simple Loading**: Just use standard transformers:
-```python
-model = AutoModelForCausalLM.from_pretrained(
-    "openai/gpt-oss-20b",
-    torch_dtype=torch.bfloat16,
-    device_map="auto"
-)
-```
-
-### Channel Output Format
-
-The models use a simplified channel format:
-- `analysis`: Chain of thought reasoning
-- `commentary`: Meta operations (optional)
-- `final`: User-facing response
-
-### Reasoning Control
-
-Control reasoning depth via system message:
-```python
-messages = [
-    {
-        "role": "system",
-        "content": f"...Reasoning: {level}..."
-    },
-    {"role": "user", "content": prompt}
-]
-```
-
-## 🎨 Best Practices
-
-1. **Token Limits**: Use 1000+ tokens for detailed reasoning
-2. **Security**: Never expose reasoning channels to end users
-3. **Batch Size**: Keep at 1 for memory efficiency
-4. **Reasoning Levels**:
-   - `low`: Quick responses
-   - `medium`: Balanced reasoning
-   - `high`: Detailed chain-of-thought
-
-## 🐛 Troubleshooting

-###
-
--
--

-
-- Use HuggingFace Jobs (no local GPU needed!)
-- Or use cloud instances with GPU support

-### Empty Reasoning
-- Increase `--max-tokens` to 1500+
-- Ensure prompts trigger reasoning
-
-## 📚 References
-
-- [OpenAI Cookbook: GPT OSS](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
 - [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
 - [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)
-
-## 🚀 The Bottom Line
-
-**You don't need H100s!** These models work great on regular datacenter GPUs. Just run the script and start generating datasets with transparent reasoning.

 ---

-*Last tested: 2025-
+# 🚀 OpenAI GPT OSS Models - Simple Generation Script

+Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!

+## ✅ Tested & Working

+Successfully tested on HF Jobs with `l4x4` flavor (4x L4 GPUs = 96GB total memory).

 ## 🚀 Quick Start

 ```bash
+# Run on HF Jobs (tested and working)
+hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
+  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
   --input-dataset davanstrien/haiku_dpo \
+  --output-dataset username/gpt-oss-haiku \
   --prompt-column question \
+  --max-samples 2 \
+  --reasoning-effort high
 ```

+## 📊 Script Options

+| Option | Description | Default |
+|--------|-------------|---------|
+| `--input-dataset` | HuggingFace dataset to process | Required |
+| `--output-dataset` | Output dataset name | Required |
+| `--prompt-column` | Column containing prompts | `prompt` |
+| `--model-id` | Model to use | `openai/gpt-oss-20b` |
+| `--max-samples` | Limit samples to process | None (all) |
+| `--max-new-tokens` | Max tokens to generate | `1024` |
+| `--reasoning-effort` | Reasoning depth: low/medium/high | `medium` |

+## 💡 What You Get

+The output dataset contains:
+- `prompt`: Original prompt from input dataset
+- `raw_output`: Full model response with channel markers
+- `model`: Model ID used
+- `reasoning_effort`: The reasoning level used
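+
+To sanity-check a finished job, here is a minimal sketch for loading and inspecting those columns (the dataset name is the example from the Quick Start; swap in your own `--output-dataset` value):
+
+```python
+from datasets import load_dataset
+
+# Load the generated dataset (example name from above; use your output dataset)
+ds = load_dataset("username/gpt-oss-haiku", split="train")
+
+print(ds.column_names)            # ['prompt', 'raw_output', 'model', 'reasoning_effort']
+print(ds[0]["raw_output"][:200])  # peek at the raw channel-marked output
+```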

+### Understanding the Output

+The raw output contains special channel markers:
+- `<|channel|>analysis<|message|>` - Chain of thought reasoning
+- `<|channel|>final<|message|>` - The actual response

+Example raw output structure:
+```
+<|channel|>analysis<|message|>
+[Reasoning about the task...]
+<|channel|>final<|message|>
+[Actual haiku or response]
+```
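+
+Downstream you usually want just the `final` channel. A rough extraction sketch (a plain string split based on the marker structure shown above; real outputs may carry extra tokens, so treat this as a starting point):
+
+```python
+def extract_channel(raw_output: str, channel: str) -> str:
+    """Return the text of one channel from a raw channel-marked output."""
+    marker = f"<|channel|>{channel}<|message|>"
+    if marker not in raw_output:
+        return ""
+    text = raw_output.split(marker, 1)[1]
+    # Stop at the next channel marker, if any
+    return text.split("<|channel|>", 1)[0].strip()
+
+row = ds[0]  # `ds` as in the loading sketch earlier
+haiku = extract_channel(row["raw_output"], "final")
+reasoning = extract_channel(row["raw_output"], "analysis")
+```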

 ## 🎯 Examples

+### Test with Different Reasoning Levels
+
+**High reasoning (most detailed):**
 ```bash
+hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
+  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
   --input-dataset davanstrien/haiku_dpo \
+  --output-dataset username/haiku-high \
   --prompt-column question \
+  --reasoning-effort high \
+  --max-samples 5
 ```

+**Low reasoning (fastest):**
 ```bash
+hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
+  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
+  --input-dataset davanstrien/haiku_dpo \
+  --output-dataset username/haiku-low \
   --prompt-column question \
+  --reasoning-effort low \
+  --max-samples 10
 ```

+## 🖥️ GPU Requirements

+| Model | Memory Required | Recommended Flavor |
+|-------|----------------|-------------------|
+| **openai/gpt-oss-20b** | ~40GB | `l4x4` (4x24GB = 96GB) |
+| **openai/gpt-oss-120b** | ~240GB | `8xa100` (8x80GB) |

+**Note**: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, so it needs more memory than the quantized checkpoint size suggests.
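+
+For reference, the standard transformers loading pattern that triggers this dequantization looks like the sketch below; `device_map="auto"` shards the resulting bf16 weights across all visible GPUs (how the script itself loads the model may differ in detail):
+
+```python
+import torch
+from transformers import AutoModelForCausalLM, AutoTokenizer
+
+# bf16 weights are sharded across the available GPUs (e.g. the 4 L4s)
+model = AutoModelForCausalLM.from_pretrained(
+    "openai/gpt-oss-20b",
+    torch_dtype=torch.bfloat16,
+    device_map="auto",
+)
+tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
+```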

 ## 🔧 Technical Details

+### Why L4x4?
+- The 20B model needs ~40GB VRAM when dequantized
+- Single A10G (24GB) is insufficient
+- L4x4 provides 96GB total memory across 4 GPUs
+- Cost-effective compared to A100 instances
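+
+The back-of-envelope arithmetic behind those numbers (assuming 2 bytes per parameter in bf16, before activation and KV-cache overhead):
+
+```python
+params = 20e9            # ~20B parameters
+bytes_per_param = 2      # bf16 after dequantization from MXFP4
+weights_gb = params * bytes_per_param / 1e9
+print(f"weights alone: ~{weights_gb:.0f} GB")  # ~40 GB, more than any single 24GB card
+print(f"l4x4 capacity: {4 * 24} GB")           # 96 GB total across 4 GPUs
+```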

+### Reasoning Effort
+The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates:
+- `low`: Quick responses with minimal reasoning
+- `medium`: Balanced reasoning (default)
+- `high`: Detailed step-by-step reasoning
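+
+Earlier revisions of this README documented the underlying mechanism as a reasoning level carried in the system message, so a plausible sketch is below; the minimal script's exact wiring may differ:
+
+```python
+reasoning_effort = "high"  # or "low" / "medium"
+messages = [
+    # Effort level is injected via the system message
+    {"role": "system", "content": f"Reasoning: {reasoning_effort}"},
+    {"role": "user", "content": "Write a haiku about mountains"},
+]
+# `tokenizer` as in the loading sketch above
+prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+```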

+## 📚 Resources

 - [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
 - [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)
+- [Dataset: davanstrien/haiku_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo)

 ---

+*Last tested: 2025-01-06 on HF Jobs with l4x4 flavor*
|