davanstrien (HF Staff) committed
Commit
78e2e1c
·
1 Parent(s): a4ee9cd

Update README to focus on minimal script


- Document tested and working l4x4 configuration
- Include exact command that worked
- Remove references to full script to avoid confusion
- Add clear GPU requirements table
- Explain reasoning_effort parameter
- Last tested: 2025-01-06

Files changed (1)
  1. README.md +71 -151
README.md CHANGED
@@ -1,193 +1,113 @@
1
- # 🚀 OpenAI GPT OSS Models - Works on Regular GPUs!
2
 
3
- Generate synthetic datasets with transparent reasoning using OpenAI's GPT OSS models. **No H100s required** - works on L4, A100, A10G, and even T4 GPUs!
4
 
5
- ## 🎉 Key Discovery
6
 
7
- **The models work on regular datacenter GPUs!** Transformers automatically handles MXFP4 → bf16 conversion, making these models accessible on standard hardware.
8
 
9
  ## 🌟 Quick Start
10
 
11
- ### Test Locally (Single Prompt)
12
  ```bash
13
- uv run gpt_oss_transformers.py --prompt "Write a haiku about mountains"
14
- ```
15
-
16
- ### Run on HuggingFace Jobs (No GPU Required!)
17
- ```bash
18
- # Generate haiku with reasoning (~$1.50/hr on A10G)
19
- hf jobs uv run --flavor a10g-small \
20
- https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
21
  --input-dataset davanstrien/haiku_dpo \
22
- --output-dataset username/haiku-reasoning \
23
  --prompt-column question \
24
- --max-samples 50
 
25
  ```
26
 
27
- ## 💡 What You Get
28
 
29
- The models output structured reasoning in separate channels:
30
 
31
- **Raw Output**:
32
- ```
33
- analysisI need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...
34
- assistantfinalSilent peaks climb high,
35
- Echoing winds trace stone's breath,
36
- Dawn paints them gold bright.
37
- ```
38
 
39
- **Parsed Dataset**:
40
- ```json
41
- {
42
- "prompt": "Write a haiku about mountains",
43
- "think": "[Analysis] I need to write a haiku about mountains. Haiku: 5-7-5 syllable structure...",
44
- "content": "Silent peaks climb high,\nEchoing winds trace stone's breath,\nDawn paints them gold bright.",
45
- "reasoning_level": "high",
46
- "model": "openai/gpt-oss-20b"
47
- }
48
- ```
49
 
50
- ## 🖥️ GPU Requirements
51
 
52
- ### ✅ Confirmed Working GPUs
53
- | GPU | Memory | Status | Notes |
54
- |-----|--------|--------|-------|
55
- | **L4** | 24GB | ✅ Tested | Works perfectly! |
56
- | **A100** | 40/80GB | ✅ Works | Great performance |
57
- | **A10G** | 24GB | ✅ Recommended | Best value at $1.50/hr |
58
- | **T4** | 16GB | ⚠️ Limited | May need 8-bit for 20B |
59
- | **RTX 4090** | 24GB | ✅ Works | Consumer GPU support |
60
 
61
- ### Memory Requirements
62
- - **20B model**: ~40GB VRAM when dequantized (use A100-40GB or 2xL4)
63
- - **120B model**: ~240GB VRAM when dequantized (use 4xA100-80GB)
64
 
65
  ## 🎯 Examples
66
 
67
- ### Creative Writing with Reasoning
68
  ```bash
69
- # Process haiku dataset with high reasoning
70
- uv run gpt_oss_transformers.py \
71
  --input-dataset davanstrien/haiku_dpo \
72
- --output-dataset my-haiku-reasoning \
73
  --prompt-column question \
74
- --reasoning-level high \
75
- --max-samples 100
76
  ```
77
 
78
- ### Math Problems with Step-by-Step Solutions
79
  ```bash
80
- # Generate math solutions with reasoning traces
81
- uv run gpt_oss_transformers.py \
82
- --input-dataset gsm8k \
83
- --output-dataset math-with-reasoning \
84
  --prompt-column question \
85
- --reasoning-level high
 
86
  ```
87
 
88
- ### Test Different Reasoning Levels
89
- ```bash
90
- # Compare reasoning levels
91
- for level in low medium high; do
92
- echo "Testing: $level"
93
- uv run gpt_oss_transformers.py \
94
- --prompt "Explain gravity to a 5-year-old" \
95
- --reasoning-level $level \
96
- --debug
97
- done
98
- ```
99
 
100
- ## 📋 Script Options
101
 
102
- | Option | Description | Default |
103
- |--------|-------------|---------|
104
- | `--input-dataset` | HuggingFace dataset to process | - |
105
- | `--output-dataset` | Output dataset name | - |
106
- | `--prompt-column` | Column with prompts | `prompt` |
107
- | `--model-id` | Model to use | `openai/gpt-oss-20b` |
108
- | `--reasoning-level` | Reasoning depth: low/medium/high | `high` |
109
- | `--max-samples` | Limit samples to process | None |
110
- | `--temperature` | Sampling temperature | `0.7` |
111
- | `--max-tokens` | Max tokens to generate | `512` |
112
- | `--prompt` | Single prompt test (skip dataset) | - |
113
- | `--debug` | Show raw model output | `False` |
114
 
115
  ## 🔧 Technical Details
116
 
117
- ### Why It Works Without H100s
118
-
119
- 1. **Automatic MXFP4 Handling**: When your GPU doesn't support MXFP4, you'll see:
120
- ```
121
- MXFP4 quantization requires triton >= 3.4.0 and triton_kernels installed,
122
- we will default to dequantizing the model to bf16
123
- ```
124
-
125
- 2. **No Flash Attention 3 Required**: FA3 needs Hopper architecture, but models work fine without it
126
-
127
- 3. **Simple Loading**: Just use standard transformers:
128
- ```python
129
- model = AutoModelForCausalLM.from_pretrained(
130
- "openai/gpt-oss-20b",
131
- torch_dtype=torch.bfloat16,
132
- device_map="auto"
133
- )
134
- ```
135
-
136
- ### Channel Output Format
137
-
138
- The models use a simplified channel format:
139
- - `analysis`: Chain of thought reasoning
140
- - `commentary`: Meta operations (optional)
141
- - `final`: User-facing response
142
-
143
- ### Reasoning Control
144
-
145
- Control reasoning depth via system message:
146
- ```python
147
- messages = [
148
- {
149
- "role": "system",
150
- "content": f"...Reasoning: {level}..."
151
- },
152
- {"role": "user", "content": prompt}
153
- ]
154
- ```
155
-
156
- ## 🚨 Best Practices
157
-
158
- 1. **Token Limits**: Use 1000+ tokens for detailed reasoning
159
- 2. **Security**: Never expose reasoning channels to end users
160
- 3. **Batch Size**: Keep at 1 for memory efficiency
161
- 4. **Reasoning Levels**:
162
- - `low`: Quick responses
163
- - `medium`: Balanced reasoning
164
- - `high`: Detailed chain-of-thought
165
-
166
- ## πŸ› Troubleshooting
167
 
168
- ### Out of Memory
169
- - Use larger GPU flavor: `--flavor a100-large`
170
- - Reduce batch size to 1
171
- - Try 8-bit quantization for smaller GPUs
 
172
 
173
- ### No GPU Available
174
- - Use HuggingFace Jobs (no local GPU needed!)
175
- - Or use cloud instances with GPU support
176
 
177
- ### Empty Reasoning
178
- - Increase `--max-tokens` to 1500+
179
- - Ensure prompts trigger reasoning
180
-
181
- ## 📚 References
182
-
183
- - [OpenAI Cookbook: GPT OSS](https://cookbook.openai.com/articles/gpt-oss/run-transformers)
184
  - [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
185
  - [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)
186
-
187
- ## 🎉 The Bottom Line
188
-
189
- **You don't need H100s!** These models work great on regular datacenter GPUs. Just run the script and start generating datasets with transparent reasoning.
190
 
191
  ---
192
 
193
- *Last tested: 2025-08-05 on NVIDIA L4 GPUs - Working perfectly!*
 
1
+ # 🚀 OpenAI GPT OSS Models - Simple Generation Script
2
 
3
+ Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!
4
 
5
+ ## ✅ Tested & Working
6
 
7
+ Successfully tested on HF Jobs with `l4x4` flavor (4x L4 GPUs = 96GB total memory).
8
 
9
  ## 🌟 Quick Start
10
 
 
11
  ```bash
12
+ # Run on HF Jobs (tested and working)
13
+ hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
14
+ https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
15
  --input-dataset davanstrien/haiku_dpo \
16
+ --output-dataset username/gpt-oss-haiku \
17
  --prompt-column question \
18
+ --max-samples 2 \
19
+ --reasoning-effort high
20
  ```
21
 
22
+ ## 📋 Script Options
23
 
24
+ | Option | Description | Default |
25
+ |--------|-------------|---------|
26
+ | `--input-dataset` | HuggingFace dataset to process | Required |
27
+ | `--output-dataset` | Output dataset name | Required |
28
+ | `--prompt-column` | Column containing prompts | `prompt` |
29
+ | `--model-id` | Model to use | `openai/gpt-oss-20b` |
30
+ | `--max-samples` | Limit samples to process | None (all) |
31
+ | `--max-new-tokens` | Max tokens to generate | `1024` |
32
+ | `--reasoning-effort` | Reasoning depth: low/medium/high | `medium` |
33
 
34
+ ## 💡 What You Get
35
 
36
+ The output dataset contains:
37
+ - `prompt`: Original prompt from input dataset
38
+ - `raw_output`: Full model response with channel markers
39
+ - `model`: Model ID used
40
+ - `reasoning_effort`: The reasoning level used
41
 
42
+ ### Understanding the Output
43
 
44
+ The raw output contains special channel markers:
45
+ - `<|channel|>analysis<|message|>` - Chain of thought reasoning
46
+ - `<|channel|>final<|message|>` - The actual response
47
 
48
+ Example raw output structure:
49
+ ```
50
+ <|channel|>analysis<|message|>
51
+ [Reasoning about the task...]
52
+ <|channel|>final<|message|>
53
+ [Actual haiku or response]
54
+ ```
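The channel markers can be split apart with a few lines of Python. This is a hypothetical sketch (the `parse_channels` helper is not part of the script; only the marker strings come from the example above):

```python
# Hypothetical helper: split raw gpt-oss output on its channel markers.
# Only the marker strings are taken from the example above; the rest is
# an illustrative sketch, not code from gpt_oss_minimal.py.

def parse_channels(raw: str) -> dict:
    """Return {channel_name: text} for each <|channel|>...<|message|> block."""
    channels = {}
    for part in raw.split("<|channel|>")[1:]:
        name, _, body = part.partition("<|message|>")
        channels[name.strip()] = body.strip()
    return channels

raw = (
    "<|channel|>analysis<|message|>Counting syllables for a 5-7-5 haiku...\n"
    "<|channel|>final<|message|>Silent peaks climb high"
)
parsed = parse_channels(raw)
print(parsed["final"])  # -> Silent peaks climb high
```

This keeps the reasoning (`analysis`) separate from the user-facing response (`final`), matching the `raw_output` column described above.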
55
 
56
  ## 🎯 Examples
57
 
58
+ ### Test with Different Reasoning Levels
59
+
60
+ **High reasoning (most detailed):**
61
  ```bash
62
+ hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
63
+ https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
64
  --input-dataset davanstrien/haiku_dpo \
65
+ --output-dataset username/haiku-high \
66
  --prompt-column question \
67
+ --reasoning-effort high \
68
+ --max-samples 5
69
  ```
70
 
71
+ **Low reasoning (fastest):**
72
  ```bash
73
+ hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
74
+ https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
75
+ --input-dataset davanstrien/haiku_dpo \
76
+ --output-dataset username/haiku-low \
77
  --prompt-column question \
78
+ --reasoning-effort low \
79
+ --max-samples 10
80
  ```
81
 
82
+ ## 🖥️ GPU Requirements
83
 
84
+ | Model | Memory Required | Recommended Flavor |
85
+ |-------|----------------|-------------------|
86
+ | **openai/gpt-oss-20b** | ~40GB | `l4x4` (4x24GB = 96GB) |
87
+ | **openai/gpt-oss-120b** | ~240GB | `8xa100` (8x80GB) |
88
 
89
+ **Note**: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, requiring more memory than the quantized size.
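+ The ~40GB and ~240GB figures follow from simple arithmetic: bf16 stores 2 bytes per parameter, so weights alone need roughly 2 × parameter-count bytes (activations and KV cache add more on top). A quick sanity check:

```python
# Back-of-envelope VRAM for dequantized bf16 weights: 2 bytes per parameter.
# Activations and KV cache are ignored, so real usage is somewhat higher.
def bf16_weight_gb(num_params: float) -> float:
    return num_params * 2 / 1e9  # bytes -> decimal GB

print(bf16_weight_gb(20e9))   # 40.0  -> fits in l4x4's 96GB total
print(bf16_weight_gb(120e9))  # 240.0 -> needs multiple 80GB A100s
```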
90
 
91
  ## 🔧 Technical Details
92
 
93
+ ### Why L4x4?
94
+ - The 20B model needs ~40GB VRAM when dequantized
95
+ - Single A10G (24GB) is insufficient
96
+ - L4x4 provides 96GB total memory across 4 GPUs
97
+ - Cost-effective compared to A100 instances
98
 
99
+ ### Reasoning Effort
100
+ The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates:
101
+ - `low`: Quick responses with minimal reasoning
102
+ - `medium`: Balanced reasoning (default)
103
+ - `high`: Detailed step-by-step reasoning
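+ The earlier full script conveyed reasoning depth through the system message. A minimal sketch of that pattern, where the exact system-prompt wording and the `build_messages` helper are assumptions for illustration:

```python
# Sketch: passing reasoning effort via the system message, mirroring the
# pattern from the earlier full script. The exact system-prompt wording
# here is an assumption, not copied from gpt_oss_minimal.py.
def build_messages(prompt: str, reasoning_effort: str = "medium") -> list:
    assert reasoning_effort in {"low", "medium", "high"}
    return [
        {"role": "system",
         "content": f"You are a helpful assistant. Reasoning: {reasoning_effort}"},
        {"role": "user", "content": prompt},
    ]

messages = build_messages("Write a haiku about mountains", "high")
# These messages would then be formatted with the model's chat template
# before generation.
```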
104
 
105
+ ## 📚 Resources
106
 
107
  - [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
108
  - [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)
109
+ - [Dataset: davanstrien/haiku_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo)
110
 
111
  ---
112
 
113
+ *Last tested: 2025-01-06 on HF Jobs with l4x4 flavor*