davanstrien (HF Staff) committed
Commit bbe5ce0 · Parent: 27b7430

Add transformers implementation for OpenAI GPT OSS models


- Add gpt_oss_transformers.py with standard transformers library implementation
- Supports both 20B and 120B models with structured reasoning output
- Includes channel parsing for analysis (thinking) and final (response) channels
- Fixed device_map configuration for proper model loading
- Added MXFP4 quantization note (models are quantized out of the box)
- Comprehensive documentation with HF Jobs examples

This provides a fallback option while vLLM FlashAttention 3 support is pending.

Files changed (2)
  1. README.md +191 -0
  2. gpt_oss_transformers.py +520 -0
README.md ADDED
@@ -0,0 +1,191 @@
# 🚀 OpenAI GPT OSS Models - Open Source Language Models with Reasoning

Generate responses with transparent chain-of-thought reasoning using OpenAI's new open source GPT models. Run on cloud GPUs with zero setup!

## 🏁 Quick Setup for HF Jobs (One-time)

```bash
# Install the huggingface-hub CLI using uv
uv tool install huggingface-hub

# Log in to Hugging Face
huggingface-cli login

# Now you're ready to run jobs!
```

Need more help? Check the [HF Jobs documentation](https://huggingface.co/docs/huggingface_hub/guides/job).

## 🌟 Try It Now! Copy & Run This Command:

```bash
# Generate 50 haiku with reasoning (~5 minutes on A10G)
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```

That's it! Your dataset will be generated and pushed to `your-username/haiku-reasoning`. 🎉

## 💡 What You Get

The models output structured reasoning in separate channels:

```json
{
  "prompt": "Write a haiku about mountain serenity",
  "think": "I need to create a haiku with 5-7-5 syllable structure. Mountains suggest stillness, permanence. For serenity, I'll use calm imagery like 'silent peaks' (3 syllables)...",
  "content": "Silent peaks stand tall\nClouds drift through morning stillness\nPeace in stone and sky",
  "reasoning_level": "high",
  "model": "openai/gpt-oss-20b"
}
```
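
The parsed fields come from the model's channel-tagged raw output, which is also kept verbatim in a `raw_output` column. As a sketch, the raw text looks roughly like this before parsing (the exact special tokens may vary between model versions):

```text
<|start|>assistant<|channel|>analysis<|message|>I need to create a haiku with 5-7-5 syllable structure...<|end|>
<|start|>assistant<|channel|>final<|message|>Silent peaks stand tall
Clouds drift through morning stillness
Peace in stone and sky
```

The `analysis` channel becomes `think` and the `final` channel becomes `content`.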

## 🎯 More Examples

### Use Your Own Dataset

```bash
# Process your entire dataset
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-prompts \
    --output-dataset my-responses

# Use the larger 120B model
hf jobs uv run --flavor a100-large \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-prompts \
    --output-dataset my-responses-120b \
    --model-id openai/gpt-oss-120b
```

### Process Different Dataset Types

```bash
# Math problems with step-by-step reasoning
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset math-problems \
    --output-dataset math-solutions \
    --reasoning-level high

# Code generation with explanations
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset code-prompts \
    --output-dataset code-explained \
    --max-tokens 1024

# Test with just 10 samples
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_vllm.py \
    --input-dataset your-dataset \
    --output-dataset quick-test \
    --max-samples 10
```

## 📦 Two Script Options

1. **`gpt_oss_vllm.py`** - High-performance batch generation using vLLM (recommended)
2. **`gpt_oss_transformers.py`** - Standard transformers implementation (fallback)

### Transformers Fallback (if vLLM has issues)

```bash
# Same command, different script!
hf jobs uv run --flavor a10g-small \
    https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```

## 💰 GPU Flavors and Costs

| Model | GPU Flavor | Memory | Cost/Hour | Best For |
|-------|------------|--------|-----------|----------|
| `gpt-oss-20b` | `a10g-small` | 24GB | $1.38 | Most use cases |
| `gpt-oss-20b` | `a10g-large` | 48GB | $2.50 | Larger batches |
| `gpt-oss-120b` | `a100-large` | 80GB | $4.34 | 120B model |
| `gpt-oss-120b` | `4xa100` | 320GB | $17.36 | Maximum speed |
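
Jobs bill by runtime, so it's worth keeping an eye on long runs. The Jobs CLI includes monitoring subcommands; a sketch (verify the exact names for your CLI version with `hf jobs --help`):

```bash
# List your jobs, then stream logs for a specific one
hf jobs ps
hf jobs logs <job-id>
```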

## 🏃 Local Execution

If you have a local GPU:

```bash
# Using vLLM (recommended)
uv run gpt_oss_vllm.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50

# Using Transformers
uv run gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-reasoning \
    --prompt-column question \
    --max-samples 50
```
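
Either way, the results land on the Hub as a regular dataset. A minimal sketch for inspecting them afterwards, assuming your run pushed to `your-username/haiku-reasoning`:

```python
from datasets import load_dataset

# Load the generated dataset (use the name you passed to --output-dataset)
ds = load_dataset("your-username/haiku-reasoning", split="train")

# Each row holds the prompt, the reasoning trace, and the final answer
row = ds[0]
print("PROMPT: ", row["prompt"])
print("THINK:  ", row["think"][:200], "...")
print("CONTENT:", row["content"])
```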

## 🛠️ Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--input-dataset` | Source dataset on HF Hub | Required |
| `--output-dataset` | Output dataset name (auto-prefixed with your username) | Required |
| `--prompt-column` | Column containing prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--reasoning-level` | Reasoning depth (high/medium/low) | `high` |
| `--max-samples` | Limit number of examples | None (all) |
| `--temperature` | Generation temperature | `0.7` |
| `--max-tokens` | Max tokens to generate | `512` |
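
The flags compose freely; for example, a smaller, more deterministic test run (illustrative values only, against the same public haiku dataset used above):

```bash
uv run gpt_oss_transformers.py \
    --input-dataset davanstrien/haiku_dpo \
    --output-dataset haiku-medium-reasoning \
    --prompt-column question \
    --reasoning-level medium \
    --temperature 0.3 \
    --max-tokens 1024 \
    --max-samples 20
```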

## 🎯 Key Features

- **Open Source Models**: `openai/gpt-oss-20b` and `openai/gpt-oss-120b`
- **Structured Output**: Separate channels for reasoning (`analysis`) and response (`final`)
- **Zero Setup**: Run with a single command on HF Jobs
- **Flexible Input**: Works with any prompt dataset (see the sketch below for creating one)
- **Automatic Upload**: Results pushed directly to your Hub account
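
If you don't have a prompt dataset yet, a minimal sketch for creating one (the dataset name and prompts below are placeholders):

```python
from datasets import Dataset

# Build a tiny prompt dataset; the scripts read the `prompt` column by default
prompts = Dataset.from_dict(
    {
        "prompt": [
            "Write a haiku about mountain serenity",
            "Explain why the sky is blue in two sentences",
        ]
    }
)

# Push it to the Hub so HF Jobs can read it (replace with your username)
prompts.push_to_hub("your-username/my-prompts")
```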

## 🎯 Use Cases

1. **Training Data**: Create datasets with built-in reasoning explanations
2. **Evaluation**: Generate test sets where each answer includes its rationale
3. **Research**: Study how large models approach different types of problems
4. **Applications**: Build systems that can explain their outputs

## 🤔 Which Script to Use?

- **`gpt_oss_vllm.py`**: First choice for performance and scale
- **`gpt_oss_transformers.py`**: Fallback if vLLM has compatibility issues

## 🔧 Requirements

For HF Jobs:
- Hugging Face account (free)
- `huggingface-hub` CLI tool

For local execution:
- Python 3.10+
- GPU with CUDA support
- Hugging Face token

## 🤝 Contributing

This is part of the [uv-scripts](https://huggingface.co/uv-scripts) collection. Contributions and improvements welcome!

## 📜 License

Apache 2.0 - Same as the OpenAI GPT OSS models

---

**Ready to generate data with reasoning?** Copy the command at the top and run it! 🚀
gpt_oss_transformers.py ADDED
@@ -0,0 +1,520 @@
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "datasets",
#     "huggingface-hub[hf_transfer]",
#     "torch",
#     "transformers>=4.45.0",
#     "tqdm",
#     "accelerate",
# ]
# ///
"""
Generate responses with transparent reasoning using OpenAI's open source GPT OSS models.

This implementation uses the standard Transformers library for maximum compatibility.
The models output structured reasoning in separate channels, allowing you to
capture both the thinking process and the final response.

Example usage:
    # Generate haiku with reasoning
    uv run gpt_oss_transformers.py \\
        --input-dataset davanstrien/haiku_dpo \\
        --output-dataset username/haiku-reasoning \\
        --prompt-column question

    # Any prompt dataset with custom settings
    uv run gpt_oss_transformers.py \\
        --input-dataset username/prompts \\
        --output-dataset username/responses-with-reasoning \\
        --prompt-column prompt \\
        --reasoning-level high \\
        --max-samples 100

    # HF Jobs execution
    hf jobs uv run --flavor a10g-small \\
        https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \\
        --input-dataset username/prompts \\
        --output-dataset username/responses-with-reasoning
"""

import argparse
import logging
import os
import re
import sys
from datetime import datetime
from typing import Dict, Optional

import torch
from datasets import Dataset, load_dataset
from huggingface_hub import DatasetCard, get_token, login
from tqdm.auto import tqdm
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    GenerationConfig,
    set_seed,
)

# Enable HF Transfer for faster downloads
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"

logging.basicConfig(
    level=logging.INFO, format="%(asctime)s - %(levelname)s - %(message)s"
)
logger = logging.getLogger(__name__)


def check_gpu_availability() -> int:
    """Check if CUDA is available and return the number of GPUs."""
    if not torch.cuda.is_available():
        logger.error("CUDA is not available. This script requires a GPU.")
        logger.error(
            "Please run on a machine with an NVIDIA GPU or use HF Jobs with a GPU flavor."
        )
        sys.exit(1)

    num_gpus = torch.cuda.device_count()
    for i in range(num_gpus):
        gpu_name = torch.cuda.get_device_name(i)
        gpu_memory = torch.cuda.get_device_properties(i).total_memory / 1024**3
        logger.info(f"GPU {i}: {gpu_name} with {gpu_memory:.1f} GB memory")

    return num_gpus


def parse_channels(raw_output: str) -> Dict[str, str]:
    """
    Extract think/content from GPT OSS channel-based output.

    Expected format:
        <|start|>assistant<|channel|>analysis<|message|>CHAIN_OF_THOUGHT<|end|>
        <|start|>assistant<|channel|>final<|message|>ACTUAL_MESSAGE
    """
    think = ""
    content = ""

    # Extract analysis channel (chain of thought)
    analysis_pattern = (
        r"<\|start\|>assistant<\|channel\|>analysis<\|message\|>(.*?)<\|end\|>"
    )
    analysis_match = re.search(analysis_pattern, raw_output, re.DOTALL)
    if analysis_match:
        think = analysis_match.group(1).strip()

    # Extract final channel (user-facing content)
    final_pattern = (
        r"<\|start\|>assistant<\|channel\|>final<\|message\|>(.*?)(?:<\|end\|>|$)"
    )
    final_match = re.search(final_pattern, raw_output, re.DOTALL)
    if final_match:
        content = final_match.group(1).strip()

    # If no channels were found, treat the entire output as content
    if not think and not content:
        content = raw_output.strip()

    return {"think": think, "content": content, "raw_output": raw_output}
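

# Illustrative example of what parse_channels returns (input is a sketch;
# the exact special tokens can vary by tokenizer version):
#   raw = (
#       "<|start|>assistant<|channel|>analysis<|message|>Count syllables first...<|end|>"
#       "<|start|>assistant<|channel|>final<|message|>Silent peaks stand tall"
#   )
#   parse_channels(raw) == {
#       "think": "Count syllables first...",
#       "content": "Silent peaks stand tall",
#       "raw_output": raw,
#   }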


def create_dataset_card(
    input_dataset: str,
    model_id: str,
    prompt_column: str,
    reasoning_level: str,
    num_examples: int,
    generation_time: str,
    num_gpus: int,
    temperature: float,
    max_tokens: int,
) -> str:
    """Create a dataset card documenting the generation process."""
    return f"""---
tags:
- generated
- synthetic
- reasoning
- openai-gpt-oss
---

# Generated Responses with Reasoning (Transformers)

This dataset contains AI-generated responses with transparent chain-of-thought reasoning using OpenAI GPT OSS models via Transformers.

## Generation Details

- **Source Dataset**: [{input_dataset}](https://huggingface.co/datasets/{input_dataset})
- **Model**: [{model_id}](https://huggingface.co/{model_id})
- **Reasoning Level**: {reasoning_level}
- **Temperature**: {temperature}
- **Max Tokens**: {max_tokens}
- **Number of Examples**: {num_examples:,}
- **Generation Date**: {generation_time}
- **Implementation**: Transformers (fallback)
- **GPUs Used**: {num_gpus}

## Dataset Structure

Each example contains:
- `prompt`: The input prompt from the source dataset
- `think`: The model's internal reasoning process
- `content`: The final response
- `raw_output`: Complete model output with channel markers
- `reasoning_level`: The reasoning effort level used
- `model`: Model identifier

## Generation Script

Generated using [uv-scripts/openai-oss](https://huggingface.co/datasets/uv-scripts/openai-oss).

To reproduce:
```bash
uv run gpt_oss_transformers.py \\
    --input-dataset {input_dataset} \\
    --output-dataset <your-dataset> \\
    --prompt-column {prompt_column} \\
    --model-id {model_id} \\
    --reasoning-level {reasoning_level}
```
"""


def main(
    input_dataset: str,
    output_dataset_hub_id: str,
    prompt_column: str = "prompt",
    model_id: str = "openai/gpt-oss-20b",
    reasoning_level: str = "high",
    max_samples: Optional[int] = None,
    temperature: float = 0.7,
    max_tokens: int = 512,
    batch_size: int = 1,
    seed: int = 42,
    hf_token: Optional[str] = None,
):
    """
    Main generation pipeline using Transformers.

    Args:
        input_dataset: Source dataset on Hugging Face Hub
        output_dataset_hub_id: Where to save results on Hugging Face Hub
        prompt_column: Column containing the prompts
        model_id: OpenAI GPT OSS model to use
        reasoning_level: Reasoning effort level (high/medium/low)
        max_samples: Maximum number of samples to process
        temperature: Sampling temperature
        max_tokens: Maximum tokens to generate
        batch_size: Batch size for generation
        seed: Random seed for reproducibility
        hf_token: Hugging Face authentication token
    """
    generation_start_time = datetime.now().isoformat()
    set_seed(seed)

    # GPU check
    num_gpus = check_gpu_availability()

    # Authentication
    HF_TOKEN = hf_token or os.environ.get("HF_TOKEN") or get_token()

    if not HF_TOKEN:
        logger.error("No Hugging Face token found. Please provide a token via:")
        logger.error("  1. --hf-token argument")
        logger.error("  2. HF_TOKEN environment variable")
        logger.error("  3. Run 'huggingface-cli login'")
        sys.exit(1)

    logger.info("Hugging Face token found, authenticating...")
    login(token=HF_TOKEN)

    # Load tokenizer. Decoder-only models need left padding for batched
    # generation; otherwise new tokens are appended after the pad tokens.
    logger.info(f"Loading tokenizer: {model_id}")
    tokenizer = AutoTokenizer.from_pretrained(model_id, padding_side="left")

    # Add a padding token if needed
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token

    # Model loading configuration: let accelerate place weights automatically
    # (loads fully on a single GPU, shards layer-wise across multiple GPUs,
    # for both the 20B and 120B models)
    device_map = "auto"

    # Load model
    logger.info(f"Loading model: {model_id}")
    logger.info("This may take a few minutes for large models...")
    # Note: GPT OSS models are MXFP4 quantized out of the box

    try:
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype=torch.bfloat16,
            device_map=device_map,
        )
        model.eval()
    except Exception as e:
        logger.error(f"Failed to load model: {e}")
        logger.error("Trying with default configuration...")
        # Fallback to simpler loading
        model = AutoModelForCausalLM.from_pretrained(
            model_id,
            torch_dtype="auto",
            device_map="auto",
        )
        model.eval()

    # Generation configuration
    generation_config = GenerationConfig(
        max_new_tokens=max_tokens,
        temperature=temperature,
        do_sample=temperature > 0,
        eos_token_id=tokenizer.eos_token_id,
        pad_token_id=tokenizer.pad_token_id,
    )

    # Load dataset
    logger.info(f"Loading dataset: {input_dataset}")
    dataset = load_dataset(input_dataset, split="train")

    # Validate prompt column
    if prompt_column not in dataset.column_names:
        logger.error(
            f"Column '{prompt_column}' not found. Available columns: {dataset.column_names}"
        )
        sys.exit(1)

    # Limit samples if requested
    if max_samples:
        dataset = dataset.select(range(min(max_samples, len(dataset))))
    total_examples = len(dataset)
    logger.info(f"Processing {total_examples:,} examples")

    # Prepare prompts with reasoning control
    logger.info(f"Applying chat template with reasoning_level={reasoning_level}...")
    prompts = []
    original_prompts = []

    for example in tqdm(dataset, desc="Preparing prompts"):
        prompt_text = example[prompt_column]
        original_prompts.append(prompt_text)

        # Create message format (using the user role only, as per the documentation)
        messages = [{"role": "user", "content": prompt_text}]

        # Apply chat template with reasoning effort
        try:
            prompt = tokenizer.apply_chat_template(
                messages,
                reasoning_effort=reasoning_level,
                add_generation_prompt=True,
                tokenize=False,
            )
        except TypeError:
            # Fallback if the reasoning_effort parameter is not supported
            logger.warning(
                "reasoning_effort parameter not supported, using standard template"
            )
            prompt = tokenizer.apply_chat_template(
                messages, add_generation_prompt=True, tokenize=False
            )
        prompts.append(prompt)

    # Generate responses in batches
    logger.info(f"Starting generation for {len(prompts):,} prompts...")
    results = []

    for i in tqdm(range(0, len(prompts), batch_size), desc="Generating"):
        batch_prompts = prompts[i : i + batch_size]
        batch_original = original_prompts[i : i + batch_size]

        # Tokenize batch
        inputs = tokenizer(
            batch_prompts, return_tensors="pt", padding=True, truncation=True
        ).to(model.device)

        # Generate
        with torch.no_grad():
            outputs = model.generate(**inputs, generation_config=generation_config)

        # Decode and parse
        for j, output in enumerate(outputs):
            # Decode without the input prompt; keep special tokens so the
            # channel markers survive for parse_channels
            output_ids = output[inputs.input_ids.shape[1] :]
            raw_output = tokenizer.decode(output_ids, skip_special_tokens=False)
            parsed = parse_channels(raw_output)

            result = {
                "prompt": batch_original[j],
                "think": parsed["think"],
                "content": parsed["content"],
                "raw_output": parsed["raw_output"],
                "reasoning_level": reasoning_level,
                "model": model_id,
            }
            results.append(result)

    # Create dataset
    logger.info("Creating output dataset...")
    output_dataset = Dataset.from_list(results)

    # Create dataset card
    logger.info("Creating dataset card...")
    card_content = create_dataset_card(
        input_dataset=input_dataset,
        model_id=model_id,
        prompt_column=prompt_column,
        reasoning_level=reasoning_level,
        num_examples=total_examples,
        generation_time=generation_start_time,
        num_gpus=num_gpus,
        temperature=temperature,
        max_tokens=max_tokens,
    )

    # Push to hub
    logger.info(f"Pushing dataset to: {output_dataset_hub_id}")
    output_dataset.push_to_hub(output_dataset_hub_id, token=HF_TOKEN)

    # Push dataset card
    card = DatasetCard(card_content)
    card.push_to_hub(output_dataset_hub_id, token=HF_TOKEN)

    logger.info("✅ Generation complete!")
    logger.info(
        f"Dataset available at: https://huggingface.co/datasets/{output_dataset_hub_id}"
    )


if __name__ == "__main__":
    if len(sys.argv) > 1:
        parser = argparse.ArgumentParser(
            description="Generate responses with reasoning using OpenAI GPT OSS models (Transformers)",
            formatter_class=argparse.RawDescriptionHelpFormatter,
            epilog="""
Examples:
    # Generate haiku with reasoning
    uv run gpt_oss_transformers.py \\
        --input-dataset davanstrien/haiku_dpo \\
        --output-dataset username/haiku-reasoning \\
        --prompt-column question

    # Any prompt dataset
    uv run gpt_oss_transformers.py \\
        --input-dataset username/prompts \\
        --output-dataset username/responses-reasoning \\
        --reasoning-level high \\
        --max-samples 100

    # Use larger 120B model (requires 80GB+ GPU)
    uv run gpt_oss_transformers.py \\
        --input-dataset username/prompts \\
        --output-dataset username/responses-reasoning \\
        --model-id openai/gpt-oss-120b
            """,
        )

        parser.add_argument(
            "--input-dataset",
            type=str,
            required=True,
            help="Input dataset on Hugging Face Hub",
        )
        parser.add_argument(
            "--output-dataset",
            type=str,
            required=True,
            help="Output dataset name on Hugging Face Hub",
        )
        parser.add_argument(
            "--prompt-column",
            type=str,
            default="prompt",
            help="Column containing prompts (default: prompt)",
        )
        parser.add_argument(
            "--model-id",
            type=str,
            default="openai/gpt-oss-20b",
            help="Model to use (default: openai/gpt-oss-20b)",
        )
        parser.add_argument(
            "--reasoning-level",
            type=str,
            choices=["high", "medium", "low"],
            default="high",
            help="Reasoning effort level (default: high)",
        )
        parser.add_argument(
            "--max-samples", type=int, help="Maximum number of samples to process"
        )
        parser.add_argument(
            "--temperature",
            type=float,
            default=0.7,
            help="Sampling temperature (default: 0.7)",
        )
        parser.add_argument(
            "--max-tokens",
            type=int,
            default=512,
            help="Maximum tokens to generate (default: 512)",
        )
        parser.add_argument(
            "--batch-size",
            type=int,
            default=1,
            help="Batch size for generation (default: 1)",
        )
        parser.add_argument(
            "--seed",
            type=int,
            default=42,
            help="Random seed (default: 42)",
        )
        parser.add_argument(
            "--hf-token",
            type=str,
            help="Hugging Face token (can also use HF_TOKEN env var)",
        )

        args = parser.parse_args()

        main(
            input_dataset=args.input_dataset,
            output_dataset_hub_id=args.output_dataset,
            prompt_column=args.prompt_column,
            model_id=args.model_id,
            reasoning_level=args.reasoning_level,
            max_samples=args.max_samples,
            temperature=args.temperature,
            max_tokens=args.max_tokens,
            batch_size=args.batch_size,
            seed=args.seed,
            hf_token=args.hf_token,
        )
    else:
        # Show HF Jobs examples when run without arguments
        print("""
OpenAI GPT OSS Reasoning Generation Script (Transformers)
========================================================

This script requires arguments. For usage information:
    uv run gpt_oss_transformers.py --help

Example HF Jobs command for 20B model:
    hf jobs uv run \\
        --flavor a10g-small \\
        https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \\
        --input-dataset davanstrien/haiku_dpo \\
        --output-dataset username/haiku-reasoning \\
        --prompt-column question \\
        --reasoning-level high

Example HF Jobs command for 120B model:
    hf jobs uv run \\
        --flavor a100-large \\
        https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_transformers.py \\
        --input-dataset username/prompts \\
        --output-dataset username/responses-reasoning \\
        --model-id openai/gpt-oss-120b \\
        --reasoning-level high
""")