|
--- |
|
viewer: false |
|
tags: |
|
- uv-script |
|
- synthetic-data |
|
- openai-oss |
|
--- |
|
|
|
# OpenAI GPT OSS Models - Simple Generation Script
|
|
|
Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs! |
|
|
|
## Tested & Working
|
|
|
Successfully tested on HF Jobs with `l4x4` flavor (4x L4 GPUs = 96GB total memory). |
|
|
|
## Getting Started with HF Jobs
|
|
|
### First-time Setup (2 minutes) |
|
|
|
1. **Install HuggingFace CLI**: |
|
```bash |
|
pip install huggingface-hub |
|
``` |
|
|
|
2. **Login to HuggingFace**: |
|
```bash |
|
huggingface-cli login |
|
``` |
|
   Enter your HF token when prompted - you can create one at https://huggingface.co/settings/tokens (it needs write access to push datasets).
|
|
|
3. **Run the script on HF Jobs**: |
|
```bash |
|
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \ |
|
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \ |
|
--input-dataset davanstrien/haiku_dpo \ |
|
--output-dataset YOUR_USERNAME/gpt-oss-test \ |
|
--prompt-column question \ |
|
--max-samples 2 |
|
``` |
|
|
|
That's it! Your job will run on HuggingFace's GPUs and the output dataset will appear in your HF account. |
|
|
|
## Quick Start
|
|
|
```bash |
|
# Run on HF Jobs (tested and working) |
|
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \ |
|
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \ |
|
--input-dataset davanstrien/haiku_dpo \ |
|
--output-dataset username/gpt-oss-haiku \ |
|
--prompt-column question \ |
|
--max-samples 2 \ |
|
--reasoning-effort high |
|
``` |
|
|
|
## Script Options
|
|
|
| Option | Description | Default | |
|
| -------------------- | -------------------------------- | -------------------------- | |
|
| `--input-dataset` | HuggingFace dataset to process | Required | |
|
| `--output-dataset` | Output dataset name | Required | |
|
| `--prompt-column` | Column containing prompts | `prompt` | |
|
| `--model-id` | Model to use | `openai/gpt-oss-20b` | |
|
| `--max-samples` | Limit samples to process | None (all) | |
|
| `--max-new-tokens` | Max tokens to generate | Auto-scales: 512/1024/2048 | |
|
| `--reasoning-effort` | Reasoning depth: low/medium/high | `medium` | |
|
| `--temperature` | Sampling temperature | `1.0` | |
|
| `--top-p` | Top-p sampling | `1.0` | |
|
|
|
**Note**: `max-new-tokens` auto-scales based on `reasoning-effort` if not set: |
|
|
|
- `low`: 512 tokens |
|
- `medium`: 1024 tokens |
|
- `high`: 2048 tokens (prevents truncation of detailed reasoning) |
|
|
|
## What You Get
|
|
|
The output dataset contains: |
|
|
|
- `prompt`: Original prompt from input dataset |
|
- `raw_output`: Full model response with channel markers |
|
- `model`: Model ID used |
|
- `reasoning_effort`: The reasoning level used |
|
|
|
### Understanding the Output |
|
|
|
The raw output contains special channel markers: |
|
|
|
- `<|channel|>analysis<|message|>` - Chain of thought reasoning |
|
- `<|channel|>final<|message|>` - The actual response |
|
|
|
Example raw output structure: |
|
|
|
``` |
|
<|channel|>analysis<|message|> |
|
[Reasoning about the task...] |
|
<|channel|>final<|message|> |
|
[Actual haiku or response] |
|
``` |
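
If you want to post-process `raw_output` into separate columns, simple string parsing on the markers works. This is a minimal sketch that assumes the marker layout shown above; real outputs may include extra tokens around the markers:

```python
# Minimal sketch: split a raw output string into its named channels.
# Assumes the <|channel|>NAME<|message|>BODY layout shown above.
def split_channels(raw_output: str) -> dict:
    channels = {}
    for part in raw_output.split("<|channel|>")[1:]:
        name, _, body = part.partition("<|message|>")
        channels[name.strip()] = body.strip()
    return channels

example = (
    "<|channel|>analysis<|message|>Counting syllables...\n"
    "<|channel|>final<|message|>An old silent pond"
)
print(split_channels(example)["final"])  # An old silent pond
```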
|
|
|
## Examples
|
|
|
### Test with Different Reasoning Levels |
|
|
|
**High reasoning (most detailed):** |
|
|
|
```bash |
|
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \ |
|
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \ |
|
--input-dataset davanstrien/haiku_dpo \ |
|
--output-dataset username/haiku-high \ |
|
--prompt-column question \ |
|
--reasoning-effort high \ |
|
--max-samples 5 |
|
``` |
|
|
|
**Low reasoning (fastest):** |
|
|
|
```bash |
|
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \ |
|
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \ |
|
--input-dataset davanstrien/haiku_dpo \ |
|
--output-dataset username/haiku-low \ |
|
--prompt-column question \ |
|
--reasoning-effort low \ |
|
--max-samples 10 |
|
``` |
|
|
|
## GPU Requirements
|
|
|
| Model | Memory Required | Recommended Flavor | |
|
| ---------------------- | --------------- | ---------------------- | |
|
| **openai/gpt-oss-20b** | ~40GB | `l4x4` (4x24GB = 96GB) | |
|
|
|
**Note**: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, requiring more memory than the quantized size. |
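
The ~40GB figure follows directly from the bf16 footprint: about 2 bytes per parameter for a ~20B-parameter model, before activations and KV cache are counted:

```python
# Back-of-the-envelope weight memory for a 20B-parameter model in bf16.
params = 20e9          # ~20 billion parameters
bytes_per_param = 2    # bf16 stores each parameter in 2 bytes
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB for weights alone")  # ~40 GB
```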
|
|
|
### Reasoning Effort |
|
|
|
The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates: |
|
|
|
- `low`: Quick responses with minimal reasoning |
|
- `medium`: Balanced reasoning (default) |
|
- `high`: Detailed step-by-step reasoning |
|
|
|
### Sampling Parameters |
|
|
|
OpenAI recommends `temperature=1.0` and `top_p=1.0` as defaults for GPT OSS models: |
|
|
|
- These settings provide good diversity without compromising quality |
|
- The model was trained to work well with these parameters |
|
- Adjust only if you need specific behavior (e.g., lower temperature for more deterministic output) |
|
|
|
## Resources
|
|
|
- [OpenAI GPT OSS Model Collection](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) - Both 20B and 120B models |
|
- [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b) |
|
- [HF Jobs Documentation](https://huggingface.co/docs/huggingface_hub/guides/jobs) - Complete guide to running jobs on HuggingFace |
|
- [HF CLI Guide](https://huggingface.co/docs/huggingface_hub/guides/cli) - HuggingFace CLI installation and usage |
|
- [Dataset: davanstrien/haiku_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo) |
|
|
|
--- |
|
|
|
_Last tested: 2025-01-06 on HF Jobs with l4x4 flavor_ |
|
|