---
viewer: false
tags:
- uv-script
- synthetic-data
- openai-oss
---
# OpenAI GPT OSS Models - Simple Generation Script
Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!
## Tested & Working
Successfully tested on HF Jobs with `l4x4` flavor (4x L4 GPUs = 96GB total memory).
## Getting Started with HF Jobs
### First-time Setup (2 minutes)
1. **Install HuggingFace CLI**:
```bash
pip install huggingface-hub
```
2. **Login to HuggingFace**:
```bash
huggingface-cli login
```
(Enter your HF token when prompted - get one at https://huggingface.co/settings/tokens)
3. **Run the script on HF Jobs**:
```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset YOUR_USERNAME/gpt-oss-test \
--prompt-column question \
--max-samples 2
```
That's it! Your job will run on HuggingFace's GPUs and the output dataset will appear in your HF account.
## Quick Start
```bash
# Run on HF Jobs (tested and working)
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset username/gpt-oss-haiku \
--prompt-column question \
--max-samples 2 \
--reasoning-effort high
```
## Script Options
| Option | Description | Default |
| -------------------- | -------------------------------- | -------------------------- |
| `--input-dataset` | HuggingFace dataset to process | Required |
| `--output-dataset` | Output dataset name | Required |
| `--prompt-column` | Column containing prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--max-samples` | Limit samples to process | None (all) |
| `--max-new-tokens` | Max tokens to generate | Auto-scales: 512/1024/2048 |
| `--reasoning-effort` | Reasoning depth: low/medium/high | `medium` |
| `--temperature` | Sampling temperature | `1.0` |
| `--top-p` | Top-p sampling | `1.0` |
**Note**: `max-new-tokens` auto-scales based on `reasoning-effort` if not set:
- `low`: 512 tokens
- `medium`: 1024 tokens
- `high`: 2048 tokens (prevents truncation of detailed reasoning)
## What You Get
The output dataset contains:
- `prompt`: Original prompt from input dataset
- `raw_output`: Full model response with channel markers
- `model`: Model ID used
- `reasoning_effort`: The reasoning level used
### Understanding the Output
The raw output contains special channel markers:
- `<|channel|>analysis<|message|>` - Chain of thought reasoning
- `<|channel|>final<|message|>` - The actual response
Example raw output structure:
```
<|channel|>analysis<|message|>
[Reasoning about the task...]
<|channel|>final<|message|>
[Actual haiku or response]
```
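If you want to separate the reasoning from the final answer in post-processing, a regular expression over the channel markers is enough. A minimal sketch, assuming the marker format shown above (this helper is an illustration, not part of the script):

```python
import re

# Hypothetical post-processing helper: split a raw GPT OSS output into its
# channels (e.g. 'analysis' and 'final') using the markers shown above.
CHANNEL_RE = re.compile(
    r"<\|channel\|>(\w+)<\|message\|>(.*?)(?=<\|channel\|>|\Z)", re.DOTALL
)

def split_channels(raw_output: str) -> dict[str, str]:
    """Return a mapping like {'analysis': ..., 'final': ...}."""
    return {name: text.strip() for name, text in CHANNEL_RE.findall(raw_output)}
```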
## Examples
### Test with Different Reasoning Levels
**High reasoning (most detailed):**
```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset username/haiku-high \
--prompt-column question \
--reasoning-effort high \
--max-samples 5
```
**Low reasoning (fastest):**
```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset username/haiku-low \
--prompt-column question \
--reasoning-effort low \
--max-samples 10
```
## GPU Requirements
| Model | Memory Required | Recommended Flavor |
| ---------------------- | --------------- | ---------------------- |
| **openai/gpt-oss-20b** | ~40GB | `l4x4` (4x24GB = 96GB) |
**Note**: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, requiring more memory than the quantized size.
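The ~40GB figure follows from back-of-the-envelope arithmetic once the weights are dequantized to bf16 (2 bytes per parameter):

```python
# Rough memory estimate for the dequantized 20B model (weights only).
params = 20e9        # ~20 billion parameters
bytes_per_param = 2  # bf16 uses 2 bytes per parameter
weights_gb = params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB for weights alone")  # activations and KV cache add more
```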
### Reasoning Effort
The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates:
- `low`: Quick responses with minimal reasoning
- `medium`: Balanced reasoning (default)
- `high`: Detailed step-by-step reasoning
### Sampling Parameters
OpenAI recommends `temperature=1.0` and `top_p=1.0` as defaults for GPT OSS models:
- These settings provide good diversity without compromising quality
- The model was trained to work well with these parameters
- Adjust only if you need specific behavior (e.g., lower temperature for more deterministic output)
## Resources
- [OpenAI GPT OSS Model Collection](https://huggingface.co/collections/openai/gpt-oss-68911959590a1634ba11c7a4) - Both 20B and 120B models
- [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- [HF Jobs Documentation](https://huggingface.co/docs/huggingface_hub/guides/jobs) - Complete guide to running jobs on HuggingFace
- [HF CLI Guide](https://huggingface.co/docs/huggingface_hub/guides/cli) - HuggingFace CLI installation and usage
- [Dataset: davanstrien/haiku_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo)
---
_Last tested: 2025-01-06 on HF Jobs with l4x4 flavor_