# 🚀 OpenAI GPT OSS Models - Simple Generation Script
Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!
## ✅ Tested & Working
Successfully tested on HF Jobs with `l4x4` flavor (4x L4 GPUs = 96GB total memory).
## 🚀 Quick Start
```bash
# Run on HF Jobs (tested and working)
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset username/gpt-oss-haiku \
--prompt-column question \
--max-samples 2 \
--reasoning-effort high
```
## 📋 Script Options
| Option | Description | Default |
|--------|-------------|---------|
| `--input-dataset` | HuggingFace dataset to process | Required |
| `--output-dataset` | Output dataset name | Required |
| `--prompt-column` | Column containing prompts | `prompt` |
| `--model-id` | Model to use | `openai/gpt-oss-20b` |
| `--max-samples` | Limit samples to process | None (all) |
| `--max-new-tokens` | Max tokens to generate | Auto-scales: 512/1024/2048 |
| `--reasoning-effort` | Reasoning depth: low/medium/high | `medium` |
| `--temperature` | Sampling temperature | `1.0` |
| `--top-p` | Top-p sampling | `1.0` |
**Note**: if `--max-new-tokens` is not set, it auto-scales with `--reasoning-effort` (see the sketch after this list):
- `low`: 512 tokens
- `medium`: 1024 tokens
- `high`: 2048 tokens (prevents truncation of detailed reasoning)
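If you need the same rule outside the script, it is a simple lookup. Here is a minimal sketch of the fallback logic described above (names are illustrative, not the script's actual code):

```python
# Illustrative sketch of the auto-scaling rule (not the script's actual code).
EFFORT_TO_TOKENS = {"low": 512, "medium": 1024, "high": 2048}

def resolve_max_new_tokens(max_new_tokens, reasoning_effort):
    """Use the explicit --max-new-tokens value if given, else the effort-based default."""
    if max_new_tokens is not None:
        return max_new_tokens
    return EFFORT_TO_TOKENS[reasoning_effort]
```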
## 💡 What You Get
The output dataset contains:
- `prompt`: Original prompt from input dataset
- `raw_output`: Full model response with channel markers
- `model`: Model ID used
- `reasoning_effort`: The reasoning level used
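Once the job finishes, the result is an ordinary Hub dataset. A quick way to inspect these columns (assuming the output name from the Quick Start and the default `train` split):

```python
from datasets import load_dataset

# Output dataset name assumed from the Quick Start example
ds = load_dataset("username/gpt-oss-haiku", split="train")

print(ds.column_names)      # ['prompt', 'raw_output', 'model', 'reasoning_effort']
print(ds[0]["raw_output"])  # full model response, including channel markers
```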
### Understanding the Output
The raw output contains special channel markers:
- `<|channel|>analysis<|message|>` - Chain of thought reasoning
- `<|channel|>final<|message|>` - The actual response
Example raw output structure:
```
<|channel|>analysis<|message|>
[Reasoning about the task...]
<|channel|>final<|message|>
[Actual haiku or response]
```
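To separate the reasoning from the answer, the markers make the split mechanical. A minimal regex sketch based on the structure shown above (adjust the marker strings if your outputs differ):

```python
import re

raw_output = """<|channel|>analysis<|message|>
[Reasoning about the task...]
<|channel|>final<|message|>
[Actual haiku or response]"""

def split_channels(text):
    """Map channel name -> channel text, e.g. {'analysis': ..., 'final': ...}."""
    # '<|channel|>NAME<|message|>' followed by content up to the next marker or end of string
    pattern = r"<\|channel\|>(\w+)<\|message\|>(.*?)(?=<\|channel\|>|$)"
    return {name: body.strip() for name, body in re.findall(pattern, text, re.DOTALL)}

channels = split_channels(raw_output)
print(channels["analysis"])  # chain-of-thought reasoning
print(channels["final"])     # the actual response
```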
## 🎯 Examples
### Test with Different Reasoning Levels
**High reasoning (most detailed):**
```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset username/haiku-high \
--prompt-column question \
--reasoning-effort high \
--max-samples 5
```
**Low reasoning (fastest):**
```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
--input-dataset davanstrien/haiku_dpo \
--output-dataset username/haiku-low \
--prompt-column question \
--reasoning-effort low \
--max-samples 10
```
## 🖥️ GPU Requirements
| Model | Memory Required | Recommended Flavor |
|-------|----------------|-------------------|
| **openai/gpt-oss-20b** | ~40GB | `l4x4` (4x24GB = 96GB) |
| **openai/gpt-oss-120b** | ~240GB | `8xa100` (8x80GB) |
**Note**: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, requiring more memory than the quantized size.
## 🔧 Technical Details
### Why L4x4?
- The 20B model needs ~40GB VRAM when dequantized
- Single A10G (24GB) is insufficient
- L4x4 provides 96GB total memory across 4 GPUs
- Cost-effective compared to A100 instances
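The multi-GPU fit relies on `transformers` sharding the weights across all four L4s. A hedged sketch of what the loading step can look like (the actual script may differ):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "openai/gpt-oss-20b"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# device_map="auto" shards the ~40GB of dequantized bf16 weights across all
# visible GPUs (4x L4 here); torch_dtype="auto" keeps the checkpoint's dtype.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
)
```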
### Reasoning Effort
The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates:
- `low`: Quick responses with minimal reasoning
- `medium`: Balanced reasoning (default)
- `high`: Detailed step-by-step reasoning
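With `transformers`, the effort level is set when the prompt is built. A sketch assuming the gpt-oss chat template accepts a `reasoning_effort` keyword (check the model card in case this kwarg has changed):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
messages = [{"role": "user", "content": "Write a haiku about autumn."}]

# Assumption: the gpt-oss chat template takes a `reasoning_effort` kwarg and
# injects the chosen level into the system preamble.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    reasoning_effort="high",  # "low" | "medium" | "high"
)
```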
### Sampling Parameters
OpenAI recommends `temperature=1.0` and `top_p=1.0` as defaults for GPT OSS models:
- These settings provide good diversity without compromising quality
- The model was trained to work well with these parameters
- Adjust only if you need specific behavior (e.g., lower temperature for more deterministic output)
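In code, those defaults map directly onto the `generate` call. A sketch continuing from the loading and prompt-building sketches above (`model`, `tokenizer`, and `inputs` are assumed from there):

```python
# `model`, `tokenizer`, and `inputs` as built in the sketches above.
output_ids = model.generate(
    inputs,
    max_new_tokens=2048,  # the "high" effort default from the table above
    do_sample=True,       # sampling must be enabled for temperature/top_p to apply
    temperature=1.0,      # OpenAI's recommended default
    top_p=1.0,            # OpenAI's recommended default
)
# Decode only the newly generated tokens, keeping the channel markers visible
print(tokenizer.decode(output_ids[0][inputs.shape[-1]:], skip_special_tokens=False))
```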
## 📚 Resources
- [Model: openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b)
- [HF Jobs Documentation](https://huggingface.co/docs/hub/spaces-gpu-jobs)
- [Dataset: davanstrien/haiku_dpo](https://huggingface.co/datasets/davanstrien/haiku_dpo)
---
*Last tested: 2025-01-06 on HF Jobs with l4x4 flavor*