---
viewer: false
---

# πŸš€ OpenAI GPT OSS Models - Simple Generation Script

Generate synthetic datasets using OpenAI's GPT OSS models with transparent reasoning. Works on HuggingFace Jobs with L4 GPUs!

## πŸ“‹ Script Options

| Option               | Description                      | Default                    |
| -------------------- | -------------------------------- | -------------------------- |
| `--input-dataset`    | HuggingFace dataset to process   | Required                   |
| `--output-dataset`   | Output dataset name              | Required                   |
| `--prompt-column`    | Column containing prompts        | `prompt`                   |
| `--model-id`         | Model to use                     | `openai/gpt-oss-20b`       |
| `--max-samples`      | Limit samples to process         | None (all)                 |
| `--max-new-tokens`   | Max tokens to generate           | Auto-scales: 512/1024/2048 |
| `--reasoning-effort` | Reasoning depth: low/medium/high | `medium`                   |
| `--temperature`      | Sampling temperature             | `1.0`                      |
| `--top-p`            | Top-p sampling                   | `1.0`                      |

**Note**: `max-new-tokens` auto-scales based on `reasoning-effort` if not set:

- `low`: 512 tokens
- `medium`: 1024 tokens
- `high`: 2048 tokens (prevents truncation of detailed reasoning)

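The auto-scaling rule above can be sketched in a few lines of Python. This is an illustrative sketch, not the script's actual code; `resolve_max_new_tokens` is a hypothetical name:

```python
# Sketch of the auto-scaling rule: derive max_new_tokens from
# reasoning_effort when the user has not set it explicitly.
# (Illustrative helper; not the script's real API.)
EFFORT_TO_TOKENS = {"low": 512, "medium": 1024, "high": 2048}


def resolve_max_new_tokens(reasoning_effort: str, max_new_tokens=None) -> int:
    if max_new_tokens is not None:
        return max_new_tokens  # an explicit --max-new-tokens wins
    return EFFORT_TO_TOKENS[reasoning_effort]
```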
## πŸ’‘ What You Get

The output dataset contains:

- `prompt`: Original prompt from input dataset
- `raw_output`: Full model response with channel markers
- `model`: Model ID used

### Understanding the Output

The raw output contains special channel markers:

- `<|channel|>analysis<|message|>` - Chain of thought reasoning
- `<|channel|>final<|message|>` - The actual response

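These markers make it straightforward to separate the reasoning from the answer in post-processing. A minimal sketch, assuming the markers appear literally as shown above (real outputs may also contain other special tokens you would strip):

```python
# Split a raw GPT OSS output into its channels using the documented markers.
# Minimal sketch; assumes the marker strings appear exactly as shown above.
def split_channels(raw_output: str) -> dict:
    channels = {}
    for part in raw_output.split("<|channel|>")[1:]:
        # Each part looks like "name<|message|>body..."
        name, _, body = part.partition("<|message|>")
        channels[name.strip()] = body.strip()
    return channels
```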
Example raw output structure:

```
<|channel|>analysis<|message|>
[Reasoning about the task...]
<|channel|>final<|message|>
[The actual response...]
```

### Test with Different Reasoning Levels

**High reasoning (most detailed):**

```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
  ... \
  --reasoning-effort high
```

**Low reasoning (fastest):**

```bash
hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
  ... \
  --reasoning-effort low
```

## πŸ–₯️ GPU Requirements

| Model                   | Memory Required | Recommended Flavor     |
| ----------------------- | --------------- | ---------------------- |
| **openai/gpt-oss-20b**  | ~40GB           | `l4x4` (4x24GB = 96GB) |
| **openai/gpt-oss-120b** | ~240GB          | `8xa100` (8x80GB)      |

**Note**: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, requiring more memory than the quantized size.

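A quick back-of-the-envelope check of these numbers (weights only; activations and KV cache add further overhead on top):

```python
# Weights-only memory for a model dequantized to bf16 (2 bytes per parameter).
def weight_memory_gb(num_params: float, bytes_per_param: float = 2) -> float:
    return num_params * bytes_per_param / 1e9


print(weight_memory_gb(20e9))   # 20B params in bf16 -> 40.0 GB
print(weight_memory_gb(120e9))  # 120B params in bf16 -> 240.0 GB
```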
## πŸ”§ Technical Details

### Why L4x4?

- The 20B model needs ~40GB VRAM when dequantized
- Single A10G (24GB) is insufficient
- L4x4 provides 96GB total memory across 4 GPUs
- Cost-effective compared to A100 instances

### Reasoning Effort

The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates:

- `low`: Quick responses with minimal reasoning
- `medium`: Balanced reasoning (default)
- `high`: Detailed step-by-step reasoning

### Sampling Parameters

OpenAI recommends `temperature=1.0` and `top_p=1.0` as defaults for GPT OSS models:

- These settings provide good diversity without compromising quality
- The model was trained to work well with these parameters
- Adjust only if you need specific behavior (e.g., lower temperature for more deterministic output)

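Collected as a dict, the recommended defaults look like this. The kwarg names follow the `transformers` `generate()` API, and `max_new_tokens` is shown at the medium-effort default; adapt as needed:

```python
# Recommended sampling defaults from above, as kwargs for model.generate().
gen_kwargs = {
    "temperature": 1.0,       # OpenAI-recommended default for GPT OSS
    "top_p": 1.0,             # OpenAI-recommended default for GPT OSS
    "do_sample": True,        # sample rather than decode greedily
    "max_new_tokens": 1024,   # the medium reasoning-effort default
}
```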
  ---

_Last tested: 2025-01-06 on HF Jobs with l4x4 flavor_
 