davanstrien (HF Staff) committed
Commit 50a87a3 · 1 Parent(s): 7e3ee8a

Update repository references to new uv-scripts org


- Changed all references from davanstrien/openai-oss to uv-scripts/openai-oss
- Updated HF Jobs commands in README
- Updated dataset card generation link
- Updated example commands in script docstring
- Now part of the uv-scripts organization

Files changed (2)
  1. README.md +6 -16
  2. gpt_oss_minimal.py +3 -3
README.md CHANGED
@@ -19,7 +19,7 @@ Successfully tested on HF Jobs with `l4x4` flavor (4x L4 GPUs = 96GB total memory)
 ```bash
 # Run on HF Jobs (tested and working)
 hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
-  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
+  https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
   --input-dataset davanstrien/haiku_dpo \
   --output-dataset username/gpt-oss-haiku \
   --prompt-column question \
@@ -80,7 +80,7 @@ Example raw output structure:
 
 ```bash
 hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
-  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
+  https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
   --input-dataset davanstrien/haiku_dpo \
   --output-dataset username/haiku-high \
   --prompt-column question \
@@ -92,7 +92,7 @@ hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
 
 ```bash
 hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
-  https://huggingface.co/datasets/davanstrien/openai-oss/raw/main/gpt_oss_minimal.py \
+  https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
   --input-dataset davanstrien/haiku_dpo \
   --output-dataset username/haiku-low \
   --prompt-column question \
@@ -102,22 +102,12 @@ hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
 
 ## 🖥️ GPU Requirements
 
-| Model                   | Memory Required | Recommended Flavor     |
-| ----------------------- | --------------- | ---------------------- |
-| **openai/gpt-oss-20b**  | ~40GB           | `l4x4` (4x24GB = 96GB) |
-| **openai/gpt-oss-120b** | ~240GB          | `8xa100` (8x80GB)      |
+| Model                  | Memory Required | Recommended Flavor     |
+| ---------------------- | --------------- | ---------------------- |
+| **openai/gpt-oss-20b** | ~40GB           | `l4x4` (4x24GB = 96GB) |
 
 **Note**: The 20B model automatically dequantizes from MXFP4 to bf16 on non-Hopper GPUs, requiring more memory than the quantized size.
 
-## 🔧 Technical Details
-
-### Why L4x4?
-
-- The 20B model needs ~40GB VRAM when dequantized
-- Single A10G (24GB) is insufficient
-- L4x4 provides 96GB total memory across 4 GPUs
-- Cost-effective compared to A100 instances
-
 ### Reasoning Effort
 
 The `reasoning_effort` parameter controls how much chain-of-thought reasoning the model generates:
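
The ~40GB figure kept in the GPU-requirements table above follows from simple arithmetic once the weights are dequantized to bf16. A minimal sketch (the ~20B parameter count and 2-byte width are assumptions based on the note, not stated in the diff):

```python
# Rough VRAM estimate for gpt-oss-20b after dequantization from MXFP4 to bf16.
# Assumptions: ~20e9 parameters, 2 bytes per parameter in bf16.
params = 20e9
bytes_per_param = 2  # bf16
weight_gb = params * bytes_per_param / 1e9
print(f"~{weight_gb:.0f} GB for weights alone")  # ~40 GB for weights alone
```

This weights-only estimate explains why a single 24GB card is too small and why the `l4x4` flavor (96GB aggregate) is recommended.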
gpt_oss_minimal.py CHANGED
@@ -23,8 +23,8 @@ Usage:
   --prompt-column question \
   --max-samples 2
 
-# HF Jobs execution (A10G for $1.50/hr)
-hf jobs uv run --flavor a10g-small \
+# HF Jobs execution (L4x4 for proper memory)
+hf jobs uv run --flavor l4x4 --secrets HF_TOKEN=hf_*** \
   https://huggingface.co/datasets/uv-scripts/openai-oss/raw/main/gpt_oss_minimal.py \
   --input-dataset davanstrien/haiku_dpo \
   --output-dataset username/haiku-raw \
@@ -85,7 +85,7 @@ Each example contains:
 
 To extract the final response, look for text after `<|channel|>final<|message|>` in the raw_output.
 
-Generated using [davanstrien/openai-oss](https://huggingface.co/datasets/davanstrien/openai-oss).
+Generated using [uv-scripts/openai-oss](https://huggingface.co/datasets/uv-scripts/openai-oss).
 """
 
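
The docstring above tells readers to look for text after `<|channel|>final<|message|>` in the raw output. A minimal sketch of that extraction, assuming a `<|...|>` special-token convention for the end of the final channel (the `<|return|>` token and the helper name are illustrative, not from the script):

```python
# Hedged sketch: pull the final-channel response out of a raw gpt-oss output.
def extract_final(raw_output: str) -> str:
    marker = "<|channel|>final<|message|>"
    if marker not in raw_output:
        return raw_output  # no final channel found; fall back to full text
    tail = raw_output.split(marker, 1)[1]
    # Stop at the next special token, if any (assumed <|...|> convention).
    return tail.split("<|", 1)[0].strip()

raw = (
    "<|channel|>analysis<|message|>thinking...\n"
    "<|channel|>final<|message|>A haiku.<|return|>"
)
print(extract_final(raw))  # → A haiku.
```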