GPT-OS3-V2-8B-Base

This model is still initialized from the pruned 8.4B/A3.6B GPT-OSS model, but it ships with a new chat template that adds a reasoning_effort role, so you can do:

>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("qingy2024/GPT-OS3-V2-8B-Base")
>>> print(tokenizer.decode(tokenizer.apply_chat_template([
...     {"role": "reasoning_effort", "content": "high"},
...     {"role": "user", "content": "Write a Python function to calculate factorial"},
...     {"role": "assistant", "content": "Here's a Python function to calculate factorial:"},
...     {"role": "user", "content": "Write a Python function to calculate factorial"},
... ])))

=== OUTPUT ===
<|start|>system<|message|>You are ChatGPT, a large language model trained by OpenAI.
Knowledge cutoff: 2024-06
Current date: 2025-08-30

Reasoning: high

# Valid channels: analysis, commentary, final. Channel must be included for every message.<|end|><|start|>user<|message|>Write a Python function to calculate factorial<|end|><|start|>assistant<|channel|>final<|message|>Here's a Python function to calculate factorial:<|end|><|start|>user<|message|>Write a Python function to calculate factorial<|end|>
