Math GPT-OSS Model (11 Experts)
Project: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/
Introduction
This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 11 experts per layer based on activation patterns from the AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset. We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for math tasks.
⚠️ Experimental Model: This is an experimental pruned model that may not work well - check the examples below to see if the outputs meet your needs before use.
This pruning approach reduces the model size while attempting to preserve performance on the target domain.
Model Architecture & Statistics
| Metric | Value |
|---|---|
| Base Model | openai/gpt-oss-20b |
| Architecture | Mixture-of-Experts Transformer |
| Total Parameters | ~8.4B (pruned from ~21B) |
| Original Experts per Layer | 32 |
| Pruned Experts per Layer | 11 |
| Layers | 24 |
| Top-k Routing | 4 |
| Context Length | 128K tokens |
| Attention Heads | 64 (query), 8 (key-value) |
| Residual Dimension | 2880 |
| Attention Pattern | Alternating dense & sliding-window (128 tokens) |
| Positional Encoding | RoPE (Rotary Position Embedding) |
| Normalization | RMSNorm |
| Precision | BF16 |
| License | Apache 2.0 |
| Specialization | Math |
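As a quick sanity check, the pruned expert count should be visible in the model config without downloading the full weights. A minimal sketch, assuming the standard GPT-OSS config field names in transformers (`num_local_experts`, `num_experts_per_tok`) apply to this checkpoint:

```python
from transformers import AutoConfig

# Fetch only the config; field names assume the standard GPT-OSS schema.
config = AutoConfig.from_pretrained(
    "AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts"
)
print(config.num_hidden_layers)    # expected: 24
print(config.num_local_experts)    # expected: 11 after pruning
print(config.num_experts_per_tok)  # expected: 4 (top-k routing unchanged)
```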
Pruning Methodology
What is Expert Pruning?
Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a small subset of experts is activated for each token. Expert pruning involves the following steps (sketched in code after this list):
- Analyzing Usage Patterns: Tracking which experts activate most frequently for specific tasks
- Removing Underutilized Experts: Discarding experts with low activation rates for the target domain
- Preserving Router Functionality: Maintaining the routing mechanism with fewer available experts
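The first step can be illustrated with a small, self-contained sketch. This is not the exact analysis code behind this model; it simply assumes you have the router's top-k expert indices per token (as recorded in the activations dataset) and tallies how often each expert is selected:

```python
import numpy as np

def expert_usage(topk_indices: np.ndarray, num_experts: int = 32) -> np.ndarray:
    """Count how often each expert appears in the router's top-k picks.

    topk_indices: integer array of shape (num_tokens, k) holding the expert
    ids selected for each token (k=4 for GPT-OSS-20B). Returns a
    (num_experts,) array of activation frequencies.
    """
    counts = np.bincount(topk_indices.ravel(), minlength=num_experts)
    return counts / counts.sum()

# Toy example: 6 tokens, top-4 routing over 8 experts.
toy = np.array([[0, 1, 2, 3], [0, 1, 4, 5], [0, 2, 3, 6],
                [0, 1, 2, 7], [0, 3, 4, 5], [0, 1, 2, 3]])
freq = expert_usage(toy, num_experts=8)
keep = np.argsort(freq)[::-1][:3]  # retain the 3 most-used experts
print(freq.round(3), keep)
```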
Our Approach
- Data-Driven Selection: Used activation patterns from math evaluation tasks
- Systematic Reduction: Reduced from 32 to 11 experts per layer
- No Retraining: Direct removal without additional training steps (a sketch of this removal step follows)
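Conceptually, the removal step keeps the selected experts' weights and the matching rows of each layer's router projection, so the routing softmax is taken only over surviving experts. A minimal sketch with toy shapes; the actual checkpoint layout may differ:

```python
import torch

def prune_layer(router_weight: torch.Tensor,
                expert_weights: list[torch.Tensor],
                keep: list[int]):
    """Keep only the selected experts in one MoE layer.

    router_weight: (num_experts, hidden) projection producing one routing
    logit per expert; slicing its rows restricts the router to the retained
    experts. expert_weights is illustrative (one tensor per expert).
    """
    new_router = router_weight[keep, :].clone()
    new_experts = [expert_weights[i] for i in keep]
    return new_router, new_experts

# Toy shapes: 8 experts, hidden size 16.
router = torch.randn(8, 16)
experts = [torch.randn(16, 32) for _ in range(8)]
router_kept, experts_kept = prune_layer(router, experts, keep=[0, 1, 2])
print(router_kept.shape)  # torch.Size([3, 16])
```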
Performance & Applications
Pruning Benefits
- Smaller Memory Footprint: retains 11/32 ≈ 34.4% of the original expert parameters
- Reduced Computational Load: fewer experts to score and keep resident during inference
- Focused Capabilities: Retains experts relevant to math tasks
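The ~8.4B total can be sanity-checked with back-of-envelope arithmetic. The figures below assume each expert is a gated FFN with hidden size 2880 and expert intermediate size 2880, and a ~20.9B-parameter base model; biases and router weights are ignored, so treat this as an estimate:

```python
# Rough parameter estimate for the pruned model (assumed dimensions).
hidden, intermediate = 2880, 2880
layers, experts_orig, experts_kept = 24, 32, 11

# Gated FFN per expert: fused gate+up projection plus down projection.
per_expert = hidden * (2 * intermediate) + intermediate * hidden  # ~24.9M

expert_params = layers * experts_orig * per_expert  # ~19.1B
non_expert = 20.9e9 - expert_params                 # attention, embeddings, ...
pruned_total = layers * experts_kept * per_expert + non_expert
print(f"{pruned_total / 1e9:.1f}B")  # ~8.4B
```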
Use Cases
- Speculative Decoding: Draft model for the full GPT-OSS-20B (see the sketch after this list)
- Resource-Constrained Deployment: Edge devices, mobile applications
- Research: Study expert specialization in MoE models
- Fine-tuning: Smaller base model for domain adaptation
Note: Performance may vary depending on how well the pruned experts match your specific use case.
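For the speculative-decoding use case, transformers supports assisted generation through the `assistant_model` argument to `generate`. A minimal sketch, assuming the pruned checkpoint shares the base model's tokenizer (it should, since only experts were removed); acceptance rates are untested, so verify the speedup on your workload:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained(
    "openai/gpt-oss-20b", torch_dtype=torch.bfloat16, device_map="auto"
)
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts",
    torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("Compute 17 * 23.", return_tensors="pt").to(target.device)
# Assisted generation: the pruned model drafts tokens, the full model verifies.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```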
Motivation & Expert Selection
This mathematics-focused model uses the experts that were most frequently activated on mathematical reasoning tasks from MMLU mathematics subjects and quantitative sections. These experts are associated with mathematical computation, proof strategies, and logical reasoning.
The expert selection process drew on our analysis of router activation patterns across multiple evaluation benchmarks:
- GPQA: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
- MMLU/MMLU-Pro: Comprehensive knowledge across 57+ subjects including science, medicine, law
- SORRY-Bench: Safety evaluation across harmful content categories
- Tulu3: Persona-driven instruction following with verifiable constraints
- Polyglot-or-Not: Multilingual factual completion tasks
By identifying experts that consistently activated on math tasks, we created a specialized model that aims to preserve domain expertise while reducing each layer from 32 to 11 experts.
Dataset & Analysis Foundation
This model is based on analysis from the GPT-OSS-20B MoE Expert Activations dataset available at: 🔗 https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations
The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
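A minimal sketch for exploring the activations dataset, assuming it loads with the standard datasets library; the record layout is not reproduced here, so check the dataset card for the actual schema:

```python
from datasets import load_dataset

# Load the router-activation records (splits and columns: see dataset card).
ds = load_dataset("AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations")
print(ds)  # shows splits, row counts, and the real column names

split = next(iter(ds.values()))
for row in split.select(range(3)):  # peek at a few records
    print(row)
```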
Pruning Methodology
Our approach involves:
- Activation Analysis: Comprehensive evaluation of expert usage patterns across domain-specific tasks
- Expert Ranking: Identification of the most frequently activated experts for target domains
- Systematic Pruning: Reduction from 32 to 11 experts while preserving router functionality
- Quality Validation: Testing to ensure maintained performance on target tasks
This is a direct pruning approach: no additional training was performed. The retained experts keep their original GPT-OSS-20B weights, so in-domain behavior should be largely preserved, but capabilities served by the removed experts may degrade.
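A lightweight way to approximate the quality-validation step is to compare average next-token loss on a few in-domain probes between the pruned and original checkpoints. This is a smoke test, not the authors' evaluation protocol; the probe texts below are arbitrary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def mean_loss(model_id: str, texts: list[str]) -> float:
    """Average causal-LM loss over a few probe texts (lower is better)."""
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype=torch.bfloat16, device_map="cpu"
    )
    losses = []
    with torch.no_grad():
        for text in texts:
            enc = tok(text, return_tensors="pt")
            out = model(**enc, labels=enc["input_ids"])
            losses.append(out.loss.item())
    return sum(losses) / len(losses)

probes = ["If 3x - 7 = 11, then x = 6.", "The derivative of x^2 is 2x."]
pruned = "AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts"
print(mean_loss(pruned, probes))  # compare against openai/gpt-oss-20b
```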
Usage
CPU Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts",
    torch_dtype=torch.bfloat16,
    device_map="cpu",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
)

# Ensure inputs are on the same device as the model
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id
)

# Decode only the generated part
input_length = inputs["input_ids"].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
Apple Silicon (MPS) Inference
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Check MPS availability and load the model
device = "mps" if torch.backends.mps.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts",
    torch_dtype=torch.float16,  # better MPS compatibility
    device_map=device,
    trust_remote_code=True,
    low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
)

# Move inputs to the model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}

# Use torch.no_grad for MPS stability
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        use_cache=True
    )

# Decode only the generated part
input_length = inputs["input_ids"].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)
```
GPU Inference
For CUDA GPU inference, the models work with minimal modifications from the CPU example above. Simply change:
device_map="auto" # Will automatically use GPU if available
torch_dtype=torch.bfloat16 # or torch.float16
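Put together, a minimal CUDA variant (same assumptions as the CPU example above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AmanPriyanshu/gpt-oss-8.4b-specialized-math-pruned-moe-only-11-experts"
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # or torch.float16 on older GPUs
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

messages = [{"role": "user", "content": "Solve this equation: 2x + 5 = 17. Show your work step by step."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
    return_dict=True,
    reasoning_effort="medium"
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.1, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```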
Model Examples
The following examples show the model's verbatim outputs (including raw reasoning-channel tokens) on math prompts. Pruning artifacts are visible in some responses, so review them before relying on the model:
Example 1
Question: Solve this equation: 2x + 5 = 17. Show your work step by step.
Response:
<|channel|>analysis<|message|>We need to solve 2x+5=17. Steps: subtract 5, divide by 2. Provide explanation.<|end|><|start|>assistant<|channel|>final<|message|>**Equation to solve**
\[
2x + 5 = 17
\]
---
### Step 1: Isolate the term containing the variable
Subtract the constant term from both sides of the equation.
\[
2x + 5 - 5 = 17 - 5
\]
\[
2x = 12
\]
---
### Step 2: Solve for \(x\)
Divide both sides by the coefficient of \(x\) (which is 2).
\[
\frac{2x}{2} = \frac{12}{2}
\]
\[
x = 6
\]
---
### Result
\[
\boxed{x = 6}
\]
**Check**:
\(2(6) + 5 = 12 + 5 = 17\), which matches the original equation.<|return|>
Example 2
Question: What is the Pythagorean theorem and how is it used?
Response:
<|channel|>analysis<|message|>Need explain theorem, usage.<|end|><|start|>assistant<|channel|>final<|message|>**Pyth‑theorem**
- **Statement**: In a triangle with sides *a*, *b*, *c* (where *c* is the longest side, the “hyp‑side”), the square of the longest side equals the sum of the squares of the other two sides.
\[
a^{2}+b^{2}=c^{2}
\]
- **Why**: It follows from the geometry of a triangle (area, angles) or from the algebraic definition of a distance in a 2‑dimensional space (the “Euclidean distance” in a vector space).
---
## How it is used
| **Field** | **Application** | **Key idea** |
|---|---|---|
| **Geometry** | Finding the lengths of a triangle (given two sides, find the third). | Rearsolve \(c=\sqrt{a^{2}+b^{2}}\). |
| **Engineering** | Calculing the height of a bridge, the width of a wall, or the length of a cable. | Use the formula to find the required dimensions. |
| **Computer graphics** | Determing the distance between two points in a 2‑D space. | Distance \(d=\sqrt{(x_2-x_1)^2+(y_2-y_1)^2}\). |
| **Physics** | In a 2‑D or 3‑D system, the conservation of energy or momentum (e.g., \(E=mc^{2}\)). | The “energy” is proportional to the square of the distance. |
| **Mat‑analysis** | Pro proving that the triangle inequality holds, or proving that the “Euclidean norm” is a metric. | Use the Pyth‑theorem as a basis for the metric’s properties. |
---
### Quick example
Supp you have a triangle with sides 3 cm and 4 cm.
Find the longest side:
\[
c=\sqrt{3^{2}+4^{2}}=\sqrt{9+16}=\sqrt{25}=5\text{ cm}
\]
So the triangle is 3‑4‑5, a classic “Pyth‑theorem” triangle.
---
**Bottom line**: The Pyth‑theorem is the “law of the squares” for a triangle (or a 2‑D space). It is used whenever
Example 3
Question: Calculate the area of a circle with radius 7 meters.
Response:
<|channel|>analysis<|message|>Need area = π r^2. r=7 => 49π. Provide numeric.<|end|><|start|>assistant<|channel|>final<|message|>The area of a circle is given by the formula
\[
A = \pi r^{2}
\]
where \(r\) is the radius.
For a radius of \(7\) meters:
\[
A = \pi (7\,\text{m})^{2}
= \pi \times 49\,\text{m}^{2}
= 49\pi \,\text{m}^{2}
\]
So the area is **\(49\pi\) square meters**, which is approximately:
\[
49\pi \approx 49 \times 3.14159 \approx 154.16 \text{ m}^{2}
\]<|return|>
Citation
If you use this model in your research, please cite:
```bibtex
@misc{priyanshu2025gptoss,
  title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
  author={Priyanshu, Aman and Vijay, Supriti},
  year={2025},
  howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
  note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
```
References & Resources
- Original Model: OpenAI GPT-OSS Model Card
- Model Hub: GPT-OSS-20B on Hugging Face (https://huggingface.co/openai/gpt-oss-20b)
- Expert Analysis Dataset: GPT-OSS-20B MoE Expert Activations (https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations)
- Project Page: GPT-OSS MoE Expert Fingerprinting (https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/)
- GitHub Repository: OpenAI GPT-OSS (https://github.com/openai/gpt-oss)