Instruction Following GPT-OSS Model (31 Experts)

Project: https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/

👥 Follow the Authors

Aman Priyanshu LinkedIn Twitter Website

Supriti Vijay LinkedIn Twitter Website

Introduction

This is a pruned variant of OpenAI's GPT-OSS-20B model, reduced to 31 experts per layer based on activation patterns from the AmanPriyanshu/GPT-OSS-20B MoE Expert Activations dataset. We analyzed router decisions across evaluation benchmarks to identify and retain experts most relevant for instruction following tasks.

⚠️ Experimental Model: This is an experimental pruned model that may not work well - check the examples below to see if the outputs meet your needs before use.

This pruning approach reduces the model size while attempting to preserve performance on the target domain.

Model Architecture & Statistics

| Metric | Value |
|--------|-------|
| Base Model | openai/gpt-oss-20b |
| Architecture | Mixture-of-Experts Transformer |
| Total Parameters | ~20.3B (pruned from 21B) |
| Original Experts per Layer | 32 |
| Pruned Experts per Layer | 31 |
| Layers | 24 |
| Top-k Routing | 4 |
| Context Length | 128K tokens |
| Attention Heads | 64 (Query), 8 (Key-Value) |
| Residual Dimension | 2880 |
| Attention Pattern | Alternating dense & sliding window (128 tokens) |
| Positional Encoding | RoPE (Rotary Position Embedding) |
| Normalization | RMSNorm |
| Precision | BF16 |
| License | Apache 2.0 |
| Specialization | Instruction Following |

Pruning Methodology

What is Expert Pruning?

Mixture-of-Experts models contain multiple specialized sub-networks (experts) per layer. During inference, only a subset of experts is activated for each token. Expert pruning involves the following steps (a conceptual sketch follows the list):

  1. Analyzing Usage Patterns: Tracking which experts activate most frequently for specific tasks
  2. Removing Underutilized Experts: Discarding experts with low activation rates for the target domain
  3. Preserving Router Functionality: Maintaining the routing mechanism with fewer available experts
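
A minimal sketch of steps 1–2, assuming router decisions have been logged as (layer, chosen-expert-indices) records; the record format and names here are illustrative, not the actual dataset schema:

from collections import Counter

NUM_LAYERS, KEEP = 24, 31

def select_experts(routing_records):
    """routing_records: iterable of (layer_idx, [top-k expert indices])."""
    counts = [Counter() for _ in range(NUM_LAYERS)]
    for layer_idx, expert_ids in routing_records:
        counts[layer_idx].update(expert_ids)
    # Keep the KEEP most frequently activated experts in each layer
    return [sorted(eid for eid, _ in counts[layer].most_common(KEEP))
            for layer in range(NUM_LAYERS)]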

Our Approach

  • Data-Driven Selection: Used activation patterns from instruction following evaluation tasks
  • Systematic Reduction: Reduced from 32 to 31 experts per layer
  • No Retraining: Direct removal without additional training steps (see the weight-level sketch below)
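
To make "direct removal" concrete, here is an illustrative sketch of pruning a single MoE layer, assuming the layer exposes a router nn.Linear that emits one logit per expert and an indexable collection of expert modules; the actual GPT-OSS module layout may differ:

import torch
import torch.nn as nn

def prune_layer(router: nn.Linear, experts: nn.ModuleList, keep_ids: list):
    idx = torch.tensor(sorted(keep_ids))
    # Rebuild the router so it only emits logits for surviving experts
    new_router = nn.Linear(router.in_features, len(idx), bias=router.bias is not None)
    with torch.no_grad():
        new_router.weight.copy_(router.weight[idx])
        if router.bias is not None:
            new_router.bias.copy_(router.bias[idx])
    # Drop the pruned experts' weights entirely
    new_experts = nn.ModuleList([experts[i] for i in idx.tolist()])
    return new_router, new_experts  # top-k routing itself is unchanged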

Performance & Applications

Pruning Benefits

  • Smaller Memory Footprint: retains 96.9% of the original expert parameters (31 of 32 experts per layer)
  • Reduced Computational Load: the router scores fewer candidate experts per token during inference
  • Focused Capabilities: retains the experts most relevant to instruction following tasks

Use Cases

  • Speculative Decoding: Draft model for the full GPT-OSS-20B (see the sketch below)
  • Resource-Constrained Deployment: Edge devices, mobile applications
  • Research: Study expert specialization in MoE models
  • Fine-tuning: Smaller base model for domain adaptation

Note: Performance may vary depending on how well the pruned experts match your specific use case.
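
A hedged sketch of the speculative-decoding use case via transformers' assisted generation, with this pruned model drafting for the full model. Loading both models requires substantial memory, and whether this pairing yields a speedup is not verified here:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openai/gpt-oss-20b")
target = AutoModelForCausalLM.from_pretrained("openai/gpt-oss-20b", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-20.3b-specialized-instruction_following-pruned-moe-only-31-experts",
    device_map="auto",
)

inputs = tokenizer("List three rules for writing clear emails.", return_tensors="pt").to(target.device)
# assistant_model enables assisted generation: the draft proposes tokens,
# which the full model then verifies
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))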

Motivation & Expert Selection

This instruction-following model leverages experts that excelled at constraint satisfaction tasks from the Tulu3 Persona Instruction Following dataset. These experts specialize in precise adherence to user specifications and formatting requirements.

The expert selection process utilized our comprehensive analysis of router activation patterns across multiple evaluation benchmarks:

  • GPQA: Graduate-level questions in physics, chemistry, biology (Diamond & Expert subsets)
  • MMLU/MMLU-Pro: Comprehensive knowledge across 57+ subjects including science, medicine, law
  • SORRY-Bench: Safety evaluation across harmful content categories
  • Tulu3: Persona-driven instruction following with verifiable constraints
  • Polyglot-or-Not: Multilingual factual completion tasks

By identifying experts that consistently activated for instruction following tasks, we created this specialized model that maintains domain expertise while reducing the expert count from 32 to 31 per layer.

Dataset & Analysis Foundation

This model is based on analysis from the GPT-OSS-20B MoE Expert Activations dataset available at: 🔗 https://huggingface.co/datasets/AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations

The dataset contains router activation patterns from OpenAI's GPT-OSS-20B model across diverse evaluation benchmarks, enabling the creation of these domain-optimized models through systematic expert pruning.
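
To inspect the underlying activations yourself, the dataset should load with the standard datasets API; the split name below is an assumption, so check the columns before relying on specific fields:

from datasets import load_dataset

# Split name is an assumption; see the dataset card for the actual splits
ds = load_dataset("AmanPriyanshu/GPT-OSS-20B-MoE-expert-activations", split="train")
print(ds.column_names)  # inspect the schema first
print(ds[0])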

Pruning Methodology

Our approach involves:

  1. Activation Analysis: Comprehensive evaluation of expert usage patterns across domain-specific tasks
  2. Expert Ranking: Identification of the most frequently activated experts for target domains
  3. Systematic Pruning: Reduction from 32 to 31 experts while preserving router functionality
  4. Quality Validation: Testing to ensure maintained performance on target tasks

This is a direct pruning approach - no additional training was performed. The model inherits all capabilities from the original GPT-OSS-20B with focused expert selection.
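
As a quick sanity check, you can confirm the pruned checkpoint advertises 31 experts per layer. The config field names below follow the transformers GPT-OSS convention and may differ:

from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    "AmanPriyanshu/gpt-oss-20.3b-specialized-instruction_following-pruned-moe-only-31-experts"
)
print(cfg.num_local_experts)    # expected: 31
print(cfg.num_experts_per_tok)  # top-k routing, expected: 4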

Usage

CPU Inference

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the specialized model on CPU
model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-20.3b-specialized-instruction_following-pruned-moe-only-31-experts", 
    torch_dtype=torch.bfloat16, 
    device_map="cpu", 
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.3b-specialized-instruction_following-pruned-moe-only-31-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."}
]

inputs = tokenizer.apply_chat_template(
    messages, 
    add_generation_prompt=True, 
    return_tensors="pt", 
    return_dict=True,
    reasoning_effort="medium"
)

# Ensure inputs are on the same device as model
inputs = {k: v.to(model.device) for k, v in inputs.items()}

outputs = model.generate(
    **inputs, 
    max_new_tokens=512,
    do_sample=True,
    temperature=0.1,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id
)

# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)

Apple Silicon (MPS) Inference

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Check MPS availability and load model
device = "mps" if torch.backends.mps.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-20.3b-specialized-instruction_following-pruned-moe-only-31-experts", 
    torch_dtype=torch.float16,  # Better MPS compatibility
    device_map=device, 
    trust_remote_code=True,
    low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained("AmanPriyanshu/gpt-oss-20.3b-specialized-instruction_following-pruned-moe-only-31-experts")

# Generate with the model
messages = [
    {"role": "user", "content": "Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing."}
]

inputs = tokenizer.apply_chat_template(
    messages, 
    add_generation_prompt=True, 
    return_tensors="pt", 
    return_dict=True,
    reasoning_effort="medium"
)

# Move inputs to model device
inputs = {k: v.to(model.device) if isinstance(v, torch.Tensor) else v for k, v in inputs.items()}

# Use torch.no_grad for MPS stability
with torch.no_grad():
    outputs = model.generate(
        **inputs, 
        max_new_tokens=512,
        do_sample=True,
        temperature=0.1,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
        use_cache=True
    )

# Decode only the generated part
input_length = inputs['input_ids'].shape[1]
response_tokens = outputs[0][input_length:]
response = tokenizer.decode(response_tokens, skip_special_tokens=True)
print(response)

GPU Inference

For CUDA GPU inference, the model works with minimal modifications from the CPU example above. Simply change:

device_map="auto"  # Will automatically use GPU if available
torch_dtype=torch.bfloat16  # or torch.float16
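
For completeness, the adapted loading call (assumes a CUDA GPU with enough free memory for the BF16 weights, roughly 40 GB):

from transformers import AutoModelForCausalLM
import torch

model = AutoModelForCausalLM.from_pretrained(
    "AmanPriyanshu/gpt-oss-20.3b-specialized-instruction_following-pruned-moe-only-31-experts",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # places weights on the GPU automatically
    trust_remote_code=True,
)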

Model Examples

The following examples demonstrate the model's performance on instruction following tasks:

Example 1

Question: Write a formal email to a professor requesting a meeting, including: subject line, greeting, purpose, proposed times, and professional closing.

Response:

<|channel|>analysis<|message|>Need formal email.<|end|><|start|>assistant<|channel|>final<|message|>**Subject:** Request for Meeting – Discussion of Research Project Progress  

Dear Professor [Last Name],

I hope this message finds you well. I am writing to request a brief meeting with you to discuss the current status of my research project on [brief topic], as well as to seek your guidance on the next steps and potential publication avenues.

Could we meet at one of the following times next week?  
- **Tuesday, Aug 20, 2025, 10:30 – 11:00 a.m.**  
- **Wednesday, Aug 21, 2025, 2:00 – 2:30 p.m.**  
- **Friday, Aug 23, 2025, 9:00 – 9:30 a.m.**  

If none of these slots are convenient, I would be grateful if you could suggest an alternative time that fits your schedule.

Thank you very much for your time and consideration. I look forward to our conversation.

Sincerely,

[Your Full Name]  
[Your Program / Department]  
[University Name]  
[Email Address] | [Phone Number]<|return|>

Example 2

Question: Create a numbered list of steps for making scrambled eggs, using exactly 6 steps.

Response:

<|channel|>analysis<|message|>Need 6 steps numbered.<|end|><|start|>assistant<|channel|>final<|message|>1. **Gather ingredients and tools** – You’ll need eggs, a pinch of salt, a splash of milk or cream (optional), butter or oil, a bowl, a whisk or fork, and a non‑stick skillet with a spatula.  
2. **Beat the eggs** – Crack the eggs into the bowl, add salt (and milk if using), then whisk or stir until the mixture is uniform and slightly frothy.  
3. **Heat the skillet** – Place the skillet over medium‑low heat and add butter or oil, letting it melt and coat the surface without browning.  
4. **Pour in the eggs** – Once the butter is foamy or the oil is shimmering, pour the beaten eggs into the skillet.  
5. **Cook and stir gently** – Let the eggs sit for a few seconds until they start to set at the edges, then use the spatula to gently push the cooked portions toward the center while tilting the pan to let uncooked liquid flow to the edges. Repeat until the eggs are softly set but still slightly runny.  
6. **Serve immediately** – Remove the skillet from heat, transfer the scrambled eggs to a plate, and enjoy while hot.<|return|>

Example 3

Question: Summarize the water cycle in exactly 50 words, no more, no less.

Response:

<|channel|>analysis<|message|>Need 50 words exactly. Let's craft: "Water evaporates from oceans, lakes, and soil, forming clouds. Condensation cools vapor into droplets, which fall as precipitation—rain, snow, sleet, or hail—returning to Earth's surface. Runoff channels water into rivers, streams, and eventually back to oceans, while infiltration replenishes groundwater. This continuous cycle sustains life and regulates climate." Count words. Let's count: Water(1) evaporates2 from3 oceans,4 lakes,5 and6 soil,7 forming8 clouds.9 Condensation10 cools11 vapor12 into13 droplets,14 which15 fall16 as17 precipitation—rain,18 snow,19 sleet,20 or21 hail—returning22 to23 Earth's24 surface.25 Runoff26 channels27 water28 into29 rivers,30 streams,31 and32 eventually33 back34 to35 oceans,36 while37 infiltration38 replenishes39 groundwater.40 This41 continuous42 cycle43 sustains44 life45 and46 regulates47 climate48. Need 2 more words: "for"49 "all"50.<|end|><|start|>assistant<|channel|>final<|message|>Water evaporates from oceans, lakes, and soil, forming clouds. Condensation cools vapor into droplets, which fall as precipitation—rain, snow, sleet, or hail—returning to Earth's surface. Runoff channels water into rivers, streams, and eventually back to oceans, while infiltration replenishes groundwater. This continuous cycle sustains life and regulates climate for all.<|return|>

Citation

If you use this model in your research, please cite:

@misc{priyanshu2025gptoss,
  title={{GPT-OSS MoE Expert Fingerprinting: Analyzing Expert Activation Patterns in Mixture of Experts Models}},
  author={Priyanshu, Aman and Vijay, Supriti},
  year={2025},
  howpublished={\url{https://amanpriyanshu.github.io/GPT-OSS-MoE-ExpertFingerprinting/}},
  note={Interactive analysis tool for expert activation patterns in MoE architectures}
}
