Qwen3-58B-Embiggened
Model Description
This is a SIGNIFICANTLY cool outcome. I widened Qwen3-32B. And it's still perfectly coherent.
This is an intermediate checkpoint in the process of expanding Qwen3-32B to match Qwen3-72B architecture dimensions. This model represents Stage 1 of a two-stage upscaling process, where the hidden dimensions and attention heads have been expanded, but the model still maintains 64 layers.
The code used to generate this model is here: stage1_v2.py
This model was made possible by excellent AMD mi300x compute generously provided by Hot Aisle.
As is, this model underperforms Qwen3-32B. The intent is to create a target suitable for distillation from Qwen3-235B.
Architecture Changes
Original Qwen3-32B
- Hidden size: 5,120
- Intermediate size: 25,600
- Attention heads: 64 (Q heads; already asymmetric relative to the 5,120 hidden size, see Key Insights)
- KV heads: 8
- Layers: 64
Stage 1 Output (This Model)
- Hidden size: 8,192 (up from 5,120)
- Intermediate size: 29,568 (up from 25,600)
- Attention heads: 64
- KV heads: 8
- Layers: 64 (unchanged)
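These dimensions are also where the "58B" in the model name comes from. A rough parameter count from the values above (a back-of-the-envelope sketch; it ignores norm weights and assumes untied input/output embeddings, matching the separate lm_head and embed_tokens entries listed under Technical Details):
# Approximate parameter count implied by the Stage 1 dimensions
hidden, inter, kv_dim, vocab, layers = 8192, 29568, 1024, 151936, 64
attn = 2 * hidden * hidden + 2 * hidden * kv_dim   # q_proj + o_proj, k_proj + v_proj
mlp = 3 * hidden * inter                            # gate_proj, up_proj, down_proj
embeddings = 2 * vocab * hidden                     # embed_tokens + lm_head (untied)
total = layers * (attn + mlp) + embeddings
print(f"~{total / 1e9:.1f}B parameters")            # ~58.7B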
Methodology
This model was created using structure-aware linear interpolation with the following techniques:
Layer-Dependent Interpolation Weights
- Early layers (0-25%): Conservative interpolation (weight=0.3)
- Middle layers (25-75%): Balanced interpolation (weight=0.5)
- Late layers (75-100%): Aggressive interpolation (weight=0.7)
Structured Noise Addition
- Small amounts of structured noise (0.5%) added to break symmetry
- Reduced noise in central components to preserve important features
Norm Preservation
- Original tensor norms preserved during interpolation
- Critical for maintaining stable activations
Component-Specific Handling
- Embeddings: Conservative interpolation (0.3)
- Attention projections: Proper handling of GQA architecture
- MLP layers: More aggressive interpolation with layer-dependent weights
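To make the recipe concrete, here is a minimal sketch of the kind of expansion step described above. It is not the actual stage1_v2.py code: the resize-plus-copy blend, the noise model, and the helper names are illustrative assumptions, but it shows layer-dependent weights, symmetry-breaking noise, and norm preservation working together.
import torch
import torch.nn.functional as F

def expand_weight(w_old, out_new, in_new, layer_frac, noise_scale=0.005):
    """Illustrative expansion of a 2-D weight to (out_new, in_new)."""
    # Layer-dependent interpolation weight: conservative early, aggressive late
    if layer_frac < 0.25:
        alpha = 0.3
    elif layer_frac < 0.75:
        alpha = 0.5
    else:
        alpha = 0.7
    # Candidate 1: smooth resize of the original weight to the new shape
    resized = F.interpolate(w_old[None, None].float(), size=(out_new, in_new),
                            mode="bilinear", align_corners=False)[0, 0]
    # Candidate 2: original weight copied into the top-left corner, zeros elsewhere
    padded = torch.zeros(out_new, in_new)
    padded[:w_old.shape[0], :w_old.shape[1]] = w_old.float()
    # Blend the two candidates, then add small noise to break symmetry
    w_new = alpha * resized + (1.0 - alpha) * padded
    w_new = w_new + noise_scale * w_new.std() * torch.randn_like(w_new)
    # Norm preservation: rescale so the Frobenius norm matches the original tensor
    w_new = w_new * (w_old.float().norm() / (w_new.norm() + 1e-8))
    return w_new.to(w_old.dtype)

# Example: expand a mid-network q_proj from [8192, 5120] to [8192, 8192]
q_proj = torch.randn(8192, 5120, dtype=torch.bfloat16)
print(expand_weight(q_proj, 8192, 8192, layer_frac=0.5).shape)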
Technical Details
Dimension Transformations
lm_head: [151936, 5120] → [151936, 8192]
embed_tokens: [151936, 5120] → [151936, 8192]
q_proj: [8192, 5120] → [8192, 8192]
k_proj: [1024, 5120] → [1024, 8192]
v_proj: [1024, 5120] → [1024, 8192]
o_proj: [5120, 8192] → [8192, 8192]
gate_proj: [25600, 5120] → [29568, 8192]
up_proj: [25600, 5120] → [29568, 8192]
down_proj: [5120, 25600] → [8192, 29568]
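If you want to confirm the expansion locally, the new dimensions are visible directly in the model config. A quick sanity check (field names follow the standard Hugging Face Qwen3 config; the expected values are the ones listed above):
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("cognitivecomputations/Qwen3-58B-Embiggened")
assert cfg.hidden_size == 8192
assert cfg.intermediate_size == 29568
assert cfg.num_attention_heads == 64
assert cfg.num_key_value_heads == 8
assert cfg.num_hidden_layers == 64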
Key Insights
- Qwen3-32B already uses asymmetric attention with 64 Q heads despite 5120 hidden size
- Group Query Attention (GQA) maintained with 8 KV heads
- All interpolations preserve the mathematical properties of the original weights
Evaluation Results
To answer the question "is it smarter or dumber than the original?", the model was evaluated on the IFEval (Instruction Following Evaluation) benchmark and compared directly against its base model, Qwen/Qwen3-32B.
IFEval: Instruction Following Comparison
Evaluation was performed using the lm-evaluation-harness in a 0-shot setting. The results show that while the raw interpolated model is not yet as capable as the highly polished base model, it has successfully retained a significant portion of its instruction-following ability.
Metric (Higher is Better) | Base Model (Qwen3-32B) | Embiggened Model (This Model) | Performance Change |
---|---|---|---|
Prompt-level Strict Accuracy | 81.25% | 68.75% | -12.5 pts |
Instruction-level Strict Accuracy | 87.50% | 75.00% | -12.5 pts |
Prompt-level Loose Accuracy | 87.50% | 68.75% | -18.75 pts |
Instruction-level Loose Accuracy | 91.67% | 75.00% | -16.67 pts |
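For reference, a comparable run can be launched with the lm-evaluation-harness CLI. The exact flags used for the numbers above are not documented, so treat this invocation as an approximation rather than the recorded command:
lm_eval --model hf \
  --model_args pretrained=cognitivecomputations/Qwen3-58B-Embiggened,dtype=bfloat16 \
  --tasks ifeval \
  --num_fewshot 0 \
  --batch_size auto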
Analysis of Results
- Expected Performance Drop: The drop in performance is an expected and normal consequence of the architectural expansion. The interpolation process, while structure-aware, cannot perfectly preserve the intricate balance of a fine-tuned model's weights.
- Success in Retaining Capability: The key takeaway is not the performance drop, but how much capability the model retained. Achieving ~85% of the original's strict accuracy (68.75% vs 81.25%) without any post-expansion training is a strong indicator of a successful architectural merge. The model remained coherent and functional.
- Strong Foundation for Fine-Tuning: These results establish a powerful baseline. The model is now a larger, coherent architecture that serves as an excellent starting point for further fine-tuning, which would likely recover and ultimately exceed the performance of the original 32B model.
Usage
Basic Usage with Thinking Mode
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "cognitivecomputations/Qwen3-58B-Embiggened"
# Load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# Prepare the model input
prompt = "How many r's are in strawberry?"
messages = [
{"role": "user", "content": prompt}
]
# Apply chat template with thinking mode enabled
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Enable thinking mode (default)
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate response
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768,
temperature=0.6, # Recommended for thinking mode
top_p=0.95,
top_k=20,
min_p=0
)
# Parse thinking content and final response
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
try:
# Find </think> token (151668)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("Thinking content:", thinking_content)
print("Final answer:", content)
Non-Thinking Mode (Efficient General Dialogue)
# Same setup as above...
# Apply chat template with thinking mode disabled
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Disable thinking for efficiency
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# Generate with non-thinking parameters
outputs = model.generate(
**model_inputs,
max_new_tokens=2048,
temperature=0.7, # Recommended for non-thinking mode
top_p=0.8,
top_k=20,
min_p=0
)
Advanced: Dynamic Mode Switching
# Use /think and /no_think tags to control behavior
messages = [
{"role": "user", "content": "Explain quantum computing /no_think"}, # Quick response
{"role": "assistant", "content": "Quantum computing uses quantum bits..."},
{"role": "user", "content": "How does superposition work mathematically? /think"} # Detailed reasoning
]
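These tags are a soft switch that only takes effect while enable_thinking=True in the chat template; generation itself is unchanged from the thinking-mode example above (the call below reuses the tokenizer, model, and sampling settings defined there):
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(**model_inputs, max_new_tokens=32768,
                               temperature=0.6, top_p=0.95, top_k=20, min_p=0)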
vLLM Deployment with Reasoning Support
# Start server with reasoning parser
# vllm serve cognitivecomputations/Qwen3-58B-Embiggened --enable-reasoning --reasoning-parser deepseek_r1
from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="dummy")
# Use with thinking mode
response = client.chat.completions.create(
model="cognitivecomputations/Qwen3-58B-Embiggened",
messages=[{"role": "user", "content": "Solve: What is 15% of 250?"}],
extra_body={"enable_thinking": True}
)
Advanced Usage with Quantization
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
# 4-bit quantization for reduced memory usage
bnb_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_compute_dtype=torch.bfloat16,
bnb_4bit_use_double_quant=True,
)
model = AutoModelForCausalLM.from_pretrained(
"cognitivecomputations/Qwen3-58B-Embiggened",
quantization_config=bnb_config,
device_map="auto"
)
Example Outputs with Thinking
Prompt: "How many r's are in strawberry?"
Thinking: Let me count the r's in "strawberry". S-t-r-a-w-b-e-r-r-y.
Going through each letter: s(no), t(no), r(yes, 1), a(no), w(no),
b(no), e(no), r(yes, 2), r(yes, 3), y(no).
Final answer: There are 3 r's in the word "strawberry".
Prompt: "What is the capital of France, and what is it famous for?"
Final answer (no thinking): Paris is the capital of France. It's famous for
the Eiffel Tower, the Louvre Museum, Notre-Dame Cathedral, and its rich
cultural heritage, fashion, and cuisine.
Hardware Requirements
- Minimum VRAM: ~130GB (for full model in bf16)
- Recommended: Multiple GPUs with at least 160GB total VRAM
- Tested on: 8x AMD MI300X GPUs
Limitations
- This is an intermediate checkpoint; the layer count (64) does not yet match the Qwen3-72B target architecture (80 layers, see Next Steps)
- Not fine-tuned or aligned - raw interpolated weights only
- May exhibit some instabilities due to interpolation artifacts
- Performance characteristics undefined without further training
Next Steps
To complete the expansion to Qwen3-72B architecture:
- Use Stage 2 processing to expand from 64 to 80 layers
- Consider fine-tuning on high-quality datasets
- Apply alignment techniques if needed for specific use cases
Citation
If you use this work, please cite:
@misc{qwen3-embiggening-2025,
title={Qwen3 32B to 72B Architecture Expansion via Structure-Aware Interpolation},
author={[Your Name]},
year={2025},
howpublished={\url{https://github.com/yourusername/qwen3-embiggening}}
}
License
This model inherits the license from the original Qwen3-32B model. Please refer to the original model card for licensing information.
Acknowledgments
- Original Qwen3-32B model by Alibaba Cloud
- Interpolation techniques inspired by model merging research
- "Embiggened" - A perfectly cromulent word
Original Model Card
Qwen3-32B
Qwen3 Highlights
Qwen3 is the latest generation of large language models in the Qwen series, offering a comprehensive suite of dense and mixture-of-experts (MoE) models. Built upon extensive training, Qwen3 delivers groundbreaking advancements in reasoning, instruction-following, agent capabilities, and multilingual support, with the following key features:
- Unique support for seamless switching between thinking mode (for complex logical reasoning, math, and coding) and non-thinking mode (for efficient, general-purpose dialogue) within a single model, ensuring optimal performance across various scenarios.
- Significant enhancement of its reasoning capabilities, surpassing previous QwQ (in thinking mode) and Qwen2.5 instruct models (in non-thinking mode) on mathematics, code generation, and commonsense logical reasoning.
- Superior human preference alignment, excelling in creative writing, role-playing, multi-turn dialogues, and instruction following, to deliver a more natural, engaging, and immersive conversational experience.
- Expertise in agent capabilities, enabling precise integration with external tools in both thinking and non-thinking modes and achieving leading performance among open-source models in complex agent-based tasks.
- Support for 100+ languages and dialects with strong capabilities for multilingual instruction following and translation.
Model Overview
Qwen3-32B has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Number of Parameters: 32.8B
- Number of Parameters (Non-Embedding): 31.2B
- Number of Layers: 64
- Number of Attention Heads (GQA): 64 for Q and 8 for KV
- Context Length: 32,768 natively and 131,072 tokens with YaRN.
For more details, including benchmark evaluation, hardware requirements, and inference performance, please refer to our blog, GitHub, and Documentation.
Quickstart
The code for Qwen3 has been included in the latest Hugging Face transformers, and we advise you to use the latest version of transformers.
With transformers<4.51.0, you will encounter the following error:
KeyError: 'qwen3'
The following code snippet illustrates how to use the model to generate content based on given inputs.
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen3-32B"
# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
# prepare the model input
prompt = "Give me a short introduction to large language model."
messages = [
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # Switches between thinking and non-thinking modes. Default is True.
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
# conduct text completion
generated_ids = model.generate(
**model_inputs,
max_new_tokens=32768
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()
# parsing thinking content
try:
# rindex finding 151668 (</think>)
index = len(output_ids) - output_ids[::-1].index(151668)
except ValueError:
index = 0
thinking_content = tokenizer.decode(output_ids[:index], skip_special_tokens=True).strip("\n")
content = tokenizer.decode(output_ids[index:], skip_special_tokens=True).strip("\n")
print("thinking content:", thinking_content)
print("content:", content)
For deployment, you can use sglang>=0.4.6.post1 or vllm>=0.8.5 to create an OpenAI-compatible API endpoint:
- SGLang:
python -m sglang.launch_server --model-path Qwen/Qwen3-32B --reasoning-parser qwen3
- vLLM:
vllm serve Qwen/Qwen3-32B --enable-reasoning --reasoning-parser deepseek_r1
For local use, applications such as Ollama, LMStudio, MLX-LM, llama.cpp, and KTransformers also support Qwen3.
Switching Between Thinking and Non-Thinking Mode
The enable_thinking switch is also available in APIs created by SGLang and vLLM. Please refer to our documentation for SGLang and vLLM users.
enable_thinking=True
By default, Qwen3 has thinking capabilities enabled, similar to QwQ-32B. This means the model will use its reasoning abilities to enhance the quality of generated responses. For example, when explicitly setting enable_thinking=True or leaving it as the default value in tokenizer.apply_chat_template, the model will engage its thinking mode.
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=True # True is the default value for enable_thinking
)
In this mode, the model will generate think content wrapped in a <think>...</think> block, followed by the final response.
For thinking mode, use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0 (the default setting in generation_config.json). DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions. For more detailed guidance, please refer to the Best Practices section.
enable_thinking=False
We provide a hard switch to strictly disable the model's thinking behavior, aligning its functionality with the previous Qwen2.5-Instruct models. This mode is particularly useful in scenarios where disabling thinking is essential for enhancing efficiency.
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True,
enable_thinking=False # Setting enable_thinking=False disables thinking mode
)
In this mode, the model will not generate any think content and will not include a <think>...</think> block.
For non-thinking mode, we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0. For more detailed guidance, please refer to the Best Practices section.
Advanced Usage: Switching Between Thinking and Non-Thinking Modes via User Input
We provide a soft switch mechanism that allows users to dynamically control the model's behavior when enable_thinking=True. Specifically, you can add /think and /no_think to user prompts or system messages to switch the model's thinking mode from turn to turn. The model will follow the most recent instruction in multi-turn conversations.
Here is an example of a multi-turn conversation:
from transformers import AutoModelForCausalLM, AutoTokenizer
class QwenChatbot:
def __init__(self, model_name="Qwen/Qwen3-32B"):
self.tokenizer = AutoTokenizer.from_pretrained(model_name)
self.model = AutoModelForCausalLM.from_pretrained(model_name)
self.history = []
def generate_response(self, user_input):
messages = self.history + [{"role": "user", "content": user_input}]
text = self.tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
inputs = self.tokenizer(text, return_tensors="pt")
response_ids = self.model.generate(**inputs, max_new_tokens=32768)[0][len(inputs.input_ids[0]):].tolist()
response = self.tokenizer.decode(response_ids, skip_special_tokens=True)
# Update history
self.history.append({"role": "user", "content": user_input})
self.history.append({"role": "assistant", "content": response})
return response
# Example Usage
if __name__ == "__main__":
chatbot = QwenChatbot()
# First input (without /think or /no_think tags, thinking mode is enabled by default)
user_input_1 = "How many r's in strawberries?"
print(f"User: {user_input_1}")
response_1 = chatbot.generate_response(user_input_1)
print(f"Bot: {response_1}")
print("----------------------")
# Second input with /no_think
user_input_2 = "Then, how many r's in blueberries? /no_think"
print(f"User: {user_input_2}")
response_2 = chatbot.generate_response(user_input_2)
print(f"Bot: {response_2}")
print("----------------------")
# Third input with /think
user_input_3 = "Really? /think"
print(f"User: {user_input_3}")
response_3 = chatbot.generate_response(user_input_3)
print(f"Bot: {response_3}")
For API compatibility, when enable_thinking=True, regardless of whether the user uses /think or /no_think, the model will always output a block wrapped in <think>...</think>. However, the content inside this block may be empty if thinking is disabled. When enable_thinking=False, the soft switches are not valid. Regardless of any /think or /no_think tags input by the user, the model will not generate think content and will not include a <think>...</think> block.
Agentic Use
Qwen3 excels in tool-calling capabilities. We recommend using Qwen-Agent to make the best use of the agentic abilities of Qwen3. Qwen-Agent encapsulates tool-calling templates and tool-calling parsers internally, greatly reducing coding complexity.
To define the available tools, you can use the MCP configuration file, use the integrated tool of Qwen-Agent, or integrate other tools by yourself.
from qwen_agent.agents import Assistant
# Define LLM
llm_cfg = {
'model': 'Qwen3-32B',
# Use the endpoint provided by Alibaba Model Studio:
# 'model_type': 'qwen_dashscope',
# 'api_key': os.getenv('DASHSCOPE_API_KEY'),
# Use a custom endpoint compatible with OpenAI API:
'model_server': 'http://localhost:8000/v1', # api_base
'api_key': 'EMPTY',
# Other parameters:
# 'generate_cfg': {
# # Add: When the response content is `<think>this is the thought</think>this is the answer;
# # Do not add: When the response has been separated by reasoning_content and content.
# 'thought_in_content': True,
# },
}
# Define Tools
tools = [
{'mcpServers': { # You can specify the MCP configuration file
'time': {
'command': 'uvx',
'args': ['mcp-server-time', '--local-timezone=Asia/Shanghai']
},
"fetch": {
"command": "uvx",
"args": ["mcp-server-fetch"]
}
}
},
'code_interpreter', # Built-in tools
]
# Define Agent
bot = Assistant(llm=llm_cfg, function_list=tools)
# Streaming generation
messages = [{'role': 'user', 'content': 'https://qwenlm.github.io/blog/ Introduce the latest developments of Qwen'}]
for responses in bot.run(messages=messages):
pass
print(responses)
Processing Long Texts
Qwen3 natively supports context lengths of up to 32,768 tokens. For conversations where the total length (including both input and output) significantly exceeds this limit, we recommend using RoPE scaling techniques to handle long texts effectively. We have validated the model's performance on context lengths of up to 131,072 tokens using the YaRN method.
YaRN is currently supported by several inference frameworks, e.g., transformers and llama.cpp for local use, and vllm and sglang for deployment. In general, there are two approaches to enabling YaRN for supported frameworks:
- Modifying the model files: In the config.json file, add the rope_scaling fields:
{ ..., "rope_scaling": { "rope_type": "yarn", "factor": 4.0, "original_max_position_embeddings": 32768 } }
For llama.cpp, you need to regenerate the GGUF file after the modification.
- Passing command line arguments:
For vllm, you can use:
vllm serve ... --rope-scaling '{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}' --max-model-len 131072
For sglang, you can use:
python -m sglang.launch_server ... --json-model-override-args '{"rope_scaling":{"rope_type":"yarn","factor":4.0,"original_max_position_embeddings":32768}}'
For llama-server from llama.cpp, you can use:
llama-server ... --rope-scaling yarn --rope-scale 4 --yarn-orig-ctx 32768
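If you would rather not edit config.json on disk, the same rope_scaling override can be applied from Python when loading with transformers. This is a minimal sketch, assuming you pass a modified config object to from_pretrained; the values mirror the JSON above:
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("Qwen/Qwen3-32B")
config.rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
config.max_position_embeddings = 131072
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B", config=config, torch_dtype="auto", device_map="auto"
)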
If you encounter the following warning:
Unrecognized keys in `rope_scaling` for 'rope_type'='yarn': {'original_max_position_embeddings'}
please upgrade to transformers>=4.51.0.
All the notable open-source frameworks implement static YaRN, which means the scaling factor remains constant regardless of input length, potentially impacting performance on shorter texts. We advise adding the rope_scaling configuration only when processing long contexts is required. It is also recommended to modify the factor as needed. For example, if the typical context length for your application is 65,536 tokens, it would be better to set the factor to 2.0.
The default max_position_embeddings in config.json is set to 40,960. This allocation includes reserving 32,768 tokens for outputs and 8,192 tokens for typical prompts, which is sufficient for most scenarios involving short text processing. If the average context length does not exceed 32,768 tokens, we do not recommend enabling YaRN in this scenario, as it may potentially degrade model performance.
The endpoint provided by Alibaba Model Studio supports dynamic YaRN by default and no extra configuration is needed.
Best Practices
To achieve optimal performance, we recommend the following settings:
Sampling Parameters:
- For thinking mode (enable_thinking=True), use Temperature=0.6, TopP=0.95, TopK=20, and MinP=0. DO NOT use greedy decoding, as it can lead to performance degradation and endless repetitions.
- For non-thinking mode (enable_thinking=False), we suggest using Temperature=0.7, TopP=0.8, TopK=20, and MinP=0.
- For supported frameworks, you can adjust the presence_penalty parameter between 0 and 2 to reduce endless repetitions. However, using a higher value may occasionally result in language mixing and a slight decrease in model performance.
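In transformers, these settings map directly onto GenerationConfig / generate() arguments. A small sketch of both presets (the parameter names are the standard transformers ones; "model" and "model_inputs" are assumed to be set up as in the Quickstart above):
from transformers import GenerationConfig

# Thinking mode: always sample, never greedy-decode
thinking = GenerationConfig(do_sample=True, temperature=0.6, top_p=0.95, top_k=20, min_p=0.0)
# Non-thinking mode
non_thinking = GenerationConfig(do_sample=True, temperature=0.7, top_p=0.8, top_k=20, min_p=0.0)

generated_ids = model.generate(**model_inputs, generation_config=thinking, max_new_tokens=32768)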
Adequate Output Length: We recommend using an output length of 32,768 tokens for most queries. For benchmarking on highly complex problems, such as those found in math and programming competitions, we suggest setting the max output length to 38,912 tokens. This provides the model with sufficient space to generate detailed and comprehensive responses, thereby enhancing its overall performance.
Standardize Output Format: We recommend using prompts to standardize model outputs when benchmarking.
- Math Problems: Include "Please reason step by step, and put your final answer within \boxed{}." in the prompt.
- Multiple-Choice Questions: Add the following JSON structure to the prompt to standardize responses: "Please show your choice in the answer field with only the choice letter, e.g., "answer": "C"."
No Thinking Content in History: In multi-turn conversations, the historical model output should only include the final output part and does not need to include the thinking content. This is implemented in the provided Jinja2 chat template. However, for frameworks that do not directly use the Jinja2 chat template, it is up to the developers to ensure that this best practice is followed.
Citation
If you find our work helpful, feel free to give us a cite.
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}