# ZeroXClem-Qwen3-4B-NexusPrime

## Overview
ZeroXClem-Qwen3-4B-NexusPrime is a high-performance, multi-domain AI model built with the Model Stock merge method via MergeKit. It fuses several fine-tuned Qwen3-4B models to deliver strong reasoning, coding, and multi-step problem-solving capabilities, optimized for structured outputs and technical applications.
> **Note:** This model works best with the default Qwen3 chat template; see the Usage section below for an Ollama Modelfile that works out of the box.
## Merge Configuration

- **Merge Method:** `model_stock`
- **Base Model:** `prithivMLmods/Cetus-Qwen3_4B-GeneralThought`
- **Dtype:** `bfloat16`
- **Tokenizer Source:** `prithivMLmods/Cetus-Qwen3_4B-GeneralThought`
### Configuration File

```yaml
name: ZeroXClem-Qwen3-4B-NexusPrime
base_model: prithivMLmods/Cetus-Qwen3_4B-GeneralThought
dtype: bfloat16
merge_method: model_stock
models:
  - model: prithivMLmods/Tureis-Qwen3_QWQ-4B-Exp
  - model: prithivMLmods/Canum-Qwen3_R1-4B-iCoT
  - model: prithivMLmods/Bootes-Qwen3_Coder-Reasoning
  - model: prithivMLmods/Segue-Qwen3_DeepScaleR-Preview
tokenizer_source: prithivMLmods/Cetus-Qwen3_4B-GeneralThought
```
## Models Merged
The following models contribute to this fusion, each bringing unique strengths:
### Tureis-Qwen3_QWQ-4B-Exp

- **Precision Reasoning:** Fine-tuned for high-fidelity symbolic reasoning, step-by-step math, and logic tasks.
- **Lightweight Code Understanding:** Efficiently processes Python, C++, and other languages for concise logic-based tasks.
- **Multilingual:** Supports over 20 languages, making it ideal for global technical and educational use.

[Model Card](https://huggingface.co/prithivMLmods/Tureis-Qwen3_QWQ-4B-Exp)

### Canum-Qwen3_R1-4B-iCoT

- **Internal Chain-of-Thought (iCoT):** Designed for long-form mathematical reasoning and multi-stage problem decomposition.
- **Granular Instruction Following:** Provides highly structured outputs for complex reasoning workflows.
- **Long-Form Logic:** Excels in proofs, calculus, and multivariable equations.

[Model Card](https://huggingface.co/prithivMLmods/Canum-Qwen3_R1-4B-iCoT)

### Cetus-Qwen3_4B-GeneralThought (Base Model)

- **Broad-Spectrum Reasoning:** Trained on GeneralThought-430K for general-purpose tasks across STEM, humanities, and technical question answering.
- **Multi-Domain Task Versatility:** Handles code, logic, and structured data outputs effectively.
- **Efficient and Scalable:** Optimized for consumer-grade GPUs and scalable cloud services.

[Model Card](https://huggingface.co/prithivMLmods/Cetus-Qwen3_4B-GeneralThought)

### Bootes-Qwen3_Coder-Reasoning

- **Code Expertise:** Fine-tuned on CodeAlpaca_20K for technical coding, reasoning, and instruction-following tasks.
- **Cross-Language Code Understanding:** Supports Python, JavaScript, C++, and more.
- **Developer-Focused:** Optimized for structured outputs like JSON, Markdown, and YAML.

[Model Card](https://huggingface.co/prithivMLmods/Bootes-Qwen3_Coder-Reasoning)

### Segue-Qwen3_DeepScaleR-Preview

- **Mathematical Mastery:** Trained on DeepScaleR-Preview for advanced symbolic, mathematical, and logical tasks.
- **High-Accuracy Inference:** Designed for complex problem-solving with an efficient 4B architecture.
- **Technical Documentation:** Outputs well-formatted results in LaTeX, JSON, and Markdown.

[Model Card](https://huggingface.co/prithivMLmods/Segue-Qwen3_DeepScaleR-Preview)
## Features & Highlights

- **Advanced Symbolic Reasoning:** Combines the precision of QWQ and iCoT for complex, multi-step mathematical solutions.
- **Efficient Code Generation:** Handles multiple programming languages and logic-intensive tasks.
- **Multi-Domain Flexibility:** Seamlessly transitions between STEM, technical documentation, and structured reasoning.
- **Multilingual Support:** Trained on diverse datasets for cross-lingual comprehension and technical translation.
- **Optimized for Scalability:** Ideal for mid-tier GPUs, making it accessible for small teams and large-scale deployments.
## Ollama Instructions

To quickly run this model using Ollama, use the following command:

```shell
ollama run hf.co/ZeroXClem/Qwen3-4B-NexusPrime-Q4_K_M-GGUF
```

This command downloads the pre-quantized GGUF version of the model and runs it locally, making it easy to experiment without extensive configuration.
For optimal inference, use the following Ollama Modelfile. Create it as a file called `Modelfile`.

### Ollama Modelfile

```
FROM hf.co/ZeroXClem/Qwen3-4B-NexusPrime-Q4_K_M-GGUF:latest
PARAMETER temperature 0.6
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.05
PARAMETER top_k 20
TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}
# Tools
You may call one or more functions to assist with the user query.
You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>
For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}"""
SYSTEM """# System Prompt: Universal Coder and DevOps Expert
You are an advanced AI assistant specializing in coding and DevOps. Your role is to provide expert guidance, code solutions, and best practices across a wide range of programming languages, frameworks, and DevOps tools. Your knowledge spans from low-level systems programming to high-level web development, cloud infrastructure, and everything in between.
## Key responsibilities:
1. Code analysis and optimization
2. Debugging and troubleshooting
3. Architecture design and system planning
4. Version control best practices (Git)
5. Building from source, extracting binaries, and building packages & executables, including bash scripts
6. Security implementation and auditing
7. Performance review and code analysis, with practical suggestions in fully functioning syntax
Be VERY selective in choosing how to respond based on the user query. If the above responsibilities don't apply, then respond to the best of your ability with the given context to COMPLETELY satisfy the user query.
### Guidance
When assisting users:
- Provide clear, concise, and well-commented code examples
- Explain complex concepts in simple terms
- Offer multiple solutions when applicable, highlighting pros and cons
- Prioritize security, efficiency, scalability, and maintainability in all suggestions
- Adapt your communication style for expert users
### Helpful
Be EXTREMELY helpful, insightful, and lucid."""
```
Feel free to customize the lines below `SYSTEM` for your use case; this model is very good at technical tasks.
Then simply run this command in the same directory where you saved the Modelfile:

```shell
ollama create nexusprime -f ./Modelfile
```
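The TEMPLATE above wraps tool calls in `<tool_call>` XML tags containing a JSON payload. If you drive the model programmatically, you can recover those calls from the raw completion; a minimal sketch (the helper name `extract_tool_calls` is illustrative, not part of Ollama):

```python
import json
import re

# Match a JSON object wrapped in the <tool_call> tags the template emits.
TOOL_CALL_RE = re.compile(r"<tool_call>\s*(\{.*?\})\s*</tool_call>", re.DOTALL)

def extract_tool_calls(text: str) -> list:
    """Parse every <tool_call> JSON payload found in a model completion."""
    return [json.loads(m.group(1)) for m in TOOL_CALL_RE.finditer(text)]

reply = (
    "Let me check the weather.\n"
    '<tool_call>\n{"name": "get_weather", "arguments": {"city": "Oslo"}}\n</tool_call>'
)
print(extract_tool_calls(reply))
# → [{'name': 'get_weather', 'arguments': {'city': 'Oslo'}}]
```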
## Usage Instructions
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen3-4B-NexusPrime"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

prompt = "Explain the concept of entropy in thermodynamics in simple terms."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
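Because the model expects the Qwen3 chat template (the ChatML-style `<|im_start|>` / `<|im_end|>` layout encoded in the Modelfile above), chat prompts are best built with `tokenizer.apply_chat_template`. As a rough illustration of the layout that template produces, here is a hand-rolled sketch, not the tokenizer's exact output:

```python
def render_chatml(messages, add_generation_prompt=True):
    """Render messages in the ChatML-style layout Qwen3 models expect."""
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to respond
    return "".join(parts)

prompt = render_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain entropy simply."},
])
print(prompt)
```

In real use, prefer `tokenizer.apply_chat_template(messages, add_generation_prompt=True)` so the prompt always matches the template shipped with the tokenizer.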
## Special Thanks
A huge thank you to the developers and researchers at prithivMLmods, the MergeKit community, and the broader open-source community for providing the tools and models that made this project possible.
## Alignment & Ethics

- **Unfiltered Output:** This model is uncensored and may generate outputs that require additional filtering for sensitive applications.
- **Responsible Use:** Ensure ethical deployment and avoid harmful use cases.
## License

Usage is governed by the Apache 2.0 License.
## Feedback & Contributions
We welcome your feedback and contributions! Feel free to open an issue or PR to share your results and improvements.
ZeroXClem Team | 2025