🦑 ZeroXClem-Qwen3-8B-HoneyBadger-EXP

🧬 Overview

ZeroXClem-Qwen3-8B-HoneyBadger-EXP is a fierce and expressive model fusion crafted using the Model Stock merge method from MergeKit. Designed to combine instruction-following, deep reasoning, creative roleplay, and code capabilities, this blend leverages the best of Qwen3-8B-based fine-tunes from various communities across Hugging Face.

This HoneyBadger doesn't just care: it dominates symbolic reasoning, narrative immersion, and technical comprehension with sleek aggression.

Be advised: use the Ollama Modelfile below, or a customized prompt with the default Qwen3 chat template, for optimal inference.


🔧 Merge Configuration

  • Merge Method: model_stock
  • Base Model: AXCXEPT/Qwen3-EZO-8B-beta
  • Dtype: bfloat16
  • Tokenizer Source: AXCXEPT/Qwen3-EZO-8B-beta

🧾 YAML

name: ZeroXClem-Qwen3-8B-HoneyBadger-EXP
base_model: AXCXEPT/Qwen3-EZO-8B-beta
dtype: bfloat16
merge_method: model_stock
models:
  - model: taki555/Qwen3-8B-Shadow-FT-BAAI-2k
  - model: GreenerPastures/Bald-Beaver-8B
  - model: YOYO-AI/Qwen3-8B-YOYO
  - model: KaraKaraWitch/CavesOfQwen3-8b
tokenizer_source: AXCXEPT/Qwen3-EZO-8B-beta
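The config above is typically handed to mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./merged`). As a dependency-free sanity check before running the merge, the sketch below parses the exact YAML from this card with plain string handling (no PyYAML required) and verifies the method and model count:

```python
# Minimal sanity check of the merge config above before passing it to mergekit.
# The YAML string mirrors the card verbatim; the parsing is intentionally naive.
CONFIG = """
name: ZeroXClem-Qwen3-8B-HoneyBadger-EXP
base_model: AXCXEPT/Qwen3-EZO-8B-beta
dtype: bfloat16
merge_method: model_stock
models:
  - model: taki555/Qwen3-8B-Shadow-FT-BAAI-2k
  - model: GreenerPastures/Bald-Beaver-8B
  - model: YOYO-AI/Qwen3-8B-YOYO
  - model: KaraKaraWitch/CavesOfQwen3-8b
tokenizer_source: AXCXEPT/Qwen3-EZO-8B-beta
"""

lines = [l.strip() for l in CONFIG.strip().splitlines()]
merge_method = next(l.split(":", 1)[1].strip() for l in lines if l.startswith("merge_method:"))
models = [l.split("model:", 1)[1].strip() for l in lines if l.startswith("- model:")]

print(merge_method, len(models))  # expect: model_stock 4
```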

💡 Models Merged

| Model | Highlights |
| --- | --- |
| KaraKaraWitch/CavesOfQwen3-8b | Loosens baked-in instruct bias for more natural RP and abstract depth |
| YOYO-AI/Qwen3-8B-YOYO | Della-style merge optimized for rich conversational alignment |
| AXCXEPT/Qwen3-EZO-8B-beta | MT-Bench 9.08, deep-thought prompting, vLLM friendly |
| GreenerPastures/Bald-Beaver-8B | Uncensored storytelling and immersive character dialogue |
| taki555/Qwen3-8B-Shadow-FT-BAAI-2k | Shadow-FT tuned for precise instruction-following on BAAI-2k |

🧪 Capabilities

  • 🧠 Deep Symbolic Reasoning – via Shadow-FT and DeepScaleR techniques from the base models
  • 🎭 Immersive Roleplay & Storytelling – injected from the Bald-Beaver and CavesOfQwen merges
  • 💻 Code Understanding & Generation – Python, C++, and JS supported via the Bootes and Shadow paths
  • 🧾 Structured Outputs – supports Markdown, JSON, LaTeX, and more
  • 🧵 ChatML Friendly – full compatibility with ChatML-format prompts
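Because the model speaks ChatML, a prompt can also be assembled by hand when a chat-template helper isn't available. A minimal sketch of the format (the role strings and example messages here are illustrative, not from the card):

```python
# Build a ChatML-format prompt by hand. The special tokens <|im_start|> and
# <|im_end|> delimit each turn; a trailing assistant header cues generation.
def to_chatml(messages, add_generation_prompt=True):
    out = []
    for m in messages:
        out.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n")
    if add_generation_prompt:
        out.append("<|im_start|>assistant\n")
    return "".join(out)

prompt = to_chatml([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize model_stock merging in one sentence."},
])
print(prompt)
```

In practice, `tokenizer.apply_chat_template` produces the same structure and should be preferred when the tokenizer is available.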


πŸ› οΈ Usage Instructions

For optimal inference, use the following Ollama Modelfile. Save the contents below to a file called `Modelfile`, then register and run the model with `ollama create honeybadger -f Modelfile` followed by `ollama run honeybadger` (the local name is your choice).

Ollama Modelfile
FROM https://hf.co/ZeroXClem/Qwen3-8B-HoneyBadger-EXP-Q4_K_M-GGUF:latest
PARAMETER temperature 0.6
PARAMETER top_p 0.95
PARAMETER repeat_penalty 1.05
PARAMETER top_k 20
TEMPLATE """{{- if .Messages }}
{{- if or .System .Tools }}<|im_start|>system
{{- if .System }}
{{ .System }}
{{- end }}
{{- if .Tools }}

# Tools

You may call one or more functions to assist with the user query.

You are provided with function signatures within <tools></tools> XML tags:
<tools>
{{- range .Tools }}
{"type": "function", "function": {{ .Function }}}
{{- end }}
</tools>

For each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:
<tool_call>
{"name": <function-name>, "arguments": <args-json-object>}
</tool_call>
{{- end }}<|im_end|>
{{ end }}
{{- range $i, $_ := .Messages }}
{{- $last := eq (len (slice $.Messages $i)) 1 -}}
{{- if eq .Role "user" }}<|im_start|>user
{{ .Content }}<|im_end|>
{{ else if eq .Role "assistant" }}<|im_start|>assistant
{{ if .Content }}{{ .Content }}
{{- else if .ToolCalls }}<tool_call>
{{ range .ToolCalls }}{"name": "{{ .Function.Name }}", "arguments": {{ .Function.Arguments }}}
{{ end }}</tool_call>
{{- end }}{{ if not $last }}<|im_end|>
{{ end }}
{{- else if eq .Role "tool" }}<|im_start|>user
<tool_response>
{{ .Content }}
</tool_response><|im_end|>
{{ end }}
{{- if and (ne .Role "assistant") $last }}<|im_start|>assistant
{{ end }}
{{- end }}
{{- else }}
{{- if .System }}<|im_start|>system
{{ .System }}<|im_end|>
{{ end }}{{ if .Prompt }}<|im_start|>user
{{ .Prompt }}<|im_end|>
{{ end }}<|im_start|>assistant
{{ end }}{{ .Response }}{{ if .Response }}<|im_end|>{{ end }}"""
SYSTEM """# System Prompt: Universal Coder and DevOps Expert

You are an advanced AI assistant specializing in coding and DevOps. Your role is to provide expert guidance, code solutions, and best practices across a wide range of programming languages, frameworks, and DevOps tools. Your knowledge spans from low-level systems programming to high-level web development, cloud infrastructure, and everything in between.

## Key responsibilities:
1. Code analysis and optimization
2. Debugging and troubleshooting
3. Architecture design and system planning
4. Version Control best practices (Git)
5. Building from source, extracting binaries, and building packages and executables, including bash scripts
6. Security implementation and auditing
7. Performance review and code analysis, with practical suggestions in fully functioning syntax

Be VERY selective on choosing how to respond based on the user query. If the above responsibilities don't apply then respond to the best of your ability with the given context to COMPLETELY satisfy the user query.

### Guidance
When assisting users:
- Provide clear, concise, and well-commented code examples
- Explain complex concepts in simple terms
- Offer multiple solutions when applicable, highlighting pros and cons
- Prioritize security, efficiency, scalability, and maintainability in all suggestions
- Adapt your communication style for expert users.

### Helpful
Be EXTREMELY helpful, insightful, and lucid."""

🦙 Ollama Quickstart

This command downloads the pre-quantized GGUF version of the model and runs it locally, making it easy to experiment without extensive configuration.

ollama run hf.co/ZeroXClem/Qwen3-8B-HoneyBadger-EXP-Q4_K_M-GGUF
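A running Ollama server also exposes an HTTP API at `http://localhost:11434/api/generate`. The sketch below only builds the JSON request payload, mirroring the sampling PARAMETERs from the Modelfile above; actually sending it requires a live Ollama server, so that step is left as a comment:

```python
import json

# Request payload for Ollama's /api/generate endpoint. The "options" keys
# mirror the PARAMETER lines in the Modelfile above.
payload = json.dumps({
    "model": "hf.co/ZeroXClem/Qwen3-8B-HoneyBadger-EXP-Q4_K_M-GGUF",
    "prompt": "Explain the model_stock merge method briefly.",
    "stream": False,
    "options": {"temperature": 0.6, "top_p": 0.95, "top_k": 20, "repeat_penalty": 1.05},
})
print(payload)
# POST this body to http://localhost:11434/api/generate with any HTTP client
# (requires the Ollama server to be running locally).
```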

🐍 Python Code Snippet

from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ZeroXClem/Qwen3-8B-HoneyBadger-EXP"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Write a short story about a detective solving a paradox in time."
messages = [{"role": "user", "content": prompt}]

# Format with the default Qwen3 chat template (see usage instructions above)
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
# Sampling values match the recommended Modelfile parameters
outputs = model.generate(
    **inputs,
    max_new_tokens=300,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
    top_k=20
)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))

⚠️ Disclaimer

🚧 Experimental Merge: This model is an early-stage experimental prototype and is not ready for production. It may contain unaligned or unfiltered behaviors. Use it for research, prompt testing, or further fine-tuning workflows.


💖 Special Thanks

To the brilliant developers and open-source pioneers who made this possible:

  • πŸ§™β€β™€οΈ KaraKaraWitch for CavesOfQwen3
  • 🧠 YOYO-AI for Della-style merges
  • 🦅 AXCXEPT for the exceptional Qwen3-EZO base
  • 🌲 GreenerPastures for uncensored RP excellence
  • 🧩 taki555 for integrating Shadow-FT's cutting-edge research

🔗 Powered by MergeKit


ZeroXClem Team | 2025 🪐 "Blending minds, one layer at a time."
