|
---
license: apache-2.0
tags:
- merge
- mergekit
- lazymergekit
- bunnycore/Qwen3-4B-dot-exp
- fakezeta/amoral-Qwen3-4B+bunnycore/Qwen-3-4B-Persona-lora_model
base_model:
- bunnycore/Qwen3-4B-dot-exp
- fakezeta/amoral-Qwen3-4B
- mlabonne/Qwen3-4B-abliterated
- bunnycore/Qwen-3-4B-Persona-lora_model
---
|
|
|
# Qwen3-4B-Dot-Goat |
|
|
|
Qwen3-4B-Dot-Goat is a 4-billion-parameter language model derived from the Qwen3 series. It merges capabilities from a set of specialized models into a single lightweight model: reasoning prowess from Polaris, uncensored depth from amoral-Qwen3-4B, personalized behavior via the Persona LoRA, and tool-calling agility from Jan-Nano.
|
|
|
## Intended Use |
|
- Chatbots, multi-turn dialogue systems |
|
- Instruction following, code generation, logical reasoning |
|
- Summarization, translation, creative writing |
|
- Agentic workflows with external tool integration (see the tool-calling sketch after this list)
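
For the agentic use case, the sketch below shows one way to expose a tool to the model through the Qwen3 chat template with 🤗 Transformers. The repo id and the `get_weather` helper are illustrative placeholders, not part of this card; verify the merged model's actual tool-calling reliability on your own tasks.

```python
# Sketch of tool-augmented prompting; the repo id and get_weather are placeholders.
from transformers import AutoTokenizer

model_id = "your-username/Qwen3-4B-Dot-Goat"  # replace with the actual Hub repo id

def get_weather(city: str) -> str:
    """Get the current weather for a city.

    Args:
        city: Name of the city to look up.
    """
    return "sunny, 22°C"  # placeholder implementation

tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [{"role": "user", "content": "What's the weather in Paris right now?"}]

# transformers converts the function signature and docstring into a JSON schema
# and injects it into the prompt through the model's chat template.
prompt = tokenizer.apply_chat_template(
    messages,
    tools=[get_weather],
    add_generation_prompt=True,
    tokenize=False,
)
print(prompt)  # inspect how the tool definition is presented to the model
```

If the model decides to call the tool, Qwen3-style checkpoints typically emit a `<tool_call>` block in the generated text, which your agent loop is expected to parse, execute, and feed back as a tool message.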
|
|
|
Qwen3-4B-Dot-Goat is a merge of the following models using [mergekit](https://github.com/cg123/mergekit): |
|
* [bunnycore/Qwen3-4B-dot-exp](https://huggingface.co/bunnycore/Qwen3-4B-dot-exp) |
|
* [fakezeta/amoral-Qwen3-4B+bunnycore/Qwen-3-4B-Persona-lora_model](https://huggingface.co/fakezeta/amoral-Qwen3-4B+bunnycore/Qwen-3-4B-Persona-lora_model) |
|
|
|
## 🧩 Configuration |
|
|
|
```yaml
models:
  - model: bunnycore/Qwen3-4B-dot-exp
    parameters:
      density: 0.5
      weight: 0.5
  - model: fakezeta/amoral-Qwen3-4B+bunnycore/Qwen-3-4B-Persona-lora_model
    parameters:
      density: 0.4
      weight: 0.4

merge_method: ties
base_model: mlabonne/Qwen3-4B-abliterated+bunnycore/Qwen-3-4B-Persona-lora_model
parameters:
  normalize: false
  int8_mask: true
dtype: float16
```
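
## 💻 Usage

A minimal inference sketch with 🤗 Transformers is shown below. The repo id is a placeholder (substitute the actual Hub location of the merged weights), and `float16` simply mirrors the merge dtype in the configuration above.

```python
# Minimal sketch, assuming the merged weights live at a (placeholder) Hub repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Qwen3-4B-Dot-Goat"  # replace with the actual Hub repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # matches the merge dtype above
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain TIES merging in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```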