Q3-8B-Kintsugi
get it? because kintsugi sounds like kitsune? hahaha-
Overview
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.
During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.
Quantizations
- EXL3:
- GGUF:
- MLX: TODO!
Usage
Format is plain-old ChatML. (Please note that, unlike regular Qwen3, you do not need to prefill empty think tags to keep it from reasoning; see below.)
Settings used by testers varied, but Fizz and inflatebot used the same settings and system prompt recommended for GLM4-32B-Neon-v2.
The official instruction-following version of Qwen3-8B was not used as the base. Instruction following was trained in post-hoc, and "thinking" traces were not included. As a result, "thinking" will not function.
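To make the prompt format concrete, here is a minimal illustrative sketch; the helper function and example text are ours, not part of the model's tooling. It renders a plain ChatML conversation without prefilling an empty think block:

```python
# Plain ChatML, as the model expects. No <think></think> prefill is needed.
def chatml_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    """Render a ChatML prompt, left open for the assistant's next reply.

    `turns` is a list of (role, content) pairs, e.g. [("user", "Hi!")].
    """
    parts = [f"<|im_start|>system\n{system}<|im_end|>"]
    for role, content in turns:
        parts.append(f"<|im_start|>{role}\n{content}<|im_end|>")
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

print(chatml_prompt(
    "You are the narrator of a collaborative story.",
    [("user", "The door creaks open...")],
))
```

If you use `transformers`, `tokenizer.apply_chat_template` with the repo's bundled chat template should render the same layout.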
Training Process
The base model first went through a supervised finetune on a corpus of instruction-following data, roleplay conversations, and human writing, drawn from the Ink/Bigger Body/Remnant lineage.
Finally, a KTO reinforcement learning phase steered the model away from the overly purple prose of the initial finetune and improved its logical and spatial reasoning and its overall sense of "intelligence".
Both stages mirror those used for Q3-30B-A3B-Designant, which went through the same process with the same data.
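For the curious, below is a rough sketch of what a KTO phase looks like with TRL's `KTOTrainer`. The checkpoint path, dataset file, and hyperparameters are placeholders, and the real run layered Unsloth on top of TRL; treat this as an outline of the technique, not the actual training script.

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import KTOConfig, KTOTrainer

# Placeholder: the stage-1 SFT checkpoint is the starting point for KTO.
sft_checkpoint = "path/to/kintsugi-sft-checkpoint"
model = AutoModelForCausalLM.from_pretrained(sft_checkpoint)
tokenizer = AutoTokenizer.from_pretrained(sft_checkpoint)

# KTO uses unpaired feedback: each row is a prompt, a completion, and a boolean
# label marking that completion as desirable (True) or undesirable (False),
# e.g. overly purple responses labeled False.
dataset = load_dataset("json", data_files="kto_feedback.jsonl", split="train")

training_args = KTOConfig(
    output_dir="kintsugi-kto",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=5e-6,
    beta=0.1,                # strength of the pull back toward the reference model
    desirable_weight=1.0,
    undesirable_weight=1.0,
)

trainer = KTOTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    processing_class=tokenizer,  # named `tokenizer` in older TRL releases
)
trainer.train()
```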
Credits
Fizz - Training, Data Wrangling
Toaster, Mango, Bot, probably others I forgot ;-; - Testing
inflatebot - original Designant model card that this one was yoinked from
Artus - Funding
Alibaba - Making the original model
Axolotl, Unsloth, Hugging Face - Making the frameworks used to train this model (Axolotl was used for the SFT process, and Unsloth+TRL was used for the KTO process)
All quanters, inside and outside the org, specifically Artus and Lyra
We would like to thank the Allura community on Discord, especially Curse, Heni, Artus and Mawnipulator, for their companionship and moral support. You all mean the world to us <3
There, God is not.