This model was converted to GGUF format from [`allura-org/Q3-8B-Kintsugi`](https://huggingface.co/allura-org/Q3-8B-Kintsugi) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/allura-org/Q3-8B-Kintsugi) for more details on the model.
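The conversion and quantization happen inside the hosted GGUF-my-repo space, so none of the steps below are needed to use this repo. As a minimal local sketch of the equivalent llama.cpp workflow, the commands below show the general shape of the process; the output filenames and the Q4_K_M quantization target are illustrative assumptions, not the exact settings used for this repo.

```bash
# Illustrative sketch only -- GGUF-my-repo runs the equivalent of these steps for you.
# Fetch the original safetensors checkpoint from Hugging Face.
huggingface-cli download allura-org/Q3-8B-Kintsugi --local-dir Q3-8B-Kintsugi

# Convert the Hugging Face checkpoint to a full-precision GGUF file
# with llama.cpp's conversion script.
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt
python llama.cpp/convert_hf_to_gguf.py Q3-8B-Kintsugi \
    --outfile q3-8b-kintsugi-f16.gguf --outtype f16

# Quantize down to a smaller GGUF (Q4_K_M shown as an example target;
# llama-quantize ships with a llama.cpp build or install).
llama-quantize q3-8b-kintsugi-f16.gguf q3-8b-kintsugi-q4_k_m.gguf Q4_K_M
```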
---

Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.

During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux)
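```bash
brew install llama.cpp
```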
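Once installed, the GGUF file from this repo can be run with the `llama-cli` or `llama-server` binaries. The filename below is a placeholder, since the exact quantized file name depends on what is listed under this repo's Files tab; adjust it to the file you downloaded.

```bash
# Placeholder filename -- replace with the actual GGUF file downloaded from this repo.
llama-cli -m ./Q3-8B-Kintsugi.gguf -cnv -p "You are a helpful roleplaying partner."

# Or serve it over an OpenAI-compatible HTTP API:
llama-server -m ./Q3-8B-Kintsugi.gguf -c 4096
```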