---
license: apache-2.0
base_model: allura-org/Q3-8B-Kintsugi
library_name: transformers
tags:
  - mergekit
  - axolotl
  - unsloth
  - roleplay
  - conversational
  - llama-cpp
  - gguf-my-repo
datasets:
  - PygmalionAI/PIPPA
  - Alfitaria/nemotron-ultra-reasoning-synthkink
  - PocketDoc/Dans-Prosemaxx-Gutenberg
  - FreedomIntelligence/Medical-R1-Distill-Data
  - cognitivecomputations/SystemChat-2.0
  - allenai/tulu-3-sft-personas-instruction-following
  - kalomaze/Opus_Instruct_25k
  - simplescaling/s1K-claude-3-7-sonnet
  - ai2-adapt-dev/flan_v2_converted
  - grimulkan/theory-of-mind
  - grimulkan/physical-reasoning
  - nvidia/HelpSteer3
  - nbeerbower/gutenberg2-dpo
  - nbeerbower/gutenberg-moderne-dpo
  - nbeerbower/Purpura-DPO
  - antiven0m/physical-reasoning-dpo
  - allenai/tulu-3-IF-augmented-on-policy-70b
  - NobodyExistsOnTheInternet/system-message-DPO
---

# Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF

This model was converted to GGUF format from allura-org/Q3-8B-Kintsugi using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.


Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base.

During testing, Kintsugi punched well above its weight class for its parameter count, especially in 1-on-1 roleplaying and general storywriting.


## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -p "The meaning to life and the universe is"
```
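
For interactive chat rather than a one-shot prompt, llama-cli also offers a conversation mode. The sketch below is illustrative and assumes the `-cnv` and `--temp` flags available in recent llama.cpp builds; the sampling value is arbitrary:

```bash
# Interactive chat using the model's chat template (illustrative settings)
llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -cnv --temp 0.8
```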

Server:

```bash
llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -c 2048
```
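
Once running, llama-server exposes an OpenAI-compatible HTTP API (port 8080 by default). A minimal request, assuming the `/v1/chat/completions` endpoint of current builds and an example payload, might look like this:

```bash
# Query the local server's OpenAI-compatible chat endpoint (example payload)
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a helpful roleplay assistant."},
          {"role": "user", "content": "Introduce yourself in character."}
        ],
        "temperature": 0.8
      }'
```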

Note: You can also use this checkpoint directly by following the usage steps listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag along with other hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```
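
Note that recent llama.cpp releases have moved from the Makefile to CMake. If the make invocation above fails on a newer checkout, a roughly equivalent CMake build (CUDA shown only as an example backend; adjust to your hardware) would be:

```bash
# CMake build: CURL enables --hf-repo downloads, GGML_CUDA targets Nvidia GPUs
cmake -B build -DLLAMA_CURL=ON -DGGML_CUDA=ON
cmake --build build --config Release
```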

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -c 2048
```
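
If you built with GPU support, you can offload layers to the GPU and raise the context window. The flags below (`-ngl` / `--n-gpu-layers` and `-c`) are standard llama.cpp options, but the values are only an illustration; adjust them to your hardware:

```bash
# Offload all layers to the GPU and use an 8K context window (example values)
./llama-server --hf-repo Triangle104/Q3-8B-Kintsugi-Q4_K_M-GGUF --hf-file q3-8b-kintsugi-q4_k_m.gguf -c 8192 -ngl 99
```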