## About

static quants of https://huggingface.co/MrRikyz/Kitsune-Symphony-V0.0-12B

## Usage

If you are unsure how to use GGUF files, refer to one of TheBloke's READMEs for more details, including how to concatenate multi-part files.
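As a quick start, here is a minimal sketch of loading one of these quants with the llama-cpp-python bindings. The local filename, context size, and prompt below are illustrative assumptions, not part of this repo; substitute the quant file you actually downloaded.

```python
# Minimal sketch: running a GGUF quant with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and that the chosen quant file
# (the filename below is hypothetical) has already been downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="Kitsune-Symphony-V0.0-12B.Q4_K_M.gguf",  # hypothetical local filename
    n_ctx=4096,        # context window; adjust to available RAM
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a short haiku about foxes."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```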

## Provided Quants

If you want a specific quant, just ask for it in the community tab.

(sorted by size, not necessarily quality; IQ-quants are often preferable to similarly sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| GGUF | Q2_K | 4.8 | very low quality, not recommended |
| GGUF | Q3_K_S | 5.6 | low quality |
| GGUF | IQ3_M | 5.7 | |
| GGUF | Q3_K_M | 6.1 | lower quality |
| GGUF | Q3_K_L | 6.6 | |
| GGUF | IQ4_XS | 6.8 | balanced speed and quality, recommended |
| GGUF | Q4_K_S | 7.1 | fast, recommended |
| GGUF | Q4_K_M | 7.5 | fast, recommended |
| GGUF | Q5_K_S | 8.6 | |
| GGUF | Q5_K_M | 8.8 | good quality |
| GGUF | Q6_K | 10.1 | very good quality |
| GGUF | Q8_0 | 13.1 | best quality |
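To fetch a single quant programmatically instead of cloning the whole repo, a sketch using huggingface_hub is shown below; the exact .gguf filename is an assumption, so check the repo's file listing for the real names.

```python
# Minimal sketch: downloading one quant file from this repo with huggingface_hub.
# Assumes `pip install huggingface_hub`; the .gguf filename is hypothetical.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="MrRikyz/Kitsune-Symphony-V0.0-12B-GGUF",
    filename="Kitsune-Symphony-V0.0-12B.Q4_K_M.gguf",  # hypothetical filename
)
print(path)  # local cache path of the downloaded quant
```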