Note:

This repo hosts only a Q5_K_S quant of Fimbulvetr 11B v2. The GGUF quant is taken from mradermacher/Fimbulvetr-11B-v2-GGUF. The additional files in this repo are for personal use with Text Gen Webui via the llamacpp_hf loader.
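For reference, here is a minimal sketch of loading this quant outside Text Gen Webui, using llama-cpp-python instead of the llamacpp_hf loader. The file name, context size, and prompt format below are assumptions, not something defined by this repo; point model_path at the actual .gguf file you downloaded.

```python
# Minimal sketch (assumptions noted inline): load the Q5_K_S GGUF with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="fimbulvetr-11b-v2.Q5_K_S.gguf",  # assumed local file name for this quant
    n_ctx=4096,        # context window; adjust to your memory budget
    n_gpu_layers=-1,   # offload all layers to GPU if one is available; 0 for CPU-only
)

# Assumed Alpaca-style prompt; check the base model card for its recommended format.
output = llm(
    "### Instruction:\nWrite a short greeting.\n\n### Response:\n",
    max_tokens=128,
)
print(output["choices"][0]["text"])
```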

GGUF details:
Model size: 10.7B params
Architecture: llama
Quantization: 5-bit (Q5_K_S)