README.md exists, but its content is empty.
Downloads last month: 1,655
Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 4-bit, 5-bit, 6-bit, 16-bit

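The per-quantization hardware estimates are login-gated on the model page, but a rough lower bound on weight memory follows directly from the parameter count. A minimal sketch, assuming a flat bit width per parameter; real GGUF quants (e.g. Q4_K_M) mix bit widths and carry metadata, so actual files run somewhat larger:

```python
# Back-of-the-envelope weight-memory estimate for a 12.2B-parameter model.
# Assumes every parameter is stored at exactly the nominal bit width;
# real GGUF quantizations mix widths and add metadata, so treat these
# as ballpark lower bounds rather than exact file sizes.
PARAMS = 12.2e9

for bits in (4, 5, 6, 16):
    gib = PARAMS * bits / 8 / 2**30  # total bytes converted to GiB
    print(f"{bits:>2}-bit: ~{gib:.1f} GiB")
```

At 4-bit the weights alone come to roughly 6 GiB, which is why the low-bit quants are the usual choice for consumer GPUs.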
This model is not currently deployed by any Inference Provider.

Model tree for Disya/Mistral-qwq-12b-merge-gguf
Quantized (4) → this model
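Since the repo ships GGUF weights, any llama.cpp-compatible runtime can load them. Below is a minimal sketch using huggingface_hub and llama-cpp-python; the .gguf filename is a hypothetical placeholder, so check the repository's file list for the actual quantization names:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# NOTE: the exact .gguf filename is an assumption; open the repo's file
# list and substitute the quantization you want (4/5/6/16-bit).
model_path = hf_hub_download(
    repo_id="Disya/Mistral-qwq-12b-merge-gguf",
    filename="mistral-qwq-12b-merge.Q4_K_M.gguf",  # hypothetical name
)

# Load the quantized model and run a short completion.
llm = Llama(model_path=model_path, n_ctx=4096)
out = llm("Write one sentence about GGUF quantization.", max_tokens=64)
print(out["choices"][0]["text"])
```

With a CUDA or Metal build of llama-cpp-python installed, passing n_gpu_layers=-1 to Llama offloads all layers to the GPU.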