Gemma 2 2B quantized for wllama (each file under 2 GB).

q4_0_4_8 is much faster when run with llama.cpp directly; with wllama, it performs about the same as q4_k.
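A minimal browser-side loading sketch using wllama's documented API. The WASM paths and the GGUF file name are placeholders, not files guaranteed by this repo; adjust them to your setup:

```javascript
import { Wllama } from '@wllama/wllama';

// Paths to the WASM binaries shipped with the @wllama/wllama package.
// These are placeholder URLs; point them at wherever your bundler serves the files.
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/wllama/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/wllama/multi-thread/wllama.wasm',
};

const wllama = new Wllama(CONFIG_PATHS);

// Hypothetical file name; substitute the actual GGUF from this repo.
await wllama.loadModelFromUrl(
  'https://huggingface.co/Fishfishfishfishfish/Gemma-2-2B_wllama_gguf/resolve/main/gemma-2-2b-q4_k.gguf'
);

// Generate a short completion from the loaded model.
const output = await wllama.createCompletion('Hello, my name is', {
  nPredict: 32,
  sampling: { temp: 0.7, top_k: 40, top_p: 0.9 },
});
console.log(output);
```

This runs in the browser only (wllama is a WebAssembly binding for llama.cpp), so it cannot be exercised outside a web page.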

GGUF
Model size: 2.61B params
Architecture: gemma2

Quantizations available: 2-bit, 4-bit

Model tree for Fishfishfishfishfish/Gemma-2-2B_wllama_gguf

Base model: google/gemma-2-2b (this model is a quantization of it)