Gemma 2 2B quantized for wllama (under 2 GB).

q4_0_4_8 is WAY faster with native llama.cpp; with wllama, it's about the same as q4_k.
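That gap is expected: the q4_0_4_x layouts exist to feed ARM-specific (AArch64) matmul kernels in llama.cpp, and wllama's wasm build can't take advantage of them, so q4_k is the more practical pick in the browser.

For reference, here's a minimal sketch of loading one of these GGUF files in the browser with the @wllama/wllama package. The wasm asset paths and the model URL are placeholders; adjust them for your wllama version and point the URL at the actual GGUF file in this repo.

```js
import { Wllama } from '@wllama/wllama';

// Paths to the wasm builds shipped with the package.
// Placeholder values: the exact paths depend on your wllama
// version and how your bundler serves static assets.
const CONFIG_PATHS = {
  'single-thread/wllama.wasm': '/esm/single-thread/wllama.wasm',
  'multi-thread/wllama.wasm': '/esm/multi-thread/wllama.wasm',
};

async function main() {
  const wllama = new Wllama(CONFIG_PATHS);

  // Hypothetical URL: replace with the q4_k (or q4_0_4_8) GGUF
  // from this repository.
  await wllama.loadModelFromUrl(
    'https://huggingface.co/<user>/<repo>/resolve/main/gemma-2-2b-it-q4_k.gguf'
  );

  const output = await wllama.createCompletion('Why is the sky blue?', {
    nPredict: 64,
    sampling: { temp: 0.7, top_p: 0.9 },
  });
  console.log(output);
}

main();
```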