Original Model - https://huggingface.co/google/gemma-2b-it
Quantized With - https://github.com/2stacks/QuantizeLLMs
Downloads last month - 39
Quantization - 4-bit
Model tree for 2stacks/gemma-2b-it-GGUF-quantized
Base model - google/gemma-2b-it