Custom GGUF quants of Google's gemma-2-2b-it, where the output tensors are quantized to Q8_0 or kept at F32 while the embeddings are kept at F32.
Notes: a great SMOL LLM for on-device inference on mobile devices.
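For reference, a quant along these lines can be produced with llama.cpp's `llama-quantize` tool, which supports per-tensor type overrides. This is a hedged sketch, not the exact recipe used for this repo: the input/output filenames and the Q4_K_M base type are illustrative assumptions; verify the flag names against your llama.cpp build with `--help`.

```shell
# Sketch, assuming a recent llama.cpp build of llama-quantize.
# Keep token embeddings at F32 and the output tensor at Q8_0
# while quantizing the remaining weights (here to Q4_K_M):
./llama-quantize \
  --token-embedding-type f32 \
  --output-tensor-type q8_0 \
  gemma-2-2b-it-f32.gguf \
  gemma-2-2b-it-Q4_K_M.gguf \
  Q4_K_M
```

Keeping the embedding and output tensors at higher precision costs a little file size but tends to preserve quality, since those tensors are especially sensitive to quantization error.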