Gemma 2 9B Sahabat-AI Instruct (GGUF Versions)
This repository contains GGUF-converted and quantized versions of the Sahabat-AI/gemma2-9b-cpt-sahabatai-v1-instruct model, produced with llama.cpp.
This model is an instruction-tuned variant, suitable for chat and instruction following.
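As a Gemma 2 derivative, it should follow Gemma's turn-based prompt format. This is an assumption carried over from the base Gemma 2 models; check the original model card if outputs look off. A raw prompt looks like:

```
<start_of_turn>user
Write a story about a futuristic city.<end_of_turn>
<start_of_turn>model
```

Most GGUF tools (llama.cpp's conversation mode, LM Studio, Ollama) usually read this template from the GGUF metadata and apply it automatically, so you rarely need to type it by hand.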
Available GGUF Files (all sizes are approximate):
1. gemma2-9b-cpt-sahabatai-v1-instruct-f16.gguf
- Format: FP16 (full precision)
- Size: ~17.22 GB
- Description: The full-precision GGUF conversion. It offers the highest fidelity but requires significant VRAM. The quantized files below are derived from it (see the conversion sketch after this list).
2. gemma2-9b-cpt-sahabatai-v1-instruct-q2k.gguf
- Format: Q2_K (2-bit quantized)
- Size: ~3.54 GB
- Description: The most aggressive quantization, for extremely resource-constrained environments. Expect noticeable quality degradation.
3. gemma2-9b-cpt-sahabatai-v1-instruct-q3km.gguf
- Format: Q3_K_M (3-bit quantized)
- Size: ~4.43 GB
- Description: A middle ground: noticeably better output quality than Q2_K for a modest increase in size.
4. gemma2-9b-cpt-sahabatai-v1-instruct-q4km.gguf
- Format: Q4_K_M (4-bit quantized)
- Size: ~5.37 GB
- Description: A 4-bit quantization that fits devices with limited VRAM while keeping quality loss small. A good default for most users.
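For reference, files like these are typically produced with llama.cpp's own tooling. A minimal sketch of that pipeline, assuming a recent llama.cpp checkout (script and binary names have changed between versions):

```sh
# Convert the original Hugging Face checkpoint to a full-precision GGUF
python convert_hf_to_gguf.py ./gemma2-9b-cpt-sahabatai-v1-instruct \
    --outfile gemma2-9b-cpt-sahabatai-v1-instruct-f16.gguf \
    --outtype f16

# Quantize the FP16 file down to a smaller variant (Q4_K_M shown here)
./llama-quantize gemma2-9b-cpt-sahabatai-v1-instruct-f16.gguf \
    gemma2-9b-cpt-sahabatai-v1-instruct-q4km.gguf Q4_K_M
```

The same llama-quantize step with Q2_K or Q3_K_M as the final argument yields the other two quantized files.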
Original Model:
Sahabat-AI/gemma2-9b-cpt-sahabatai-v1-instruct (the source checkpoint on Hugging Face)
How to Use:
Download the desired .gguf file and use it with llama.cpp, LM Studio, Ollama, or any other GGUF-compatible inference tool.
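For example, a single file can be fetched with the huggingface_hub CLI; `<this-repo-id>` below is a placeholder for this repository's id:

```sh
huggingface-cli download <this-repo-id> \
    gemma2-9b-cpt-sahabatai-v1-instruct-q4km.gguf --local-dir .
```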
For the llama.cpp CLI, you might use:
```sh
./llama-cli -m gemma2-9b-cpt-sahabatai-v1-instruct-q4km.gguf -p "Write a story about a futuristic city." -n 128
```

(On older llama.cpp builds the CLI binary is named ./main rather than llama-cli.)
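If you prefer Ollama, you can register a downloaded GGUF as a local model via a Modelfile. A minimal sketch, where "sahabatai" is an arbitrary local name chosen here:

```sh
# Point a Modelfile at the downloaded GGUF
cat > Modelfile <<'EOF'
FROM ./gemma2-9b-cpt-sahabatai-v1-instruct-q4km.gguf
EOF

# Build the local model and chat with it
ollama create sahabatai -f Modelfile
ollama run sahabatai "Write a story about a futuristic city."
```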