Model Card
- Base model: google/gemma-3-27b-it
- Quantization method: LNQ with GuidedQuant Hessian
- Target bit-width: 2
- Backend kernel: Any-Precision-LLM kernel (`ap-gemv`)
- Calibration data: RedPajama (1024 sentences / 4096 tokens)
- Calibration objective: Next-token prediction
- num_groups (for GuidedQuant Hessian): 1
How to run
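The full inference instructions are not reproduced here. As a minimal sketch, assuming the checkpoint is hosted under this repository id, the weights can be fetched with `huggingface_hub` before loading them with the authors' GuidedQuant / Any-Precision-LLM inference code:

```python
# Minimal sketch: download the quantized checkpoint from the Hugging Face Hub.
# The actual runtime (GuidedQuant / Any-Precision-LLM code using the ap-gemv
# kernel) lives in the authors' repositories and is not shown here.
from huggingface_hub import snapshot_download

# Repo id taken from this model card; the local directory returned is where
# the checkpoint files are stored.
local_dir = snapshot_download(repo_id="jusjinuk/gemma-3-27b-it-2bit-GuidedQuant-LNQ")
print(f"Checkpoint downloaded to: {local_dir}")
```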
References