# GLM‑4.1V‑9B‑Thinking • Quantized

## 🚀 Model Description

This is a quantized version of GLM‑4.1V‑9B‑Thinking, a 9B‑parameter vision‑language model built around a “thinking” paradigm with reinforcement‑learning‑enhanced reasoning. Quantization substantially reduces memory usage and speeds up inference on consumer‑grade GPUs while preserving the model’s strong performance on multimodal reasoning tasks.
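A minimal inference sketch, assuming the checkpoint loads through the generic transformers image‑text‑to‑text classes with torchao installed (the image URL and prompt below are placeholders):

```python
# Minimal loading/inference sketch. Assumptions: the checkpoint works with the
# generic AutoModelForImageTextToText / AutoProcessor classes, and torchao is
# installed so the int8 weights deserialize correctly.
import torch
from transformers import AutoModelForImageTextToText, AutoProcessor

model_id = "AINovice2005/quantized-GLM-4.1V-9B-Thinking"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForImageTextToText.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # compute dtype; the weights themselves are int8
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/diagram.png"},  # placeholder image
            {"type": "text", "text": "Describe what this diagram shows."},
        ],
    }
]
inputs = processor.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

output = model.generate(**inputs, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt.
print(processor.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```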


## Quantization Details

  • Method: torchao quantization
  • Weight precision: int8
  • Activation precision: int8 dynamic
  • Technique: symmetric mapping
  • Impact: significant reduction in model size with minimal loss in reasoning, coding, and general instruction-following capabilities (see the sketch below)
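For reference, a recipe along these lines can be expressed through the transformers torchao integration. This is an illustrative sketch, not the exact script used to produce this checkpoint; the base model ID `zai-org/GLM-4.1V-9B-Thinking` is an assumption:

```python
# Sketch of an int8-weight / int8-dynamic-activation quantization recipe
# via the transformers torchao integration. Illustrative only; not
# guaranteed to be the exact script that produced these weights.
import torch
from transformers import AutoModelForImageTextToText, TorchAoConfig

base_id = "zai-org/GLM-4.1V-9B-Thinking"  # assumed full-precision base model

# Symmetric int8 weights with dynamically quantized int8 activations,
# applied to the model's linear layers.
quant_config = TorchAoConfig("int8_dynamic_activation_int8_weight")

model = AutoModelForImageTextToText.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    quantization_config=quant_config,
)

# torchao-quantized tensors currently require non-safetensors serialization.
model.save_pretrained("quantized-GLM-4.1V-9B-Thinking", safe_serialization=False)
```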


## 🎯 Intended Use

Perfect for:

  • Vision‑language applications with long contexts and heavy reasoning
  • On-device or low-VRAM inference in latency‑sensitive environments
  • Challenging multimodal tasks: image Q&A, reasoning over diagrams, high-resolution visual analysis
  • Research into quantized vision‑language deployment

## ⚠️ Limitations

  • Minor drop in fine-grained reasoning accuracy compared to the full-precision model
  • Inherits the base model’s general LLM caveats: hallucinations, bias, and sensitivity to prompting
