SmolLM3‑3B • Quantized


🚀 Model Description

This is an int8 quantized version of SmolLM3-3B, a highly efficient, open-source 3B-parameter LLM. It delivers near state-of-the-art multilingual reasoning and long-context performance (up to 128k tokens) with drastically reduced memory usage and inference cost, enabling fast deployment on mid-range GPUs and edge devices.


📏 Quantization Details

  • Library: torchao
  • Precision: int8 weights and activations
  • Benefits: ~50–75% reduction in VRAM usage, allowing the model to run on 12–16 GB GPUs with minimal degradation in reasoning, coding, and long-context performance (a reproduction sketch follows this list)

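The sketch below shows one way to reproduce this kind of quantization from the full-precision base checkpoint using the torchao integration in transformers. The quant type string and the base model ID (HuggingFaceTB/SmolLM3-3B) are assumptions for illustration, not a record of how this checkpoint was actually produced.

```python
# Hypothetical reproduction sketch: quantize the full-precision base model
# with torchao via transformers. The quant type and base model ID are
# assumptions, not a record of how this checkpoint was produced.
from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig

base_id = "HuggingFaceTB/SmolLM3-3B"  # assumed base checkpoint

# int8 weights and activations, matching the precision listed above
quant_config = TorchAoConfig("int8_dynamic_activation_int8_weight")

model = AutoModelForCausalLM.from_pretrained(
    base_id,
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(base_id)

# torchao uses tensor subclasses, which are not safetensors-compatible,
# so the quantized weights are saved with non-safetensor serialization.
model.save_pretrained("quantized-SmolLM3-3B", safe_serialization=False)
tokenizer.save_pretrained("quantized-SmolLM3-3B")
```
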
🎯 Intended Use

Ideal for:

  • Scenarios requiring fast LLM inference under constrained VRAM (e.g. small servers or laptops)
  • Multilingual reasoning tasks, chain-of-thought logic, and long-context document understanding
  • Deployments of dual-mode (think/no_think) conversational agents (see the usage sketch after this list)
  • Research into efficient LLM deployment and quantization techniques

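A minimal inference sketch for the quantized checkpoint is below. The /no_think system-prompt flag shown for disabling extended reasoning follows the base SmolLM3-3B usage notes; verify the exact dual-mode controls against the base model card.

```python
# Minimal inference sketch for the quantized checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AINovice2005/quantized-SmolLM3-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # keep the quantized dtypes stored in the checkpoint
    device_map="auto",    # place layers on the available GPU(s)
)

# Dual-mode usage: SmolLM3 toggles extended reasoning via system-prompt
# flags (/think and /no_think); check the base model card for specifics.
messages = [
    {"role": "system", "content": "/no_think"},
    {"role": "user", "content": "Summarize int8 quantization in one sentence."},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
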
⚠️ Limitations

  • Slight performance loss compared to full-precision SmolLM3‑3B
  • Quantized performance should be benchmarked in your target environment (a minimal benchmarking sketch follows this list)
  • Continues to exhibit standard LLM risks: hallucination, bias, etc.
  • Quant performance may vary across languages or context lengths
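Since quantized performance varies with hardware, language, and context length, the sketch below is one quick way to sanity-check memory footprint and generation throughput on your own setup. The prompt and token counts are arbitrary examples.

```python
# Quick sanity check: memory footprint and generation throughput on
# your own hardware. Prompt and token counts are arbitrary examples.
import time
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "AINovice2005/quantized-SmolLM3-3B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

print(f"Memory footprint: {model.get_memory_footprint() / 1e9:.2f} GB")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=128)
elapsed = time.perf_counter() - start

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"Throughput: {new_tokens / elapsed:.1f} tokens/sec")
```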