This is Qwen/Qwen3-1.7B quantized to 8-bit with AutoRound (symmetric quantization) and serialized in the GPTQ format.
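A minimal usage sketch with the `transformers` library, assuming the repo id below matches this card and that a GPTQ-capable backend (e.g. `gptqmodel` or `auto-gptq`) is installed; treat it as a starting point, not a verified recipe:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: this is the repo id of the model described by this card.
model_id = "Siddharth63/Qwen3-1.7B-8bits-GPTQ-Autoround-sym"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the quantized weights on the available GPU(s)/CPU.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Give me a short introduction to large language models."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Because the checkpoint is stored in GPTQ format, `from_pretrained` dispatches to the installed GPTQ kernel backend automatically; no extra quantization config is needed at load time.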

Format: Safetensors
Model size: 678M params
Tensor types: I32, BF16, FP16
