# EXL3 Quants of ArliAI/QwQ-32B-ArliAI-RpR-v4

EXL3 quants of ArliAI/QwQ-32B-ArliAI-RpR-v4, produced with the exllamav3 quantization library.

## Quants

| Quant (Revision) | Bits per Weight | Head Bits |
|------------------|-----------------|-----------|
| 2.5_H6           | 2.5             | 6         |
| 3.0_H6           | 3.0             | 6         |
| 3.25_H6          | 3.25            | 6         |
| 4.0_H6           | 4.0             | 6         |
| 4.5_H6           | 4.5             | 6         |
| 5.0_H6           | 5.0             | 6         |
| 6.0_H6           | 6.0             | 6         |
| 8.0_H6           | 8.0             | 6         |
| 8.0_H8           | 8.0             | 8         |
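As a rough guide for picking a quant (an illustrative estimate, not from the model card), on-disk size scales with bits per weight: roughly parameters × bpw / 8 bytes, ignoring the output head (stored at the head-bit width), embeddings, and per-tensor metadata. A minimal sketch, assuming ~32.8B parameters for a 32B Qwen-family model:

```python
def approx_quant_size_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Rough on-disk size in GB: parameters * bits-per-weight / 8 bits-per-byte.

    Ignores the output head (quantized separately at the head-bit width),
    embeddings, and metadata, so treat it as a lower-bound estimate.
    """
    return n_params_billion * bits_per_weight / 8


# ~32.8B parameters is an assumption here, not a figure from the card.
for bpw in (2.5, 4.0, 6.0, 8.0):
    print(f"{bpw} bpw ~ {approx_quant_size_gb(32.8, bpw):.1f} GB")
```

Actual file sizes on each branch will run somewhat higher than these numbers.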

## Downloading quants with huggingface-cli


Install huggingface-cli:

```shell
pip install -U "huggingface_hub[cli]"
```

Download a quant by targeting the specific quant revision (branch), e.g. 5.0_H6:

```shell
huggingface-cli download ArtusDev/ArliAI_QwQ-32B-ArliAI-RpR-v4-EXL3 --revision "5.0_H6" --local-dir ./
```
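For fetching several quants in a script, the same invocation can be assembled from Python. A small sketch (the `hf_download_cmd` helper is hypothetical; it only builds the CLI command shown above):

```python
REPO_ID = "ArtusDev/ArliAI_QwQ-32B-ArliAI-RpR-v4-EXL3"


def hf_download_cmd(revision: str, local_dir: str = "./") -> list[str]:
    """Assemble the huggingface-cli invocation for one quant revision (branch)."""
    return [
        "huggingface-cli", "download", REPO_ID,
        "--revision", revision,
        "--local-dir", local_dir,
    ]


# Example: print the command for the 4.0_H6 quant.
print(" ".join(hf_download_cmd("4.0_H6", "./4.0_H6")))
```

The returned list can be handed to `subprocess.run(..., check=True)` to perform the download (huggingface-cli must be installed and on PATH).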

## Model tree for ArtusDev/ArliAI_QwQ-32B-ArliAI-RpR-v4-EXL3

- Base model: Qwen/Qwen2.5-32B
- Finetuned: Qwen/QwQ-32B
- Quantized: this model