EXL3 quantization of Josiefied-Qwen3-14B-abliterated-v3, 6 bits per weight.
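To use the weights locally, a minimal sketch (assuming the `huggingface_hub` Python client; the repo ID is this repository, everything else is illustrative) for downloading the quantized files, which any EXL3-capable loader such as exllamav3 can then point at:

```python
# Sketch: fetch the 6bpw EXL3 weights with huggingface_hub (assumed tooling).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="isogen/Josiefied-Qwen3-14B-abliterated-v3-exl3-6bpw",
)
print(f"Model files downloaded to: {local_dir}")
```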

HumanEval (argmax)

| Model | Q4 | Q6 | Q8 | FP16 |
|-------|------|------|------|------|
| Josiefied-Qwen3-14B-abliterated-v3-exl3-4bpw | 71.3 | 70.1 | 69.5 | 71.3 |
| Josiefied-Qwen3-14B-abliterated-v3-exl3-6bpw | 73.2 | 78.0 | 76.2 | 75.6 |
| Qwen3-14B-exl3-4bpw | 88.4 | 89.0 | 89.0 | 89.0 |
| Qwen3-14B-exl3-6bpw | 89.6 | 88.4 | 89.6 | 89.0 |