EXL3 Quants of CharGen/CharGen-v3-mini

EXL3 quants of CharGen/CharGen-v3-mini, quantized with exllamav3.

Quants

| Quant (Revision) | Bits per Weight | Head Bits |
|------------------|-----------------|-----------|
| 3.0_H6           | 3.0             | 6         |
| 3.5_H6           | 3.5             | 6         |
| 4.0_H6           | 4.0             | 6         |
| 4.5_H6           | 4.5             | 6         |
| 5.0_H6           | 5.0             | 6         |
| 6.0_H6           | 6.0             | 6         |
| 8.0_H6           | 8.0             | 6         |
| 8.0_H8           | 8.0             | 8         |
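
Bits per weight is the main driver of download size and VRAM use; head bits set the precision of the separately quantized output head layer. As a rough sketch (the parameter count below is a placeholder, not CharGen-v3-mini's actual size), the quantized weights take roughly parameters × bits-per-weight / 8 bytes:

```python
# Rough size estimate from bits per weight (bpw).
# n_params is a PLACEHOLDER, not CharGen-v3-mini's actual parameter count.
def approx_weight_gib(n_params: float, bpw: float) -> float:
    """Approximate storage for the quantized weights alone,
    ignoring the separately quantized output head and metadata."""
    return n_params * bpw / 8 / 2**30

n_params = 12e9  # placeholder value; substitute the real parameter count
for bpw in (3.0, 3.5, 4.0, 4.5, 5.0, 6.0, 8.0):
    print(f"{bpw:.1f} bpw -> ~{approx_weight_gib(n_params, bpw):.1f} GiB")
```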

Downloading quants with huggingface-cli

Install huggingface-cli:

pip install -U "huggingface_hub[cli]"

Download a quant by targeting its specific revision (branch):

huggingface-cli download ArtusDev/CharGen_CharGen-v3-mini-EXL3 --revision "5.0_H6" --local-dir ./
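
The same download can be done from Python with huggingface_hub's snapshot_download. This is a minimal sketch; the revision and local directory are examples, so substitute any revision from the table above:

```python
from huggingface_hub import snapshot_download

# Download one quant revision (branch) into a local directory.
snapshot_download(
    repo_id="ArtusDev/CharGen_CharGen-v3-mini-EXL3",
    revision="5.0_H6",                               # any revision from the table
    local_dir="./CharGen-v3-mini-exl3-5.0bpw",       # example target directory
)
```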
