- Attention quantization: HQQ 4-bit, group size 64, compressed (quantized) zero-points, compressed scales with group size 256
- Experts quantization: HQQ 2-bit, group size 16, compressed (quantized) zero-points, compressed scales with group size 128
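As a minimal sketch, these settings could be expressed with the HQQ library's `BaseQuantizeConfig`, which returns a config dict whose `scale_quant_params` entry controls how the scales themselves are quantized. The layer-name keys below assume a Mixtral-style MoE model and are illustrative, not taken from this card:

```Python
from hqq.core.quantize import BaseQuantizeConfig

# Attention: 4-bit weights, group size 64, quantized zero-points and scales;
# override the scale quantization group size to 256.
attn_params = BaseQuantizeConfig(nbits=4, group_size=64, quant_zero=True, quant_scale=True)
attn_params['scale_quant_params']['group_size'] = 256

# Experts: 2-bit weights, group size 16, quantized zero-points and scales;
# scales grouped at 128.
experts_params = BaseQuantizeConfig(nbits=2, group_size=16, quant_zero=True, quant_scale=True)
experts_params['scale_quant_params']['group_size'] = 128

# Per-layer config keyed by linear-layer tags (Mixtral-style names, assumed).
quant_config = {
    # Attention projections
    'self_attn.q_proj': attn_params,
    'self_attn.k_proj': attn_params,
    'self_attn.v_proj': attn_params,
    'self_attn.o_proj': attn_params,
    # MoE expert projections
    'block_sparse_moe.experts.w1': experts_params,
    'block_sparse_moe.experts.w2': experts_params,
    'block_sparse_moe.experts.w3': experts_params,
}
```

How this dict is then passed to the quantizer depends on the HQQ version and model wrapper in use; the sketch only shows the configuration side.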