Attention quantization: HQQ 4-bit, group size 64, with the zero-point and scale also quantized (scale group size 256)
Experts quantization: HQQ 2-bit, group size 16, with the zero-point and scale also quantized (scale group size 128)
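
Below is a minimal sketch of how these settings could be expressed with the hqq library's `BaseQuantizeConfig` (https://github.com/mobiusml/hqq). The Mixtral-style module names, the per-module config mapping, and the `scale_quant_params` group-size override are illustrative assumptions, not taken from this card:

```python
# Sketch only: assumes the mobiusml/hqq API, where BaseQuantizeConfig returns
# a dict with 'weight_quant_params', 'zero_quant_params', 'scale_quant_params'.
from hqq.core.quantize import BaseQuantizeConfig

# Attention: 4-bit weights, group size 64; zero-point and scale are
# quantized as well, with the scale meta-data grouped at 256.
attn_params = BaseQuantizeConfig(nbits=4, group_size=64,
                                 quant_zero=True, quant_scale=True)
attn_params['scale_quant_params']['group_size'] = 256  # assumed default: 128

# Experts: 2-bit weights, group size 16; the quantized scale keeps
# the default group size of 128.
experts_params = BaseQuantizeConfig(nbits=2, group_size=16,
                                    quant_zero=True, quant_scale=True)

# Map each config to its modules (hypothetical MoE layout: four attention
# projections and three expert projections per block).
quant_config = {}
for name in ('self_attn.q_proj', 'self_attn.k_proj',
             'self_attn.v_proj', 'self_attn.o_proj'):
    quant_config[name] = attn_params
for name in ('block_sparse_moe.experts.w1',
             'block_sparse_moe.experts.w2',
             'block_sparse_moe.experts.w3'):
    quant_config[name] = experts_params
```

The small group size on the 2-bit experts trades extra meta-data for accuracy at that aggressive bit-width, and quantizing the zero-point and scale tensors recovers part of that overhead.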