EtherealAurora 12B v2
Released by Yamatazen
Quant by FrenzyBiscuit

AWQ Details
- Quantized to INT4 using GEMM kernels
- Zero-point quantization
- Group size of 64
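The settings above can be sketched as an AutoAWQ-style `quant_config`. This is an illustrative mapping, assuming AutoAWQ's field names; it is not taken from the repo's actual config file.

```python
# AWQ quantization settings for this quant, expressed in AutoAWQ's
# quant_config dict format (field names assumed; values from the card).
quant_config = {
    "w_bit": 4,          # INT4 weight precision
    "q_group_size": 64,  # group size of 64
    "zero_point": True,  # zero-point (asymmetric) quantization
    "version": "GEMM",   # GEMM kernels
}
```

A config like this is what AutoAWQ consumes at quantization time; the same values are typically recorded in the saved model's `config.json` so loaders can pick the matching kernel.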
Model tree for FrenzyBiscuit/EtheralAurora-12B-v2-AWQ
Base model: yamatazen/EtherealAurora-12B-v2