This is a 2.4-bit EXL2 quantization of Aurelian v0.5 70B 32K, an interim checkpoint before v1.0. See that page for more details.

This quantization fits on a single 24 GB GPU with ExLlamaV2 and the 8-bit cache at 10K context. It uses turboderp's newer, experimental quantization method.
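A minimal loading sketch using the ExLlamaV2 Python API, for illustration only: the model directory path and sampling settings are placeholders, and the context length just mirrors the 10K figure above.

```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache_8bit, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "/path/to/aurelian-v0.5-70b-2.4bpw-exl2"  # hypothetical local path to this quant

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()
config.max_seq_len = 10240            # ~10K context, as noted above

model = ExLlamaV2(config)
model.load()

tokenizer = ExLlamaV2Tokenizer(config)
cache = ExLlamaV2Cache_8bit(model)    # 8-bit KV cache keeps memory use within 24 GB

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8            # example sampling values, not tuned for this model
settings.top_p = 0.9

print(generator.generate_simple("Once upon a time,", settings, 200))
```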
