Athene-V2-Chat AWQ 4-Bit Quantized Version
This repository provides the AWQ 4-bit quantized version of the Athene-V2-Chat model, originally developed by Nexusflow. Before quantization, the model's weights were zero-padded so that their dimensions divide evenly across tensor-parallel ranks, ensuring compatibility with multi-GPU tensor parallelism. The padding adds negligible computational overhead while enabling efficient scaling across multiple GPUs.
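To illustrate the idea, here is a minimal sketch of zero-padding a weight matrix so its output dimension divides evenly across tensor-parallel ranks. This is an illustrative example only; the function name, shapes, and the use of NumPy are assumptions, and the repository's actual padding code may differ.

```python
import numpy as np

def pad_to_divisible(weight: np.ndarray, tp_size: int) -> np.ndarray:
    """Zero-pad the output dimension of `weight` so it divides evenly
    across `tp_size` tensor-parallel ranks. (Hypothetical helper for
    illustration; not the repository's actual implementation.)"""
    out_dim, in_dim = weight.shape
    remainder = out_dim % tp_size
    if remainder == 0:
        return weight
    pad_rows = tp_size - remainder
    padding = np.zeros((pad_rows, in_dim), dtype=weight.dtype)
    return np.vstack([weight, padding])

# A 7x4 weight matrix is not divisible across 4 ranks; pad it to 8x4.
w = np.random.rand(7, 4).astype(np.float32)
w_padded = pad_to_divisible(w, tp_size=4)
print(w_padded.shape)  # (8, 4)

# The zero rows contribute nothing to the output, so results on the
# original dimensions are unchanged.
x = np.random.rand(4).astype(np.float32)
assert np.allclose(w @ x, (w_padded @ x)[:7])
```

Because the padded rows are all zeros, each GPU shard computes the same values it would have without padding; the extra rows only satisfy the divisibility requirement.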