AWQ Quant of Tarek07/Legion-V2.1-LLaMa-70B
AWQ quant of Tarek07/Legion-V2.1-LLaMa-70B, produced with AutoAWQ.
The weights were quantized to INT4 for the GEMM kernel, using zero-point quantization and a group size of 128.
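The settings above map directly onto an AutoAWQ quantization config. The sketch below shows how a quant like this is typically produced with AutoAWQ; the output directory name is an assumption, and actually running it requires the `autoawq` package plus enough GPU memory for a 70B model, so treat it as illustrative rather than the exact script used here:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer

model_path = "Tarek07/Legion-V2.1-LLaMa-70B"
quant_path = "Legion-V2.1-LLaMa-70B-AWQ"  # assumed output dir

# Matches the settings described in this card:
# INT4 weights, GEMM kernel, zero-point quantization, group size 128.
quant_config = {
    "w_bit": 4,
    "version": "GEMM",
    "zero_point": True,
    "q_group_size": 128,
}

model = AutoAWQForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)

model.quantize(tokenizer, quant_config=quant_config)
model.save_quantized(quant_path)
tokenizer.save_pretrained(quant_path)
```

The resulting checkpoint can then be loaded either through AutoAWQ or directly with `transformers`, which recognizes AWQ checkpoints.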
Base model: Tarek07/Legion-V2.1-LLaMa-70B