# Mistral-Small-24B-Instruct-2501-writer-AWQ
This model is the 4-bit AWQ-quantized version of Mistral-Small-24B-Instruct-2501-writer.
- Quantization method: AWQ (Activation-aware Weight Quantization)
- Quantization configuration:
  - Bit width: 4-bit
  - Group size: 128
  - Zero point: enabled
  - Version: GEMM
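As a rough sketch of what the settings above imply, the snippet below writes them in the dictionary shape commonly used for an AWQ `quantization_config` entry in a Hugging Face `config.json` (field names follow the AutoAWQ convention and are illustrative, not copied from this repository's files), and estimates the weight memory a 24B-parameter model needs at 4 bits per weight:

```python
# Illustrative AWQ settings, mirroring the list above.
# Field names follow the common AutoAWQ/config.json convention (assumption).
quantization_config = {
    "quant_method": "awq",
    "bits": 4,            # 4-bit weights
    "group_size": 128,    # one scale/zero-point per group of 128 weights
    "zero_point": True,   # asymmetric quantization with a zero point
    "version": "gemm",    # GEMM kernel variant
}

# Back-of-the-envelope weight footprint: 24e9 parameters at 4 bits each,
# ignoring scales, zero points, and any layers left unquantized.
weight_bytes = 24e9 * quantization_config["bits"] / 8
print(f"~{weight_bytes / 1e9:.0f} GB of quantized weights")
```

Compared with the ~48 GB needed for the same weights in 16-bit precision, this is why a 4-bit AWQ build can fit on a single 24 GB GPU.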
## Model tree for lars1234/Mistral-Small-24B-Instruct-2501-writer-AWQ

- Base model: mistralai/Mistral-Small-24B-Base-2501