ConfidentialMind/Arcee-Blitz-GPTQ-G32-W4A16-MSE
Tags: Text Generation · Safetensors · mistral · gptq · quantization · 4bit · confidentialmind · apache-2.0 · mistral-small-24b · conversational · 4-bit precision
🔥 Quantized Model: Arcee-Blitz_gptq_g32_4bit 🔥
The MSE-based quantization run did not work out well; use the non-MSE quant instead.
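The card does not ship a usage snippet; below is a minimal sketch of one plausible way to load and run this checkpoint with Hugging Face Transformers, assuming a CUDA-capable GPU and an installed GPTQ backend (e.g. gptqmodel or auto-gptq with optimum and accelerate).

```python
# Minimal sketch (assumptions noted above): load the 4-bit GPTQ checkpoint
# and generate a short completion using the bundled chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ConfidentialMind/Arcee-Blitz-GPTQ-G32-W4A16-MSE"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # place the quantized weights on the available GPU(s)
    torch_dtype="auto",   # keep the checkpoint's BF16/FP16 activation dtypes
)

# Build a chat-formatted prompt and decode only the newly generated tokens.
messages = [{"role": "user", "content": "Summarize what GPTQ quantization does."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```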
Model size: 4.91B params (Safetensors) · Tensor types: I32 · BF16 · FP16