# ysn-rfd/HelpingAI2.5-5B-GGUF
This model was converted to GGUF format from HelpingAI/HelpingAI2.5-5B using llama.cpp, via ggml.ai's all-gguf-same-where space. Refer to the original model card for more details on the model.
## Quantized Models Download List

- Recommended for CPU: Q4_K_M
- Recommended for ARM CPU: Q4_0
- Best quality: Q8_0
| Download | Type | Notes |
|---|---|---|
| Download | | Basic quantization |
| Download | | Small size |
| Download | | Balanced quality |
| Download | | Better quality |
| Download | | Fast on ARM |
| Download | | Fast, recommended |
| Download | | Best balance |
| Download | | Good quality |
| Download | | Balanced |
| Download | | High quality |
| Download | | Very good quality |
| Download | | Fast, best quality |
| Download | | Maximum accuracy |
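To fetch a single quant from this repository, the Hugging Face CLI can filter files by pattern. This is a sketch: the exact `.gguf` filenames in the repo are not shown above, so the `--include` glob below is an assumption based on common naming conventions.

```shell
# Sketch: download only the Q4_K_M quant (filename pattern is an assumption)
huggingface-cli download ysn-rfd/HelpingAI2.5-5B-GGUF \
  --include "*Q4_K_M*" \
  --local-dir ./models
```

Dropping the `--include` flag downloads every quant in the repository, which is usually unnecessary since one file is enough for inference.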
Tip: Use F16 for maximum precision when quality is critical.
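Once a quant is downloaded, it can be run directly with llama.cpp's `llama-cli`. This is a minimal sketch; the model filename below is an assumption, so substitute the file you actually downloaded.

```shell
# Sketch: run the downloaded quant with llama.cpp
# (the .gguf filename here is a hypothetical example)
llama-cli -m ./models/HelpingAI2.5-5B-Q4_K_M.gguf \
  -p "Hello, how are you?" \
  -n 128
```

The `-n` flag caps the number of tokens generated; adding `-cnv` starts an interactive chat session instead of a one-shot completion.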
## Model tree for ysn-rfd/HelpingAI2.5-5B-GGUF

Base model: HelpingAI/HelpingAI2.5-5B