# deepseek-ai/DeepSeek-R1-Distill-Qwen-14B GGUF Quantizations 🚀
Optimized GGUF quantization files for enhanced model performance.

Powered by Featherless AI: run any model you'd like for a simple, small fee.
## Available Quantizations 📊
| Quantization Type | File | Size |
|---|---|---|
| IQ4_XS | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-IQ4_XS.gguf | 7806.96 MB |
| Q2_K | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q2_K.gguf | 5503.17 MB |
| Q3_K_L | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q3_K_L.gguf | 7557.65 MB |
| Q3_K_M | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q3_K_M.gguf | 6999.21 MB |
| Q3_K_S | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q3_K_S.gguf | 6351.09 MB |
| Q4_K_M | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q4_K_M.gguf | 8571.73 MB |
| Q4_K_S | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q4_K_S.gguf | 8176.26 MB |
| Q5_K_M | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q5_K_M.gguf | 10022.04 MB |
| Q5_K_S | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q5_K_S.gguf | 9790.95 MB |
| Q6_K | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q6_K.gguf | 11563.00 MB |
| Q8_0 | deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q8_0.gguf | 14974.21 MB |
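As a rule of thumb, pick the largest quantization that fits your available RAM/VRAM: Q2_K is the smallest but loses the most quality, while Q8_0 is near-lossless at roughly 15 GB.

For local use, these files can be fetched straight from this repo and loaded with llama.cpp bindings. Below is a minimal sketch using llama-cpp-python's `Llama.from_pretrained` helper; it assumes `llama-cpp-python` and `huggingface-hub` are installed, picks the Q4_K_M file as a size/quality middle ground, and the prompt and generation parameters are purely illustrative.

```python
# Minimal sketch: download one quantized file from the Hugging Face Hub
# and run it locally with llama-cpp-python.
#   pip install llama-cpp-python huggingface-hub
from llama_cpp import Llama

# repo_id and filename match this model card; swap in any quant from the
# table above.
llm = Llama.from_pretrained(
    repo_id="featherless-ai-quants/deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-GGUF",
    filename="deepseek-ai-DeepSeek-R1-Distill-Qwen-14B-Q4_K_M.gguf",
    n_ctx=4096,  # context window; raise it if you have the memory
)

# Illustrative prompt; max_tokens is an arbitrary cap for the demo.
output = llm(
    "Explain, step by step, why the sky appears blue.",
    max_tokens=256,
)
print(output["choices"][0]["text"])
```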
## ⚡ Powered by Featherless AI
### Key Features
- 🔥 Instant Hosting - Deploy any supported model from Hugging Face instantly
- 🛠️ Zero Infrastructure - No server setup or maintenance required
- 📚 Vast Compatibility - Support for 2400+ models and counting
- 💎 Affordable Pricing - Starting at just $10/month
**Links:** Get Started | Documentation | Models
**Base model:** deepseek-ai/DeepSeek-R1-Distill-Qwen-14B