roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q2_K-GGUF
Repo: roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q2_K-GGUF
Original Model: deepseek-r1-qwen-2.5-32B-ablated
Quantized File: deepseek-r1-qwen-2.5-32B-ablated-Q2_K.gguf
Quantization: GGUF
Quantization Method: Q2_K
Overview
This is a GGUF Q2_K quantized version of deepseek-r1-qwen-2.5-32B-ablated.
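The quantized file can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch (an assumption, not an official recipe from this repo) using llama-cpp-python to download the Q2_K file from this repo and run a short chat completion; the context size and GPU layer count are illustrative values only.

```python
# Minimal sketch: run the Q2_K GGUF with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface-hub`; parameters are illustrative.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the quantized file from this repo.
model_path = hf_hub_download(
    repo_id="roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-Q2_K-GGUF",
    filename="deepseek-r1-qwen-2.5-32B-ablated-Q2_K.gguf",
)

# Load the model; n_gpu_layers=-1 offloads all layers to the GPU if one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Run a short chat completion against the loaded model.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what Q2_K quantization trades off."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```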
Quantization By
I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai.