roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-f16-GGUF
Repo: roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-f16-GGUF
Original Model: deepseek-r1-qwen-2.5-32B-ablated
Quantized File: deepseek-r1-qwen-2.5-32B-ablated-bf16/deepseek-r1-qwen-2.5-32B-ablated-bf16-00001-of-00002.gguf
Quantization: GGUF
Quantization Method: f16
Overview
This is a GGUF f16 quantized version of deepseek-r1-qwen-2.5-32B-ablated. The weights ship as a two-part split GGUF file (see the quantized file path above).
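A minimal loading sketch with llama-cpp-python, assuming both shards have been downloaded and that llama.cpp's split-GGUF support picks up the second shard when pointed at the first. The context size, GPU-layer count, and prompt below are illustrative assumptions, not part of this card.

```python
# Minimal sketch: fetch both GGUF shards and run the model with llama-cpp-python.
# Assumes `pip install llama-cpp-python huggingface_hub`; sampling and context
# parameters are illustrative, not taken from the upstream card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

repo_id = "roleplaiapp/deepseek-r1-qwen-2.5-32B-ablated-f16-GGUF"
shard_dir = "deepseek-r1-qwen-2.5-32B-ablated-bf16"

# Download both shards; hf_hub_download mirrors the repo layout locally,
# so the two files land in the same directory.
first_shard = hf_hub_download(
    repo_id=repo_id,
    filename=f"{shard_dir}/deepseek-r1-qwen-2.5-32B-ablated-bf16-00001-of-00002.gguf",
)
hf_hub_download(
    repo_id=repo_id,
    filename=f"{shard_dir}/deepseek-r1-qwen-2.5-32B-ablated-bf16-00002-of-00002.gguf",
)

# Point the loader at shard 1; llama.cpp resolves the -0000N-of-00002
# siblings in the same directory on its own.
llm = Llama(
    model_path=first_shard,
    n_ctx=4096,        # assumed context window for this demo
    n_gpu_layers=-1,   # offload all layers if a GPU is available
)

out = llm("Explain GGUF quantization in one sentence.", max_tokens=128)
print(out["choices"][0]["text"])
```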
Quantization By
I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.
Andrew Webby @ RolePlai.