roleplaiapp/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1-Q3_K_M-GGUF

Repo: roleplaiapp/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1-Q3_K_M-GGUF
Original Model: DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1
Quantized File: DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm.i1-Q3_K_M.gguf
Quantization: GGUF
Quantization Method: Q3_K_M

Overview

This is a GGUF Q3_K_M quantized version of DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1.
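
As a usage reference, below is a minimal sketch of downloading and running this quantized file locally. It assumes the huggingface_hub and llama-cpp-python packages are installed (neither is prescribed by this card), and the context size and sampling settings are illustrative placeholders, not the quantizer's recommendations. The repo and file names are taken from the header above.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q3_K_M file from this repo into the local Hugging Face cache.
model_path = hf_hub_download(
    repo_id="roleplaiapp/DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm-i1-Q3_K_M-GGUF",
    filename="DeepSeek-R1-Distill-Llama-3.1-16.5B-Brainstorm.i1-Q3_K_M.gguf",
)

# Load the GGUF model; n_ctx and n_gpu_layers depend on your available RAM/VRAM.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly explain what GGUF quantization is."}],
    max_tokens=256,
)
print(output["choices"][0]["message"]["content"])
```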

Quantization By

I often have idle GPUs while building/testing for the RP app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Model size: 16.5B params
Architecture: llama
Precision: 3-bit (Q3_K_M)
