A 0.6B parameter draft (speculative decoding) model for use with deepseek-ai/DeepSeek-R1-0528 and deepseek-ai/DeepSeek-R1.

NOTE: This is a draft model for the full-sized DeepSeek-R1-0528 / DeepSeek-R1 models and not the smaller "distilled" models!


I've only included the Q4_0 quant: DeepSeek-R1-0528-CODER-DRAFT-0.6B-Q4_0.gguf

This model's 14 attention heads don't allow any of the other 4-bit quants to be made, and experimentation has shown that using more or fewer than 4 bits for a speculative-decoding draft model is a waste of time.
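As a sketch of how a draft model like this is typically wired up with llama.cpp's speculative decoding (the main-model path, quant, and draft-token counts below are placeholders, not values from this card):

```shell
# Serve the full-sized DeepSeek-R1-0528 as the target model and this 0.6B
# model as the draft for speculative decoding. File names here are
# illustrative -- substitute the GGUF files you actually have.
./llama-server \
  --model DeepSeek-R1-0528-Q4_K_M.gguf \
  --model-draft DeepSeek-R1-0528-CODER-DRAFT-0.6B-Q4_0.gguf \
  --draft-max 16 \
  --draft-min 4
```

The draft model proposes short token runs that the big model verifies in a single batch, so acceptance rate (not draft quality per se) determines the speedup; tune `--draft-max` / `--draft-min` for your workload.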

Model size: 590M params
Architecture: qwen2
Quantization: 4-bit
