Draft Models (collection, 32 items)

Tiny "draft" models for speculative decoding.
A 0.6B-parameter draft (speculative decoding) model for use with Kimi-K2-Instruct. See Kimi-K2-Instruct-DRAFT-0.6B-v3.0 for the models in transformers format, along with a detailed explanation of how the model was created.
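If you're unfamiliar with how a draft model is used: the small model cheaply proposes a few tokens, and the big model verifies them (in a single batched forward pass in real implementations). Here's a minimal conceptual Python sketch of the greedy accept/verify loop; the `target_next` / `draft_next` callables and the toy demo are illustrative stand-ins, not the actual models or llama.cpp's implementation:

```python
# Minimal greedy speculative decoding loop (conceptual sketch only).
# `target_next` / `draft_next` stand in for the big model (e.g. Kimi-K2-Instruct)
# and the tiny draft model respectively.
from typing import Callable, List

def speculative_decode(
    target_next: Callable[[List[int]], int],  # expensive model, greedy next token
    draft_next: Callable[[List[int]], int],   # cheap draft model, greedy next token
    prompt: List[int],
    max_new: int,
    k: int = 4,                               # tokens drafted per verification round
) -> List[int]:
    tokens = list(prompt)
    new = 0
    while new < max_new:
        # 1) Draft k candidate tokens cheaply.
        ctx = list(tokens)
        draft = []
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verify with the target model. A real implementation scores all k
        #    positions in ONE batched pass; sequential calls keep the sketch simple.
        accepted, correction = 0, None
        for i in range(k):
            expected = target_next(tokens + draft[:i])
            if draft[i] == expected:
                accepted += 1
            else:
                correction = expected  # target disagrees: keep its token instead
                break
        tokens += draft[:accepted]
        # Either the target's correction, or a "free" bonus token when all
        # k draft tokens were accepted.
        tokens.append(correction if correction is not None else target_next(tokens))
        new += accepted + 1
    return tokens

# Toy demo: the draft always proposes the next integer; the target agrees until 7.
demo_target = lambda ctx: (ctx[-1] + 1) if ctx[-1] < 7 else 0
demo_draft = lambda ctx: ctx[-1] + 1
print(speculative_decode(demo_target, demo_draft, [0], max_new=8))
```

The better the tiny model predicts the big one, the more drafted tokens get accepted per round, which is the whole point of training a dedicated draft model rather than reusing a generic small one.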
I've included the Q4_0 quants for 3 different context lengths.

Note that Qwen2.5-0.5B doesn't allow for any of the other 4-bit quants to be made (and experimentation has shown that using more or less than 4 bits for speculative decoding is a waste of time anyway).
using "static-YaRN" the scaling factor remains constant regardless of input length! Only use the longer context versions when processing long contexts is required...TikToken
The TikToken / SentencePiece tokenizer mismatch requires a small hack to convert_hf_to_gguf.py (see the main model page for details).
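Whatever form that hack takes (again, see the main model page), it's worth sanity-checking after conversion that the draft tokenizer reproduces the target's token ids, since the two models must agree on ids for drafted tokens to ever be accepted. A minimal sketch using transformers; the repo ids are illustrative placeholders:

```python
# Hedged sketch: verify the draft tokenizer reproduces the target's token ids
# on a few samples. Repo ids below are illustrative placeholders.
from transformers import AutoTokenizer

target_tok = AutoTokenizer.from_pretrained("moonshotai/Kimi-K2-Instruct",
                                           trust_remote_code=True)
draft_tok = AutoTokenizer.from_pretrained("Kimi-K2-Instruct-DRAFT-0.6B-v3.0")

for sample in ["Hello, world!", "def f(x):\n    return x * 2", "日本語のテスト"]:
    t = target_tok.encode(sample, add_special_tokens=False)
    d = draft_tok.encode(sample, add_special_tokens=False)
    assert t == d, f"tokenizer mismatch on {sample!r}: {t} vs {d}"
print("tokenizers agree on all samples")
```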