# DeepSeek Draft Models
Tiny "draft" models for speculative decoding.
A 0.6B parameter draft (speculative decoding) model for use with deepseek-ai/DeepSeek-R1-0528 and deepseek-ai/DeepSeek-R1.
NOTES: This draft model is intended for use with the full-sized DeepSeek-R1-0528 / DeepSeek-R1 models and not the smaller "distilled" models! See jukofyork/DeepSeek-R1-0528-CODER-DRAFT-0.6B-v1.0-GGUF for the models in GGUF format.
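For reference, speculative decoding lets the tiny draft propose several tokens which the big model then verifies in a single forward pass. Below is a hedged sketch of that flow using transformers' assisted generation; the non-GGUF draft repo id is an assumption, and in practice the full-size target is normally served via llama.cpp with the GGUF files linked above rather than loaded like this:

```python
# Sketch only: speculative decoding via transformers "assisted generation".
# Repo ids are illustrative assumptions; the 671B target normally needs
# quantised, multi-GPU serving rather than a plain from_pretrained() load.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "deepseek-ai/DeepSeek-R1-0528"                     # large target model
draft_id = "jukofyork/DeepSeek-R1-0528-CODER-DRAFT-0.6B-v1.0"  # tiny draft model (assumed repo id)

tokenizer = AutoTokenizer.from_pretrained(target_id, trust_remote_code=True)
target = AutoModelForCausalLM.from_pretrained(
    target_id, torch_dtype="auto", device_map="auto", trust_remote_code=True
)
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype="auto", device_map="auto")

prompt = "Write a Python function that reverses a linked list."
inputs = tokenizer(prompt, return_tensors="pt").to(target.device)

# The draft proposes a few tokens per step and the target verifies them in one
# forward pass, keeping only the accepted prefix.
output = target.generate(**inputs, assistant_model=draft, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

With greedy decoding the output is identical to running the target model on its own; the draft only changes the speed.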
The untrained draft model was created by transplanting the DeepSeek-R1-0528 vocabulary onto the existing DeepSeek-V3-0324-CODER-DRAFT-0.6B model using transplant_vocab.py:

```sh
python ./transplant_vocab.py \
    DeepSeek-V3-0324-CODER-DRAFT-0.6B \
    DeepSeek-R1-0528-BF16 \
    DeepSeek-R1-0528-CODER-DRAFT-0.6B-UNTRAINED
```
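For context, transplant_vocab.py grafts the target model's tokenizer onto the small donor model so that the draft and DeepSeek-R1 tokenize text identically. The snippet below is only a simplified sketch of the embedding-initialisation idea, not the actual script's algorithm:

```python
# Simplified sketch of vocab transplantation (NOT the real transplant_vocab.py logic):
# build an embedding matrix sized for the target (DeepSeek-R1) vocabulary by
# re-encoding each target token with the donor tokenizer and averaging the
# corresponding donor embeddings as the initial value.
import torch

def transplant_embeddings(donor_embed: torch.Tensor,  # [donor_vocab, hidden_dim]
                          donor_tokenizer,            # donor tokenizer (Qwen2.5 lineage here)
                          target_tokenizer):          # target tokenizer (DeepSeek-R1)
    target_vocab = len(target_tokenizer)
    hidden_dim = donor_embed.shape[1]
    new_embed = torch.zeros(target_vocab, hidden_dim, dtype=donor_embed.dtype)
    for token_id in range(target_vocab):
        text = target_tokenizer.decode([token_id])
        piece_ids = donor_tokenizer.encode(text, add_special_tokens=False)
        if piece_ids:
            # Average the donor embeddings of the sub-pieces as the starting point.
            new_embed[token_id] = donor_embed[piece_ids].mean(dim=0)
    return new_embed
```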
The training data was formatted using the proper DeepSeek-R1 Jinja template (ie: with <think> tags added around the reasoning, etc); a rough sketch of this formatting follows.
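A minimal sketch of preparing one such training example, assuming the DeepSeek-R1-0528 tokenizer's bundled chat template; the repo id, conversation content, and output path are illustrative, and this is an approximation rather than the exact pipeline used for the training files:

```python
# Sketch: render the prompt side with the model's own Jinja chat template, then
# append a completion with the reasoning wrapped in <think> tags.
import os
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-0528", trust_remote_code=True)

prompt = tok.apply_chat_template(
    [{"role": "user", "content": "Write a function to merge two sorted lists."}],
    tokenize=False,
    add_generation_prompt=True,   # ends with the assistant tag, ready for a completion
)

# If the template already emits an opening <think> tag, drop it from the completion.
completion = (
    "<think>\nTwo pointers, append the smaller head each step.\n</think>\n"
    "def merge(a, b):\n    ...\n"
)
text = prompt + completion + tok.eos_token

# Write it out as one of the plain-text files read by the dataset config below.
os.makedirs("mixed_data", exist_ok=True)
with open("mixed_data/example_0001.txt", "w", encoding="utf-8") as f:
    f.write(text)
```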
The model was then trained with the following configuration:

```toml
# Resume a prior run
resume_from_checkpoint = false

# Paths
model = 'DeepSeek-R1-0528-CODER-DRAFT-0.6B-UNTRAINED'
output_dir = 'DeepSeek-R1-0528-CODER-DRAFT-0.6B'

# Optimization configuration
full_fine_tune = true
epochs = 1
lr_scheduler = 'cosine'
warmup_steps = 100

# Performance settings
pipeline_stages = 1
logging_steps = 1
eval_steps = 100
save_steps = 100
checkpoint_every_n_minutes = 60
eval_before_first_step = true
eval_after_last_step = true
model_weight_dtype = 'bfloat16'
keep_states = 3
group_by_length = true
activation_checkpointing = 'unsloth'

# Dataset configuration
dataset_combination_mode = 'concatenate'
eval_gradient_accumulation_steps = 20

[optimizer]
type = 'adamw_kahan'
lr = 5e-5
beta1 = 0.9
beta2 = 0.999
weight_decay = 0.01

[[datasets]]
name = 'mixed_data'
dataset_type = 'textfile'
dataset_path = 'mixed_data/*.txt'
sequence_len = 32768
eval_size = 0.01
```
The matching DeepSpeed configuration:

```json
{
    "train_micro_batch_size_per_gpu": 1,
    "gradient_accumulation_steps": 20,
    "gradient_clipping": 1.0,
    "steps_per_print": 1
}
```
I used six RTX A6000 GPUs over three nodes, hence the batch size of 120 (6 GPUs × 20 gradient accumulation steps = 120).
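In other words:

```python
# Effective batch size per optimizer step, using the settings above.
num_gpus = 6                 # six RTX A6000s across three nodes
micro_batch_per_gpu = 1      # train_micro_batch_size_per_gpu
grad_accum_steps = 20        # gradient_accumulation_steps
print(num_gpus * micro_batch_per_gpu * grad_accum_steps)  # 120 sequences per step
```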
Base model: Qwen/Qwen2.5-0.5B