---
library_name: transformers
datasets:
- PowerInfer/QWQ-LONGCOT-500K
base_model:
- Qwen/Qwen2.5-Coder-0.5B-Instruct
---

# Qwen2.5-Coder-0.5B-QwQ-draft

A draft model trained for [Qwen/QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview).

- vocabulary size of 152064, the same as QwQ-32B-Preview, so it can be used directly as a draft model in vLLM without any hacks (see the usage sketch below)
- trained from [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct)
- trained on [PowerInfer/QWQ-LONGCOT-500K](https://huggingface.co/datasets/PowerInfer/QWQ-LONGCOT-500K) for 2 epochs
- draft token acceptance rate above 0.8
- up to 2.5× generation speed on math problems (85 tok/s with the draft model vs. 33 tok/s without)
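
For example, the model can be plugged into vLLM's offline speculative decoding. Below is a minimal sketch, assuming a vLLM release that accepts the `speculative_model` and `num_speculative_tokens` arguments on `LLM` (newer versions moved these into a `speculative_config` dict); the draft-model repo id is a placeholder.

```python
# Minimal speculative-decoding sketch with vLLM.
# Assumptions: a vLLM version exposing `speculative_model` /
# `num_speculative_tokens` on `LLM`; "<your-org>/..." is a placeholder
# for this model's actual repo id.
from vllm import LLM, SamplingParams

llm = LLM(
    model="Qwen/QwQ-32B-Preview",
    speculative_model="<your-org>/Qwen2.5-Coder-0.5B-QwQ-draft",  # placeholder id
    num_speculative_tokens=5,  # draft tokens proposed per target step (tunable)
)

sampling = SamplingParams(temperature=0.0, max_tokens=1024)
outputs = llm.generate(["Find all real solutions of x^2 - 5x + 6 = 0."], sampling)
print(outputs[0].outputs[0].text)
```

Because the draft model shares the target's 152064-token vocabulary, draft and target logits line up one-to-one, which is what allows vLLM to verify the proposed tokens without any remapping step.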