---
datasets:
- PowerInfer/QWQ-LONGCOT-500K
base_model:
- Qwen/Qwen2.5-3B-Instruct
---
# SmallThinker-3B-preview
We introduce **SmallThinker-3B-preview**, a new model fine-tuned from the [Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) model.
## Benchmark Performance
| Model | AIME24 | AMC23 | GAOKAO2024_I | GAOKAO2024_II | MMLU_STEM | AMPS_Hard | math_comp |
|---------|--------|-------|--------------|---------------|-----------|-----------|-----------|
| Qwen2.5-3B-Instruct | 6.67 | 45 | 50 | 35.8 | 59.8 | - | - |
| SmallThinker | 16.67 | 57.5 | 64.2 | 57.1 | 68.2 | 70 | 46.8 |
| GPT-4o | 9.3 | - | - | - | 64.2 | 57 | 50 |
Limitation: Because SmallThinker's instruction following is still limited, we adopt a more lenient evaluation for math_comp: a response counts as correct if the final answer is right, without requiring it to follow the specified AAAAA format.
## Intended Use Cases
SmallThinker is designed for the following use cases:
1. **Edge Deployment:** Its small size makes it ideal for deployment on resource-constrained devices.
2. **Draft Model for QwQ-32B-Preview:** SmallThinker can serve as a fast and efficient draft model for the larger QwQ-32B-Preview model. In our tests with llama.cpp, speculative decoding with SmallThinker as the draft raised decoding speed from 40 tokens/s to 70 tokens/s, roughly a 75% speedup (see the sketch after this list).
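As a rough illustration of the draft-model setup, here is a minimal sketch using Hugging Face transformers' assisted generation. The speedup figures above were measured in llama.cpp, not transformers; the repository ids, dtype/device settings, and prompt below are assumptions, and the draft model's tokenizer must be compatible with the target's (both models here derive from the Qwen2.5 tokenizer).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository ids; adjust to the repos you actually use.
target_id = "Qwen/QwQ-32B-Preview"
draft_id = "PowerInfer/SmallThinker-3B-preview"

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id, torch_dtype="auto", device_map="auto")
draft = AutoModelForCausalLM.from_pretrained(draft_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "How many positive integers below 100 are divisible by 3 or 5?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(target.device)

# assistant_model enables assisted (speculative) generation: the small draft model
# proposes tokens that the large target model then verifies in parallel.
outputs = target.generate(inputs, assistant_model=draft, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```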
## Limitations & Disclaimer
Please be aware of the following limitations:
* **Language Limitation:** The model has been trained only on English-language datasets, so its capabilities in other languages remain limited.
* **Limited Knowledge:** Due to limited SFT data and the model's relatively small scale, its reasoning capabilities are constrained by its knowledge base.
* **Unpredictable Outputs:** The model may produce unexpected outputs due to its size and probabilistic generation paradigm. Users should exercise caution and validate the model's responses.
* **Repetition Issue:** The model tends to repeat itself when answering high-difficulty questions. Please increase the `repetition_penalty` to mitigate this issue (a usage sketch follows below).
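For reference, a minimal generation sketch that raises `repetition_penalty`. The repository id, the value 1.1, and the prompt are illustrative assumptions rather than tuned recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "PowerInfer/SmallThinker-3B-preview"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Prove that the sum of two odd integers is even."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# A repetition_penalty above 1.0 discourages the model from repeating itself
# on hard questions; 1.1 is an illustrative starting point.
outputs = model.generate(inputs, max_new_tokens=2048, repetition_penalty=1.1)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```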