---
library_name: transformers
license: llama3.2
language:
- en
base_model:
- meta-llama/Llama-3.2-1B-Instruct
pipeline_tag: text-generation
tags:
- code
- math
- cot
- conversational
- moe
- Superthoughts
---
## ⚠️ THIS MODEL IS EXPERIMENTAL!!

More than two months after the release of Superthoughts Lite v1, we are finally releasing the new version: **v2**.
Unlike the first generation of Superthoughts Lite, this model is a MoE (Mixture of Experts) built from 4 specially fine-tuned experts based on Llama-3.2-1B models.
# Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (which runs at ~5 tokens per second). A small llama-cpp-python sketch is included at the end of this card.
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, and 2 experts are active for each token. There is an expert for math, code, general conversation, and medical situations.
- To enable proper reasoning, set this as the system prompt:
```
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. Always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
```
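
A minimal sketch of applying this system prompt through the transformers chat template. The repository id used below is an assumption for illustration; replace it with the actual model repository:

```python
# Sketch: generate with the reasoning system prompt via the chat template.
# The repo id is hypothetical; substitute the real Superthoughts lite v2 repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Pinkstack/Superthoughts-lite-v2"  # hypothetical repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

system_prompt = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. "
    "Always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."
)
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "What is 17 * 24?"},
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=512)
reply = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)

# The reply should contain a <think>...</think> block followed by the final answer.
final_answer = reply.split("</think>")[-1].strip() if "</think>" in reply else reply
print(final_answer)
```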
⚠️ Due to the experimental nature of this model, it may fall into reasoning loops; users are responsible for all outputs from this model.

+
This experimental model is more of a proof-of-concept. It fully works and it has some pretty nice performance, for having less than 2 billion parameters activated per token.
|