Update README.md
README.md CHANGED
@@ -9,6 +9,25 @@ tags:
This model was converted to GGUF format from [`Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427`](https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
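For example, the following is a minimal sketch of the standard GGUF-my-repo flow, assuming the Homebrew build of llama.cpp and the Q8_0 file name used elsewhere in this card:

```bash
# Install the prebuilt llama.cpp binaries (macOS and Linux).
brew install llama.cpp

# Run the CLI against this repo's GGUF file; llama-cli fetches the file
# from the Hugging Face Hub on first use.
llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF \
  --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf \
  -p "The meaning to life and the universe is"
```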
@@ -27,24 +46,3 @@ llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-04
```bash
llama-server --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -c 2048
```
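Once llama-server is running, one way to exercise it is through its built-in HTTP API. The request below is a sketch that assumes the server's default listen address of `http://localhost:8080`:

```bash
# Sketch: send a completion request to the running llama-server instance
# (assumes the default listen address of http://localhost:8080).
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "The meaning to life and the universe is", "n_predict": 128}'
```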
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
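For instance, a sketch of a CUDA-enabled build on Linux that combines the two flags mentioned above (assuming the CUDA toolkit is installed):

```bash
# Sketch: build llama.cpp with CURL support and CUDA acceleration,
# using the flags described in Step 2.
cd llama.cpp && LLAMA_CURL=1 LLAMA_CUDA=1 make
```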
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -c 2048
```
## ⚠️ THIS MODEL IS EXPERIMENTAL!!
More than two months after the release of Superthoughts lite v1, we are finally releasing the new version: **v2**.

Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts) made up of 4 specially fine-tuned experts based on Llama-3.2-1B models.

# Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (which runs at ~5 tokens per second).
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, and 2 experts are active for each token. There is an expert for math, code, general conversation, and medical situations.
- Long context: the model supports up to **131,072** input tokens and can generate up to **16,384** tokens.
- To enable proper reasoning, set this as the system prompt (a usage sketch follows after this block):
```
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
```
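As a usage sketch (not from the original card), this is one way to pass that system prompt when the model is served through llama-server's OpenAI-compatible chat endpoint; the address and token limit below are assumptions:

```bash
# Sketch: send the reasoning system prompt through llama-server's
# OpenAI-compatible chat endpoint (assumes the default http://localhost:8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."},
          {"role": "user", "content": "What is 17 * 24?"}
        ],
        "max_tokens": 1024
      }'
```

The reply should then contain the model's reasoning between `<think>` and `</think>`, followed by the final answer. Note that the `-c 2048` used in the llama-server command above would need to be raised to take advantage of the longer context window listed here.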
⚠️ Due to the nature of an experimental model, it may fall into reasoning loops; users are responsible for all outputs from this model.

This experimental model is more of a proof of concept for now. It fully works and has some pretty nice performance for having less than 2 billion parameters activated per token.

**If you have any questions, feel free to open a "New Discussion".**