---
base_model: Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427
tags:
- llama-cpp
- gguf-my-repo
- code
- math
- cot
- conversational
- moe
- Superthoughts
language:
- en
license: llama3.2
pipeline_tag: text-generation
---

# Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF
This model was converted to GGUF format from [`Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427`](https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427) for more details on the model.

## ⚠️THIS MODEL IS EXPERIMENTAL!!

![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/zeYeNGUPCXFPUodR2Gj8u.png)
More than two months after the release of Superthoughts lite v1, we are finally releasing the new version, **v2**.
Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts) built from four specially fine-tuned experts based on Llama-3.2-1B models.
# Information
- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (~5 tokens per second).
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, with 2 experts active for each token. There are experts for math, code, general conversation, and medical situations.
- Long context: the model supports up to **131,072** input tokens and can generate up to **16,384** tokens.
- To enable proper reasoning, set this as the system prompt (a usage sketch with llama.cpp follows below):
```
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
```
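For example, here is one possible way to chat with the model using this system prompt via llama.cpp. This is a minimal sketch, assuming a recent `llama-cli` build where `-cnv` enables conversation mode and the `-p` text is treated as the system prompt; the context size and token cap below are illustrative choices, not requirements:
```bash
# Interactive chat with the reasoning system prompt (illustrative settings).
# -cnv : conversation mode; in this mode the -p text acts as the system prompt
# -c   : context window (8192 here; the model supports up to 131072)
# -n   : maximum tokens generated per reply (the model supports up to 16384)
llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF \
  --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf \
  -cnv -c 8192 -n 2048 \
  -p "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."
```
With this setup, each reply should start with a `<think>...</think>` reasoning block followed by the final answer.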
⚠️ As an experimental model, it may fall into reasoning loops; users are responsible for all outputs from this model.
For now, this experimental model is more of a proof of concept. It fully works and delivers respectable performance for having fewer than 2 billion parameters active per token.

**If you have any questions, feel free to open a "New Discussion".**

## Use with llama.cpp
Install llama.cpp through brew (works on macOS and Linux):

```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-experimental-0427-Q8_0-GGUF --hf-file superthoughts-lite-v2-moe-llama3.2-experimental-0427-q8_0.gguf -c 2048
```
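
Once the server is up, you can send it requests over HTTP. Below is a minimal sketch, assuming llama-server's default port of 8080 and its OpenAI-compatible `/v1/chat/completions` endpoint; the user question is just a placeholder:
```bash
# Query the local llama-server with the reasoning system prompt.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "system", "content": "You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output)."},
      {"role": "user", "content": "What is 17 * 24?"}
    ],
    "max_tokens": 1024
  }'
```
The assistant message in the JSON response should contain the `<think>...</think>` reasoning block followed by the final answer.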