---
library_name: transformers
tags:
- trl
- grpo
- rl
- superthoughts
- reasoning
- cot
license: apache-2.0
datasets:
- openai/gsm8k
- Pinkstack/intructions-sft-sharegpt
language:
- en
base_model:
- HuggingFaceTB/SmolLM2-1.7B-Instruct
widget:
- messages:
  - role: user
    content: >-
You must act in a conversational manner and always include at the start
      <think> ... </think> <output> ... </output> tokens.


      Are cats cool?
- messages:
  - role: user
    content: >-
You must act in a conversational manner and always include at the start
      <think> ... </think> <output> ... </output> tokens.


      Hello!
- messages:
  - role: user
    content: >-
You must act in a conversational manner and always include at the start
      <think> ... </think> <output> ... </output> tokens.


      2x-2=6, how much is X?
---

V2 is available at: Pinkstack/Superthoughts-lite-v2-MOE-Llama3.2-bf16
# Information
Advanced, high-quality, and **lite** reasoning at a tiny size that you can run on your phone.

At full precision, it runs at ~400 tokens/second on a single NVIDIA H100 GPU from Friendli.

Trained similarly to DeepSeek R1: we used SmolLM2 as the base model, then SFT fine-tuned it on reasoning using our own private Superthoughts instruct dataset, which includes a mix of code, website generation, day-to-day chats, math, and counting problems. We also modified the tokenizer slightly. After the SFT fine-tuning, we used GRPO (arXiv:2402.03300) to further amplify its mathematics and problem-solving abilities.

<div style="background-color: #ffebee; padding: 16px; border-radius: 4px; border-left: 4px solid #ef5350;">
  <h1 style="color: #c62828; margin: 0 0 8px 0;">⚠️ WARNING</h1>
  <p style="color: #c62828; font-weight: bold; margin: 0;">
    We did not apply additional safety filters during SFT, so this AI is fairly uncensored and can be rude at times. Unless you specify in the system prompt that it should be harmless, it won't be. Users are solely responsible for their use of this AI. No output from the AI represents the views of Pinkstack or any other third party, and it may produce biased, incorrect, or harmful information unless you set it up properly. For commercial use, it is recommended that you either use another model such as LlamaGuard for filtering, or ensure harmlessness through the system prompt.
  </p>
</div>


# Format
```
<|im_start|>user
How many R's in strawberry<|im_end|>
<|im_start|>assistant
<think>
Alright, the user has asked how many R's are in the word strawberry, that's easy! I just need to count each instance of the letter 'R' in the word 's-t-r-a-w-b-e-r-r-y' and then find out how many R's there are, let's count!
S - Not an R,
T - Not an R,
R - First instance of the letter R! (1),
A - Not an R,
W - Not an R,
B - Not an R,
E - Not an R,
R - Great! Second instance of the letter R. (2),
R - Third instance of the letter R. (3),
Y - Not an R.

So, I've counted all the letters correctly, meaning that I am sure that there are 3 R's in the word strawberry. I should probably let the user know.
</think>
<output>3
</output><|im_end|>
```
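Since the reasoning and the final answer are wrapped in fixed tags, they are easy to separate after generation. A minimal sketch (the helper name `parse_response` is illustrative, not part of any official tooling):

```python
import re

def parse_response(text: str):
    """Split a raw model response into its reasoning and answer parts.

    Assumes the <think>...</think> <output>...</output> format shown
    above; returns (None, full text) if the tags are missing.
    """
    match = re.search(
        r"<think>\s*(.*?)\s*</think>\s*<output>\s*(.*?)\s*</output>",
        text,
        re.DOTALL,
    )
    if match is None:
        return None, text.strip()
    return match.group(1), match.group(2)
```

This lets an application show only the `<output>` section to end users while keeping the chain of thought available for debugging.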
It is highly recommended to use a low temperature; higher temperatures may cause the model to skip thinking.
# System prompt
(Important: this ensures the model always thinks before it outputs.)
```
respond in the following format:
<think>
...
</think>
<output>
...
</output>
```
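Putting the prompt format and system prompt together, here is a minimal sketch of assembling a full ChatML-style prompt by hand (the helper `build_prompt` is illustrative; when using transformers, the tokenizer's `apply_chat_template` is the preferred way to produce this layout):

```python
# The recommended system prompt from the section above.
SYSTEM_PROMPT = (
    "respond in the following format:\n"
    "<think>\n...\n</think>\n<output>\n...\n</output>"
)

def build_prompt(user_message: str, system_prompt: str = SYSTEM_PROMPT) -> str:
    """Assemble a ChatML-style prompt matching the Format section above,
    ending with the assistant header so the model continues from there."""
    return (
        f"<|im_start|>system\n{system_prompt}<|im_end|>\n"
        f"<|im_start|>user\n{user_message}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )
```

Pass the result to your completion endpoint with a low temperature, as recommended above.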
# Examples:
All responses below were generated with our system prompt at a temperature of 0.7, inside the Android application ChatterUI via a GGUF Q8 quantization, using the model's prompt format.
1)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/5veZJmkjuv_7W7pKhvsu0.png)
2)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/pAwPdVkEZ7rnFf-TZ5tMU.png)
3)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6710ba6af1279fe0dfe33afe/FDaWAAqgv2kvoZvjl8gjl.png)

# Uploaded model

- **Developed by:** Pinkstack
- **License:** apache-2.0
- **Finetuned from model:** HuggingFaceTB/SmolLM2-1.7B-Instruct