- In GGUF Q8_0, the model runs at ~8 tokens per second on a Snapdragon 8 Gen 2 with 12 GB of RAM, which is faster than Pinkstack/PARM-V2-QwQ-Qwen-2.5-o1-3B-GGUF (which runs at ~5 tokens per second).
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, with 2 experts active for each token. There are experts for math, code, general conversation, and medical topics.
- Long context: the model supports up to **131,072** input tokens and can generate up to **16,384** tokens.
- To enable proper reasoning, set this as the system prompt:
```
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. Always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
```
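The system prompt above can be wired into any chat-style runtime. Below is a minimal Python sketch (not part of the original README; it assumes an OpenAI-style messages list, as used by llama-cpp-python or similar runtimes) that stores the recommended prompt and separates the `<think>...</think>` reasoning block from the final answer:

```python
import re

# The README's recommended system prompt. "\\n" reproduces the literal
# backslash-n sequences exactly as they appear in the README.
SYSTEM_PROMPT = (
    "You are Superthoughts lite v2 by Pinkstack, which thinks before answering "
    "user questions. Always respond in the following format:\\n<think>\\n"
    "(Your thinking process)\\n</think>\\n(Your final output)."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Build an OpenAI-style messages list with the reasoning system prompt."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ]

def split_reasoning(text: str) -> tuple[str, str]:
    """Split a model response into (thinking, final_answer).

    If no <think>...</think> block is found, the whole text is treated
    as the final answer, with an empty thinking string.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if not match:
        return "", text.strip()
    thinking = match.group(1).strip()
    answer = text[match.end():].strip()
    return thinking, answer

# Example: parse a response shaped like the format the prompt requests.
sample = "<think>\n2 + 2 is basic addition.\n</think>\n2 + 2 = 4."
thinking, answer = split_reasoning(sample)
print(answer)  # → 2 + 2 = 4.
```

The fallback in `split_reasoning` matters here: since the README warns the model is experimental, a response that skips or never closes the `<think>` block is simply passed through as the answer rather than raising an error.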
⚠️ Due to its experimental nature, the model may fall into reasoning loops; users are responsible for all outputs from this model.
This experimental model is more of a proof-of-concept for now. It fully works and offers quite good performance for having fewer than 2 billion parameters active per token.
**If you have any questions, feel free to open a "New Discussion".**