Update README.md
Unlike the first generation of Superthoughts lite, this model is a MoE (Mixture of Experts).
- The chat expert was fine-tuned on 23 different languages for 2 epochs, but the model should still only be used for English-to-English generation.
- This model has a total of 3.91B parameters, with 2 experts active for each token. There are experts for math, code, general conversation, and medical situations.
- Long context: the model supports up to **131,072** input tokens and can generate up to **16,384** tokens.
- Unhinged at times: as this is an experimental version, it is extremely sensitive to prompts and can be incredibly unhinged at times. Please use a temperature of around 0.85.
- To enable proper reasoning, set this as the system prompt:

```
You are Superthoughts lite v2 by Pinkstack, which thinks before answering user questions. Always respond in the following format:\n<think>\n(Your thinking process)\n</think>\n(Your final output).
```
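Since the model wraps its reasoning in `<think>...</think>` tags per the system prompt above, downstream code usually wants to separate the thinking trace from the final answer. Below is a minimal sketch of such a parser; `split_reasoning` is a hypothetical helper written for illustration, not part of the model's tooling.

```python
import re

def split_reasoning(response: str):
    """Split a Superthoughts-style response into its reasoning trace
    and final answer, following the <think>...</think> format.

    Hypothetical helper for illustration only.
    """
    match = re.search(r"<think>\s*(.*?)\s*</think>\s*(.*)", response, re.DOTALL)
    if match is None:
        # No reasoning block found; treat the whole response as the answer.
        return None, response.strip()
    thinking, answer = match.group(1), match.group(2)
    return thinking, answer.strip()
```

For example, `split_reasoning("<think>\n2 + 2 = 4\n</think>\n4")` returns the trace `"2 + 2 = 4"` and the answer `"4"`, and a response with no `<think>` block falls back to returning the whole text as the answer.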